• 3 Posts
  • 157 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • Kwakigra@beehaw.org to Science Memes@mander.xyz — "Stress" (4 points, 2 days ago)

    On the ACEs assessment, every single one of the 10 things asked about is known to cause neurological issues in children that persist throughout their lives. Only a 0 is insignificant; even a 1 is very significant. I also scored "low" and thought it didn't mean anything, but I've since learned that every single one means a lot.

  • I have two main thoughts on this:

    1. LLMs are not, at this time, reliable sources of factual information. The user may be getting something skimmed from factual sources, but the output can often be incorrect, since the machine can't "understand" the information it's outputting.

    2. This could potentially be an excellent way to do real research for people whose education never gave them research skills. Conspiracy theorists often start out curious but undisciplined before they fall into the identity aspects of the theories. If a machine using human-like language can report factual information quickly, reliably, and without judgement to people who couldn't find that info on their own, it could actually be a very useful tool.

  • I just tested the classic "She working" vs. "She be working," and the machine got it backwards. It can't actually translate into AAVE, but it can probably appear to well enough for people who wouldn't know the difference. Just in terms of population size and historical access to publishing, there are likely far more incorrect white imitations of AAVE in the available written material than examples of its correct usage. Like a lot of LLM issues, it's been a problem for a loooong time but is now being put into overdrive by automation.