• 0 Posts
  • 306 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle

  • kromem@lemmy.worldtoScience Memes@mander.xyzGet good.
    link
    fedilink
    English
    arrow-up
    4
    ·
    7 days ago

    Because there’s a ton of research that we adapted to do it for good reasons:

    Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development.

    Stanford psychologist Michael Frank and collaborators conducted the largest ever experimental study of baby talk and found that infants respond better to baby talk versus normal adult chatter.

    TL;DR: Top parents are actually harming their kids’ developmental process by being snobs about it.





  • kromem@lemmy.worldtoScience Memes@mander.xyzJet Fuel
    link
    fedilink
    English
    arrow-up
    71
    arrow-down
    1
    ·
    edit-2
    2 months ago

    I fondly remember reading a comment in /r/conspiracy on a post claiming a geologic seismic weapon brought down the towers.

    It just tore into the claims, citing all the reasons this was preposterous bordering on batshit crazy.

    And then it said “and your theory doesn’t address the thermite residue” going on to reiterate their wild theory.

    Was very much a “don’t name your gods” moment that summed up the sub - a lot of people in agreement that the truth was out there, but bitterly divided as to what it might actually be.

    As long as they only focused on generic memes of “do your own research” and “you aren’t being told the truth” they were all on the same page. But as soon as they started naming their own truths, it was every theorist for themselves.








  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    5 months ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • I had a teacher that worked for the publisher and talked about how they’d have a series of responses for people who wrote in for the part of the book where the author says he wrote his own fanfiction scene and to write in if you wanted it.

    Like maybe the first time you write in they’d respond that they couldn’t provide it because they were fighting the Morgenstern estate over IP release to provide the material, etc.

    So people never would get the pages, but could have gotten a number of different replies furthering the illusion.





  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.