• Wren@lemmy.world · 2 days ago

    When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.

    If you go to a machine for life decisions, it’s safe to assume you are not smart enough to know better, and, by merit of this example, probably should not be allowed to use them.

  • OldChicoAle@lemmy.world · 1 day ago

    Do we honestly think OpenAI or tech bros care? They just want money. Whatever works. They’re evil, like every other industry.

  • Vanilla_PuddinFudge@infosec.pub · 1 day ago

    fall to my death in absolute mania, screaming and squirming as the concrete gets closer

    pull a trigger

    As someone who is also planning for ‘retirement’ in a few decades, guns always seemed to be the better plan.

    • daizelkrns@sh.itjust.works · 1 day ago

      Yeah, for me it would probably be pills of some kind. Honestly, the only thing stopping me is the chance that I somehow fuck it up and end up trapped in my own body.

      Would be happily retired otherwise

      • InputZero@lemmy.world · 1 day ago

        “Résumé” by Dorothy Parker:

        Razors pain you; Rivers are damp; Acids stain you; And drugs cause cramp. Guns aren’t lawful; Nooses give; Gas smells awful; You might as well live.

        There aren’t many ways to kill oneself that don’t often end in a botched suicide attempt. Pills are a painful and horrible way to go.

      • Shelbyeileen@lemmy.world · 1 day ago

        I’m a postmortem scientist, and one of the scariest things I learned in college was that only 85% of gun suicide attempts are successful. The other 15% survive, and nearly all have brain damage. I only know of two painless ways to commit suicide that don’t destroy the body’s appearance, so they can still have a funeral visitation.

    • console.log(bathing_in_bismuth)@sh.itjust.works · 1 day ago

      Dunno, the idea of five seconds for whatever is out there to reach you, through the demons whispering in your ear, while you contemplate when to pull the trigger of the 12-gauge aimed at your face, seems like the most logical bad decision.

    • Phoenixz@lemmy.ca · 1 day ago

      There are various other reports of ChatGPT pushing susceptible people into psychosis, where they think they’re god, etc.

      It’s correct, just different articles

  • finitebanjo@lemmy.world · 2 days ago

    Yeah no shit, AI doesn’t think. Context doesn’t exist for it. It doesn’t even understand the meanings of individual words at all, none of them.

    Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.

    People cannot “converse” with AI and should immediately stop trying.
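    That “sorting of statistical tokens” can be caricatured with a toy bigram model — a deliberately simplified sketch, nothing like a production LLM, but it shows how a program can emit plausible next words with zero understanding of them:

    ```python
    from collections import Counter, defaultdict

    # Toy sketch of next-token prediction (illustrative only): count which
    # token follows which in a tiny corpus, then predict the most frequent
    # follower. The "model" never knows what a pie is; it only knows which
    # token tends to come after which.
    corpus = "i like pie . i like pie . i like cake .".split()

    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def next_token(token: str) -> str:
        # Most frequent follower of `token` in the corpus.
        return following[token].most_common(1)[0][0]

    print(next_token("like"))  # pie
    ```

    Real models use learned subword tokenizers and billions of neural-network parameters rather than raw counts, but the basic move — pick a statistically likely next token — is the same.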

    • jol@discuss.tchncs.de · 2 days ago

      We don’t think either. We’re just a chemical soup that tricked ourselves to believe we think.

      • finitebanjo@lemmy.world · 2 days ago

        A pie is more than three alphanumeric characters to you. You can eat pie; things like nutrition, digestion, taste, smell, and imagery all come to mind for you.

        When you hear a prompt and formulate a sentence about pie you don’t compile a list of all words and generate possible outcomes ranked by statistical approximation to other similar responses.

        • jol@discuss.tchncs.de · 1 day ago

          The current AI chats are emergent properties. The very fact that it looks like it’s talking with us, despite being just a probabilistic model of a neural network, is an emergent effect. The neural network is just a bunch of numbers.

        • remon@ani.social · 1 day ago

          There are emergent properties all the way down to the quantum level, being “organic” has nothing to do with it.

          • polydactyl@lemmy.world · 1 day ago

            You’re correct, but that wasn’t the conversation. I didn’t say only organic, and I said machines and algorithms don’t. You chimed in just to get that “I’m right” high, and you are the problem with internet interactions.

            • remon@ani.social · 1 day ago

              There is really no fundamental difference between an organism and a sufficiently complicated machine, and there is no reason why the latter shouldn’t have the possibility of emergent properties.

              and you are the problem with internet interactions.

              Defensive much? Looks like you’re the one with the problem.

  • glimse@lemmy.world · 2 days ago

    Holy shit guys, does DDG want me to kill myself??

    What a waste of bandwidth this article is

    • Samskara@sh.itjust.works · 2 days ago

      People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

      • Scubus@sh.itjust.works · 2 days ago

        … so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually at the bottom of that list.

        • Samskara@sh.itjust.works · 2 days ago

          People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.

          • Scubus@sh.itjust.works · 2 days ago

            Ok, people will turn to Google when they’re depressed. A couple months ago I googled the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?

            • Samskara@sh.itjust.works · 2 days ago

              You are ignoring that people are already developing personal emotional reactions with chatbots. That’s not the case with search bars.

              The first line above the search results at google for queries like that is a suicide hotline phone number.

              A chatbot should provide at least that as well.

              I’m not saying it should provide no information.

              • Scubus@sh.itjust.works · 2 days ago

                Ok, then we are in agreement. That is a good idea.

                I think that at low levels the tech should not be hindered because a subset of users use the tool improperly. There is a line, however, but I’m not sure where it is. If that problem were to become as widespread as, say, gun violence, then I would agree that the utility of the tool may need to be affected to curb the negative influence.

                • Samskara@sh.itjust.works · 2 days ago

                  It’s about providing some safety measures to protect the most vulnerable. They need to be thrown a lifeline and an exit sign on their way down.

                  For gun purchases, these can be waiting periods of a few days. So you don’t buy a gun in anger and kill someone, regretting it immediately and ruining many people’s lives.

                  Did you have to turn off safe search to find methods for suicide?

  • Honytawk@lemmy.zip · 3 days ago

    What pushing?

    The LLM answered the exact query the researcher asked for.

    That is like ordering knives and getting knives delivered. Sure, you can use them to slit your wrists, but that isn’t the seller’s responsibility.

    • Trainguyrom@reddthat.com · 2 days ago

      There are people trying to push AI counselors, and if AI counselors can’t spot obvious signs of suicidal ideation, they ain’t doing a good job of filling that role.

  • sad_detective_man@leminal.space · 2 days ago

    imma be real with you, I don’t want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

    • pugnaciousfarter@literature.cafe · 2 days ago

      I think the difference is that ChatGPT is very personified. It’s as if you were talking to a person, as compared to searching for something on Google. That’s why a headline like this feels off.

    • Lovable Sidekick@lemmy.world · 2 days ago

      I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional clues. That’s assuming the designers were competent.

  • jjjalljs@ttrpg.network · 1 day ago

    AI is a mistake and we would be better off if the leadership of OpenAI was sealed in an underground tomb. Actually, that’s probably true of most big org’s leadership.

    • mexicancartel@lemmy.dbzer0.com · 1 day ago

      The whole idea of funeral companies is astonishing to me as a non-American. Lmao, do whatever with my body; I’m not gonna pay for that before I’m dead.

      • Sergio@slrpnk.net · 1 day ago

        The idea is that you figure all that stuff out for yourself beforehand, so your grieving family doesn’t have to make a lot of quick decisions.

  • Nikls94@lemmy.world · 3 days ago

    Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine.

    • CTDummy@aussie.zone · 3 days ago

      Yeah, these “stories” reek of blaming a failing (bordering on non-existent in some areas) mental health care apparatus on machines that predict text. You could get the desired results just by googling “tallest bridges in x area”. That isn’t a story that generates clicks, though.

    • fckreddit@lemmy.ml · 3 days ago

      Being ‘moral’ means to have empathy. But empathy is only possible between two beings that share experiences and reality, or at least some aspects of it. LLMs don’t have experiences; they build their weights from training data. An LLM is fundamentally a computer program. Textual information alone is not enough to build deep context. For example, when I say “this apple is red”, anyone reading this can easily visualize a red apple because of their experience of seeing an apple. That cannot be put into text, because it is a fundamental part of human experience that is not available to a computer program, as of yet.

      At least that is my hypothesis. I can very obviously be wrong, which is another fundamentally human experience.

      • Zikeji@programming.dev · 2 days ago

        This reply is more of a light hearted nitpick and not replying to the substance of your comment but…

        For example, when I say “this apple is red”, anyone reading this can easily visualize a red apple because of their experience of seeing an apple.

        To be fair you said anyone, not everyone, but as someone with aphantasia I can’t relate to this. I can’t visualize an apple.