• Wololo@lemmy.world · 46 points · 1 year ago

    I literally broke down into tears doing this one night. I was running something that would take hours to complete and noticed an issue at maybe 11pm. Tried to troubleshoot and could not for the life of me figure it out. Thought to myself, surely ChatGPT can help me figure this out quickly. Fast forward to 3am, on a work night: “no, as stated several times prior, this will not resolve the issue, it causes it to X, Y, Z, when it should be A, B, C. Do you not understand the issue?”

    “I apologize for any misunderstanding. You are correct, this will not cause the program to A, B, C. You should…” Inserts the same response it’s been giving me for several hours

    It was at that moment that I realized these large language models might not currently be as advanced as people make them out to be

    • tweeks@feddit.nl · 4 points · 1 year ago

      Might I ask if you were using ChatGPT 3 or 4? I had this as well, got sent in circles for hours, with 3. Then I used 4.

      Only two bloody messages back and forth and I got my solution.

      • Wololo@lemmy.world · 2 points · 1 year ago

        If I remember correctly it should have been GPT-4; of course, there is always a chance it was 3.5.

        Since then I’ve learned much better ways to kind of manipulate it into answering my questions more precisely, and that seems to do the trick

          • tweeks@feddit.nl · 1 point · 1 year ago

            Yes, and 4 has access to several custom plugins, live web browsing (temporarily disabled, though) and a Python interpreter (a soft launch, as I can use it but haven’t seen a release post yet). All in beta, though.

    • twitterfluechtling@lemmy.pathoris.de · 2 points · 1 year ago

      They are trained to give answers which sound convincing at first glance, and for simple questions in most fields that strongly correlates with the correct answer. So, asking something simple on a topic I have no clue about has a high likelihood of yielding the answer I’m looking for.

      The problem is, if I have no clue, the only way to know whether I’ve exceeded the “really simple” realm is by trying the answer and failing, because ChatGPT has no concept of verifying its own answers or identifying its own limitations, or even of “learning” from its mistakes, as such.

      I do know some very similar humans, though: Very assertive, selling guesses and opinions as facts, overestimating themselves, never backing down. ChatGPT might replace tech-CEOs or politicians 😁

      • Wololo@lemmy.world · 1 point · 1 year ago

        It’s entirely possible! I remember listening to a podcast on AI where they mentioned someone once asked the question “which mammal lays the largest eggs”, to which the AI responded with elephants, and then proceeded to argue with the user that it was right and he was wrong.

        It has become a lot easier as I’ve learned how to kind of coach it in the direction I want, pointing out obvious errors and showing it what I’m really looking to do.

        AI is a great tool, when it works. As the technology improves I’m sure it will rapidly get better.

        • blargh1111@lemmy.one · 1 point · 1 year ago

          So is the answer a platypus? I think that’s the only mammal that lays eggs, but now I’m wondering about echidnas.

    • kicksystem@lemmy.world · 3 points (1 down) · 1 year ago

      Oh yeah. I was learning some Haskell with the “help” of GPT-4. It sent me down a super frustrating rabbit hole where, in the end, I concluded that I knew Haskell better than GPT-4 and it had been wrong from the very start 🤷‍♂️

      • Wololo@lemmy.world · 2 points · 1 year ago

        When you end up resorting to saying things like “wow, this is wonderful, but… it breaks my code into a million tiny pieces” or “for the love of God, do you have any idea what you’re actually doing?”, it’s a sign that perhaps Stack Overflow is still your best (and only) ally.