• ALoafOfBread@lemmy.ml · 19 days ago

    Most of the article is paywalled, but the main points seem to be that AI work is less creative/lower quality & people spend more time fixing it than they would have making it.

    That has not been my experience. On the 'less creative' point, that's fair - I don't think LLMs can be creative. But they can summarize information or rephrase/expand on things I say based on provided context. So I spend much less time on formatting and draft creation for text-based documents. I can have an agent draft things for me and then I just tidy up.

    As for low quality work products, again, not my experience. I use agentic AI regularly to automate simple but repetitive business tasks that would take me much longer to write code to automate. I am not an engineer, I am an analyst/consultant. I can code some things, but it is often not worth the time investment (many tasks are one-offs, etc).

    A friend of mine made an AI agent that can interpret pictures of charts and find supporting data in our databases (to find out what other teams referenced for their analyses) and/or make a copy of the chart and make modifications to it. Or it can create seaborn charts from text descriptions using data from our database. Now a team of non-technical users can make seaborn charts without having to know python. That is pretty powerful in terms of saving time & expanding productivity.
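    As a rough sketch of how that kind of pipeline can work (the function names and the toy parsing logic here are my own illustration, not my friend's actual implementation): the agent's job is essentially to turn a plain-English request into a small chart spec, which then maps onto the matching seaborn call.

    ```python
    import re

    # Illustrative mapping from chart kinds in a user's request to seaborn
    # function names (e.g. "barplot" -> sns.barplot(data=df, x=..., y=...)).
    CHART_KINDS = {"bar": "barplot", "line": "lineplot", "scatter": "scatterplot"}

    def parse_chart_request(text):
        """Deliberately naive parser for requests shaped like
        '<kind> chart of <y> by <x>' - an LLM agent replaces this step
        with far more flexible natural-language understanding."""
        m = re.match(r"(\w+) chart of (\w+) by (\w+)", text.strip().lower())
        if not m:
            raise ValueError(f"could not parse: {text!r}")
        kind, y, x = m.groups()
        if kind not in CHART_KINDS:
            raise ValueError(f"unsupported chart kind: {kind}")
        # The spec is what gets handed to seaborn along with a DataFrame
        # pulled from the database.
        return {"func": CHART_KINDS[kind], "x": x, "y": y}

    spec = parse_chart_request("Bar chart of revenue by region")
    # spec -> {"func": "barplot", "x": "region", "y": "revenue"}
    ```

    The point of structuring it this way is that the LLM only produces a small, validatable spec rather than arbitrary code, which keeps the non-technical users a step removed from raw python.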

    It’s easy to shit on the tech, but it has legitimately useful applications that help productivity.

    Edit: downvote if you want, but it is ignorant to say that LLMs only produce garbage. It very much depends on the user and on the application.

    • quetzaldilla@lemmy.world · 19 days ago

      AI made a $2M mistake at the public accounting firm I worked at.

      Management responded by blaming and firing an entire team for not double-checking the AI output, even though it was literally impossible for them to do so due to the volume of the output and lack of experience.

      This will be you, sooner or later.

      • ALoafOfBread@lemmy.ml · 19 days ago

        I understand your perspective, but I do review the code. I also do extensive testing. I don’t use packages I’m unfamiliar with. I still read the docs. I don’t run code I don’t understand.

        Again, the quality of the output really comes down to the user and the application. It is still faster for me to do what I’ve outlined above and it makes automating some tasks worth it in terms of ROI that otherwise wouldn’t be.

        • quetzaldilla@lemmy.world · 12 days ago

          I may be a tax professional today, but I've been passionate about computers since before computers had GUIs.

          Your computer most likely uses plugins that I wrote back when I was obsessed with programming, especially if you use Linux systems.

          Basically, I was one of those who wrote the documentation you depend on when you are troubleshooting your AI code.

          To do your vibe coding, you depend on tools that people like me designed and maintained over decades. And once we go, these tools will break and you will be helpless if others do not step up with the same level of passion and intellectual curiosity that is critical to problem solving processes.

          AI cannot simply build new tools because it lacks the intellectual curiosity that drives humans to understand their environment, identify problems, and work collaboratively with others.

          AI is not intelligent. It is only capable of looking backwards, not forward. It is a performative regurgitation of information gathered and synthesized by others, and it does that very poorly.

          All LLMs are hallucinating and going psychotic even with billions of dollars being thrown at the problem, because it is a problem that cannot be fixed - it comes from the very nature of how LLMs are put together.

          My advice to you is to reduce your AI usage and practice making things from scratch.

          Not just coding, but other processes too like baking, cooking, repairing objects, writing, painting, etc.

          Thinking and learning are literally the best parts of being human, and all AI will ever teach you is how to avoid thinking and it will steal all the opportunities for you to genuinely learn something new.

          Because even after a twenty year hiatus, I can probably do whatever coding you are doing-- but I can do it without relying on AI.

          • ALoafOfBread@lemmy.ml · 12 days ago

            I am also learning to code and have been for years casually. In fact, I can program basic stuff in a couple languages.

            I do not believe I am harming a single person by making scripts for myself and a small team of highly non-technical folks using AI, which allows me to produce better code while still being able to do my day job (which is not programming).

            In fact, I learn a lot by using AI and having to do research. I learn more than I would on my own because being able to use LLMs to assist me has allowed me to take on projects that wouldn’t make sense for me to attempt if I had to spend weeks to get to the same outcome.

            I am well aware of how LLMs work. I have read numerous articles, white papers, talked to ML engineers, etc. I know their limitations pretty well. I think they are much less capable than almost all laypeople I have spoken to believe.

            All that is to say: LLMs have some legitimate use cases. They are wasteful, inaccurate, etc etc. But they have use cases. People are hyperbolic in both directions on them.

            • quetzaldilla@lemmy.world · 11 days ago

              When experts across multiple disciplines, all across the world, are sounding the alarms about AI, that is not being hyperbolic.

              You are basically learning from something incapable of teaching you the fundamentals, and which will never challenge you or help you refine your own critical thinking and research skills the way a mentor or a professor will.

              When you use AI, all you are really doing is helping your employer make a profit and lay the groundwork to justify firing you and your colleagues down the road.

              You know what I made with my programming and coding skills?

              Patches and updates so you could have more fun playing with your computer, making digital art apps more streamlined and easy to use, adding accessibility features for the disabled.

              AI is just there to make you more “efficient” so you can make someone else a profit, all while the billionaire owners continue to rape the planet and everything in it.

              You and I are on the same team as far as I’m concerned. I’m trying to look out for you, not fight you.

      • ALoafOfBread@lemmy.ml · 19 days ago

        No, literally nothing like what I said. It could still be garbage if you didn’t understand or review the output. That’s why you understand and review the output.

    • Eheran@lemmy.world · 19 days ago

      Lemmy is mostly anti-LLM, hence the downvotes, regardless of how you use it.