My short response: yes.

  • nutsack@lemmy.dbzer0.com · ↑6 · 6 hours ago

    Marx talked about it. With sufficient automation, the value of labor collapses. Under socialism, this is a good thing; under capitalism, it’s a bad thing.

    • pineapple@lemmy.ml · ↑1 · edited · 1 hour ago

      THIS!

      If AI takes all our jobs, the only way forward is communism; otherwise the working class will collapse and the capitalist class will collapse alongside it.

  • QuinnyCoded@sh.itjust.works · ↑1 · edited · 6 hours ago

    No. I think we’re essentially at the point where AI will stop improving in the LLM department, though image/video generation might still get better.

    I assume within 5 years CEOs will stop advertising things as AI-generated, but stuff like shitty t-shirts will still use AI; it just won’t be marketed that way. Back in the day, things were marketed as plastic as a positive before that slowly became a negative selling point, and I assume AI will be similar.

    Other than phones, that is. There’s no other improvement they can market besides gimmicks or nostalgia bait.

  • lightnsfw@reddthat.com · ↑11 · 16 hours ago

    No, it’s going to be bad in really stupid ways that aren’t as cool as what happens when it goes bad in the movies.

  • DJKJuicy@sh.itjust.works · ↑10 · 18 hours ago

    If/when we actually achieve Artificial Intelligence, then maybe it would be a concern.

    What we have today are LLMs, which are big dumb parrots that just say things back to you that match a pattern. There is no actual intelligence.

    Calling our current LLMs “Artificial Intelligence” is just marketing. LLMs have been possible for a while but until recently we just didn’t have the processing power at the scale we have now.

    Once everyone realizes they’ve been falling for a marketing campaign and that we’re not very much closer to AI than we were before LLMs blew up, then LLMs will just become what they actually are: a tool that enhances human intelligence.

    I could be wrong though. If so, I, for one, welcome our new AI overlords.

      • DJKJuicy@sh.itjust.works · ↑1 ↓2 · 16 hours ago

        I don’t think we’re any closer to AGI due to LLMs. If you take away all the marketing misdirection, to achieve AGI you would have to have artificial rational thought.

        LLMs have no rational thought. They just don’t. That’s not how they’re designed.

        Again, I could be wrong. If so, I was always in support of the machines.

        • SkaveRat@discuss.tchncs.de · ↑3 · 16 hours ago

          I don’t think we’re any closer to AGI

          Never said we were. Just that LLMs are included in the very broad definition that is “AI”.

          • demonquark@lemmy.ml · ↑3 · 7 hours ago

            Tbf, the phrase “as the movies say” makes it reasonable to assume that OP meant AGI, not the broad definition of AI.

            I mean, when is the last time you saw a movie about the dangers of the k-nearest neighbor algorithm?

  • Jhex@lemmy.world · ↑6 · 19 hours ago

    What movie?

    Terminator? No, our level of AI is ridiculously far from that.

    The Big Short? Yes, that bubble is going to pop and bring the world economy down.

  • sylver_dragon@lemmy.world · ↑2 · 21 hours ago

    Short answer, no.

    Long answer: We are a long way off from having anything close to the movie-villain level of AI. Maybe we’re getting close to the paperclip-manufacturing-AI problem, but I’d argue that even that is often way overblown. The reason I say this is that such arguments are quite hand-wavy about the leaps in capability which would be required for those things to become a problem. The most obvious is the leap from controlling the devices an AI is intentionally hooked up to, to devices it’s not. And it also needs to make that jump without anyone noticing and asking, “hey, what’s all this then?” As someone who works in cybersecurity for a company which does physical manufacturing, I can see how it would get missed for a while (companies love to under-spend on cybersecurity). But eventually enough odd behavior gets picked up. And the routers and firewalls between manufacturing and anything else do tend to be the one place companies actually spend on cybersecurity. When your manufacturing downtime losses are measured in millions per hour, getting a few million a year for NDR tends to go over much better. And no, I don’t expect the AI to hack the cybersecurity; it would first need to develop that capability. AI training processes require a lot of time failing at a task, and that training is going to get noticed. AI isn’t magically good at anything, and while the learning process can be much faster, that speed is going to lead to a shit-ton of noise on the network. And guess what: we have AI and automation running on our behalf as well, and those are trained to shut down rogue devices attacking the cybersecurity infrastructure.

    “Oh wait, but the AI would be sneaky, slow and stealthy!” Why would it? What would it have in its current model which would say “be slow and sneaky”? It wouldn’t; you don’t train AI models to do things you don’t need them to do. A paperclip-optimizing AI wouldn’t be trained on using network penetration tools. That’s so far outside the needs of the model that the only thing it could introduce is more hallucinations and problems. And given all the Frankenstein’s Monster stories we have built and are going to build around AI, as soon as we see anything resembling an AI reaching out for abilities we consider dangerous, it’s going to get turned off. And that will happen long before it has a chance to learn about alternative power sources. It’s much like zombie outbreaks in movies: for them to move much beyond patient zero requires either something really, really special about the “disease” or comically bad management of the outbreak. Sure, we’re going to have problems as we learn what guardrails to put around AI, but the doom-and-gloom version where only one mistake is needed is way overblown. There are so many stopping points along the way from single-function AI to world-dominating AI that it’s kinda funny. And many of those stopping points are the same “the attacker (humans) only needs to get lucky once” situation. So no, I don’t believe that the paperclip-optimizer AI problem is all that real.

    That does take us to the question of a real general-purpose AI being let loose on the internet to consume all human knowledge and become good at everything, which then decides to control everything. And maybe that might be a problem, if we ever get there. Right now, that sort of thing is so firmly in the realm of sci-fi that I don’t think we can meaningfully analyze it. What we have today (fancy neural networks, LLMs and classifiers) puts us in the same ballpark as Jules Verne writing about space travel. Sure, he might have nailed one or two of the details, but the whole thing was so much more fantastically complex and difficult than he had any ability to conceive. Once we are closer to it, I expect we’re going to see that it’s not anything like we currently expect it to be. The computing power requirements may also limit its early deployment to large universities and government projects, keeping its processing power well centralized. General-purpose AI may well have the same decapitation problem humans do: it can have fantastical abilities, but it needs really powerful data centers to run, and those bring all the power, cooling, and not-getting-blown-the-fuck-up-with-a-JDAM problems of current AI data centers. Again, we could go back and forth making up ways for AI to techno-magic its way around those problems, but it’s all just baseless speculation at this point. And that speculation will also inform the guardrails we build in at the time. It would boil down to the same game children play where they shoot each other with imaginary guns and have imaginary shields, and each keeps re-imagining their guns and shields to defeat the other’s. So ya, it might be fun for a while, but it’s ultimately pointless.

  • tyo_ukko@sopuli.xyz · ↑45 ↓1 · 2 days ago

    No. The movies get it all wrong. There won’t be terminators and rogue AIs.

    What there will be is AI slop everywhere. AI news sites already produce hallucinated articles, which other AIs refer to and use as training data. Soon you won’t be able to believe anything you read online, and fact-checking will be basically impossible.

    • unwarlikeExtortion@lemmy.ml · ↑5 · edited · 23 hours ago

      Soon you won’t be able to believe anything you read online.

      That’s a bit too blanket of a statement.

      There are, always were, and always will be reputable sources. Online or in print. Written or not.

      What AI will do is increase the amount of slop disproportionately. What it won’t do is suddenly make the real, actual, reputable sources magically disappear. Finding them may become harder, but people will find a way - as they always do. New search engines, curated indexes of sites. Maybe even something wholly novel.

      .gov domains will be as reputable as the administration makes them - with or without AI.

      Wikipedia, so widely hated in academia, has been shown to be at least as factual as Encyclopedia Britannica. It may be harder for it to deal with spam than it was before, but it mostly won’t be fazed.

      Your local TV station will spout the same disinformation (or not) - with or without AI.

      Using AI (or not) is a management-level decision. What use of AI is or isn’t allowed is as well.

      AI, while undeniably a gamechanger, isn’t as big a gamechanger as it’s often sold as, and the parallels between the AI bubble and the dot-com bubble are staggering, so bear with me for a bit:

      Was dot-com (the advent of the corporate worldwide Internet) a gamechanger? Yes.

      Did it hurt the publishing industry? Yes.

      But is the publishing industry dead? No.

      Swap “AI” for dot-com and “credible content” for the publishing industry and you have your boring, but realistic answer.

      Books still exist. They may not be as popular, but they’re still a thing. CDs and vinyl as well. Not ubiquitous, but definitely chugging along just fine. Why should “credible content” die, when the disruption AI causes to the intellectual supply chain is so much smaller than suddenly needing a single computer and an Internet line instead of an entire large-scale printing setup?

    • pilferjinx@piefed.social · ↑7 ↓2 · 2 days ago

      Unless we have a bot that’s dedicated to tracing the origin of online information and can roughly evaluate its accuracy against real events.

    • Lunatique Princess@lemmy.ml (OP) · ↑3 ↓13 · 2 days ago

      I agree with the slop part, but you can’t say the movies get it all wrong if it hasn’t gotten to the point where they can be proven or disproven yet.

      • deadcade@lemmy.deadca.de · ↑4 ↓1 · 1 day ago

        Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to predicting language and text, and will never be able to “think” in concepts or adapt in real time to new situations.

        Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules would require Stockfish to be re-trained from scratch, whereas humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that the output is likely valid notation, but they have no concept of what chess even is, so they will spit out nearly random moves, often without following the rules.
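
        To make that “valid-looking notation without understanding the game” point concrete, here’s a minimal sketch (my own illustration, not from the comment above). It assumes the python-chess library (pip install chess) and a hypothetical ask_llm_for_move() stand-in for whichever model you’d query:

        ```python
        # Sketch: check whether a move string an LLM produced is actually legal.
        # `ask_llm_for_move` is a hypothetical placeholder, not a real API.
        import chess

        def ask_llm_for_move(fen: str) -> str:
            """Hypothetical: prompt some LLM with a FEN position, return a SAN move."""
            return "Nf3"  # placeholder answer

        board = chess.Board()                     # standard starting position
        proposed = ask_llm_for_move(board.fen())

        try:
            board.push_san(proposed)              # raises ValueError if not legal here
            print(f"{proposed} is legal in this position")
        except ValueError:
            print(f"{proposed} looks like chess notation but isn't a legal move")
        ```

        The point the sketch illustrates: the model only ever sees move text, so legality has to be checked outside of it.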

        LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. While current LLMs and generative AI do pose a risk (overwhelming amounts of slop and misinformation, which could affect human cultural development, and a human deciding to give an LLM external influence over anything, which could have a major impact), it’s nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources for it.

        Since the classification for “AI” will probably include “AGI”, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: an AGI does not simply “transfer itself onto a smartphone” in the real world (or an airplane, a car, you name it). It will exist in a massive datacenter, and its power can be shut off. If AGI does get created and causes a massive incident, it will likely be during this time. That would cause whatever real-world entity created it to realize there should be safeguards.

        So to answer your question: No, the movies did not “get it right”. They are overexaggerated fantasies of what someone thinks could happen by changing some rules of our current reality. Artwork like that can pose some interesting questions, but when it’s trying to “predict the future”, it often gets things wrong in ways that change the answer to any questions asked about the future it predicts.

      • BlueSquid0741@lemmy.sdf.org · ↑13 ↓2 · 2 days ago

        The movies depict actual AI. That is, machines/software that is sentient and can think and act for itself.

        The future is going to be more of the shit we have now- LLMs / “guessing software”.

        But also, why ask the question if you think the answer can’t be given yet?

  • comfy@lemmy.ml · ↑1 · 19 hours ago

    “As bad”… not quite, and not in the same way. As other people have said, there’s no conscience to AI, and I doubt there will be any financial incentive to develop one capable of “being evil” or pulling off some doomsday takeover. It’s a tool: it will continue to be abused by malicious actors, and idiots will continue to trust it for things it can’t do properly, but this isn’t like the movies where it is malicious or murderous.

    It’s perfectly capable of, say, being used to push people into personalized hyperrealities (consider how political advertising was microtargeted in the Cambridge Analytica scandal, and how convincing fake AI imagery can be at a glance). It’s a more boring dystopia, but a powerfully bad one nonetheless, capable of deconstructing societies to a large degree.

  • UltraGiGaGigantic@lemmy.ml · ↑7 ↓1 · edited · 1 day ago

    AI (once it actually exists) is just a tool. Much like other tools, its impact will depend on who is using it and what for.

    Who do you feel has the most agency in our current status quo? What are they currently doing? These will answer your question.

    It’s the 1%, and they will build a fully automated army and get rid of all but the sexiest of us to keep as sex slaves.

    This is worth it because capitalism is the most important thing on planet earth. Not humanity, capitalism. Thus the vasectomy. The 1% can make their own slaves. And with AI they will.

      • Dragonstaff@leminal.space · ↑3 · 1 day ago

        I don’t feel like you read the entire comment you replied to.

        Yes, AI is a tool with horrifying implications. Machine learning has some interesting use cases, but if one had any hope that it would be implemented well, that should be dashed by the way it is run by the weirdest bros imaginable with complete contempt for the concept of consent.

  • Gates9@sh.itjust.works · ↑13 · 1 day ago

    First it’s gonna crash the economy because it doesn’t work, then it’s gonna crash the economy because it does.

  • HiddenLayer555@lemmy.ml · ↑5 · edited · 1 day ago

    Short answer: No one today can know with any certainty, because we’re nowhere close to developing anything resembling the “AI” in the movies. Today’s generative AI is so far from artificial general intelligence that it would be like asking someone from the Middle Ages, when the only forms of remote communication were letters and messengers, whether social media will ruin society.

    Long answer:

    First we have to define what “AI” is. The current zeitgeist meaning of “AI” refers to LLMs, image generators, and other generative AI, which is nowhere close to anything resembling real consciousness and therefore can be neither evil nor good. It can certainly do evil things, but only at the direction of evil humans, who are the conscious beings in control. Same as any other tool we’ve invented.

    However, generative AI is just one class of neural network, and neural networks as a whole were the colloquial definition of “AI” before ChatGPT. There have been simpler, single-purpose neural networks before it, and there will certainly be even more complex neural networks after it. Neural networks are modeled after animal brains: nodes are analogous to neurons, which either fire fully or don’t fire at all depending on input from the neurons they’re connected to; connections between nodes are analogous to connections between axons and dendrites; and neurons can up- or down-regulate input from different neurons, similar to the weights applied in neural networks. Obviously, real nerve cells are much more complex than the simple mathematical representations in neural networks, but neural networks do show traits similar to networks of neurons in a brain, so it’s not inconceivable that in the future we could develop a neural network as complex as or more complex than a human brain, at which point it could start exhibiting traits suggestive of consciousness.
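
    As a toy illustration of that analogy (my own sketch, not part of the original comment), a single artificial “neuron” is little more than a weighted sum plus a firing threshold:

    ```python
    # Sketch of one artificial neuron: weighted inputs, fire-or-don't output.
    # The weights play the role of up-/down-regulated connections between neurons.
    def artificial_neuron(inputs, weights, threshold):
        """Return 1 ("fires") if the weighted input sum reaches the threshold, else 0."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Two upstream "neurons" that either fired (1) or stayed silent (0):
    print(artificial_neuron([1, 0], weights=[0.7, 0.3], threshold=0.5))  # 1: fires
    print(artificial_neuron([0, 1], weights=[0.7, 0.3], threshold=0.5))  # 0: stays silent
    ```

    Stack enough of these, let training adjust the weights, and you get the networks described above; how far that scaling can go is the open question.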

    This brings us to the movie definition of “AI,” which is generally “conscious” AI as intelligent as or more intelligent than a human: a being with an internal worldview, independent thoughts and opinions, and an awareness of itself in relation to the world. Those are currently traits only brains are capable of, and they mark the point at which concepts like “good” or “evil” can maybe start to apply. Again, just because neural networks are modeled after animal brains doesn’t prove they can emulate a brain as complex as ours, but we also can’t prove they definitely won’t be able to with enough technical advancement. So the most we can say right now is that it’s not inconceivable, and if we do ever develop consciousness in our AI, we might not even know until much later, because consciousness is difficult to assess.

    The scary part about a hypothetical artificial general intelligence is that once it exists, it can rapidly gain intelligence at a rate orders of magnitude faster than the evolution of intelligence in animals. Once it starts doing its own AI research and creating the next generation of AI, it will become uncontrollable by humanity. What happens after or whether we’ll even get close to this is impossible to know.

  • BarrelsBallot@lemmygrad.ml · ↑3 · 1 day ago

    It will be as bad as it is now with an even higher intensity.

    We will see it continue to be used as a substitute for research, learning, critical or even surface level thinking, and interpersonal relationships.

    If and when our masters create an AI that is actually intelligent, and maybe even sentient as depicted in movies, it will be a thing that provides biased judgments behind a veneer of perceived objectivity due to its artificial nature. People will see it as a persona completely divorced from the prejudices of its creators, as they do now with ChatGPT. And whoever can influence this new “objective” truth will wield considerable power.

      • BarrelsBallot@lemmygrad.ml · ↑1 · 20 hours ago

        Trust that I agree with you on this; I use the word “master” intentionally though, as we are subjected to their whims without any say in the matter.

        There are also many of us who are (unwittingly) dependent on or addicted to their products and services. You and I both know plenty of people who give in to almost every impulse incentivized by these products, especially when they come in the form of entertainment.

        Our communities are now chock-full of slaves and solicitors. A master is an enemy, yes, but only when his slaves know who owns them.

  • collapse_already@lemmy.ml · ↑6 · 2 days ago

    It will be worse than the movies because they don’t portray how every mundane thing will somehow be worse. Tech support? Worse. Customer service? Worse. Education? Worse. Insurance? Worse. Software? Worse. Health care? Worse. Mental health? Worse. Misinformation? Pervasive. Gaslighting? Pervasive.

    • III@lemmy.world · ↑2 · 6 hours ago

      Movie AI isn’t what we are headed for. This is what we are headed for. Where’s that movie?