I know there are other plausible reasons, but I thought I’d use this juicy title.

What does everyone think? As someone who works outside of tech I’m curious to hear the collective thoughts of the tech minds on Lemmy.

  • darth_helmet@sh.itjust.works · 1 year ago · +80/-1

    Moving too dumb. Something caused Microsoft to ban OpenAI’s ChatGPT for its employees last week, probably a massive security blunder that we’ll hopefully get to find out about eventually.

    • alternative_factor@kbin.social · 1 year ago · +42/-1

      I think he was probably lying about where he got all the data used to train the model. I’m guessing training a model on tons of copyrighted material and stolen user data won’t be legal in the near future.

        • alternative_factor@kbin.social · 1 year ago · +12

          Yeah, I’ve done a tiny bit of AI stuff in what I do (biology), and I think it’s very sus that they could build such a strong model out of data that costs lots of money to obtain. The reason the algorithms in my field of biology are so strong is that the NCBI has the genomes of everything that’s been sequenced FOR FREE, because obviously you don’t want people patenting genomes, and it should all be free for science, etc.

          Which raises the question: how did a startup that began as a non-profit get that much user data and keep costs low? I know you can buy user data, and I’m not sure how much it costs to buy a bunch of Google Docs from a data broker, but if you buy from hackers who just pulled off a breach, or who used some illegal crawler, you can probably cut that down to prices a non-profit could afford.

        • alternative_factor@kbin.social · 1 year ago · +1

          Very true, but they don’t always win, and besides, there are other lobbyists out there batting for Disney. If there’s one hint of Mickey Mouse™ in their data set, they might as well just dissolve the company now.

  • Pratai@lemmy.ca · 1 year ago · +37/-1

    Can someone explain to me why everyone cares so much about this guy? Not trying to troll or anything, but I don’t get why some random guy in the tech field is getting so much coverage.

    • dgmib@lemmy.world · 1 year ago · +78/-1

      It’s ridiculously unusual for a board to actually fire a CEO. Usually if the board thinks a new CEO is needed, even if the CEO doesn’t agree with the decision, a transition plan is announced: the CEO is “stepping down” or “stepping aside” for the “next phase of growth” or whatever. It gets a massive positive spin, and the departing CEO is paid a ridiculous severance to go along with the plan publicly.

      It’s very negative press to have to outright fire a CEO. Especially in a case like this when the CEO saw the company through the kind of growth that every startup has wet dreams about.

      Something huge happened, and the world is speculating rampantly about what that was.

      • Pratai@lemmy.ca · 1 year ago · +9/-19

        Okay, so that explains why it’s rare, but not why anyone cares about it. I’m sure CEOs getting fired from companies happens more often than anyone thinks. My question was why everyone cares so much about THIS particular guy.

        • Fubarberry@sopuli.xyz · 1 year ago · +36

          AI has been the hot thing in tech for a while, and as CEO of OpenAI (the company that made ChatGPT, kicked off the current AI explosion, and leads the field), he’s been the face of AI.

          It’s kinda like if Facebook fired Mark Zuckerberg in the middle of the explosion of social media.

        • Alto@kbin.social · 1 year ago · +5

          There’s the potential that the root cause behind the firing could end up having ramifications for the AI/wider tech sector. There’s no evidence pointing toward anything right now, so all of this is purely theoretical, but if he was, for example, somehow covering up major financial issues that severely impact OpenAI, you could see that affect the industry as a whole.

        • livus@kbin.social · 1 year ago · +5

          I think it’s because we’re interested in OpenAI, and what happened is relevant to its previous governance and/or the direction in which it’s going now.

        • BaroqueInMind@kbin.social · 1 year ago · +4/-3

          The answer is the same as for celebrities or monarchs: frankly, they make more money than you and live interesting lives thanks to the freedom that money provides, which is interesting to people who don’t have that same privilege.

          • Pratai@lemmy.ca · 1 year ago · +2/-10

            He’s not a celebrity or a monarch. He’s a tech dude. So again, what’s special about him, compared with any other tech dude who gets fired?

            • BaroqueInMind@kbin.social · 1 year ago · edited · +10/-3

              If you had any fucking reading comprehension you would have read that it is because he’s wealthy and influential.

              • Pratai@lemmy.ca · 1 year ago · +2/-3

                And if YOU had any fucking reading comprehension you’d understand that I’m asking why people care about a wealthy tech dude.

                As far as influential… tell me: who’s he influencing? Musk can be called wealthy and influential too.

                The majority of people wouldn’t have known this guy existed a week ago, and now he’s everywhere. I’m curious as to why.

                Hope this helps clear up your confusion.

                • all-knight-party@kbin.run · 1 year ago · +3

                  I mean, the guy headed up probably the largest AI company that exists, in the biggest new tech field there is, one with incredible potential to change the world for better or worse.

                  Purely based on him being at the forefront of the most interesting and novel industry in the world, regardless of anything else about him personally, I’m sure his position in the industry drives more than enough intrigue for people to pay attention to him, at least until they see what happens next.

    • Zeth0s@lemmy.world · 1 year ago · edited · +19

      Because he was CEO of a company in a critical position to define the future of the economy. Tech is currently the biggest and most influential economic sector of all, and by tech here we mean the digital world. There’s absolutely no comparable sector at the moment in terms of importance, not even pharma.

      It literally defines the modern economy. Within the field, OpenAI is an incredibly important company for the future relative success and power of the big tech companies.

      That’s why this is so important for the world economy.

      • Pratai@lemmy.ca · 1 year ago · +6/-2

        Thank you! This is what I was looking for. I get it now. Seems most people want to argue semantics and not actually answer the question.

        • BaroqueInMind@kbin.social · 1 year ago · +3

          No, most people, including myself, are just too dumb to put into text an explanation that answers your question succinctly.

    • Sekrayray@lemmy.world (OP) · 1 year ago · +7/-24

      Yeah, I have this completely unfounded gut feeling that they may have created something close to AGI internally, and that’s what led to them slamming on the brakes.

      Their weird for-profit/not-for-profit board structure makes me think it was crafted that way precisely for the event of a rapid acceleration; the pace and manner of this firing makes me wonder if it was a last-ditch effort to keep the genie from leaving the bottle.

      • ∟⊔⊤∦∣≶@lemmy.nz · 1 year ago · edited · +33/-2

        No, no need at all to worry about that kind of thing.

        AI (LLMs) is still just a box that spits things out when you put things in. A digital Galton board. That’s it.

        This is not going to take over the world.
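
        If you want a feel for the Galton board analogy, here’s a toy simulation in Python (my sketch, nothing to do with OpenAI’s actual stack): drop balls through rows of pegs, each peg bounces the ball left or right at random, and a tidy bell curve piles up at the bottom. Impressive-looking aggregate output from a dumb, fixed stochastic process.

        ```python
        import random
        from collections import Counter

        def galton(n_balls: int = 10_000, n_rows: int = 12) -> Counter:
            """Drop n_balls through n_rows of pegs; each peg bounces the ball
            left (0) or right (1). The final bin is the sum of the bounces."""
            return Counter(
                sum(random.random() < 0.5 for _ in range(n_rows))
                for _ in range(n_balls)
            )

        # Crude text histogram: the familiar bell shape emerges.
        for bin_index, count in sorted(galton().items()):
            print(f"{bin_index:2d} | {'#' * (count // 100)}")
        ```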

            • Cogency@lemmy.world · 1 year ago · edited · +3

              And also not that different from how most people would describe their fellow earthers.

              I.e., we aren’t that much more complicated than that when you get right down to the philosophical breakdown of what an “I” is.

        • Sekrayray@lemmy.world (OP) · 1 year ago · +7/-1

          I mean, I don’t think AGI necessarily implies singularity, and I doubt the singularity will ever come from LLMs. But when you look at human intelligence, one could make the argument that it’s a glorified input-output system just like LLMs are.

          I’m not sure. There’s a lot of things going on in the background with even human intelligence that we don’t understand.

          • agent_flounder@lemmy.world · 1 year ago · +4

            Yes, except human brains can learn things without the typical manual training and tweaking you see in ML. In other words, LLMs can’t just start from an initial “blank” state and train themselves autonomously. A baby starts from an initial state and learns about objects, calibrates their eyes, proprioception, and movement, then learns to roll over, crawl, stand, walk, and grasp, learns to understand language and then speak it, etc. Of course there’s parental involvement and all that, but not like someone training an LLM on a massive dataset.

          • xmunk@sh.itjust.works · 1 year ago · +4/-1

            Spin up AI Dungeon with ChatGPT and see how compelling it is once you run out of script.

            • Sekrayray@lemmy.world (OP) · 1 year ago · +5

              Really good point. I’ve actually messed around a lot with GPT as a 5e DM, and you’re right: as soon as it needs to generate unique content, it just leads you in an infinite loop that goes nowhere.

              • ∟⊔⊤∦∣≶@lemmy.nz · 1 year ago · +2

                I’ve had some amazing fantasy conversations with LLMs running on my own GPU. Family and world history, tribal traditions, flora and fauna, etc. It’s quite amazing and fun.
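
                For anyone curious about trying this, a minimal local setup can look something like the sketch below, using the llama-cpp-python bindings (the model path is just a placeholder for whatever GGUF file you’ve downloaded):

                ```python
                # Minimal local-LLM sketch (pip install llama-cpp-python).
                from llama_cpp import Llama

                llm = Llama(
                    model_path="./models/your-7b-model.gguf",  # placeholder path
                    n_gpu_layers=-1,  # offload all layers to the GPU if they fit
                )

                out = llm(
                    "Describe the founding myths and tribal traditions of a mountain clan:",
                    max_tokens=256,
                    temperature=0.8,
                )
                print(out["choices"][0]["text"])
                ```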

      • ilinamorato@lemmy.world · 1 year ago · +13/-1

        I’m very doubtful that an AGI is possible with our current understanding of technology.

        Current models have the appearance of intelligence because they’ve been trained on the entire Internet (which also has the appearance of intelligence), but it’s still at its core a predictive pattern matcher; a pile of linear algebra that can be stirred around to get an output. Useful. But if eight billion people all wrote down their answer to a question and we averaged them all out, we’d get a pretty good answer that appeared to be intelligent as well; and the human race as a whole isn’t a distinct intelligence.

        Data manipulated on a large scale, especially when it’s bounded by rules and perturbed with random noise, yields surprising and often even poignant results. That’s all AI is right now: a more-or-less average of the Internet. Your prompt just points it toward a particular corner of the Internet.
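
        The averaging point is easy to demo, by the way. A toy sketch (mine, not a claim about actual LLM internals): thousands of individually noisy guesses at a number average out to something that looks smart, even though no single guesser is.

        ```python
        import random

        TRUE_VALUE = 42.0  # the "right answer" nobody individually knows

        # Each guess is the truth plus a lot of personal noise.
        guesses = [TRUE_VALUE + random.gauss(0, 25) for _ in range(8_000)]

        worst = max(guesses, key=lambda g: abs(g - TRUE_VALUE))
        print(f"worst guess:   {worst:.1f}")
        print(f"crowd average: {sum(guesses) / len(guesses):.1f}")  # lands near 42
        ```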

        • v_krishna@lemmy.ml · 1 year ago · +7

          the human race as a whole isn’t a distinct intelligence

          I don’t know that it’s quite that simple; (some) cognitive scientists and Marvin Minsky might disagree too. Pedantic asshattery aside, AGI might be an intelligence so fundamentally different from our own ego/narrative/first-person-perspective intelligence that we’d have trouble recognizing it as such.

          • ilinamorato@lemmy.world · 1 year ago · +3/-1

            Well the big thing is that, right now, the “intelligence” doesn’t exist without a prompt. It has no agency or continuity outside of our requests. It also has no reasoning or thought process that we can distinguish, just an algorithm. It’s fundamentally not distinct from basic computers, which means that if it is intelligence, so are our servers and smartwatches and satellite phones and Switch OLEDs.

          • ilinamorato@lemmy.world · 1 year ago · +3

            Yeah. I mean, quantum computing might upend some of my assumptions, but in the long run we’re probably going to have nailed down a decent definition of sentience before we have to wonder if computers have it.

  • donuts@kbin.social · 1 year ago · +23/-1

    How I like to think it went down…

    OpenAI board of directors and investors: So… it’s not a problem that our model is trained off a bunch of stuff that we don’t own and haven’t licensed, is it?

    Sam: Nope… not a problem at all!

  • kromem@lemmy.world · 1 year ago · +20

    There are multiple reports by now that it was because Altman was pushing product out too fast.

    So no need for speculation.

  • echo64@lemmy.world · 1 year ago · +17/-4

    We don’t know, but it likely had absolutely nothing to do with the actual technology and everything to do with maximizing investor returns.

      • echo64@lemmy.world · 1 year ago · +12

        99.999% of people had never heard of this guy before and won’t even hear this news; the face of the company is ChatGPT.

        • shapesandstuff@feddit.de · 1 year ago · +6

          Yeah I’ve dabbled plenty with LLMs and Generative stuff, but I have no clue who that is.

          Not everyone who uses a thing cares for the lore.

        • teawrecks@sopuli.xyz · 1 year ago · +5

          You could have said the same about Gordon Moore in the 70s, or Bill Gates in the 90s, or Zuckerberg in the late 00s.

          You actually can say the same about Steve Jobs, who was fired from Apple, only to return later. But back when he was fired, no one knew who he was; they just knew the Apple II.

  • davel [he/him]@lemmy.ml · 1 year ago · edited · +16/-4

    Because they weren’t able to sufficiently indoctrinate Altman into their cult.
    Pivot to AI: Replacing Sam Altman with a very small shell script

    Until Friday, OpenAI had a board of only six people: Greg Brockman (chairman and president), Ilya Sutskever (chief scientist), and Sam Altman (CEO), and outside members Adam D’Angelo, Tasha McCauley, and Helen Toner.

    Sutskever, the researcher who got OpenAI going in 2015, is deep into “AI safety” in the sense of Eliezer Yudkowsky. Toner and McCauley are Effective Altruists — that is to say, part of the same cult.

    Eliezer Yudkowsky founded a philosophy he called “rationality” — which bears little relation to any other philosophy of such a name in history. He founded the site LessWrong to promote his ideas. He also named “Effective Altruism,” on the assumption that the most effective altruism in the world was to give him money to stop a rogue superintelligent AI from turning everyone into paperclips.

    The “ethical AI” side of OpenAI are Yudkowsky believers, including Mira Murati, the CTO who is now CEO. They are AI doomsday cultists who say they don’t think ChatGPT will take over the world — but behave like they do think that.

    D’Angelo doesn’t appear to be an AI doomer — but presumably Sutskever convinced him to kick Altman out anyway.

    Yudkowsky endorsed Murati’s promotion to CEO: “I’m tentatively 8.5% more cheerful about OpenAI going forward.”

    We’ve written before about how everything in machine learning is hand-tweaked and how so much of what OpenAI does relies on underpaid workers in Africa and elsewhere. This stuff doesn’t yet work as any sort of product without a hidden workforce of humans behind it, pushing. The GPT series are just powerful autocomplete systems. They aren’t going to turn you into paperclips.
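
    The “powerful autocomplete” claim is easy to check for yourself. Here’s a minimal sketch using the small open GPT-2 model via Hugging Face transformers (a stand-in for the GPT series, not OpenAI’s actual code): the model’s entire job is to score which token is likely to come next.

    ```python
    # Next-token "autocomplete" demo (pip install transformers torch).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("OpenAI's board decided to", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only

    # Print the five tokens the model rates most likely to come next.
    for token_id in torch.topk(logits, k=5).indices:
        print(repr(tokenizer.decode([int(token_id)])))
    ```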

    Sam Altman was an AI doomer — just not as much as the others. The real problem was that he was making promises that OpenAI could not deliver on. The GPT series was running out of steam. Altman was out and about in the quest for yet more funding for the OpenAI company in ways that upset the true believers.

    A boardroom coup by the rationalist cultists is quite plausible, as well as being very funny. Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar. It’s standard for rationalists to call people who don’t buy their pitch liars.

    So what from normal people would be an accusation of corporate war crimes is, from rationalists, just how they talk about the outgroup of non-rationalists. They assume non-believers are evil.

    It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.

  • CanadaPlus@futurology.today · 1 year ago · edited · +11

    Judging by the other parties on the board who haven’t objected: too fast. It sounds like he lied about some sort of AI safety thing.

    • Dkarma@lemmy.world · 1 year ago · +5

      This was my gut reaction. He did something, or hid something, that opened the company up to liability, and this was the fastest way to mitigate the damage. I assume the bombshell hasn’t dropped yet.

      • CanadaPlus@futurology.today · 1 year ago · +3/-1

        FYI this was the board of the nonprofit, not the capped-profit subsidiary. One of the members is some sort of activist, even, so it wasn’t necessarily about money in any way.

        • weew@lemmy.ca · 1 year ago · +1

          It could still be about money. Non-profits can still get sued into oblivion.

          • CanadaPlus@futurology.today · 1 year ago · +1

            Not for failing to provide a profit, obviously. I guess if it’s embezzlement they would have a duty to act, but otherwise a nonprofit can shovel money into a literal furnace as long as it advances their mission (IANAL).

  • Hurculina Drubman@lemm.ee · 1 year ago · +7

    They discovered that it had a fatal flaw because he based it on his own personality, just like the M-5 from Star Trek TOS.

  • Night Monkey@sh.itjust.works · 1 year ago · edited · +3/-25

    OpenAI is playing it way too safe. They’re afraid of hurting people’s feelings and won’t touch many topics. I’m waiting for an AI that has a sense of humor and isn’t programmed to be a coward.

      • taladar@sh.itjust.works · 1 year ago · +7/-1

        I think that’s really the big, dirty secret of the AI industry right now: they’re not that great at producing intentional outcomes. It’s all a lot of trial and error, because nobody has a real understanding of how to change things incrementally without side effects in other parts of the behaviour.

        • donuts@kbin.social · 1 year ago · +7

          It’s almost as if machine learning is a black box that you superimpose massive amounts of random data onto.

    • Sekrayray@lemmy.world (OP) · 1 year ago · +2/-2

      Probably all done in the name of alignment. We only really have one shot to make an AGI that doesn’t kill everyone (or do other weird unaligned stuff).

      • FaceDeer@kbin.social · 1 year ago · +3

        I think we need to start distinguishing better between AGI and ASI. We may have only one shot at ASI (though that’s hard to predict since it’s inherently something unknowable at the current time) but AGI will be “just this guy, you know?” I don’t see why a murderous rogue AGI would be harder to put down than a murderous rogue human.

        • Sekrayray@lemmy.world (OP) · 1 year ago · +3

          Absolutely true. Thanks for the distinction.

          I think maybe the argument could be made that AGIs could expedite the creation of the singularity, but you’re correct in saying that the alignment problem matters less with rudimentary AGI.