I know there are other plausible reasons, but I thought I’d use this juicy title.
What does everyone think? As someone who works outside of tech I’m curious to hear the collective thoughts of the tech minds on Lemmy.
Moving too dumb. Something caused Microsoft to ban OpenAI’s tools for its employees last week, probably a massive security blunder that we’ll hopefully get to find out about eventually.
I think he was probably lying about where he got all the data used to train the model. I’m guessing that training a model on tons of copyrighted material and stolen user data won’t be legal in the near future.
I’ve been saying for a while that compute costs and copyright lawsuits are going to cause a real bubble burst.
Yeah, I’ve done a tiny bit of AI stuff in my field (biology), and I think it’s very sus that they could build such a strong model out of data that normally costs a lot of money. The reason the algorithms in my field are so strong is that NCBI hosts the genomes of everything that’s been sequenced FOR FREE, because obviously you don’t want people patenting genomes, and it should all be free for science, etc.
Which raises the question: how did a startup that began as a non-profit get that much user data and keep costs low? I know you can buy user data, and I’m not sure what it costs to buy a bunch of Google Docs from a data broker, but if you buy from hackers who just pulled off a data breach or used some illegal crawler, you can probably get prices down to what a nonprofit could afford.
It doesn’t have to be nefarious. The API changes at Twitter and Reddit were ostensibly about the fact that OpenAI et al. pretty much downloaded all their content for free.
Throw in the fact that you can ingest all of Wikipedia for free and you have a shitload of knowledge at your disposal.
I was under the impression that they crawled web sources, but it seems like lots of copyrighted work was used.
I hadn’t heard of getting “illegal” data sets before so I looked into it and it sounds like they might have done that. Wow.
Link for the curious: https://www.theverge.com/2023/7/9/23788741/sarah-silverman-openai-meta-chatgpt-llama-copyright-infringement-chatbots-artificial-intelligence-ai
I’d think lobbyists would disagree with you on that point.
Very true, but they don’t always win, and besides, there are other lobbyists out there batting for Disney. If there is one hint of Mickey Mouse™ in their data set, they might as well just dissolve the company now.
That would honestly be preferable.
They also launched copilot. Could be the actual reason…
Copilot is GPT-4.
Yes. I meant they want their employees using it through Copilot.
Can someone explain to me why everyone cares so much about this guy? Not trying to troll or anything, but I don’t get why some random guy in the tech field is getting so much coverage.
It’s ridiculously unusual for a board to actually fire a CEO. Usually, if the board thinks a new CEO is needed, even if the CEO doesn’t agree with the decision, a transition plan is announced: the CEO is “stepping down” or “stepping aside” ahead of the “next phase of growth” or whatever. It gets a massive positive spin, and the departing CEO is paid a ridiculous severance to go along with the plan publicly.
It’s very negative press to have to outright fire a CEO, especially in a case like this, when the CEO saw the company through the kind of growth that every startup has wet dreams about.
Something huge happened, and the world is speculating rampantly about what that was.
Okay, so that explains that it is rare, but not why anyone cares about it. I’m sure CEOs getting fired from companies happens more often than anyone thinks. My question was why everyone cares so much about THIS particular guy.
AI has been the hot thing in tech for a while, and as CEO of OpenAI (the company that made ChatGPT, kicked off the current AI explosion, and currently leads the field), he’s been the face of AI.
It’s kinda like if Facebook fired Mark Zuckerberg in the middle of the explosion of social media.
That makes sense. Thank you for clarifying it a bit better.
There’s the potential that the root cause behind the firing could end up having ramifications in the AI/wider tech sector. There’s no evidence pointing towards anything right now, so all of this is purely theoretical, but if he was, for example, somehow covering up major financial issues that severely impact OpenAI, you could see that affect the industry as a whole.
I think it’s because we’re interested in OpenAI, and what happened is relevant to its previous governance and/or the direction in which it’s going now.
The answer is the same as for celebrities or monarchs: frankly, they make more money than you and live interesting lives off the freedom that money provides, which is interesting to people who do not have that same privilege.
He’s not a celebrity or a monarch. He’s a tech dude. So again, what is special about him that is different from any other tech dude who gets fired?
If you had any fucking reading comprehension you would have read that it is because he’s wealthy and influential.
And if YOU had any fucking reading comprehension you’d understand that I’m asking why people care about a wealthy tech dude.
As far as influential… tell me: who’s he influencing? Musk can be called wealthy and influential also.
The majority of people wouldn’t have known this guy existed a week ago, and now he’s everywhere. I’m curious as to why.
Hope this helps clear up your confusion.
I mean, the guy headed up probably the largest AI company that exists, with AI being the largest new tech that exists, one with incredible potential to change the world, for better or worse.
Purely based on him being at the forefront of the most interesting and novel industry in the world, regardless of anything else about him personally, I’m sure his position in the industry drives more than enough intrigue for people to pay attention to him, at least until they see what happens next.
Because he was CEO of a company in a critical position to define the future of the economy. Currently the tech field is the biggest and most influential of all economic fields, and by tech here I mean the digital world. There’s absolutely no comparable sector at the moment in terms of importance, not even pharma.
It literally defines the modern economy. Within the field, OpenAI is an incredibly important company for the future relative success and power of the big tech companies.
That’s why it’s so important to the world economy.
Thank you! This is what I was looking for. I get it now. Seems most people want to argue semantics and not actually answer the question.
No, most people, including myself, are too dumb to convey in text an elaboration to you that answers your question succinctly.
He’s a wunderkind. Imagine Elon Musk, but this guy actually makes the product himself.
I mean, they did the Green Goblin move to him.
Too fast and without regard for safety. Ars Technica covered it well.
Yeah, I have this completely unfounded gut feeling that they may have created something close to AGI internally, and that led them to slam on the brakes.
Their weird for-profit and not-for-profit board structure makes me think it was crafted that way in the event of a rapid acceleration. The pace and format of this firing makes me wonder if it was a last-ditch effort to keep the genie from leaving the bottle.
No, no need at all to worry about that kind of thing.
AI (LLMs) is still just a box that spits things out when you put things in. It’s a digital Galton board. That’s it.
This is not going to take over the world.
I keep telling people it’s glorified autocomplete.
Just with a lot of training data.
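If it helps to make the “autocomplete” framing concrete, here’s a minimal toy sketch: a bigram model that predicts the next word from counts over its training text. This is obviously not how GPT works internally (that’s a transformer trained on billions of documents), but the input-to-output shape is the same idea.

```python
# Toy "glorified autocomplete": count which word followed which in the
# training text, then sample forward from those counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, length=6):
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the dog sat on the mat and"
```

Scale the training data up to most of the internet and the continuations start looking a lot smarter, but it’s the same trick.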
Well that’s also quite reductionist.
And also not that different from how most people would describe their fellow earthers.
I.e., we aren’t that much more complicated than that when you get right down to the philosophical breakdown of what an “I” is.
I mean, I don’t think AGI necessarily implies a singularity, and I doubt a singularity will ever come from LLMs. But when you look at human intelligence, one could make the argument that it is a glorified input-output system, like LLMs.
I’m not sure. There’s a lot of things going on in the background with even human intelligence that we don’t understand.
Yes, except human brains can learn things without the typical manual training and tweaking you see in ML. In other words, LLMs can’t just start from an initial “blank” state and train themselves autonomously. A baby starts from an initial state and learns about objects, calibrates their eyes, proprioception, and movement, then learns to roll over, crawl, stand, walk, and grasp, learns to understand language and then speak it, etc. Of course there’s parental involvement and all that, but not like someone training an LLM on a massive dataset.
Good point
Spin up AI Dungeon with ChatGPT and see how compelling it is once you run out of script.
Really good point. I’ve actually messed around a lot with GPT as a 5e DM, and you’re right: as soon as it needs to generate unique content, it just leads you in an infinite loop that goes nowhere.
I’ve had some amazing fantasy conversations with LLMs running on my own GPU. Family and world history, tribal traditions, flora and fauna, etc. It’s quite amazing and fun.
I’m very doubtful that an AGI is possible with our current understanding of technology.
Current models have the appearance of intelligence because they’ve been trained on the entire Internet (which also has the appearance of intelligence), but at its core each one is still a predictive pattern matcher: a pile of linear algebra that can be stirred around to get an output. Useful. But if eight billion people all wrote down their answer to a question and we averaged them out, we’d get a pretty good answer that appeared intelligent too, and yet the human race as a whole isn’t a distinct intelligence.
Data manipulated on a large scale, especially when it’s bounded by rules and perturbed with random noise, yields surprising and often even poignant results. That’s all AI is right now: a more-or-less average of the internet. Your prompt just points it toward a particular corner of the internet.
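The “pile of linear algebra” bit is pretty literal, by the way. A network’s forward pass is just matrix multiplies and simple nonlinearities. Here’s a minimal sketch with toy sizes and random weights (real models just stack far more, far bigger layers):

```python
# A forward pass really is linear algebra "stirred" into an output:
# multiply by a weight matrix, apply a nonlinearity, repeat, then
# squash the result into probabilities.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # layer 1 weights/bias
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)  # layer 2 weights/bias

def forward(x):
    h = np.maximum(0, W1 @ x + b1)         # linear map + ReLU
    logits = W2 @ h + b2                   # another linear map
    exp = np.exp(logits - logits.max())    # softmax -> probabilities
    return exp / exp.sum()

print(forward(rng.standard_normal(4)))  # distribution over 3 outputs
```

Training just nudges those weight matrices until the outputs match the data; there’s no step where reasoning gets bolted on.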
“the human race as a whole isn’t a distinct intelligence”
I don’t know if it’s quite that simple; (some) cognitive scientists and Marvin Minsky might disagree too. Pedantic asshattery aside, AGI might be an intelligence so fundamentally different from our own ego/narrative/first-person-perspective intelligence that we have trouble recognizing it as such.
Well the big thing is that, right now, the “intelligence” doesn’t exist without a prompt. It has no agency or continuity outside of our requests. It also has no reasoning or thought process that we can distinguish, just an algorithm. It’s fundamentally not distinct from basic computers, which means that if it is intelligence, so are our servers and smartwatches and satellite phones and Switch OLEDs.
“I’m very doubtful”
After a lifetime in software development, I’m more doubtful than that.
Yeah. I mean, quantum computing might upend some of my assumptions, but in the long run we’re probably going to have nailed down a decent definition of sentience before we have to wonder if computers have it.
AI is way less ‘intelligent’ than most people think.
My guess: using OpenAI’s resources for his other personal crypto project, Worldcoin.
How I like to think it went down…
OpenAI board of directors and investors: So… it’s not a problem that our model is trained off a bunch of stuff that we don’t own and haven’t licensed, is it?
Sam: Nope… not a problem at all!
There are multiple reports by now that it was because Altman was pushing product out too fast.
So no need for speculation.
And what is the board but a modern-day machine council?
Wasn’t it because he did not disclose all the information he was supposed to disclose to the board?
We don’t know, but it likely had absolutely nothing to do with the actual technology and everything to do with maximizing investor returns.
Seems like a bold move for profitability to oust the face of your whole product.
99.999% of people had never heard of this guy before and won’t even hear this news. The face of the company is ChatGPT.
Yeah I’ve dabbled plenty with LLMs and Generative stuff, but I have no clue who that is.
Not everyone who uses a thing cares for the lore.
You could have said the same about Gordon Moore in the 70s, or Bill Gates in the 90s, or Zuckerberg in the late 00s.
You actually can say the same about Steve Jobs, who was fired from Apple only to return later. But back when he was fired, no one knew who he was; they just knew the Apple II.
Yes.
Because they weren’t able to sufficiently indoctrinate Altman into their cult.
Pivot to AI: Replacing Sam Altman with a very small shell script
Until Friday, OpenAI had a board of only six people: Greg Brockman (chairman and president), Ilya Sutskever (chief scientist), and Sam Altman (CEO), and outside members Adam D’Angelo, Tasha McCauley, and Helen Toner.
Sutskever, the researcher who got OpenAI going in 2015, is deep into “AI safety” in the sense of Eliezer Yudkowsky. Toner and McCauley are Effective Altruists — that is to say, part of the same cult.
Eliezer Yudkowsky founded a philosophy he called “rationality” — which bears little relation to any other philosophy of such a name in history. He founded the site LessWrong to promote his ideas. He also named “Effective Altruism,” on the assumption that the most effective altruism in the world was to give him money to stop a rogue superintelligent AI from turning everyone into paperclips.
The “ethical AI” side of OpenAI are Yudkowsky believers, including Mira Murati, the CTO who is now CEO. They are AI doomsday cultists who say they don’t think ChatGPT will take over the world — but behave like they do think that.
D’Angelo doesn’t appear to be an AI doomer — but presumably Sutskever convinced him to kick Altman out anyway.
Yudkowsky endorsed Murati’s promotion to CEO: “I’m tentatively 8.5% more cheerful about OpenAI going forward.”
We’ve written before about how everything in machine learning is hand-tweaked and how so much of what OpenAI does relies on underpaid workers in Africa and elsewhere. This stuff doesn’t yet work as any sort of product without a hidden workforce of humans behind it, pushing. The GPT series are just powerful autocomplete systems. They aren’t going to turn you into paperclips.
Sam Altman was an AI doomer — just not as much as the others. The real problem was that he was making promises that OpenAI could not deliver on. The GPT series was running out of steam. Altman was out and about in the quest for yet more funding for the OpenAI company in ways that upset the true believers.
A boardroom coup by the rationalist cultists is quite plausible, as well as being very funny. Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar. It’s standard for rationalists to call people who don’t buy their pitch liars.
So what from normal people would be an accusation of corporate war crimes is, from rationalists, just how they talk about the outgroup of non-rationalists. They assume non-believers are evil.
It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.
Given that the other parties on the board haven’t objected: too fast. It sounds like he lied about some sort of AI safety thing.
This was my gut reaction. He did something or hid something that opened the company up to liability and this was the fastest way to mitigate the damage. I assume the bombshell hasn’t dropped yet.
FYI this was the board of the nonprofit, not the capped-profit subsidiary. One of the members is some sort of activist, even, so it wasn’t necessarily about money in any way.
It could still be about money. Non-profits can still get sued into oblivion.
Not for failing to provide a profit, obviously. I guess if it’s embezzlement they would have a duty to act, but otherwise a nonprofit can shovel money into a literal furnace as long as it advances their mission (IANAL).
They discovered that it had a fatal flaw because he based it on his own personality, just like the M-5 from Star Trek TOS.
Replaced by AI. Nothing to see here…
M-O-N-E-Y
OpenAI is playing it way too safe. They’re afraid of hurting people’s feelings and won’t touch many topics. I’m waiting for an AI that has a sense of humor and isn’t programmed to be a coward.
They’ve tried that, the robots act like your average racist edgelord teen.
I think that is really the big, dirty secret of the AI industry right now: they are not that great at producing intentional outcomes. It is all a lot of trial and error, because nobody has a real understanding of how to change things incrementally without side effects in other parts of the behaviour.
It’s almost as if machine learning is a black box that you superimpose massive amounts of random data onto.
This sounds like the take of an average Ben Shapiro viewer
Ben Shapiro is a moron.
Just think what that says about you for a sec.
Because I want an AI that isn’t gonna censor itself? 😂
No, because you come off like a Ben Shapiro fan.
Microsoft’s Tay AI sounds right up your alley.
Well, you can always use GPT-4chan…
What monster made that? Glad they thought better of it.
Probably all done in the name of alignment. We only really have one shot to make an AGI that doesn’t kill everyone (or do other weird unaligned stuff).
I think we need to start distinguishing better between AGI and ASI. We may have only one shot at ASI (though that’s hard to predict since it’s inherently something unknowable at the current time) but AGI will be “just this guy, you know?” I don’t see why a murderous rogue AGI would be harder to put down than a murderous rogue human.
Absolutely true. Thanks for the distinction.
I think maybe the argument could be made that AGIs could expedite the creation of a singularity, but you are correct in saying that the alignment problem matters less with rudimentary AGI.