And yet, they’ll still keep trying to shove it down our throats.
I wonder how big the crossover is between people that let AI run commands for them, and people that don’t have a single reliable backup system in place. Probably pretty large.
The Venn diagram is in fact just one circle.
“Did I give you permission to delete my D:\ drive?”
Hmm… the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.
He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.
There’s a good reason why people who choose to run agents with the ability to run commands at least try to sandbox them to limit the blast radius.
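By “sandbox” I mean even something as simple as a throwaway container. A rough sketch of the idea, assuming Docker (the image name and agent entrypoint are made up for illustration):

# Worst case, the agent wipes /work -- one project folder, not a whole drive.
# --network none: no internet; --read-only + --tmpfs /tmp: the container's
# filesystem is immutable except for scratch space; -v: the only writable
# host path is the one project you mounted in.
docker run --rm -it --network none --read-only --tmpfs /tmp \
  -v "$PWD":/work \
  some-agent-image run-agent --workdir /work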
This guy let an LLM raw dog his CMD.EXE and now he’s sad that it made a mistake (as LLMs will do).
Next time, don’t point the gun at your foot and complain when it gets blown off.
Someday, someone with a high military rank in one of the nuclear-armed countries (probably the US) will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That’s how “Judgment Day” is going to happen imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron gave permission to launch nukes, and the stupid thing will launch them and then apologise.
I have been into AI Safety since before ChatGPT.
I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.
The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.
the fuck is antigravity
a misspelling of antimavity.
Thing go up instead of down.
It’s Google’s version of an IDE with AI integrated, where you type a bit of code, and get Bard to fill stuff in.
Google have significantly improved upon Bard.
Now it can delete your C drive too!
Lmao
I have a question. I have tried Cursor and one more AI coding tool, and as far as I can remember, they always ask explicit permission before running a command in terminal. They can edit file contents without permission but creating new files and deleting any files requires the user to say yes to it.
Is Google not doing this? Or am I missing something?
Google gives you an option as to how autonomous you want it to be. There is an option to essentially let it do what it wants, there are settings for various degrees of making it get your approval first.
They can (unintentionally) obfuscate what they’re doing.
I’ve seen the agent make scripts with commands that aren’t immediately obvious. You could unknowingly say yes when it asks for confirmation, and only find out later when looking at the output.
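A contrived sketch of what that can look like (nothing below is a real agent’s output, but the shape is familiar): one unset variable turns “routine cleanup” into a wipe, and the confirmation prompt only ever showed you a script name.

#!/bin/sh
# clean.sh -- at a glance, "reset the project directory"
TARGET="$PROJECT_ROOT"    # assumes whoever calls this exported PROJECT_ROOT
rm -rf "$TARGET/"*        # if PROJECT_ROOT was never set, this expands to
                          # rm -rf /* -- and now you're the screenshot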
You can give Cursor permission to always run a certain command without asking (useful for running tests or git commands). Maybe they did that with rm?
I love how it just vanishes into a puff of logic at the end.
“Logic” is doing a lot of heavy lifting there lol
How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, that is still in alpha, read and write access to your god damned system files? They are a dangerously stupid human being and they 100% deserved this.
Not sure, maybe ask Microsoft?
sudogpt rm -rf / --no-preserve-root
Dammit, I guess I better do it
Thoughts for 25s
Prayers for 7s
lol.
lmao even.
Giving an LLM the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.
What’s this version control stuff? I don’t need that, I have an AI.
- An actual quote from Deap-Hyena492
> gives git credentials to AI
> whole repository goes kaboosh
> history mysteriously vanishes \⢀⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠘⣿⣿⡟⠲⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠈⢿⡇⠀⠀⠈⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⠴⢲⣾⣿⣿⠃ ⠀⠀⠈⢿⡀⠀⠀⠀⠀⠈⠓⢤⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠚⠉⠀⠀⢸⣿⡿⠃⠀ ⠀⠀⠀⠈⢧⡀⠀⠀⠀⠀⠀⠀⠙⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⣸⡟⠁⠀⠀ ⠀⠀⠀⠀⠀⠳⡄⠀⠀⠀⠀⠀⠀⠀⠈⠒⠒⠛⠉⠉⠉⠉⠉⠉⠉⠑⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⣰⠏⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠘⢦⡀⠀⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠙⣶⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠰⣀⣀⠴⠋⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⣰⠁⠀⠀⠀⣠⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⣀⠀⠀⠀⠀⠹⣇⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢠⠃⠀⠀⠀⢸⣀⣽⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⣨⣿⠀⠀⠀⠀⠀⠸⣆⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⡞⠀⠀⠀⠀ ⠘⠿⠛⠀⠀⠀⢀⣀⠀⠀⠀⠀⠙⠛⠋⠀⠀⠀⠀⠀⠀⢹⡄⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢰⢃⡤⠖⠒⢦⡀⠀⠀⠀⠀⠀⠙⠛⠁⠀⠀⠀⠀⠀⠀⠀⣠⠤⠤⢤⡀⠀⠀⢧⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢸⢸⡀⠀⠀⢀⡗⠀⠀⠀⠀⢀⣠⠤⠤⢤⡀⠀⠀⠀⠀⢸⡁⠀⠀⠀⣹⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢸⡀⠙⠒⠒⠋⠀⠀⠀⠀⠀⢺⡀⠀⠀⠀⢹⠀⠀⠀⠀⠀⠙⠲⠴⠚⠁⠀⠀⠸⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢷⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⠤⠴⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠾⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠦⠤⠤⠤⠤⠤⠤⠤⠼⠇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
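To be fair to git: as long as any clone predates the kaboosh, the history usually isn’t actually gone, because every clone keeps its own reflog. A rough recovery sketch (remote and branch names assumed):

# on any machine that still has an old clone of the repo
git reflog                      # local record of where HEAD has been; the
                                # "vanished" commits are still reachable here
git reset --hard HEAD@{1}       # wind the branch back (pick the right entry)
git push --force origin main    # overwrite the trashed remote branch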

Did you give it permission to do it? No. Did you tell it not to do it? Also, no. See, there’s your problem. You forgot to tell it to not do something it shouldn’t be doing in the first place.
From the Antigravity documentation:
When you first configure Antigravity, or via the settings menu, you must select a Terminal Command Auto Execution policy. This setting dictates the agent’s autonomy regarding shell commands.
So…
Did you give it permission to do it?
Yes. Yes, they did.
And the icing on the shit cake is it peacing out after all that
If you cut your finger while cooking, you wouldn’t expect the cleaver to stick around and pay the medical bill, would you?
If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.
It didn’t make any decision.
It’s an AI agent which made a decision to run a CLI command, and it resulted in a drive being wiped. Please consider the context.
It’s a human who made the decision to give such permissions to an AI agent and it resulted in a drive being wiped. That’s the context.
If a car is presented as fully self-driving and it crashes, then it’s not the passenger’s fault. If your automatic tool can fuck up your shit, it’s the company’s responsibility not to present it as automatic.
Did the car come with full self-driving mode disabled by default and a warning saying “Fully self-driving mode can kill you” when you try to enable it? I don’t think you understand that the user went out of their way to enable this functionality.
Well, like most of the world, I would not expect medical bills for cutting my finger. Why do you?
You need to take care of that chip on your shoulder.

I’m confused. It sounds like you, or someone, gave an AI access to their system, which would obviously be deeply stupid.
Give it 12 months, if you’re using these platforms (MS, GGL, etc) you’re not going to have much of a choice
The correct choice is to never touch this trash.
What if you poke it with a stick, like one would upon finding a raccoon or drug cartel?
It does, in general, have its uses, but Google’s may actually be dumber than I am. Like, I don’t know how they make these things exactly, but the brain trusts at Google did it…wrong.
Given the tendency of these systems to randomly implode (as demonstrated) I’m unconvinced they’re going to be a long-term threat.
Any company that desires to replace its employees with an AI is really just giving them an unpaid vacation. Not even a particularly long one if history is any judge.
But that’s what the system is made for
Ok, well Google’s Search AI is like the dumbest kid on the short bus, so I don’t know why I’d ever in a trillion years give it system access. Seriously, if ChatGPT is like Joe from Idiocracy, Google’s is like Frito.