

Also, companies have been doing this on a regional or even local level for decades. Why do you think they always ask for your zip code or geolocation when you just want to pull up a menu for Taco Bell?


Also, companies have been doing this on a regional or even local level for decades. Why do you think they always ask for your zip code or geolocation when you just want to pull up a menu for Taco Bell?


Well, “laid off” and “fired” are two different things. It sounds like this is closer to being laid off.
But, regardless, nobody gets laid off for reporting sexual harassment.


Tell me you’re a corpo lackey buying into the “synergy speak” without telling me directly.


“Made redundant”? WTF, BBC?


Once you’ve traded your principles for proximity to power, do you even run your own company?
No. See also: Shareholder Primacy.


The company’s response was an auto-reply: “Legacy Media Lies.”
Funny, that seems to be the correct answer to the headline.


My point is that xai is making the case for AI regulation for us.
Ha, regulation? With what governing body? Congress is hopeless, because apathy has griped a majority of the voting public, and there’s still a large portion of morons who thought MAGA was a good idea.


So, are we saying we’re still going to be happy with a system that you can bypass with “ignore all previous instructions” or some stupid magic phrase like that?


Nobody is bitching about photoshopping, a thing that exists and almost anybody can do to put some person’s face on a naked body or whatever situation they want. It’s existed for decades. Suddenly, journalists are inventing a new moral panic with LLMs, saying they can do whatever they want with pictures, despite the fact that this technology already existed, it’s just a little bit easier now. It’s not a new problem, so reporting on it is just shifting the blame to a new boogeyman.
See, the magic formula is to slap the word “AI” on a headline and boom, instant attention! It doesn’t matter what it’s about, if it’s a new problem, if it’s only slightly related to the main root cause… As long as you’re talking shit about every angle around AI in the most extreme ways possible, mission accomplished. It is outrage reporting because there is no solutioning or historical context. The sole purpose is the outrage, because outrages generates clicks. It’s too hard for journalists to think outside the outrage box.


These people and journalists don’t see to remember celebrity photoshops. Or maybe they do and want to continue to feed the outrage machine.
Pardon me if I duck out of my Two Minutes Hate for today.


LLM liability is not exactly cut-and-dry, either. It doesn’t really matter how many rules you put on LLMs to not do something, people will find a way to break it to do the thing it said it wasn’t going to do. For fuck’s sake, have we really forgotten the lessons of Asimov’s I, Robot short stories? Almost every one of them was about how the “unbreakable” three laws were very breakable thing, because absolute laws don’t make sense in every context. (While I hate using AI fiction with LLM comparisons, this one fits.)
Ultimately, it’s the person’s responsibility for telling it to do a thing, and getting the thing it was told to get. LLMs are a tool, nothing more. If somebody buys a hammer, and misuses that hammer by bashing somebody’s brains in, we arrest the person who committed murder. If there’s some security hole on a website that a hacker used to steal data, depending on how negligent the company is, there is some liability with that company not providing enough protections against their data. But, the hacker 100% broke the law, and would get convicted, if caught.
Regardless of all of that, LLMs aren’t fucking sentient and these dumbass journalists need to stop personifying them.


You obviously haven’t seen certain sections of CivitAI.


Yeah, Home Assistant is the way to go, but it’s been a slow progression because every company is more interested in proprietary lock-in than trying to push for standards like Z-Wave. It’s cloud-based bullshit everywhere, which is exactly the wrong kind of thing for in-home privacy. There needs to be a better push for standard APIs and internal wireless protocols.
This shit should be fucking easy. HVAC systems are still wired like it’s the 1930s, and all it takes is one company to just swoop in and create an all-in-one solution that uses standards and monitors inside/outdoor/room temps, humidity, occupancy, etc. It could control smart vents to close off rooms that aren’t in use, turn on humidity systems when it’s too low and isn’t too cold outside, hook into other rules from HA.
Doing the right thing could earn them millions, but nobody wants to bother actually doing it.


This is a much cheaper and faster way to get nuclear power.
Is this “journalist” an industry plant?


You wouldn’t even notice it unless it’s pointed out and you see the real world place they filmed vs what they made it look like with CGI in the film.
That’s my point. If it’s invisible, it’s done its job.


Download all existing literature to build a library for preservation and you’re called a pirate.
Said library contains petabytes of the exact text of each and every piece of literature.
Download all existing literature from aforementioned library to train an LLM and you’re a tech innovator.
Said model contains gigabytes of a bunch of weights that can never go back to the exact words of the book.
What a strange world we live in.
It’s not strange at all. It’s degrees of compression. You compress a JPEG to the point that it’s unrecognizable, and it’s no longer breaking copyright. It’s essentially like trying to write a book you just read based on memory.


This is a deliberate oversimplification to try to excuse derivative and copied works of artists who have had their art stolen.
It’s not. You misunderstand both copyright law and how LLMs work.
Models are GBs of weights, typically in the 4GB to 24GB range. LLMs do not look at a picture and then copy that picture into the model. There’s not enough disk space to do something like that. It’s used for training, adjusting weights here and there, based on how the image links to the description. You can’t just say “recreate the Mona Lisa” and have it give you a pixel perfect copy of the original.
When you do it, it’s copyright infringement.
It’s not copyright infringement to copy a style. People do it all the time. You wouldn’t believe the amount of times I’ve seen something that I thought was some unique style, and thought that one artist did it, but it turns out it’s just another copycat artist “inspired by” the more popular artist.
Because that’s what people do to something unique, or even remotely rare: Copy it a thousands times and drive it into the ground until you’re fucking sick of it.
If you think a letter substitution hack is going to poison LLM training data, when an LLM itself can easily decipher your “code”, then I have some Nigerian princes who would love to donate millions of dollars in cash to you.