The anti AI crowd is getting crazy. Everyone uses it during development.
It’s a tool for fuck’s sake, what’s next? Banning designers from using Photoshop because using it is faster and thus taking jobs from multiple artists who would have to be employed otherwise?
I’m not going to fault someone for driving to work in a car, but I certainly wouldn’t call them the winner of a marathon even if they only drove for a few minutes of that marathon.
There’s a difference between something that runs the race for you (LLM AI) and something that simply helps you do what you are already doing (I suppose photoshop is the equivalent of drinking gatorade).
I don’t think that’s a relevant comparison - a marathon is a race meant specifically to test what the human body is capable of. Using a car there is obviously against the goal of the competition.
When I’m writing code, I’ll happily offload the boring parts to AI. There’s only so many times you can solve the same problem without it being boring. And I’ve been doing this long enough that actually new problems I haven’t solved yet are pretty rare.
I’m not going to fault you for that - but do you think you should receive an award for the work you didn’t do? Even if you only use the car on the “easy” parts of the race that nobody cares about?
In the case of this particular game, perhaps the bulk of the creative work was done by humans. But if the GOTY committee isn’t confident drawing a line between what work is OK to offload onto an AI and what work isn’t, then I think it’s fair for them to say that, this year, any generative AI use is a disqualifier.
You and I can say with ease that an implementation of a basic swap function (c=a, a=b, b=c) doesn’t require any creative work and has been done to death, so there’s no shame in copy-pasting something from Stack Overflow or ChatGPT into your own code to save time.
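For illustration, here is that swap written out in Python - a minimal sketch of exactly the kind of done-to-death snippet being described, where there is no creative work to speak of:

```python
def swap(a, b):
    """Classic three-step swap using a temporary variable."""
    c = a  # stash a
    a = b  # overwrite a with b
    b = c  # restore the stashed value into b
    return a, b

# Python idiom: tuple unpacking does the same thing without a temp.
x, y = 1, 2
x, y = y, x
```

Either version is interchangeable boilerplate, which is the point: nobody’s authorship is meaningfully at stake in code like this.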
But it’s harder to gauge that for more complex things. Especially with art - where would you draw the line? Reference material? Concept art? Background textures or 3d models of basic props (random objects in the scene like chairs, trees, etc)?
I don’t think there’s a clear answer for that. You might have an answer you think is correct, and I might have one as well, but I think it will be difficult and time consuming to achieve consensus in the game development community.
So, the most efficient answer for now is to have any generative AI be a disqualifier.
Have you seen the pro-AI crowd? The most insane people currently in the tech world.
I did and you’re right. That’s why I’m firmly in the “it’s just a fucking tool” gang.
Both people who treat it like a messiah and those who treat it like the worst thing ever seem pretty much insane to me.
I feel there are nutjobs on both sides tbh.
You are 1000% correct. I’ve been yelled at or have witnessed a few times people making a huge stink but clearly can’t differentiate between the types of “ai” to even know what they are complaining about.
They just know “AI = Bad” so they get their pitchforks out.
“Everyone uses it” is just such a dumb argument.
I don’t use it, I’ve never committed any code written by genAI. My colleagues don’t use it. Many, many people choose not to use it.
I didn’t mean it in the literal sense but if it makes you happy, we can pretend that whenever someone says “everyone” they mean it literally.
Bullshit. “Everyone” in your comment was a clear “appeal to popularity” argument, made to other anyone not on the bandwagon.
It deserves to be called out for what it is.
Well, you see, the “everyone” you are referring to are the same stupid masses that already don’t deserve respect on the macro level. The same stupid masses voting in political officials like Donald Trump.
An unethically developed tool that’s burning the planet faster with the ultimate goal of starving the working class out of society.
Inb4 alarmism lol tell me the fucking lie if you can.
Dude, go touch grass, please. This is embarrassing.
Yes, it is indeed embarrassing that you insist on defending AI.
My sister insisted on using an AI this year to generate our secret santa.
She didn’t get a gift. Ahahahaha.
AI is strictly for stupid people that can’t do good work on their own. No reasonably intelligent person is using AI to make a draft that they can then correct, because it will always be more practical to just make it yourself.
Use your AI generation all you want but don’t enter a painting contest using machine generated content trained on other people’s work without their consent.
Do human artists usually get consent before training on content freely available on the Internet?
There are plenty of reasons to hate on AI, but this reason is just being pissed that a silicon brain did it instead of a carbon one.
The fact that you’re comparing human artists to slop machines is really sad. There is no “silicone brain” making any of this stuff. I think you should take a few minutes and learn how this stuff works before making these comparisons.
Right, because computers don’t use silicone.
But Gen AI is modeled after the way the brain works, so maybe you need to learn how it works before arguing against an accurate comparison.
Wow thank you for this comment. It helps detail your level of knowledge on this subject, which is very helpful to myself and others. There is nothing else to discuss here on my end.
Alrighty, so generative AI works by giving it training data and it transforms that data and then generates something based on a prompt and how that prompt is related to the training data it has.
That’s not functionally different from how commissioned human artists work. They train on publicly available works, their brain transforms and stores that data and uses it to generate a work based on a prompt. They even often directly use a reference work to generate their own without permission from the original artist.
Like I said, there are tons of valid criticisms against Gen AI, but this criticism just boils down to “AI bad because it’s not a human exploiting other’s work.”
And all of this is ignoring the fact that ethically trained Gen AI models exist.
GenAI is a glorified Markov Chain. Nothing more.
It is a stochastic parrot.
It does not think, it is not capable of creating novel new works, and it is incapable of the emotion necessary to be expressive.
All it can do is ingest content and replicate it. This is not the same as a human seeing someone’s work and being inspired by it to create something uniquely their own in response.
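For reference, here is what an actual (toy) Markov chain text generator looks like - a short Python sketch, so the “glorified Markov chain” comparison above can be judged on its merits. Each next word depends only on the current word and the observed transitions:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Random walk over the chain: state is just the last emitted word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no observed successor
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the"))
```

Whether a transformer is “nothing more” than this is exactly what the two sides here dispute; the sketch only shows what the literal Markov-chain baseline is.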
I never claimed that Gen AI has consciousness, or that what they produce has emotions behind it, so I’m not sure why you’re focusing on that.
I’m specifically talking about the argument that AI is bad because it trains on copyrighted material without consent from the artist, which is functionally no different than humans doing the exact same thing.
This isn’t me defending AI, this is me saying this one specific argument against it is stupid. Because even if artificial consciousness was a thing, it would still have to be trained on the same data.
deleted by creator
Humans aren’t machines, dummy
And?
And that means humans don’t learn art the same way a machine trains on data. Even if they learn from other artists, a human’s artistic output is novel and original.
How exactly is a generated image not novel? You’re not going to get the same image twice with the same prompt. Everything it generates will be original. It’s not like they’re just providing you with an existing image.
And still the argument I’m hearing is that it’s fine for humans to use artistic works without consent or credit just because it’s a human doing it.
Just because the underlying processes are different doesn’t mean the two are functionally different.
I also think it’s funny because I’m betting the Venn diagram of people who think AI using publicly available artwork to train on is bad and people who think piracy is good is almost a single circle.
Not everyone, and it probably multiplies review time 10-fold. It makes maintenance horrible. It doesn’t save time, it just moves it, and it makes devs dumber and unable to justify the coding choices the AI generates.
I mean, it’s a tool. You can use a hammer to smash someone’s skull in or you can use it to put some nail on a wall.
If you see it used like that, it’s shitty developers; the AI is not to blame. Don’t get me wrong, I do have coworkers who use it like this and it sucks. One literally told me that next time I should tell Copilot directly what to fix when I’m doing a review.
But overall it helps if you know how and most importantly when to use it.
We’re pushed to use AI a lot at our job and man is it awful. I’d say maybe 20-30% of the time it does okay, the other 70% is split between it just making shit up, or saying that it’s done something it hasn’t.
I’m in an entirely different industry than the topic at hand here, but my boss is really keen on ChatGPT and whatnot. Every problem that comes up, he’s like “have you asked AI yet?”
We have very expensive machines, which are maintained (ideally) by people who literally go to school to learn how to do it. We had an issue with a machine the other day and the same ol’ question came up: “have you asked AI yet?” He took a photo of the alarm screen and fed it to ChatGPT. It spit out a huge reply, and he forwarded it to me and told me to try it out.
Literally the first troubleshooting step ChatGPT gave was nonsense and did not apply to our specific machine and our specific set-up and our specific use-case.
“I will be investigating this shortly.”
That way, you don’t have to commit to AI and can distance yourself a bit from the micromanagement. If he persists: “I have a number of avenues I’d like to go down and will update on progress tomorrow.”
Though I’d be tempted to be flippant: “if you’re feeling confident enough to pick it up, I’m happy to review it”. If they hesitate: “that’s OK, I’ll go through the process.”
Standups should be quick. Any progress, any issues, what you’re focussing on. Otherwise you waste everyone’s time. Any messages I’ll ignore until I have 5 mins. Micromanagement environments are not worth it.
The only question I’ve asked chatgpt recently was how to delete my account, and it couldn’t even get that right. It told me to click on the profile button on the top right of the screen. The profile button was on the bottom left, and looked more like a prompt to upgrade to a paid version. Fucking useless.
The anti-AI people will be forced to use it due to capitalism. They’ll be pissing against the wind if they don’t.
So you’re saying capitalism is the problem. We agree!
Not really. We just have to wait long enough for one of two things: either enough disasters occur that the crowd successfully rejects it, or the current crop of workers becomes so unable to accomplish simple tasks without it that the rest of us just move up the ladder past them. They’ll ask ChatGPT “how to spreadsheet” because they just can’t remember, since use of LLMs has been linked to cognitive decline in users. Those of us who use our brains, rather than the stolen knowledge and hallucinated regurgitation of a blind database, will be the drivers in the workforce.
Most of the major software development tools have some form of AI-based assistance features now. And nearly all of them have those assistance and completion features enabled by default.
If you want absolutely no AI in your games, then you need to verify all of those functions were disabled for the entire development time. And you have no way to verify that.
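As a concrete illustration of what such an audit would even involve: in VS Code, for example, you would have to confirm that settings along these lines were off for every contributor for the entire development period (the keys below are real VS Code / Copilot settings at time of writing, but which ones matter depends entirely on each team’s toolchain):

```jsonc
// settings.json - disabling built-in AI completion features in VS Code.
{
  // Turn off GitHub Copilot suggestions for all file types.
  "github.copilot.enable": { "*": false },
  // Turn off inline (ghost-text) suggestions generally.
  "editor.inlineSuggest.enabled": false
}
```

And that is one editor; every IDE, every plugin, every contributor’s machine would need the same verification, which is the point being made above.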
So it’s safe to assume all code generation was trained on GPL code from GitHub and therefore the game code is derived work of GPL code and therefore under GPL itself? So decompilation and cracking is fine?
https://en.wikipedia.org/wiki/Transformative_use
Please quote me the line where this covers machine generation as well. I’d love to sell Google-translated Harry Potter books as transformative work. Maybe I can transform the latest movie releases to MKV and sell those.
transformative use or transformation is a type of fair use that builds on a copyrighted work in a different manner or for a different purpose from the original, and thus does not infringe its holder’s copyright.

You can use a book to train an AI model; you can’t sell a translation just because you used AI to translate it. These are two different things.
Collage is transformative, and it uses copyrighted pictures to make completely new works of art. It’s the same principle.
It’s also important to understand that it’s a tool. You can create copyright infringing content with word, google translate or photoshop as well. The training of the model itself doesn’t infringe on current copyright laws.
Not a single line in your comment offers anything showing that machine generation, which is not at all human creative work, falls under fair use.
It uses the content in a different way, for a different purpose. The part I highlighted above applies to it. Do you expect copyright laws to mention every single acceptable type of transformative work? You are being purposely ignorant.
I asked nicely for a quote showing that machine generation is also covered, which you couldn’t provide, and now you feel the need to lash out.
And yes, I absolutely expect machine generation to be explicitly mentioned, for the simple reason that right now machine-generated anything is not copyrightable at all. A computer isn’t smart; a computer isn’t creative. Its output doesn’t pass the threshold of originality, and as such there is no creative transformation happening, as there is with reinterpretations of songs.
What is copyrightable are the works that served as the training set. Therefore there absolutely has to be an explicit mention somewhere that machine-generated works don’t simply pass the original copyright into the generated work, just like how a human writes source code and the compiled executable is still the human author’s work.
Edit: Downvotes instead of arguments. Pathetic.