The license does not apply to posts and replies on Reddit, right? Thank god I created a blog, before the AI breakthrough and what happened to Reddit, where I can post whatever I want without Reddit's licenses or restrictions. But even so, do AI tools understand such a license text and evaluate if they can or cannot use the material?
do AI tools understand such a license text and evaluate if they can or cannot use the material?
So, this is the fun part: AI tools don’t auto-ingest material to process it. The developers choose the materials to feed into the models.
And while the tech bros can understand your licenses, they don’t give a flying fuck, because they think they’ll be billionaires beyond consequences by the time anyone discovers that their work in particular has been ripped off.
From what I understand, LLMs are just large heuristic machines. They gather a lot of statistics on token order and answer with whatever continuation scores statistically higher than the other options. There's no "understanding". So to answer your question: no, they don't understand the license.
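To make that concrete, here is a toy sketch (an illustrative assumption, nothing like a real LLM's internals) of "statistics on token order": count which token follows which, then always emit the most frequent continuation. No meaning is involved, only counting.

```python
from collections import Counter, defaultdict

# Toy bigram model: tally which token follows which in a tiny corpus.
corpus = "the license does not apply the license does apply the license does not".split()

follow = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follow[cur][nxt] += 1

def next_token(token: str) -> str:
    # Emit the continuation with the highest observed count -
    # pure frequency, no "understanding" of what a license is.
    return follow[token].most_common(1)[0][0]

print(next_token("license"))  # "does" - the most frequent continuation
```

Real models work on far larger contexts with learned weights rather than raw counts, but the point stands: the output is driven by statistical likelihood, not comprehension of the text's legal meaning.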
Content is most likely scraped wholesale from websites, possibly run through some cleanup to filter out absolute garbage, and fed into an LLM to train it. An LLM can be tricked into revealing its training data (e.g. by asking it to repeat "fruit" forever). It's in those cases that copyright infringement is detected and action can be, and has been, taken. There are court cases currently in review, the most prominent being the one against GitHub Copilot for infringing on the licenses of the source code it ingested.
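The "scrape, then filter out garbage" step described above might look something like this minimal sketch. The heuristics and function names here are my own illustrative assumptions, not any lab's actual pipeline:

```python
# Hypothetical cleanup pass over scraped documents: drop anything
# too short or mostly non-alphabetic before it reaches training.
def looks_like_garbage(text: str) -> bool:
    if len(text) < 20:
        return True
    # Fraction of characters that are letters or whitespace.
    alpha = sum(ch.isalpha() or ch.isspace() for ch in text)
    return alpha / len(text) < 0.8

scraped = [
    "A thoughtful paragraph about software licensing and its limits.",
    "$$$ CLICK HERE $$$ ###",
    "ok",
]
training_docs = [doc for doc in scraped if not looks_like_garbage(doc)]
print(len(training_docs))  # only the first document survives
```

Note what is absent: nothing in a pass like this reads or honors a license notice attached to the text, which is exactly the problem being litigated.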
No, that user has the license on all of their comments
Anti Commercial-AI license