Greg Rutkowski, a digital artist known for his fantasy style, opposes AI art, but his name and style have frequently been used by AI art generators without his consent. In response, his work was removed from the training dataset for Stable Diffusion 2.0. The community, however, has now created a LoRA model that emulates Rutkowski's style against his wishes. Some argue this is unethical; others justify it on the grounds that Rutkowski's art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

  • Melody Fwygon@beehaw.org · 25 points · edited · 1 year ago

    AI art is factually not art theft. It is the creation of art in the same rough and inexact way that we humans do it, except that computers and AIs do not run on meat-based hardware with an extraordinary number of features and demands hardwired into it to ensure the survival of that meat-based hardware. It doesn't have our limitations, so it can create similar works in various styles very quickly.

    Copyright, on the other hand, is an entirely different and very sticky subject. By default, "All Rights Are Reserved" is what these laws usually protect. These laws, however, are not grounded in modern times; they are grounded in the past, before the information age truly began its upswing.

    Fair use generally encompasses all usage of information that is one or more of the following:

    • Educational; so long as it is taught as a part of a recognized class and within curriculum.
    • Informational; so long as it is being distributed to inform the public about valid, reasonable public interests. This is far broader than some would like; but it is legal.
    • Transformative; so long as the content is being modified in a substantial enough manner that it is an entirely new work that is not easily confused for the original. This too, is far broader than some would like; but it still is legal.
    • Narrative or Commentary purposes; so long as you're not copying a significant amount of the whole content and passing it off as your own. Short clips with narration and lots of commentary interwoven between them are typically protected. Copyright is not intended to be used to silence free speech. This also tends to include satire, as long as it doesn't tread into defamation territory.
    • Reasonable, ‘Non-Profit Seeking or Motivated’ Personal Use; People are generally allowed to share things amongst themselves and their friends and other acquaintances. Reasonable backup copies, loaning of copies, and even reproduction and presentation of things are generally considered fair use.

    In most cases AI art is at least somewhat Transformative. It may be too complex for us to explain simply, but the AI is basically a virtual brain that can, without error or certain human faults, ingest image information and make decisions based on the input given to it in order to produce a desired output.

    Arguably, if I have a license or right to view artwork, or if that right is no longer reserved but is granted to the public through the World Wide Web…then the AI also has those rights. Yes. The AI has license to view, and learn from, your artwork. It just so happens to be a little more efficient at learning and remembering than humans can be at times.

    This does not stop you from banning AIs from viewing all of your future works. Communicating that fact to all who interact with your works will probably make you a pretty unpopular person. However, rightsholders do not hold or reserve the right to revoke rights that they have previously granted. Once that genie is out of the bottle, it's out…unless you have firm enough contractual proof to show that someone agreed to handle the management of those rights differently.

    In some cases that proof exists; good luck in court. In most cases, however, that proof does not exist in a form solid enough to satisfy the court. A lot of the time we exchange, transfer, and reserve rights ephemerally…that is, in a manner that is not always 100% recognized by the law.

    Gee, perhaps we should change that, and encourage copyright to adapt and grow reasonably so it can fairly address the challenges of the information age.

        • raccoona_nongrata@beehaw.org · 21 points · 1 year ago

          They could, though; any sufficiently observant human can make art, even if they've never seen any before. Humans are compelled to create representations of the world around us, whether as expression or out of curiosity. Humans have independently "invented" art multiple times across our history as a species.

          We do inspire and learn from each other, but it's not strictly necessary. This type of AI model will never spontaneously create art, because that's not how it functions. It requires human art to fulfill its fundamental function; otherwise it would just sit there producing nothing.

          • Deniz Opal@syzito.xyz · 10 points · 1 year ago

            @raccoona_nongrata

            Actually, it is necessary. The process of creativity is much, much more a synthesis of past consumption than we think.

            It took 100,000 years to get from cave drawings to Leonardo da Vinci.

            Yes, we always find ways to draw, but the pinnacle of art comes from a shared culture built over centuries.

            • raccoona_nongrata@beehaw.org · 12 points · 1 year ago

              Stable Diffusion, sitting on its own for 100,000 years or a million, would not create art; that is the distinction.

              A human could express themselves with art in some form or another having never been exposed to other human art. Whether you consider that art refined doesn’t really factor into the question.

              • Deniz Opal@syzito.xyz · 2 points · 1 year ago

                @raccoona_nongrata

                A machine will not unilaterally develop an art form, and develop it for 100,000 years.

                Yes I agree with this.

                However, they are not developing an art form now.

                Nor did Monet, Shakespeare, or Beethoven develop an art form. Or develop it for 100,000 years.

                So machines cannot emulate that.

                But they can create the end product based on past creations, much as Monet, Shakespeare, and Beethoven did.

                • raccoona_nongrata@beehaw.org · 7 points · edited · 1 year ago

                  Sure, but those individuals are responsible for their proportional contribution to that 100,000 years, which can be a lot to a human being, sometimes a life’s work.

                  If you stopped feeding new data to Stable Diffusion, it would not progress or advance the human timeline of art; it would just stagnate. It might have a broader scope than if you fed it cave drawings, but it would never contribute anything itself.

                  People don’t want their work and contribution scooped up by a machine that then shoves them aside with literally no compensation.

                  If we create a society where no one has to work, we can revisit the question, but that’s nowhere on the horizon.

                  • Deniz Opal@syzito.xyz · 2 points · 1 year ago

                    @raccoona_nongrata

                    Actually, this is how we are training some models now.

                    The models are separated and fed different versions of the source data, and then we kick off a process of feeding each one content created by the other models, creating a loop. It has proven very effective. It is also the case that this generation's AI-created content becomes the next generation's training data, simply by existing. What you are saying is absolutely false: generated content DOES have a lot of value as source data.
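
                    As a rough sketch of the loop being described (train and generate here are hypothetical placeholders, not any real framework's API; this only illustrates the data flow, not how well such a loop works):

                    ```python
                    # Sketch of two models cross-trained on each other's generated output.
                    def train(model, data):
                        model.setdefault("seen", []).extend(data)   # stand-in for fine-tuning on `data`

                    def generate(model, n):
                        return [f"sample-from-{model['name']}-{i}" for i in range(n)]   # stand-in outputs

                    model_a, model_b = {"name": "A"}, {"name": "B"}
                    split_a, split_b = ["source data, part 1"], ["source data, part 2"]  # disjoint slices

                    train(model_a, split_a)
                    train(model_b, split_b)

                    # Each model is then fed content generated by the other, forming the loop.
                    for _ in range(3):
                        train(model_a, generate(model_b, 100))
                        train(model_b, generate(model_a, 100))
                    ```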

                • ParsnipWitch@feddit.de · 3 points · 1 year ago

                  No, humans create and develop styles in art from "mistakes" that an AI would not continue pursuing, because they personally like them or have a strange addiction to their own creative process. The current hand mistakes, for example, were perhaps one of the few interesting things AI has done…

                  Current AI models recreate what is most liked by the majority of people.

                  • I_Has_A_Hat@lemmy.ml · 1 point · 1 year ago

                    And what if the human running the AI likes one of these “mistakes” and tells the AI to run with it?

        • Ben from CDS@dice.camp · 13 points · 1 year ago

          @selzero @raccoona_nongrata @fwygon But human creativity is not ONLY a combination of past creativity. It is filtered through a lifetime of subjective experience and combined knowledge. Two human artists schooled on the same art history can still produce radically different art. Humans are capable of going beyond what has been done before.

          Before going too deep on AI creation, spend some time learning about being human. After that, if you still find statistical averages interesting, go back to AI.

          • Deniz Opal@syzito.xyz · 5 points · edited · 1 year ago

            @glenatron @raccoona_nongrata @fwygon

            I mean, yes, you are right, but essentially it is all external factors. They can be lived-through external factors or data-fed external factors.

            I don't think there is a disagreement here, other than that you are placing a lot of value on "the human experience" being an in-real-life thing rather than a read thing, which is not even fully true of the great masters. It's a form of puritan fetishisation, I guess.

            • Ben from CDS@dice.camp · 8 points · 1 year ago

              @selzero @raccoona_nongrata @fwygon I don't think it's even controversial. Will sentient machines ever have an equivalent experience? Very probably. Will they be capable of creating art? Absolutely.

              Can our current statistical bulk-reincorporation tools make any creative leap? Absolutely not. They are only capable of plagiarism. Will they become legitimate artistic tools? Perhaps, when the people around them start taking artists seriously instead of treating them with disdain.

              • Deniz Opal@syzito.xyz · 6 points · 1 year ago

                @glenatron @raccoona_nongrata @fwygon

                This angle is very similar to a debate going on in the cinema world, with Scorsese famously ranting that Marvel movies are "not movies".

                The point being that without a director's message being portrayed, these cookie-cutter cinema experiences, with algorithmically developed story lines, should not be classified as proper movies.

                But the fact remains, we consume them as movies.

                We consume AI art as art.

                  • Deniz Opal@syzito.xyz · 1 point · 1 year ago

                    @aredridel @glenatron @raccoona_nongrata @fwygon

                    Humans are also machines, biological machines, with a neurology based on neurons and synapses. As pointed out before, human "creativity" is also a result of past external consumption.

                    When AI is used to eventually make a movie, it will use more than one AI model. Does that make a difference? I guess your “one person” example is Scorsese’s “auteur”?

                    It seems we are fetishizing biological machines over silicon machines?

    • Thevenin@beehaw.org · 18 points · 1 year ago

      It doesn’t change anything you said about copyright law, but current-gen AI is absolutely not “a virtual brain” that creates “art in the same rough and inexact way that we humans do it.” What you are describing is called Artificial General Intelligence, and it simply does not exist yet.

      Today’s large language models (like ChatGPT) and diffusion models (like Stable Diffusion) are statistics machines. They copy down a huge amount of example material, process it, and use it to calculate the most statistically probable next word (or pixel), with a little noise thrown in so they don’t make the same thing twice. This is why ChatGPT is so bad at math and Stable Diffusion is so bad at counting fingers – they are not making any rational decisions about what they spit out. They’re not striving to make the correct answer. They’re just producing the most statistically average output given the input.
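
      As a toy illustration of "the most statistically probable next word, with a little noise thrown in" (the vocabulary and scores below are invented for the example, not taken from any real model):

      ```python
      import numpy as np

      vocab = ["the", "cat", "sat", "on", "mat"]
      logits = np.array([2.0, 1.5, 0.3, 0.9, 0.1])   # hypothetical scores from a model

      def sample_next(logits, temperature=0.8):
          # Softmax turns scores into probabilities; temperature is the "noise" knob.
          scaled = logits / temperature
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          return np.random.choice(len(probs), p=probs)

      print(vocab[sample_next(logits)])   # most likely "the", but not always
      ```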

      Current-gen AI isn't just viewing art, it's storing a digital copy of it on a hard drive. It doesn't create, it interpolates. In order to imitate a person's style, it must make a copy of that person's work; describing the style in words is insufficient. If human artists (and by extension, art teachers) lose their jobs, AI training sets stagnate, and everything they produce becomes repetitive and derivative.

      None of this matters to copyright law, but it matters to how we as a society respond. We do not want art itself to become a lost art.

      • Fauxreigner@beehaw.org · 8 points · 1 year ago

        Current-gen AI isn’t just viewing art, it’s storing a digital copy of it on a hard drive.

        This is factually untrue. For example, Stable Diffusion models are in the range of 2 GB to 8 GB, trained on a set of 5.85 billion images. If the model were storing the images, that would allow approximately 1 byte for each image, and there are only 256 possibilities for a single byte. Images are downloaded as part of training the model, but they're eventually "destroyed"; the model doesn't contain them at all, and it doesn't need to refer back to them to generate new images.
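
        To put rough numbers on that (the 4 GB figure below is just an assumed point inside the 2 GB to 8 GB range mentioned above):

        ```python
        # Back-of-the-envelope check of the "about 1 byte per image" point.
        model_size_bytes = 4 * 1024**3       # assume a ~4 GB checkpoint
        training_images = 5_850_000_000      # the 5.85 billion images cited above
        print(model_size_bytes / training_images)   # ~0.73 bytes per image
        ```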

        It’s absolutely true that the training process requires downloading and storing images, but the product of training is a model that doesn’t contain any of the original images.

        None of that is to say that there is absolutely no valid copyright claim, but it seems like either option is pretty bad, long term. AI generated content is going to put a lot of people out of work and result in a lot of money for a few rich people, based off of the work of others who aren’t getting a cut. That’s bad.

        But the converse, where we say that copyright is maintained even if a work is only stored as weights in a neural network, is also pretty bad; you're going to have a very hard time defining that in such a way that it doesn't cover the way humans store information and integrate it to create new art. That's also bad. I'm pretty sure that nobody who creates art wants to have to pay Disney a cut because one time they looked at some images Disney owns.

        The best you’re likely to do in that situation is say it’s ok if a human does it, but not a computer. But that still hits a lot of stumbling blocks around definitions, especially where computers are used to create art constantly. And if we ever hit the point where digital consciousness is possible, that adds a whole host of civil rights issues.

        • Thevenin@beehaw.org · 4 points · 1 year ago

          It’s absolutely true that the training process requires downloading and storing images

          This is the process I was referring to when I said it makes copies. We’re on the same page there.

          I don’t know what the solution to the problem is, and I doubt I’m the right person to propose one. I don’t think copyright law applies here, but I’m certainly not arguing that copyright should be expanded to include the statistical matrices used in LLMs and DPMs. I suppose plagiarism law might apply for copying a specific style, but that’s not the argument I’m trying to make, either.

          The argument I'm trying to make is that while it might be true that artificial minds should have the same rights as human minds, the LLMs and DPMs of today absolutely aren't artificial minds. Allowing them to run amok as if they were is not just unfair to living artists… it could deal irreparable damage to our culture, because the LLMs and DPMs of today cannot take up the mantle of the artists they edge out or pass down their knowledge to the next generation.

          • Fauxreigner@beehaw.org · 2 points · 1 year ago

            Thanks for clarifying. There are a lot of misconceptions about how this technology works, and I think it’s worth making sure that everyone in these thorny conversations has the right information.

            I completely agree with your larger point about culture; to the best of my knowledge we haven’t seen any real ability to innovate, because the current models are built to replicate the form and structure of what they’ve seen before. They’re getting extremely good at combining those elements, but they can’t really create anything new without a person involved. There’s a risk of significant stagnation if we leave art to the machines, especially since we’re already seeing issues with new models including the output of existing models in their training data. I don’t know how likely that is; I think it’s much more likely that we see these tools used to replace humans for more mundane, “boring” tasks, not really creative work.

            And you’re absolutely right that these are not artificial minds; the language models remind me of a quote from David Langford in his short story Answering Machine: “It’s so very hard to realize something that talks is not intelligent.” But we are getting to the point where the question of “how will we know” isn’t purely theoretical anymore.

      • Zyansheep@lemmy.ml · 2 points · 1 year ago

        1. How do you know human brains don’t work in roughly the same way chatbots and image generators work?

        2. What is art? And what does it mean for it to become “lost”?

          • Zyansheep@lemmy.ml · 2 points · 1 year ago

            No, he just said AI isn't like human brains because it's a "statistical machine". What I'm asking is how he knows that human brains aren't statistical machines?

            Human brains aren’t that good at direct math calculation either!

            Also he definitely didn’t explain what “lost art” is.

    • ParsnipWitch@feddit.de · 13 points · 1 year ago

      Current AI models do not learn the way human brains do. And the way current models learn how to "make art" is very different from how human artists do it. Repeatedly trying to recreate the work of other artists is something beginners do, and posting those works online was always shunned in artist communities. You also don't learn to draw a hand by remembering where a thousand different artists put the lines so that it looks like a hand.

    • shiri@foggyminds.com · 5 points · edited · 1 year ago

      @fwygon all questions of how AI learns aside, it’s not legally theft but philosophically the topic is debatable and very hot button.

      I can however comment pretty well on your copyright comments which are halfway there, but have a lot of popular inaccuracies.

      Fair use is a very vague topic, and the law explicitly chooses not to set explicit terms for what is allowed, but rather the intent of what is to be allowed. We've got some firm rules not because of specific statutes but from an abundance of case law.

      * Educational; so long as it is taught as a part of a recognized class and within curriculum.
      * Informational; so long as it is being distributed to inform the public about valid, reasonable public interests. This is far broader than some would like; but it is legal.
      * Narrative or Commentary purposes; so long as you’re not copying a significant amount of the whole content and passing it off as your own. Short clips with narration and lots of commentary interwoven between them is typically protected. Copyright is not intended to be used to silence free speech. This also tends to include satire; as long as it doesn’t tread into defamation territory.

      These are basically all the same category, and they include some misinformation about what it does and does not cover. It's permitted to make copies for purely informational, public-interest (ie. journalistic) purposes. This would include things like showing a clip of a movie or a trailer to make commentary on it.

      Education doesn't get any special treatment here, but research might (ie. making copies that are kept in a restricted environment and used only for research purposes; this is largely the protection that AI models currently fall under, because the training process uses copyrighted data but the resulting model does not contain it).

      * Transformative; so long as the content is being modified in a substantial enough manner that it is an entirely new work that is not easily confused for the original. This too, is far broader than some would like; but it still is legal.

      "Easily confused" is a rule from Trademark Law, not copyright. Copyright doesn't care about consumer confusion, but it does care about substitution. That is, it cares whether the content could be a substitute for the original (ie. copying someone else's specific painting is going to be a violation up until the point where it can only be described as "inspired by" the painting).

      * Reasonable, ‘Non-Profit Seeking or Motivated’ Personal Use; People are generally allowed to share things amongst themselves and their friends and other acquaintances. Reasonable backup copies, loaning of copies, and even reproduction and presentation of things are generally considered fair use.

      This is a very very common myth that gets a lot of people in trouble. Copyright doesn’t care about whether you profit from it, more about potential lost profits.

      Loaning is completely disconnected from copyright because no copies are being made ("digital loaning" is a nonsense attempt at claiming loaning; it is really just "temporary" copying, which is a violation).

      Personal copies are permitted so long as you keep the original copy (or the original copy is explicitly irrecoverably lost or destroyed) as you already acquired it and multiple copies largely are just backups or conversions to different formats. The basic gist is that you are free to make copies so long as you don’t give any of them to anyone else (if you copy a DVD and give either the original or copy to a friend, even as a loan, it’s illegal).

      It's not good to rely on something being "non-profit" as a copyright excuse, as that's more just an area of leniency than a hard line. People far too often think it allows them to get away with copying things; it's really just for things like making backups of your movies or copying your CDs to MP3s.

      … All that said, fun fact: AI works are not covered by copyright law.

      To be copyrighted, a work must be actively created by a human being. You can copyright things made with AI art, but not the AI art itself (ie. a comic book made with AI art is copyrighted, but the AI art in the panels is not, functioning much like a comic book made out of public-domain images). Prompts and setup are not considered enough to allow for copyright (an example case involved a monkey picking up a camera and taking pictures; those pictures were deemed uncopyrightable because, despite the photographer placing the camera… it was the monkey taking the photos).

      • Harrison [He/Him]@ttrpg.network · 2 points · 1 year ago

        This is true in US law, but it should probably be noted that a lot of the "misconceptions" you're outlining in OP's comment are things that are legal in other jurisdictions.

    • joe_vinegar@slrpnk.net · 3 points · 1 year ago

      This is a very nice and thorough comment! Can you provide a reputable source for these points? (No criticism intended: as you seem knowledgeable, I'd trust that you have such reputable sources already selected and at hand, which is why I'm asking.)

      • throwsbooks@lemmy.ca · 4 points · 1 year ago

        Not the poster you’re replying to, but I’m assuming you’re looking for some sort of source that neural networks generate stuff, rather than plagiarize?

        Google Scholar is a good place to start. You'd need a general understanding of how NNs work, but it ends up leading to papers like this one, which I picked out because it has neat pictures as examples: https://arxiv.org/abs/1611.02200

        What this one is doing is taking an input in the form of a face and turning it into a cartoon. They call it an emoji, because it's based on that style, but it's the same principle as how AI art is generated: learn a style, then take a prompt (image or text) and do something with the prompt in the style.
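
        If you want to see what "take a prompt and generate in a style" looks like in practice, here is a minimal sketch using the open-source diffusers library; it assumes a CUDA GPU, and the LoRA file name below is a hypothetical placeholder:

        ```python
        import torch
        from diffusers import StableDiffusionPipeline

        # Load the publicly released Stable Diffusion 1.5 checkpoint.
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # Optionally apply a style LoRA (hypothetical local file):
        # pipe.load_lora_weights("./some-style-lora.safetensors")

        # Generate an image from a text prompt and save it.
        image = pipe("a castle at dusk, epic fantasy painting").images[0]
        image.save("output.png")
        ```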