• queermunist she/her@lemmy.ml
    4 hours ago

    Fine-tuning costs money, which means they aren’t going to do it. I fully expect they’ll settle for slop (they already have) and so will everyone else. You might as well get used to it. Everything gets worse forever and nothing ever gets better.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
      3 hours ago

      LoRAs are actually really cheap and fast to make. The article I linked explains how it literally took two bucks to do. I don’t really think anything is getting worse forever. Things are just changing, and that’s one constant in the world.
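
      For a sense of scale, here’s a minimal sketch of what a LoRA setup looks like with Hugging Face’s peft library. The base model and hyperparameters are placeholders I picked for illustration, not details from the article:

      ```python
      # Minimal LoRA sketch with Hugging Face transformers + peft.
      # "gpt2" and the hyperparameters below are placeholders, not from the article.
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base = "gpt2"  # any causal LM works; small ones train on a laptop GPU
      model = AutoModelForCausalLM.from_pretrained(base)
      tokenizer = AutoTokenizer.from_pretrained(base)

      # LoRA only trains tiny low-rank adapter matrices injected into the
      # attention layers; the base weights stay frozen. That's why a run can
      # cost a couple of bucks of GPU time instead of a full fine-tune.
      config = LoraConfig(
          r=8,                        # rank of the adapter matrices
          lora_alpha=16,              # scaling factor for the adapters
          lora_dropout=0.05,
          target_modules=["c_attn"],  # attention projection layer in GPT-2
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(model, config)
      model.print_trainable_parameters()  # typically well under 1% of the base weights
      ```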

      • queermunist she/her@lemmy.ml
        3 hours ago

        And it was still something like 30% detectable as AI. That tells me that every article will still read as samey, even if it’s different enough to fool a tool that was trained on the current trends. Authorial voice is lost, replaced by the machine’s voice.

        It was only when they trained on authors specifically, which cost $81, that it dropped down to 3%. They won’t do that.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
          2 hours ago

          Give it a year and we’ll see. These things are improving at an incredible pace, and costs keep coming down as well. Things that required a data center just a year ago can now be done on a laptop.