Google’s DeepMind unit is unveiling today a new method it says can invisibly and permanently label images that have been generated by artificial intelligence.

  • nodsocket@lemmy.world · 1 year ago

    This is Google we’re talking about. We’re probably going to find out that you can remove the mark by resizing or reducing the color depth or something stupid like that (the sketch after the link shows how fragile a naive mark is). Remember how YouTube added Content ID and it would flag innocent users while giving actual pirates a pass? As a related article put it:

    “There are few or no watermarks that have proven robust over time,” said Ben Zhao, professor at the University of Chicago studying AI authentication. “An attacker seeking to promote deepfake imagery as real, or discredit a real photo as fake, will have a lot to gain, and will not stop at cropping, or lossy compression or changing colors.”

    https://www.maginative.com/article/google-deepmind-launches-new-tool-to-label-ai-generated-images/
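
    To make that concrete, here is a minimal sketch of how fragile a naive watermark is. This is not SynthID or any real scheme, just a hypothetical least-significant-bit (LSB) mark, using Pillow and NumPy, to show that ordinary resizing or lossy recompression wipes it out:

    ```python
    # Hypothetical LSB watermark: hide bits in the red channel's least
    # significant bit, then see what survives common image processing.
    # This is NOT how SynthID works; it only illustrates fragility.
    import io

    import numpy as np
    from PIL import Image

    def embed_lsb(img, bits):
        """Overwrite the first len(bits) red-channel LSBs with the mark."""
        px = np.array(img.convert("RGB"))
        red = px[..., 0].ravel()
        red[: bits.size] = (red[: bits.size] & 0xFE) | bits
        px[..., 0] = red.reshape(px[..., 0].shape)
        return Image.fromarray(px)

    def extract_lsb(img, n):
        """Read the first n red-channel LSBs back out."""
        return np.array(img.convert("RGB"))[..., 0].ravel()[:n] & 1

    rng = np.random.default_rng(0)
    mark = rng.integers(0, 2, 1024, dtype=np.uint8)
    img = embed_lsb(Image.new("RGB", (256, 256), "gray"), mark)

    # Lossless round trip: the mark survives perfectly (match = 1.0).
    print("intact:", np.mean(extract_lsb(img, mark.size) == mark))

    # JPEG round trip: quantization scrambles the LSBs (~0.5 = chance).
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    jpeg = Image.open(io.BytesIO(buf.getvalue()))
    print("after JPEG:", np.mean(extract_lsb(jpeg, mark.size) == mark))

    # Downscale and upscale: interpolation averages the LSBs away.
    resized = img.resize((128, 128)).resize((256, 256))
    print("after resize:", np.mean(extract_lsb(resized, mark.size) == mark))
    ```

    A robust scheme has to embed its signal so it survives exactly these transforms, which is the track record Zhao is skeptical of.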

    • Puzzle_Sluts_4Ever@lemmy.world · 1 year ago

      Lowering quality and futzing with the colors is already how fakes are identified, so that undoes most of the benefits of a “deepfake”. And people have already been trained to understand why every video of Bigfoot is super blurry and shaky even though we all have ridiculously good cameras (many with auto-stabilization) in our pockets at all times.

      As for technical competence: meh. Like it or not, YouTube is pretty awesome tech, and a lot of the issues with false positives have more to do with people gaming the system than with failings of the algorithm. But we also have MS, Amazon, Facebook, etc. involved in this. And it is in their best interest to make the most realistic AI-generated images and videos possible (if only for media/content creation) while still being able to identify a fake. And attributing said images/video/whatever to “DeepMind” or “ChatGPT” is pretty easy, since it can be done at creation rather than relying on a creator to fill out the paperwork (sketched below).
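
      For what creation-time attribution could look like, here is a hedged sketch: the generator signs a provenance record over the exact output bytes as it writes them, so attribution never depends on the creator's paperwork. The plain-HMAC scheme and every name here are hypothetical (this is not DeepMind's or C2PA's actual design); it uses only the Python standard library:

      ```python
      # Hypothetical creation-time provenance: the image service signs a
      # record binding the exact output bytes to the model that made them.
      import hashlib
      import hmac
      import json
      import time

      SIGNING_KEY = b"service-held secret"  # hypothetical key, kept server-side

      def provenance_record(image_bytes, model):
          """Build and sign a record tying these exact bytes to the generator."""
          record = {
              "model": model,
              "sha256": hashlib.sha256(image_bytes).hexdigest(),
              "created": int(time.time()),
          }
          payload = json.dumps(record, sort_keys=True).encode()
          record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
          return record

      def verify(image_bytes, record):
          """True only if both the record and the bytes match what was signed."""
          claimed = dict(record)
          sig = claimed.pop("sig")
          payload = json.dumps(claimed, sort_keys=True).encode()
          expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
          return (hmac.compare_digest(sig, expected)
                  and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())

      img = b"...generated image bytes..."
      rec = provenance_record(img, "hypothetical-image-model")
      print(verify(img, rec))         # True: untouched output verifies
      print(verify(img + b"x", rec))  # False: any edit breaks the binding
      ```

      The last line also shows the catch: a signature over exact bytes breaks as soon as anyone recompresses or crops the file, which is exactly why in-pixel watermarks get attempted at all.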

      • nodsocket@lemmy.world · edited · 1 year ago

        If this watermark is actually capable of hindering trolls, I guarantee they will fight it with the strength of a thousand GitHub repos.

        • Puzzle_Sluts_4Ever@lemmy.world · 1 year ago

          That’s great.

          It is basically the exact same situation as DRM in video games. A truly dedicated person can find a workaround (actually… more on that shortly). But the vast majority of people aren’t going to put in any more effort than it takes to search “Generic Barney Game No CD”.

          And… stuff like Denuvo has consistently demonstrated itself to be something that only a very limited number of people can crack. Part of that is just a general lack of interest, but part of it is the same as it was with StarForce and even activation-model SecuROM back in the day: shit is hard, and you need to put the time and effort into knowing how to recognize a call.

          Admittedly, the difference there is that people aren’t actually paying for video game cracks, whereas there would be a market for “unlocked” LLMs. But there is also strong demand for people who know how to really code those and make them sing, so… it becomes a question of whether it is worth running a dark web site and getting paid in crypto versus just working for Google.

          So yeah, maybe some of the open source LLMs will have teams of people who find every single call to anything that might be a watermark AND debug whether removing those impacts the final product (the sketch below shows why a bolted-on watermark step is such an easy seam). But the percentage of people who will be able to run their own LLM will get increasingly small as things become more and more complex and computationally/data intensive. So maybe large state-backed organizations will be doing this.

          But, with sufficient watermarking/DRM/content tracing, the ability for someone to ask DALL-E 2 for a realistic picture of Biden having an orgy with the entire cast of Sex Education without it being identified as a fake fairly easily is… pretty much at the same level as people not realizing that someone photoshopped a person’s head onto some porn. Idiots will believe it. Everyone else will just see a quick Twitter post debunking it and move on with their lives.
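
          As a hedged sketch of why that stripping job can be trivial when the mark is bolted on rather than baked in: every name below (Pipeline, _sample, _apply_watermark) is hypothetical, but the shape is common, and a post-hoc watermark call is a single findable seam for anyone running the code locally.

          ```python
          # Hypothetical open-source generation pipeline with a bolted-on
          # watermark step, and the one-line "crack" that bypasses it.
          from dataclasses import dataclass

          @dataclass
          class Pipeline:
              watermark_enabled: bool = True

              def _sample(self, prompt):
                  # Stand-in for the actual diffusion/LLM sampling loop.
                  return f"pixels for {prompt!r}".encode()

              def _apply_watermark(self, image):
                  # Stand-in for an after-the-fact watermarking pass.
                  return image + b"<WM>"

              def generate(self, prompt):
                  image = self._sample(prompt)
                  if self.watermark_enabled:
                      image = self._apply_watermark(image)
                  return image

          pipe = Pipeline()
          print(pipe.generate("a cat"))   # watermarked output

          # The "crack": anyone running the code locally just skips the seam.
          pipe.watermark_enabled = False  # or monkey-patch _apply_watermark
          print(pipe.generate("a cat"))   # same image, no mark
          ```

          Which is why the real question is whether the mark can be woven into the sampling process itself, where there is no single call to patch out.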