• Mike@lemmy.ml
    6 months ago

    I think the challenge with generative AI CSAM is the question of where the training data originated. There has to be some questionable data in there.

    • scoobford@lemmy.zip
      6 months ago

      That would mean you need to enforce the law for whoever built the model. If the original creator has 100TB of cheese pizza, then they should be the one who gets arrested.

      Otherwise you’re busting random customers at a pizza shop for possession of the meth the cook smoked before his shift.

    • erwan@lemmy.ml
      6 months ago

      There is also the issue of determining whether a given image is real or AI-generated. If AI-generated images were legal, prosecutors would have to prove that each image is real rather than AI, at the risk of letting actual offenders go free.

      The case for banning AI CSAM is even clearer than the case for banning cartoon CSAM.