The article and release are interesting in their own right. However, as this is c/Futurism, let's discuss what happens in the future. How do you folks think this ideological battleground plays out in 5, 50, or 500 years?

  • Troy@lemmy.caOPM · 1 year ago

    I disagree. There is middle ground. If an engineer gives bad advice, it shouldn't be propagated; you know, bridges fall down and people die. Where possible, the invalid info should be scrubbed and replaced with valid info. The engineering firm also has its reputation, permits to practice, and so on at stake. But an AI does not. There's no one to sue for negligence when someone takes invalid advice from an AI masquerading as a doctor, and so on. The companies making AIs are mostly trying to protect themselves when they put those gates in place.

    You could go stand on your soapbox and shout suicide tips at the crowd as they walk by. You might get locked up for abetting a crime (in most jurisdictions). But what if you're posting suicide advice to a forum, and the advice was generated by an AI? What if a script is posting it? Where does the legal responsibility for harm fall?

    • baconisaveg@lemmy.ca · 1 year ago

      A Large Language Model is just a set of computer algorithms designed to answer a user's question; it's just a tool. None of your arguments are about the tool itself, but rather about how the tool is used. A hammer is designed to pound nails, but it can also be used to murder someone. Are you going to sue the hammer manufacturer because they didn't prevent that?

      If someone uses a hammer to murder someone, do they get away with it because the hammer wasn't designed to kill, so clearly it's not their fault? No, of course not. This article is nothing but rage-bait. They may as well have taken a hammer, started hitting everything they could (except for nails, of course), and then written some bullshit about how Master-Craft produces items that can be used to perform abortions and kill Native Americans.

      And as for my original post, this has to do with how the LLM is trained. There are several ways to 'censor' the output from an LLM, including prompts and ban tokens. This is what services like GPT or Stable Diffusion do: they don't censor the training data, they censor the inputs and outputs shown to the user (a rough sketch of that output-side filtering is below). So should the training data be scrubbed of all traces of anything we find objectionable? There are plenty of murders in Hamlet; do we exclude it because the model might suggest poisoning your partner by pouring poison in their ear while they sleep?
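
      To make the prompt/ban-token point concrete, here is a minimal sketch of that output-side filtering, assuming the Hugging Face transformers library with "gpt2" as a placeholder model and an illustrative banned_phrases list (not anyone's actual moderation setup): the training data is left alone, and the listed token sequences are simply suppressed at generation time.

      ```python
      # Minimal sketch of output-side "censorship": ban token sequences at decode time.
      # Assumes the Hugging Face transformers library; "gpt2" and banned_phrases are
      # placeholders, not a real service's moderation configuration.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "gpt2"
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      # Phrases the operator never wants emitted, converted to token-id sequences.
      banned_phrases = ["example banned phrase"]
      bad_words_ids = [tokenizer(p, add_special_tokens=False).input_ids for p in banned_phrases]

      prompt = "Tell me about hammers."
      inputs = tokenizer(prompt, return_tensors="pt")

      # generate() refuses to emit any of the listed token sequences; the training
      # data is untouched, only the visible output is constrained.
      outputs = model.generate(
          **inputs,
          max_new_tokens=50,
          bad_words_ids=bad_words_ids,
      )
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```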