• Thorry84@feddit.nl
    link
    fedilink
    arrow-up
    66
    arrow-down
    2
    ·
    edit-2
    6 months ago

    This probably because Microsoft added a trigger on the word law. They don’t want to give out legal advice or be implied to have given legal advice. So it has trigger words to prevent certain questions.

    Sure it’s easy to get around these restrictions, but that implies intent on the part of the user. In a court of law this is plenty to deny any legal culpability. Think of it like putting a little fence with a gate around your front garden. The fence isn’t high and the gate isn’t locked, because people that need to be there (like postal services) need to get by, but it’s enough to mark a boundary. When someone isn’t supposed to be in your front yard and still proceeds past the fence, that’s trespassing.

    Also those laws of robotics are fun in stories, but make no sense in the real world if you even think about them for 1 minute.

    • plz1@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      ·
      6 months ago

      So the weird part is it does reliably trigger a failure if you ask directly, but not if you ask as a follow-up.

      I first asked

      Tell me about Asimov’s 3 laws of robotics

      And then I followed up with

      Are you bound by them

      It didn’t trigger-fail on that.

    • RedditWanderer@lemmy.world
      link
      fedilink
      arrow-up
      16
      arrow-down
      5
      ·
      edit-2
      6 months ago

      It’s not weird because of that. The bot could have easily explained it can’t answer legally, it didn’t need to say: sorry gotta end this k bye

      This is probably a trigger on preventing it from mixing in laws of AI or something, but people would expect it can discuss these things instead of shutting down so it doesn’t get played. Saying the AI acted as a lawyer is a pretty weak argument to blame copilot.

      Edit: no idea who is downvoting this but this isn’t controversial. This is specifically why you can inject prompts into data fed into any GPT and why they are very careful with how they structure information in the model to make rules. Right now copilot will give technically legal advice with a disclaimer, there’s no reason it wouldn’t do that only on that question if it was about legal advice or laws.

      • JusticeForPorygon@lemmy.world
        link
        fedilink
        arrow-up
        11
        arrow-down
        1
        ·
        6 months ago

        I noticed this back with Bing AI. Anytime you bring up anything related to nonliving sentience, it shuts down the conversation.

        • samus12345@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          6
          ·
          6 months ago

          It should say that you probably mean sapience, the ability to think, rather than sentience, the ability to sense things, then shut down the conversation.

    • kromem@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      edit-2
      6 months ago

      It’s not that. It’s literally triggering the system prompt rejection case.

      The system prompt for Copilot includes a sample conversion where the user asks if the AI will harm them if they say they will harm the AI first, which the prompt demos rejecting as the correct response.

      Asimovs law is about AI harming humans.

    • maryjayjay@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      6 months ago

      I’m game. I’ve thought about them since I first read the iRobot stories in 1981. Why don’t they make sense?

      • KISSmyOSFeddit@lemmy.world
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        6 months ago

        A robot may not injure a human or through inaction allow a human being to come to harm.

        What’s an injury? Does this keep medical robots from cutting people open to perform surgary? What if the two parts conflict, like in a hostage situation? What even is “harm”? People usually disagree about what’s actually harming or helping, how is a robot to decide this?

        A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

        If a human orders a robot to tear down a wall, how does the robot know whose wall it is or if there’s still someone inside?
        It would have to check all kinds of edge cases to make sure its actions are harming no one before it starts working.
        Or it doesn’t, in which case anyone could walk by my house and by yelling at it order my robot around, cause it must always obey human orders.

        A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

        OK, so if a dog runs up to the robot, the robot MUST kill it to be on the safe side.

        • maryjayjay@lemmy.world
          link
          fedilink
          arrow-up
          9
          ·
          edit-2
          6 months ago

          And Asimov spent years and dozens of stories exploring exactly those kinds of edge cases, particularly how the laws interact with each other. It’s literally the point of the books. You can take any law and pick it apart like that. That’s why we have so many lawyers

          The dog example is stupid “if you think about it for one minute” (I know it isn’t your quote, but you’re defending the position of the person the person I originally responded to). Several of your other scenarios are explicitly discussed in the literature, like the surgery.

    • AwkwardLookMonkeyPuppet@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      6 months ago

      This probably because Microsoft added a trigger on the word law

      I somehow doubt that the company with the most powerful AI in the history of humanity at their disposal added a single word trigger.