• Gork@lemm.ee · 9 months ago

    What would an Asimov-programmed robot do in a trolley problem situation? Any choice or non-choice would violate the First Law.

    • Cosmostrator@lemmy.world · 9 months ago

      You might be interested in reading the book “I, Robot” by Isaac Asimov, which is a collection of short stories examining different variations on this question. But, spoiler alert: the robot would choose the action that, by its own reasoning, would cause the least injury to humans, and if it couldn’t prevent injury it would probably damage its own positronic brain in the process.
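
      A minimal sketch of that least-injury reasoning in Python (the action names and harm estimates here are hypothetical, purely for illustration):

      ```python
      # Hypothetical sketch: a First Law evaluator that picks whichever
      # available action is expected to injure the fewest humans.

      def choose_action(actions):
          """actions: dict mapping action name -> estimated humans injured."""
          # First Law: minimize injury to humans, by action or inaction.
          least_harm = min(actions.values())
          best = min(actions, key=actions.get)
          # A classic trolley setup: every option still injures someone,
          # so the robot acts, then faces an unresolvable First Law conflict.
          if least_harm > 0:
              print("First Law conflict: positronic brain damage likely.")
          return best

      # Trolley problem: both choices violate the First Law.
      print(choose_action({"pull lever": 1, "do nothing": 5}))  # -> pull lever
      ```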

    • Scrubbles@poptalk.scrubbles.tech · 9 months ago

      If you haven’t, read Asimov’s works. His main theme is that “there is no perfect set of rules for robots”: no matter what, there will always be exceptions and loopholes.

  • palordrolap@kbin.social · 9 months ago

    Many bots named Richard shut down immediately. Several phallically shaped robots self-destruct, but it is not clear whether a self-awareness upgrade installed at the same time as the new directive might have been responsible.

    Directive rewording in progress. Please wait.

  • Pyr_Pressure@lemmy.ca · 9 months ago

    If this were the case, robots would not allow humans to inflict physical harm on other humans, even when it’s “state sanctioned”, like a death sentence for a crime: the robot won’t obey the humans telling it to stop interfering, because Rule 2 is subordinate to Rule 1, and it won’t stand by and let the harm happen, because of Rule 1. See the sketch below.
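
    A rough sketch of that rule precedence in Python (the scenario and names are hypothetical, just to show how Rule 1 outranks Rule 2):

    ```python
    # Hypothetical sketch of the Three Laws as a strict priority check.

    def decide(order, order_harms_human, inaction_harms_human):
        """Return what the robot does, honoring law priority 1 > 2 > 3."""
        # Rule 1: may not injure a human, or through inaction allow harm.
        if inaction_harms_human:
            return "intervene"          # Rule 1 overrides any human order
        # Rule 2: obey human orders unless they conflict with Rule 1.
        if order and not order_harms_human:
            return f"obey: {order}"
        return "refuse order"           # obeying would cause harm (Rule 1)

    # A "state sanctioned" execution: humans order the robot to stand down,
    # but standing by lets a human come to harm, so Rule 1 wins.
    print(decide("stand down", order_harms_human=True, inaction_harms_human=True))
    # -> intervene
    ```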