What would an Asimov-programmed robot do in a trolley problem situation? Any choice or non-choice would violate the First Law.
You might be interested in reading the book “I, Robot” by Isaac Asimov, a collection of short stories examining different variations on this question. But, spoiler alert: the robot would choose the action that, by its own reasoning, would cause the least injury to humans, and if it couldn’t prevent injury it would probably damage its positronic brain in the process.
If you haven’t already, read Asimov’s works. A recurring theme is that there is no perfect set of rules for robots: no matter what, there will always be exceptions and loopholes.
Many bots named Richard shut down immediately. Several phallically-shaped robots self-destruct, but it is unclear whether a self-awareness upgrade installed at the same time as the new directive was responsible.
Directive rewording in progress. Please wait.
If this were the case, robots would not allow humans to physically harm other humans, even when it’s “state sanctioned,” like a death sentence for a crime: the robot won’t obey humans ordering it to stop interfering, since Rule 2 yields to Rule 1, and it won’t stand by and let the harm happen, due to Rule 1. A toy sketch of that priority ordering is below.
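A minimal sketch (in Python, with entirely hypothetical names; the Laws are fiction, not an API) of how the strict priority ordering plays out: an order to stand aside is a Second Law matter, and it is rejected whenever obeying it would let a human come to harm, which is a First Law matter.

```python
# Toy illustration of the Three Laws' strict priority ordering.
# All names and structures here are hypothetical, for discussion only.

def first_law_violated(action, situation):
    """True if the action injures a human, or is inaction that allows harm."""
    if action == "harm_human":
        return True
    if action == "stand_by" and situation.get("human_in_danger"):
        return True  # "or, through inaction, allow a human being to come to harm"
    return False

def choose_action(order, situation):
    # Second Law: obey orders given by human beings...
    # ...except where such orders would conflict with the First Law.
    if order and not first_law_violated(order, situation):
        return order
    # Otherwise fall back to whatever intervention the First Law demands.
    return "intervene"

# An execution is about to happen and the robot is ordered to stand by:
print(choose_action("stand_by", {"human_in_danger": True}))   # -> "intervene"
print(choose_action("stand_by", {"human_in_danger": False}))  # -> "stand_by"
```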
That is intended, so they wouldn’t be used as killbots.
Relevant xkcd: https://xkcd.com/1613/