


How does Asimov's second law deal with contradictory orders from different people?


The second law states that:




A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.




But it says nothing about ownership. What if a robot owned by one person is given an order by someone else?



Presumably, if the same person gives contradictory orders, the most recent one will be executed:




Move that box upstairs. Wait no, just put it on the porch.




But what if a robot is carrying groceries outside, and someone says:




Hey you, stop what you're doing and take those groceries to my apartment.




It shouldn't agree to do that.



So, do people just say "Only obey me" when they buy a robot? Does that mean a robot ignores every order from non-owners, even a small request like helping an old lady cross the road? The robot would have to distinguish between harmless orders from strangers and orders that harm the owner in a non-physical way (stealing their stuff, smashing their car, etc.).



If a household owns a robot, they'll say something like "Only obey me and my family", but then what happens when orders contradict between family members?



How does it work, exactly? Is it just managed by some part of the AI that is less fundamental than the Three Laws?



I've never read Asimov, so my apologies if this is explained in the first chapter of the first book; I didn't see it discussed anywhere.










isaac-asimov laws-of-robotics






asked 8 hours ago by Teleporting Goat (edited 2 hours ago)




• The potential conflicts between the three laws are actually what I, Robot spends most of its time examining. I highly recommend giving it a read. – Harabeck, 8 hours ago

• You might check out the answers to "Can you tell my robot to kill itself?" Some of them explain that an earlier order has a higher potential than a later order, and that an order that does not involve causing distress to the people the robot works for would also have a higher potential than one that does. – DavidW, 8 hours ago

• en.wikipedia.org/wiki/Little_Lost_Robot and, less so, en.wikipedia.org/wiki/Liar!_(short_story) - but yeah, pretty much: read the book :) As Harabeck says, the entire point of the stories is figuring out how conflicts in the laws work themselves out. – NKCampbell, 3 hours ago











4 Answers
























6














As far as I can recall, he doesn't. It's important to remember that Asimov was writing the robot stories for Astounding and his readers liked logical puzzles, surprise consequences and clever gimmicks -- none of which would be present in a simple story of two different people telling a robot to do two different things.



The closest I can recall is "Robot AL-76 Goes Astray", where a robot built for use on the Moon somehow gets lost in the back woods. It invents a disintegration tool, causes some havoc, and is told by someone to get rid of the disintegrator and forget about building more. When the robot is finally found, the owners want to know how it built the disintegrator, but it can't help -- it's forgotten everything.



This wasn't a case of contradictory orders, but it was an order by an unauthorized person.






answered 8 hours ago by Mark Olson

























• Since the order of the rules is so significant, perhaps order of instructions is significant too. – marcellothearcane, 4 hours ago


















4














The answer to this question depends strongly on the level of sophistication of the robot. The robot stories (almost all of them were short stories, aside from the novel-length The Bicentennial Man) covered an internal time span of centuries -- even though the core of the stories involved Susan Calvin from her graduate days until her old age.



Early on, robots weren't "smart" enough to know the difference between sensible orders and nonsensical ones. Later, they reached a level of sophistication where they could debate amongst themselves exactly what constitutes a human -- and conclude that they were more qualified for that moniker than any of these biological entities. Somewhere in between, the Second Law developed some level of qualification relative to the order giver (orders from small children and known incompetents could be ignored). It was out of this latter capability that robots eventually decided that concern for harm to humans (and humanity) overrode all their orders -- humans had to be protected from themselves, to the point where they were all declared incompetent to give orders to a robot.






answered 8 hours ago by Zeiss Ikon
































4














There was a case in the short story 'Runaround' where a robot had two conflicting objectives. In this case it was because he was built with an extra-strong Third Law, so he could not decide between obeying human orders and self-preservation. He essentially tries to satisfy both conditions, running in circles at a safe distance from his dangerous destination and otherwise acting erratically, as if drunk.



I would think a similar thing would happen in the case of two conflicting orders: the robot would do what it could to satisfy both, but could become stuck if that were impossible. In the above story, the deadlock is resolved when the human operators put themselves in danger, thus invoking the First Law. I would also think that in most cases committing a crime would be considered a minor violation of the First Law, so sufficiently smart robots would try to avoid it. But as Zeiss mentioned, it would really depend on the model of robot.
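As a toy illustration of that equilibrium (the numbers and the hazard model below are invented for the sketch, not taken from the story): treat the order as a constant Second Law pull toward the destination and the strengthened Third Law as a push that grows as the danger rises; the robot settles where the two cancel.

    # Toy "Runaround" equilibrium (all numbers and the hazard model invented).
    ORDER_PULL = 1.0           # constant Second Law potential toward the goal
    THIRD_LAW_GAIN = 50.0      # strengthened Third Law: strong self-preservation

    def danger(distance_m: float) -> float:
        # Invented hazard model: danger rises sharply as the robot closes in.
        return 1.0 / max(distance_m, 1e-9)

    def net_drive(distance_m: float) -> float:
        # Positive: keep approaching. Negative: back off. Zero: circle in place.
        return ORDER_PULL - THIRD_LAW_GAIN * danger(distance_m)

    d = 1000.0                 # start far from the hazard
    while net_drive(d) > 0:    # walk in until the two compulsions cancel
        d -= 1.0
    print(f"robot circles at roughly {d:.0f} m from the hazard")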






answered 7 hours ago by DaaaahWhoosh
































4














Consider that all the Robot stories are short stories or novellas published over a very long time span, so they are not always 100% consistent among themselves, though the overall framework is.

As highlighted in the comments, a positronic brain's reaction to orders, situations and the environment is generally described in-universe as the outcome of the "potentials" they induce on the Three Laws. There is no AI at work as we know it today, with the robot deciding whether or not to do something; it is more like a compulsion towards a general line of action, driven by the priorities between the Laws and the robot's understanding of its surroundings. To give you an idea, in one of the stories (if I remember well) there is some discussion of a scenario where a strong enough order, repeated continuously under different wordings and reinforced with information on how bad it would be for the person issuing it if it were not followed, could have led to a breach of the First Law by simple buildup of potential behind Second Law adherence. The idea is rejected as impossible in practice, but it is discussed.

Also take into account that there is an in-universe evolution in how the Laws are enforced by design. In the first stories, which take place earlier in-universe, the Laws are rigid, literal absolutes with no room for interpretation. As the in-universe timeline progresses, they become more flexible and open to interpretation by the robot, in part as an attempt by U.S. Robots to prevent scenarios such as a third party ordering a robot to, for instance, "get that bike and throw it in the river" when the bike's owner is not present to countermand the order.

This flexibility is often presented as a re-interpretation of the First Law: "harming a human" is originally taken to mean exclusively physical damage, and it slowly grows to include the concept of mental damage or distress. Under this implementation of the First Law, breaking it means building up enough negative potential against it, as per the robot's understanding of the human mind and social conventions, which leads to amusing side effects (read the books!).

So the bike will not be thrown in the river, because the robot guesses the owner will not like it (negative potential against the First Law).

So, what does all this rambling mean for your question?

• An early robot will tend to do as ordered, literally, if there is no evident violation of the First Law or of a stronger previous order. A way to protect your robot from abuse would be to issue a second order, in strong terms, not to obey counter-orders. This generates two positive potentials that a single counter-order will find difficult to overrule (see the sketch after this list).

• A later robot will tend to be more mindful of the negative psychological impact of following an order or taking a course of action, which leads to amusing side effects (seriously, read the books!).

• Any robot of whichever period, faced with a situation where contradictory lines of action have the same potential on all the Laws, or where every option inevitably breaks the First Law (later interpreted as building a high enough negative potential against it, not necessarily implying physical damage), will enter a loop looking for an impossible solution and be blocked.
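To make the bullets above concrete, here is a minimal sketch of that bookkeeping, assuming a simple weighted-sum model of the potentials; the weights, numbers and action names are invented for illustration and are not from the books:

    # Toy Three Laws "potential" bookkeeping (all weights and numbers invented).
    LAW_WEIGHT = {1: 100.0, 2: 10.0, 3: 1.0}    # First Law dominates the others

    def total(law_potentials: dict) -> float:
        return sum(LAW_WEIGHT[law] * p for law, p in law_potentials.items())

    def choose(actions: dict):
        # actions maps name -> {law: net potential for taking this action}
        ranked = sorted(actions, key=lambda n: total(actions[n]), reverse=True)
        best, runner_up = ranked[0], ranked[1]
        if abs(total(actions[best]) - total(actions[runner_up])) < 1e-9:
            return None                          # equal potentials: robot blocks
        return best

    # The owner's order plus a standing "ignore counter-orders" order gives the
    # original task two positive Second Law potentials; the stranger's single
    # counter-order also distresses the owner (a small First Law penalty).
    actions = {
        "finish delivering the groceries": {1: 0.0, 2: 0.6 + 0.6},
        "take the groceries to the stranger": {1: -0.2, 2: 0.5},
    }
    print(choose(actions) or "robot blocks")     # -> finish delivering the groceries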





answered 7 hours ago by Seretba


























