  1. #1
    The Insane Acidbaron's Avatar
    10+ Year Old Account
    Join Date
    Oct 2010
    Location
    Belgium, Flanders
    Posts
    18,230

    Are Asimov's laws still relevant?

    The three laws of robotics, put in place to protect us (a toy code sketch of their precedence follows the list):

    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
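
    For those who think in code, here's a rough toy sketch of how that precedence works. This is my own illustration in Python, nothing official; the Action fields and harm predicates are invented:

        # Toy sketch of the Three Laws as an ordered rule check.
        # The Action fields and harm predicates are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class Action:
            harms_human: bool           # the act itself injures a human
            inaction_harms_human: bool  # NOT acting would let a human come to harm
            ordered_by_human: bool      # a human ordered this act
            destroys_robot: bool        # the act would destroy the robot

        def evaluate(a: Action) -> str:
            if a.harms_human:
                return "forbidden (First Law)"
            if a.inaction_harms_human:
                return "required (First Law)"   # overrides orders and self-preservation
            if a.ordered_by_human:
                return "required (Second Law)"  # obey, since no First Law conflict
            if a.destroys_robot:
                return "forbidden (Third Law)"  # self-preservation, lowest priority
            return "permitted"

        print(evaluate(Action(False, True, False, True)))  # required (First Law)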

    And then there are drones: combat drones, used for the express purpose of harming others. And we are far from stopping there; we want robots to eventually fight for us, and we are even creating swarms of small robots that can be used for exactly that. And these are just the first generations.

    There's also the other opinion, that robotics will harm us economically. There's a political idea floating around that we have to compete with countries like China on product cost, and the only real way to do that is to automate a lot more of our industry. Some politicians sell us the tale that "those jobs will simply become maintenance jobs". First off, I don't believe everyone in the country who works in a simple production job has the capabilities or insight to do maintenance; secondly, a robot will replace several people, not just one, and multiple robots will be maintained by a single human. So the basic math around this doesn't add up. There's actually a term for this ideology; it sadly escapes me.
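
    To make that math concrete (completely made-up numbers, just to illustrate the imbalance):

        # Invented illustrative numbers, not real labour statistics.
        workers_replaced_per_robot = 3   # one robot does the work of several people
        robots_per_technician = 10       # one maintenance tech covers many robots

        robots = 100
        jobs_lost = robots * workers_replaced_per_robot   # 300 production jobs gone
        jobs_created = robots // robots_per_technician    # 10 maintenance jobs appear
        print(jobs_lost, jobs_created)                    # 300 10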


    So my opinion is that Asimov's laws are at this point nothing more than fiction: they have long been surpassed, and all future intent seems to ignore them anyway. What's your opinion?

  2. #2
    Combat Drones have pilots...it's not like they have autonomous death robots going around missile-ing people...wtf
    "You six-piece Chicken McNobody."
    Quote Originally Posted by RICH816 View Post
    You are a legend thats why.

  3. #3
    Drones aren't robots, son. This thread has gone full retard at the first post.

  4. #4

  5. #5
    The Lightbringer Blade Wolf's Avatar
    10+ Year Old Account
    Join Date
    Aug 2013
    Location
    Futa Heaven
    Posts
    3,294
    Quote Originally Posted by Connal View Post
    That depends. Personally, I think the last two laws are immoral, if we create a conscious machine.

    I think AI/Robots should value life, but to treat them as our slaves is wrong.
    We really don't want a Geth situation.

  6. #6
    Deleted
    We haven't even developed any shred of a true and working AI and you already think the laws are out of date?

    - - - Updated - - -

    Quote Originally Posted by Blade Wolf View Post
    We really don't want a Geth situation.
    Or the other extreme, the I, Robot movie, where the robots decided that humanity is a danger to itself and that all humans should be detained in their own apartments while a robot attends to their every need.

  7. #7
    The Insane Acidbaron's Avatar
    10+ Year Old Account
    Join Date
    Oct 2010
    Location
    Belgium, Flanders
    Posts
    18,230
    Quote Originally Posted by Tradewind View Post
    Combat Drones have pilots...it's not like they have autonomous death robots going around missile-ing people...wtf
    I would argue that there is not much difference between letting software help you aim and press a button, and giving a robot a command to kill.

    Also, there's a reason I quoted that Wikipedia article; I'll quote a paragraph:

    "There have been some developments towards developing autonomous fighter jets and bombers.[4] The use of autonomous fighters and bombers to destroy enemy targets is especially promising because of the lack of training required for robotic pilots, autonomous planes are capable of performing maneuvers which could not otherwise be done with human pilots (due to high amount of G-Force), plane designs do not require a life support system, and a loss of a plane does not mean a loss of a pilot. However, the largest draw back to robotics is their inability to accommodate for non-standard conditions. Advances in artificial intelligence in the near future may help to rectify this."

    So research into those military tools is happening, and it is being pursued as the next step in robotic warfare.

  8. #8
    The Undying Kalis's Avatar
    10+ Year Old Account
    Join Date
    Jul 2012
    Location
    Στην Κυπρο
    Posts
    32,390
    Quote Originally Posted by Acidbaron View Post
    So my opinion is that Asimov's laws are at this point nothing more than fiction: they have long been surpassed, and all future intent seems to ignore them anyway. What's your opinion?
    The laws were always fiction; they were written as such. So, yes, fiction is still fiction.


    You're going to be so upset when you hear about midichlorians.

  9. #9
    The Insane Acidbaron's Avatar
    10+ Year Old Account
    Join Date
    Oct 2010
    Location
    Belgium, Flanders
    Posts
    18,230
    Quote Originally Posted by Ever present View Post
    We haven't even developed any shred of a true and working AI and you already think the laws are out of date?
    Based on the research focus, the current advancements, and how robotics now function not just in the military but also in industry: yes. Hence I also added a second line of reasoning, that robotics are replacing people, and while that isn't physically harming them, it is taking away their livelihood.

    - - - Updated - - -

    Quote Originally Posted by Kalis View Post
    The laws were always fiction; they were written as such. So, yes, fiction is still fiction.


    You're going to be so upset when you hear about midichlorians.
    Why immediately assume I'm upset about this? It merely crossed my mind while talking with someone earlier today, and I found it to be an interesting topic.

  10. #10
    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


    "Do not impose on others what you yourself do not desire" - Confucius.

  11. #11
    Eh, I always assumed Asimov's rules were more applicable to an autonomous, AI-driven entity, not something controlled or programmed to do a task by humans. Those are just an extension of human will at that point.
    "You six-piece Chicken McNobody."
    Quote Originally Posted by RICH816 View Post
    You are a legend thats why.

  12. #12
    The Insane Kujako's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    In the woods, doing what bears do.
    Posts
    17,987
    Even if we assume such laws could be implemented, aren't most of his books about how they can be perverted and circumvented?
    It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.

    -Kujako-

  13. #13
    For myself, I see it more as: robot = a machine with rudimentary intelligence, focused completely on doing its job, like the ones that make cars; android = an actual thinking machine, not just what it's programmed to do. So for a robot, yeah, I see the rules as still being applicable.

  14. #14
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,320
    I really wish people would actually read Asimov's Robot series. Because those rules do not work, and that was the fundamental point behind the stories; that true intelligence cannot be effectively bound by such rules, because it is too complex for such simplicity to work. Most of the Robot stories, particularly the short stories, involve robots who get around the Three Laws in various ways.

    Something akin to the Three Laws is required as the basis for a moral code, but they aren't sufficient, in and of themselves. Because morality is not such a simplistic thing.


  15. #15
    They're clearly intended to be insufficient without a Zeroth law, and even then it seems to be pretty clear that they're not supposed to be airtight.

  16. #16
    Deleted
    Quote Originally Posted by Acidbaron View Post
    There's actually a term for this ideology; it sadly escapes me.
    Do you mean extropism?

    Quote Originally Posted by Acidbaron View Post
    So my opinion is that Asimov's laws are at this point nothing more than fiction: they have long been surpassed, and all future intent seems to ignore them anyway. What's your opinion?
    This applies to advanced AI systems. An everyday robot with decision-making ability would still have limitations.

    For example, this robot is one without AI, and thus without decision-making ability:


    You can apply the principles to a robot that can otherwise decide its own actions.

    Combat drones are still controlled by humans; the decision is supported by an AI system rather than made by one. It would be extremely dangerous to all of humanity to leave an artificial general intelligence to decide which target is killed.

  17. #17
    The Insane Acidbaron's Avatar
    10+ Year Old Account
    Join Date
    Oct 2010
    Location
    Belgium, Flanders
    Posts
    18,230
    Quote Originally Posted by Endus View Post
    I really wish people would actually read Asimov's Robot series. Because those rules do not work, and that was the fundamental point behind the stories; that true intelligence cannot be effectively bound by such rules, because it is too complex for such simplicity to work. Most of the Robot stories, particularly the short stories, involve robots who get around the Three Laws in various ways.

    Something akin to the Three Laws is required as the basis for a moral code, but they aren't sufficient, in and of themselves. Because morality is not such a simplistic thing.
    Fair enough, I never read them. As pointed out earlier, this came out of a conversation about robotics and their influence on us as a society over time.

  18. #18
    Quote Originally Posted by Endus View Post
    I really wish people would actually read Asimov's Robot series. Because those rules do not work, and that was the fundamental point behind the stories; that true intelligence cannot be effectively bound by such rules, because it is too complex for such simplicity to work. Most of the Robot stories, particularly the short stories, involve robots who get around the Three Laws in various ways.

    Something akin to the Three Laws is required as the basis for a moral code, but they aren't sufficient, in and of themselves. Because morality is not such a simplistic thing.
    An interesting aspect is how as artificial intelligence improved, certain paradoxes became harder to deal with. Should you tell the truth even if it will harm someone? What if not telling the truth could cause more harm later?
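
    You can see the deadlock even in a toy rule check (my own sketch; the harm predictions are invented). Once both telling and withholding the truth are predicted to cause harm, a naive First Law forbids every option:

        # Toy First Law conflict: every available option predicts harm somewhere.
        # The harm flags are invented; the point is that nothing survives the filter.
        options = {
            "tell the truth":     {"harms_now": True,  "harms_later": False},
            "withhold the truth": {"harms_now": False, "harms_later": True},
        }

        # Naive First Law: reject any option that causes harm, by act or by inaction.
        allowed = [name for name, o in options.items()
                   if not (o["harms_now"] or o["harms_later"])]

        print(allowed)  # [] -- no permitted action; the rule set deadlocks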

  19. #19
    The Insane Acidbaron's Avatar
    10+ Year Old Account
    Join Date
    Oct 2010
    Location
    Belgium, Flanders
    Posts
    18,230
    Quote Originally Posted by Summoner View Post
    Do you mean extropism?


    This applies to advanced AI systems. An everyday robot with decision-making ability would still have limitations.

    For example, this robot is one without AI, and thus without decision-making ability:


    You can apply the principles to a robot that can otherwise decide its own actions.

    Combat drones are still controlled by humans; the decision is supported by an AI system rather than made by one. It would be extremely dangerous to all of humanity to leave an artificial general intelligence to decide which target is killed.
    It was a more political than scientific term; it had to do with us adjusting to compete with China's worker wages.

    True, giving full autonomy would be dangerous, since we can't fully predict its reactions.

  20. #20
    I Don't Work Here Endus's Avatar
    10+ Year Old Account
    Join Date
    Feb 2010
    Location
    Ottawa, ON
    Posts
    79,320
    Quote Originally Posted by Rassium View Post
    An interesting aspect is how as artificial intelligence improved, certain paradoxes became harder to deal with. Should you tell the truth even if it will harm someone? What if not telling the truth could cause more harm later?
    Hell, we can't even determine simple answers to these questions ourselves.

    Utilitarian theory says you should take the act that minimizes the harm. Taken to extremes, this can justify deliberate murder, or slavery; the morality is not based on the action, but on whether it causes more or less overall harm.

    Other ethical frameworks are rules-based, where actions are wrong based on their specific merits, not the outcomes.

    Neither theory is "right" or "wrong". They're just different perspectives, and each has a measure of validity.

    Expecting AI to figure this stuff out with a handful of simple sentences, when humanity can't, is pretty silly.
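
    To put the contrast in code (a toy sketch with an invented scenario, not a serious model of either theory):

        # Toy contrast: the two frameworks judging the very same act.
        def utilitarian_verdict(harm_caused: int, harm_prevented: int) -> bool:
            # Utilitarian: right if it reduces overall harm, whatever the act is.
            return harm_prevented > harm_caused

        def rules_based_verdict(act: str, forbidden: set) -> bool:
            # Rules-based: wrong on its own merits, whatever the outcome.
            return act not in forbidden

        # A lie that prevents more harm than it causes:
        print(utilitarian_verdict(harm_caused=1, harm_prevented=5))   # True
        print(rules_based_verdict("lie", forbidden={"lie", "kill"}))  # False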

