  1. #21
    Quote Originally Posted by Endus View Post
    Hell, we can't even determine simple answers to these questions ourselves.

    Utilitarian theory says you should take the act that minimizes the harm. Taken to extremes, this can justify deliberate murder, or slavery; the morality is not based on the action, but on whether it causes more or less overall harm.

    Other ethical frameworks are rules-based, where actions are wrong based on their specific merits, not the outcomes.

    Neither theory is "right" or "wrong". They're just different perspectives, and each has a measure of validity.

    Expecting AI to figure this stuff out with a handful of simple sentences, when humanity can't, is pretty silly.
    Agreed. It's taken to a pretty interesting extreme in the Foundation series. Without spoiling anything, they tie it into questions about free will.

  2. #22
    Asimov's "3 laws" weren't just guidelines on how humans should use machines to do their work. In Asimov's books, a robot is specifically a being that uses a positronic brain to simulate human-like intelligence, and the "laws" are actually complex equations that set up field potentials to determine the robot's behaviour. But as Endus says, it isn't a bullet-proof way of making robots predictable, particularly for more complex ones that can consider what it actually means to be "human" or can rationalise harming a human to prevent greater harm to other humans.

    Modern drones don't really compare; they're more like remote-controlled planes than autonomous artificial intelligences.

  3. #23
    I thought drones were remotely piloted death machines, aka pilotless war aircraft.

  4. #24
    The Insane Kujako's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    In the woods, doing what bears do.
    Posts
    17,987
    Quote Originally Posted by Kryctos View Post
    I thought drones were remotely piloted death machines, aka pilotless war aircraft.
    Currently, but they are working on making them fully autonomous.
    It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.

    -Kujako-

  5. #25
    Bloodsail Admiral Beery Swine's Avatar
    7+ Year Old Account
    Join Date
    Mar 2015
    Location
    Philly, PA
    Posts
    1,033
    Quote Originally Posted by Tradewind View Post
    Combat Drones have pilots...it's not like they have autonomous death robots going around missile-ing people...wtf
    I've seen those Michael Bay movies and they're utter shit.
    Weird Al - I never feed trolls and I don't read spam
    Galen Hallcyon - The internet has shown us that everyone is a fuckin' moron.

  6. #26
    I feel that once we get to that point we will have an effective 3 laws. Whether we call them "The 3 Laws" I'm not sure, but I would not be surprised if we did; it's a very solid concept.


    Don't forget Law 0: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    - - - Updated - - -

    Quote Originally Posted by Connal View Post
    That depends. Personally, I think the last two laws are immoral, if we create a conscious machine.

    I think AI/Robots should value life, but to treat them as our slaves is wrong.
    Asimov's robots were not necessarily sentient.
    READ and be less Ignorant.

  7. #27
    Banned Kellhound's Avatar
    10+ Year Old Account
    Join Date
    Jul 2013
    Location
    Bank of the Columbia
    Posts
    20,935
    To let a computer make any choices concerning potential harm to a human without a "man in the loop" is unwise.

  8. #28
    The Insane Kujako's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    In the woods, doing what bears do.
    Posts
    17,987
    Quote Originally Posted by Kellhound View Post
    To let a computer make any choices concerning potential harm to a human without a "man in the loop" is unwise.
    And yet, that's how the US stock market is currently being run.

  9. #29
    Quote Originally Posted by Kujako View Post
    And yet, that's how the US stock market is currently being run.
    That type of high-volume trading is great because it adds no value to our economy and opens the door to all kinds of volatile behavior that negatively impacts everyone.

  10. #30
    Quote Originally Posted by Connal View Post
    Then that is not really a problem. I am purely talking about sentient/conscious machines. But as Endus pointed out, the laws are kind of subjective, and a non-sentient machine would need a lot more objective directives.
    Someone else brought up Foundation; the later books in the series get into these exact concepts.

    The Star Trek: TNG episode "The Measure of a Man" also does a good job laying it out.

  11. #31
    Void Lord Doctor Amadeus's Avatar
    10+ Year Old Account
    Join Date
    May 2011
    Location
    In Security Watching...
    Posts
    43,750
    I think it is foolish to expect robots to be sentient or superior yet unable to make choices their creator can make. That sounds like slavery, a practice humans should have moved away from, along with a whole host of other horrible practices, a long time ago.


    By the time robots or any other creation reach that point, said robot will be able to say the one thing its creator didn't intend and probably isn't okay with, and that is "NO".

    At that point, that creation will be the creator's greatest achievement or greatest nightmare, depending on what was put into it.


    So no, the laws aren't what's relevant; if those creating these things build some instrument of retribution, horror, chaos, and death, well, guess what.
    Milli Vanilli, Bigger than Elvis

  12. #32
    Stealthed Defender unbound's Avatar
    7+ Year Old Account
    Join Date
    Nov 2014
    Location
    All that moves is easily heard in the void.
    Posts
    6,798
    If you actually read the I, Robot stories, you find out that the 3 laws don't solve everything.

  13. #33
    Quote Originally Posted by IIamaKing View Post
    I feel that once we get to that point we will have an effective 3 laws. Whether we call them "The 3 Laws" I'm not sure, but I would not be surprised if we did; it's a very solid concept.

    Don't forget Law 0: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
    The Zeroth Law wasn't supposed to be part of a robot's psyche; it was an emergent property of the Three Laws that came about when they were examined by robots of sufficient sophistication. It actually shows a failure of the Three Laws, since a robot can use it to justify breaking them.
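    For what it's worth, the strict priority ordering the Laws describe is easy to sketch in code. This is purely illustrative — the function names and the penalty scoring are invented here, and nothing like Asimov's "field potentials" — but it shows how lexicographic priority makes any First Law concern outweigh everything below it:

```python
def law_penalties(action):
    # Tuple ordered by law priority: (First, Second, Third).
    # These dictionary keys are made-up flags for the sketch.
    return (
        int(action.get("harms_human", False)),
        int(action.get("disobeys_order", False)),
        int(action.get("endangers_self", False)),
    )

def choose(actions):
    # Lexicographic tuple comparison enforces the strict priority:
    # any First Law violation outweighs every lower-law concern.
    return min(actions, key=law_penalties)

actions = [
    {"name": "obey", "disobeys_order": False, "harms_human": True},
    {"name": "refuse", "disobeys_order": True, "harms_human": False},
]
print(choose(actions)["name"])  # "refuse": the First Law dominates
```

    A Zeroth Law would amount to prepending a fourth, even higher-priority element to that tuple — which is exactly why a sufficiently sophisticated robot could use it to override all three of the original Laws.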

  14. #34
    Quote Originally Posted by Connal View Post
    I loved that episode. When I first watched it, I was 12, and I teared up.

    Here is a good clip for that episode:


    That episode is one of my favorite pieces of science fiction.

  15. #35
    Quote Originally Posted by Connal View Post
    That depends. Personally, I think the last two laws are immoral, if we create a conscious machine.

    I think AI/Robots should value life, but to treat them as our slaves is wrong.
    Simulation of consciousness =/= consciousness. The three laws are pretty brilliant on their face.

    - - - Updated - - -

    Quote Originally Posted by IIamaKing View Post
    That episode is one of my favorite pieces of science fiction.
    It is a brilliant piece of science fiction... but it is science fiction. The moral dilemmas raised there, or in Short Circuit, or Chappie, are possible because these are fictional settings in which robots can form genuine and autonomous consciousness.

  16. #36
    The way I see it, AI with consciousness equal to a human's should have access to human rights, and one with the consciousness of a dog or cat should receive dog- or cat-level rights. It's likely possible to design an AI that does not have the same desires as humans: one designed with the ability to feel happiness and pleasure, but without the ability to feel anger, pain, depression, or desire.

    There will be AI designed to have no interest in accessing the rights we have today, but what if an AI decides to reprogram itself to want those rights? Or, even more likely, what if people design AI to want rights? An example would be individuals who create an AI that acts and thinks just like their deceased relative; in a sense, a mechanical clone of their lost loved one.

    For these reasons, it's perhaps inevitable that it would be in the best interest of all self-aware sentient beings for human rights to be accessible to AI with human consciousness.

    - - - Updated - - -

    Quote Originally Posted by Stormdash View Post
    Simulation of consciousness =/= consciousness. The three laws are pretty brilliant on their face.

    Sure, but that does not mean we won't at some point be able to create genuine consciousness in an artificial substrate.

  17. #37
    The Unstoppable Force Belize's Avatar
    10+ Year Old Account
    Join Date
    Mar 2010
    Location
    Gen-OT College of Shitposting
    Posts
    21,936
    Quote Originally Posted by Blade Wolf View Post
    We really don't want a Geth situation.
    Except the Geth were attacked first and were defending themselves, prior to the Reapers' involvement.

  18. #38
    Quote Originally Posted by Rassium View Post
    An interesting aspect is how as artificial intelligence improved, certain paradoxes became harder to deal with. Should you tell the truth even if it will harm someone? What if not telling the truth could cause more harm later?
    That's why I really liked "The Fall".

    Take on the role of ARID, the artificial intelligence onboard a high-tech combat suit. ARID's program activates after crashing on an unknown planet. The human pilot within the combat suit is unconscious, and it is ARID's duty to protect him at all costs! As she progresses into her twisted and hostile surroundings, driven to find medical aid before it is too late, the realities of what transpired on this planet force ARID to reflect upon her own protocols. ARID's journey to save her pilot ultimately challenges the very rules that are driving her.
    The AI must fight against its own protocols to bypass obstacles, but to do that the AI needs to put the pilot in potential danger.

  19. #39
    Deleted
    Quote Originally Posted by Mall Security View Post
    I think it is foolish to expect robots to be sentient or superior yet unable to make choices their creator can make.
    You may know a person you would not kill under most, if not all, circumstances. Why would you not kill this person? Did you ever make such a decision yourself?

  20. #40
    Herald of the Titans Chain Chungus's Avatar
    10+ Year Old Account
    Join Date
    Oct 2013
    Posts
    2,523
    What will really happen in the long run is that the two separate evolutionary paths of humans and robots will join together at some point.

    They will be us and we will be them.
