Asimov's "3 laws" weren't just guidelines for how humans should use machines to do their work. In Asimov's books a robot is specifically a being that uses a positronic brain to simulate human-like intelligence, and the "laws" are actually complex equations that set up field potentials to determine the robot's behaviour. But as Endus says, that isn't a bullet-proof way of making robots predictable, particularly for more complex ones who can consider what it actually means to be "human" or can rationalise harming a human to prevent greater harm to other humans.
Modern drones don't really compare; they're more like remote-controlled planes than autonomous artificial intelligences.
I thought drones were remotely piloted death machines, aka pilotless war aircraft.
It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.
-Kujako-
I feel that once we get to that point we will have an effective 3 laws. Whether we call it "The 3 Laws" I'm not sure, but I would not be surprised if we did. It's a very solid concept.
Don't forget Law 0: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
- - - Updated - - -
Asimov's robots were not necessarily sentient.
Read and be less ignorant.
To let a computer make any choices concerning potential harm to a human without a "man-in-the-loop" is unwise.
I think it is foolish to expect robots to be sentient or superior yet unable to make the choices their creator can make. That sounds like slavery, a practice humans should have learned away from, along with a whole host of other horrible practices, a long time ago.
By the time robots or any other creation reach that point, said robot will be able to say the one thing its creator didn't intend and probably isn't OK with, and that is "NO".
At that point that creation will be the creator's greatest achievement or greatest nightmare, depending on what was put into it.
So no, the laws aren't what's relevant; if those creating these things build some instrument of retribution, horror, chaos, and death, well, guess what.
Milli Vanilli, Bigger than Elvis
Actually, if you read the I, Robot stories, you find out that the 3 laws don't actually solve everything.
The Zeroth Law wasn't supposed to be part of a robot's psyche; it was an emergent property of the 3 Laws that came about when they were examined by robots of sufficient sophistication. It actually shows a failure of the 3 Laws, since a robot can use the Zeroth Law to justify breaking the others.
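That precedence can be sketched as a toy priority list (purely illustrative; the action flags and function names here are my own invention, not anything from Asimov or this thread): a higher-priority law with an opinion overrides every law below it, which is exactly how a Zeroth Law permits what the First Law alone would forbid.

```python
def permitted(action, laws):
    """Return the verdict of the highest-priority law that has an opinion."""
    for law in laws:             # laws are ordered: index 0 = highest priority
        verdict = law(action)
        if verdict is not None:  # this law has an opinion; it overrides the rest
            return verdict
    return True                  # no law objects

# Hypothetical action flags, used only for this sketch.
def zeroth_law(action):
    if action.get("protects_humanity"):
        return True              # overrides everything below
    return None                  # no opinion

def first_law(action):
    if action.get("harms_human"):
        return False
    return None

laws = [zeroth_law, first_law]

# Harming one human is forbidden on its own...
print(permitted({"harms_human": True}, laws))                             # False
# ...but the Zeroth Law can be invoked to justify it, as described above.
print(permitted({"harms_human": True, "protects_humanity": True}, laws))  # True
```

The point of the sketch is that nothing in the lower laws "breaks"; the failure is structural, because a sufficiently abstract rule sits above them.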
Simulation of consciousness =/= consciousness. The three laws are pretty brilliant on their face.
- - - Updated - - -
It is a brilliant piece of science fiction... but it is science fiction. The moral dilemmas raised there, or in Short Circuit or Chappie, are possible because it is a fictional setting in which robots can form genuine and autonomous consciousness.
The way I see it, an AI with consciousness equal to a human's should have access to human rights, and one with the consciousness of a dog or cat should receive dog- or cat-level rights. It's likely possible to design an AI that does not have the same desires as humans: to design in the ability to feel happy and experience pleasure without designing in the ability to feel anger, pain, depression, or desire.
There will be AI designed to have no interest in accessing the rights we have today, but what if the AI decides to reprogram itself to want those rights? Or, even more likely, what about people designing AI to want rights? An example would be individuals who want to create an AI that acts and thinks just like a deceased relative, in a sense a mechanical clone of their lost loved one.
For these reasons, it's perhaps inevitable that it would be in the best interest of all self-aware sentient beings for human rights to be accessible to AI with human consciousness.
- - - Updated - - -
Sure, but that does not mean we won't at some point be able to create genuine consciousness in an artificial substrate.
That's why I really liked "The Fall".
The AI must fight against its own protocols to bypass obstacles, but to do that the AI needs to put the pilot in potential danger.

Take on the role of ARID, the artificial intelligence onboard a high-tech combat suit. ARID's program activates after crashing on an unknown planet. The human pilot within the combat suit is unconscious, and it is ARID's duty to protect him at all costs! As she progresses into her twisted and hostile surroundings, driven to find medical aid before it is too late, the realities of what transpired on this planet force ARID to reflect upon her own protocols. ARID's journey to save her pilot ultimately challenges the very rules that are driving her.
What will really happen in the long run is that the two separate evolutionary paths of humans and robots will join together at some point.
They will be us and we will be them.