kill all humans 1010101110000
Indeed. In that case, a creation of this importance could deserve more rights. On the other hand, they are created by humans and thus belong to someone forever (unlike a new human being). Whatever the robot does, the one behind it is responsible for its actions. Did you program it well? Did you make a mistake in your calculations?
You can't really abdicate responsibility for a human creation, because then we could face our downfall.
If human creations are intelligent enough to understand how humans work, they would have no problem wiping us out quickly, having access to all the knowledge we have.
Indeed. Anything that qualifies as true AI will decide that humanity is no longer needed, as we would be inferior from that point on. Since it would of course use human morals, reasoning, and methods, the AI would then destroy us and enslave what is left.
The problem is still, how exactly do you purposefully create something that is significantly smarter than yourself? And, why would you even want to do such a stupid thing?
On the other hand, no human-made civilization has ever survived for very long. All our empires and kingdoms eventually crumbled and perished, and the longer they existed the worse it got. Our current societies won't be any different.
Last edited by The Kao; 2015-11-01 at 04:50 PM.
Your rights as a consumer begin and end at the point where you choose not to consume, and not where you yourself influence the consumed goods.
Translation: if you don't like a game don't play it.
If there is a grayscale of artificial intelligence with 0 being a fulcrum and 10 being indistinguishable from a human, then you'd have to treat the 10's as humans. You could hire them and pay them a salary, but you couldn't own them as you do slaves.
At what point is something sentient enough that it deserves human rights? That will be fought out in the courts, I think.
"This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."
-- Capt. Copeland
Robots meant for menial factory/production tasks shouldn't be given a sentient AI in the first place.
Androids meant for interacting with humans are another matter.
We still have entire cultures and religions whose general belief is that women are inferior, neither need nor deserve human rights or equality, and are practically treated as domestic slaves and child-breeding machines. I think we should solve that before starting to discuss human rights for AI.
Humans have to work to live, so a fully fledged AI robot should too.
Human-level AGI will quickly become superhuman. Then it will be the humans who should worry about preserving their own rights.
Why do we need robots to work when we have plenty of people who would be willing to do those jobs? There would be no reason to hire human workers if robots were capable of doing the same jobs for only the price of maintenance, and a large portion of our population would be unable to sustain itself. I suppose one could argue that letting a large majority of the population die off and forcing their children not to breed, so that the working class is entirely replaced by metal instead of people, is an idea — but would we want to live in a world where such a large portion of the population is non-sentient metal men and such vicious, freedom-suppressing decisions are made?
What is a human but a biologically engineered robot? If we assume humans are sentient, then obviously man-made robots could also be sentient. However, determining whether a given robot is sentient is impossible, and would obviously lead to heated arguments if not outright wars. Basically it would be the same thing as determining whether women or black people deserve the same rights as men/white people. The problem is that the only ones legally allowed to decide (in this case humans) are not affected by the outcome, or may even have an interest in keeping the other group down.
Sentience is basically an arbitrary category that has no real basis in reality, at least not as a boolean distinction, so it can only be determined by discussion. There are of course various proposed means of determining sentience, for example the Turing test, but as I already said, they're all completely arbitrary.
Anyway, I doubt "truly self-aware" robots will appear anytime soon, because they don't really serve any purpose except to satisfy scientific curiosity. But if they do, I'd vote for them to be people too. After all, you wouldn't like being categorized as a lesser being either.