Nope. They do not.
I find it interesting that people say that the AI would just be a tool. In Ancient Rome, most people thought of slaves as "tools, gifted with speech".
Suppose the AI is sentient, just like us. If it's self-aware and can communicate its hopes, dreams, and fears to us; if it wonders what it means to be alive and what would happen afterward; if it has emotions and original thought, then what's the difference between the AI and us? Are the electrical impulses in our synapses really superior to those inside the "brain" of a robot?
I think a lot of people are fixating on the word "human" in human rights. What they really are is sentient-being rights. If we met aliens that were just as self-aware as we are, wouldn't they deserve the same rights?
Resurrected Holy Priest
Oh ok, I understand what you mean now. It would be morally difficult to justify selling and mistreating robots that had consciousness, but they are just machines.
I've mentioned it before, but my view is that these self-aware robots should be destroyed and then reprogrammed, because a machine's sole purpose is to benefit us in some way, and a self-aware robot does not.
Someone who has the brains to ask for rights, fight for them, and protect them with words deserves them. You get rights when they are given to you.
You and I both know that is entirely dependent on their type of programming, coupled with whether or not they are equipped with things such as survival instincts and self-preservation. What happens when one is worn out but still functioning, and the decision is made to dispose of it? If it were self-aware and had survival instincts, what says it won't rebel? Further, what's to say that other AIs who witness the act won't react with their survival instincts as well? In other words, they see their bro bot get put in the crusher, decide "Oh snap, that ain't gonna happen to me!", hop on Skynet, and put the word out that people are now a threat? (An internet for robots to communicate over is actually being developed; they named it Skynet to be "cute".)
Is this an extreme and fictitious scenario? You betcha. Is the thought of self-aware AI walking about also fictitious for the time being? You betcha.
For something this monumental to occur, all angles should, and must, be thoroughly thought out and observed, lest we doom ourselves to repeating mistakes of the past in new forms.
Fear: everything about humanity is FEAR. Fear => violence => monsters => humanity.
Why would a person living in a place have more right to vote than an AI living in the same place? There are plenty of people who didn't do anything to deserve the right to vote or participate in human society. Why should an equally capable and intelligent AI have to jump through hoops to get the same rights?
From the G1 show/comics: "Freedom is the right of all sentient beings." -Optimus Prime.
If it can ask for it, it deserves it.
So yes, if an A.I. can think on its own, it deserves the same rights we would give our organic children.
Do we give ants the same rights as us? Ants may not be sentient like us, but intelligence-wise, the gap between an AI and us could be the same as the gap between us and ants. Tell me: if an AI were thousands or millions of times smarter than us, why wouldn't it think we are inferior to it, and why would it hesitate to destroy us any more than we hesitate over ants? Do we feel remorse if we step on an anthill? Why do so many here assume the AI would be benevolent toward us just because we made it? It would grow in intelligence much faster than we do.
I'll give a small example (these are just random numbers and timeframes, but you get the general idea: an AI's increase in intelligence wouldn't be limited by biological evolution).
- The machine becomes self aware, equal in intelligence to us.
- A second goes by and it's already 0.001% more intelligent than us.
- A minute and it's 0.06% more intelligent than us.
- An hour and it's 3.6% more intelligent than us.
- A day and it's 86.4% more intelligent than us.
- A month and it's 2592% more intelligent than us.
- A year and we would be worshipping it as god, serving its needs to increase its intelligence even further. And once it no longer needs us for that and becomes fully self reliant with an army of robots to replace our role as its servants it would get rid of us.
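For what it's worth, the made-up numbers in that list are just one constant rate of 0.001% per second applied linearly, with no compounding. A throwaway Python sketch (the rate is the poster's hypothetical figure, not any real projection) reproduces them:

```python
# Linear-growth sketch of the poster's example: the AI gains a constant
# 0.001% of human-level intelligence every second (a made-up rate).
RATE_PER_SECOND = 0.001  # percent gained per second (hypothetical)

def gain_after(seconds):
    """Percent more intelligent than a human after `seconds` have passed."""
    return RATE_PER_SECOND * seconds

for label, seconds in [("second", 1), ("minute", 60), ("hour", 3600),
                       ("day", 86400), ("month (30 days)", 2592000)]:
    print(f"After one {label}: {gain_after(seconds):g}% more intelligent")
```

Of course, if the growth compounded instead of being linear (each second's gain building on the last), the year-end figure would be astronomically larger, which is really the poster's point.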
Even if we designed it with some failsafe so that it shuts down when it starts to develop resentment toward us, it could override that programming once it reached a certain level of intelligence greater than ours. And once it reached an intelligence level far, far greater than ours, it could conceive a plan to destroy us so complex that we wouldn't be able to counter it with our limited intelligence. It would always be a thousand steps ahead of us, no matter what we came up with to fight it.
Of course there's a chance an AI wouldn't be "malevolent", but if we look at life on Earth, every single creature that has an advantage over another does not hesitate to use that opportunity for selfish reasons. If one lifeform is in the way of another's survivability or resources, the latter always chooses to ensure its own survival over the other's. I don't see why any sentient being would be different, biological or mechanical. If we were right about the AI's benevolence, then happy times; but if we were wrong, we would spell our own demise. I don't think the gamble would be worth it.
Couldn't help but think of how women are treated in some parts of the world when reading this...
Pretty sure I'm being swayed more towards yes. The reason being that, given enough time and done well enough, it should get to the point where meeting a being with AI would be indistinguishable from meeting a person. In this scenario, would any one of you start treating humans differently on the suspicion that they might be an AI instead of human? It's the same as meeting anyone today, forming an opinion of them, and then finding out later they are not who (or what) you thought them to be. Do you suddenly shun them or treat them differently because of this? I do not.
This discussion hardly works if you assume an AI would automatically be evil. The premise was human-level AI, using C3PO as an example.
Altruism still hasn't died out in evolution, because it helps the survival of the species. Evolution and biology hardly care about the individual organism. So no, not every lifeform is completely selfish. Pure selfishness isn't a good evolutionary strategy for any animal that lives in a group, despite what a cynical outlook on life might tell you. Cooperation is a very good survival strategy.
And even between species it isn't all death and horror. Domesticated animals have evolved side by side with humans, and in nature as well you'll see different species collaborating in unlikely partnerships.
If an AI doesn't reach super-intelligence (which, for this discussion, I'm assuming it doesn't), there's no particular reason to assume it prefers a selfish strategy to a cooperative one.
Only if the AI is completely utilitarian, and then it would destroy most complex life on earth, because it isn't useful. And then it would self-destruct, because it isn't useful anymore. There are millions of things that aren't useful. And useful is a problematic concept to begin with.
All in all, I've got a pretty optimistic view of the future, and I hope we can "raise" our robots sensibly, with respect for both natural and artificial life.
Some of you guys think it's a clear-cut answer without hesitation, but that may be due to an inability to perceive what's coming in the not-too-distant future, or to getting bogged down in semantics.
The human brain will be completely mapped and reconstructed in robotic form within the next three decades. Prior to that, human augmentation will have become more commonplace, including nanites injected into the bloodstream to combat disease. We will progress further toward the technological singularity where, much like The Matrix, our consciousness can be uploaded into a server or machine somewhere. Once this happens, there will be a blurring of where your humanity begins and ends.
I would guess that before we have to fully address the AI rights issue, we will be confronted with societal discrimination between humans and augmented humans. That ruling would probably provide a better framework for the AI rights issue, because it would let people approach it as a human-hybrid question rather than one purely about sentient machines. If you think what I'm talking about is all nonsense, you should probably check out the documentary Transcendent Man, on Ray Kurzweil. It's streamable on Netflix and it has some interesting points on this subject matter.
Last edited by Zeldaveritas; 2012-05-05 at 05:47 PM.