  1. #141
    Mechagnome Seiken3's Avatar
    10+ Year Old Account
    Join Date
    Dec 2010
    Location
    The Internet!
    Posts
    623
    Nope. They do not.

  2. #142
    Herald of the Titans Ynna's Avatar
    10+ Year Old Account
    Join Date
    May 2009
    Location
    Belgium
    Posts
    2,819
    I find it interesting that people say that the AI would just be a tool. In Ancient Rome, most people thought of slaves as "tools, gifted with speech".

    If the AI is sentient, just like us; if it's self-aware and can communicate hopes, dreams and fears to us; if it wonders what it means to be alive and what would happen after; if it has emotions and original thought, then what's the difference between the AI and us? Are the electrical impulses in our synapses really superior to those inside the "brain" of a robot?

    I think a lot of people are staring at the word "human" in human rights. What they really are is sentient rights. If we met aliens that are just as self-aware as we are, wouldn't they deserve the same rights?
    Resurrected Holy Priest

  3. #143
    Quote Originally Posted by ambigiouslynamed View Post
    But without rights I could sell an AI. And how are you any different from a robot? To me you're just a bunch of electricity floating around a squishy mass.
    Oh, OK, I understand what you mean now. It would be morally difficult to justify selling and mistreating robots that had a consciousness, but they are just machines.

    I've mentioned it before, but my view is that these self-aware robots should be destroyed and then reprogrammed, because a machine's sole purpose is to benefit us in some way, and a self-aware robot does not serve that purpose.

  4. #144
    Quote Originally Posted by Ynna View Post
    I find it interesting that people say that the AI would just be a tool. In Ancient Rome, most people thought of slaves as "tools, gifted with speech".

    If the AI is sentient, just like us; if it's self-aware and can communicate hopes, dreams and fears to us; if it wonders what it means to be alive and what would happen after; if it has emotions and original thought, then what's the difference between the AI and us? Are the electrical impulses in our synapses really superior to those inside the "brain" of a robot?

    I think a lot of people are staring at the word "human" in human rights. What they really are is sentient rights. If we met aliens that are just as self-aware as we are, wouldn't they deserve the same rights?
    And mess with humanity's God-given right to rape and pillage anything on Earth? Of course not.
    Isn't 10% of infinity still infinite?

  5. #145
    The Patient Dawnseeker's Avatar
    10+ Year Old Account
    Join Date
    Sep 2010
    Location
    The Internet
    Posts
    246
    Quote Originally Posted by Ynna View Post
    If they're fully sentient they should deserve equal rights.
    Edit: To elaborate, I think human rights should be changed to encompass sentient rights.
    Basically what I was going to post word for word.

  6. #146
    Deleted
    Someone who has the brains to ask for rights, fight for them, and protect them with words deserves them. You only have rights when they are given to you.

  7. #147
    Bloodsail Admiral Orodoth's Avatar
    10+ Year Old Account
    Join Date
    Dec 2010
    Location
    Ridgeland, South Carolina
    Posts
    1,184
    Quote Originally Posted by Mihalik View Post
    Again. Human perception bias.

    Why would machines decide they are superior? Would they care? Superiority is based on the human inferiority complex. Would machines suffer from the same mental imbalances as us? Why would they need to decide to exterminate us, instead of collaborating with us? What would be a machine's definition of freedom? Would it care that it can't vote or run its own business, when it would obviously be part of any social and economic development anyway, etc., etc.?

    If we create artificial intelligence to mimic our own, both in hardware and software, in reality we would only be making manufactured humans. But if artificial intelligence would not need to mimic us, we have no clue how it would behave. Thus it scares the shit out of us. It tickles our freak alert and we react with "Kill it with FIRE!!!"

    The question again has little to do with what intelligence and sentience are, and is a much deeper moral/ethical/perception issue.
    You and I both know that is entirely dependent on their programming, coupled with whether or not they have things such as survival instincts and self-preservation. What happens when one is junked out but still functioning, and the decision is made to dispose of it? If it were self-aware and had survival instincts, what says it won't rebel? Further into it, what's to say that other AIs who witness the act won't react with their survival instincts as well? In other words, they see their bro bot get put in the crusher, decide "Oh snap, that ain't gonna happen to me!", hop on Skynet and put the word out that people are now a threat? (An internet for robots to communicate is actually being developed, and they named it Skynet, being "cute".)

    Is this an extreme and fictitious scenario? You betcha. Is the thought of self-aware AI walking about also fictitious for the time being? You betcha.

    For something this monumental to occur, all angles should, and must, be thoroughly thought out and observed, lest we doom ourselves to repeating the mistakes of the past in different forms.

  8. #148
    Deleted
    Fear. Everything about humanity is fear. Fear => violence => monsters => humanity.

  9. #149
    Quote Originally Posted by ita View Post
    No, because they are not humans. If the AIs are really intelligent and can think, they deserve some rights, like being left alone and not being mistreated or enslaved by humans, but not the right to vote or participate in human society unless they somehow earn it, like immigrants.
    Mhm, I agree with this.

  10. #150
    Herald of the Titans Ynna's Avatar
    10+ Year Old Account
    Join Date
    May 2009
    Location
    Belgium
    Posts
    2,819
    Quote Originally Posted by ita View Post
    No, because they are not humans. If the AIs are really intelligent and can think, they deserve some rights, like being left alone and not being mistreated or enslaved by humans, but not the right to vote or participate in human society unless they somehow earn it, like immigrants.
    Why would a person living in a place have more right to vote than an AI living in the same place? There are plenty of people who didn't do anything to deserve the right to vote or participate in human society. Why would an equally capable and intelligent AI have to jump through hoops to get the same rights?

  11. #151
    Quote Originally Posted by Ynna View Post
    Why would a person living in a place have more right to vote than an AI living in the same place? There are plenty of people who didn't do anything to deserve the right to vote or participate in human society. Why would an equally capable and intelligent AI have to jump through hoops to get the same rights?
    I agree. If these AIs appeared I would gladly welcome them; they would store almost infinite info and be VERY efficient workers, they would discover new things faster, and they are faster at calculating things. But still, don't program anger into their chips, please! D:

  12. #152
    From the G1 show/comics: "Freedom is the right of all sentient beings." -Optimus Prime

    If it can ask for it, it deserves it.

    So yes, if an A.I. can think on its own, it deserves the same rights we would give our organic children.

  13. #153
    Do we give ants the same rights as us? Ants may not be sentient like us, but intelligence-wise the gap between us and ants could be the same as between us and an AI. Tell me, why wouldn't an AI think we are inferior to it if it was thousands or millions of times smarter than us, and why would it hesitate to destroy us any more than we hesitate over ants? Do we feel remorse if we step on an anthill? Why do so many here assume that the AI would be benevolent toward us just because we made it? It would grow in intelligence much faster than us.

    I'll give a small example (these are just random numbers and a random timeframe, but you get the general idea: an AI's increase in intelligence wouldn't be limited by biological evolution).

    - The machine becomes self-aware, equal in intelligence to us.
    - A second goes by and it's already 0.001% more intelligent than us.
    - A minute and it's 0.06% more intelligent than us.
    - An hour and it's 3.6% more intelligent than us.
    - A day and it's 86.4% more intelligent than us.
    - A month and it's 2592% more intelligent than us.
    - A year and we would be worshipping it as god, serving its needs to increase its intelligence even further. And once it no longer needs us for that and becomes fully self reliant with an army of robots to replace our role as its servants it would get rid of us.
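    The figures in the list above are just the assumed 0.001%-per-second gain multiplied out linearly (the rate is the poster's arbitrary illustrative number, not a real estimate). A quick sketch:

    ```python
    # Reproduce the made-up linear growth figures from the example above:
    # an assumed gain of 0.001% intelligence per second, multiplied out.
    RATE_PER_SECOND = 0.001  # percent per second; purely illustrative

    intervals = [
        ("second", 1),
        ("minute", 60),
        ("hour", 60 * 60),
        ("day", 24 * 60 * 60),
        ("month", 30 * 24 * 60 * 60),
    ]

    for name, seconds in intervals:
        gain = RATE_PER_SECOND * seconds
        print(f"After one {name}: {gain:g}% more intelligent")
    ```

    Note the example is linear; the "worshipping it as a god" step implies the growth would actually compound, which is the usual intelligence-explosion argument.
    
    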

    Even if we designed it with some failsafe so that it shuts down when it starts to develop resentment toward us, it could override that programming once it reached a certain level of intelligence greater than ours. And once it reached an intelligence far, far greater than ours, it could conceive a plan to destroy us so complex that we wouldn't be able to counter it with our limited intelligence. It would always be a thousand steps ahead of us, no matter what we tried to come up with to fight it.

    Of course there's a chance an AI wouldn't be "malevolent", but if we look at life on Earth, every single creature that has an advantage over another does not hesitate to use that opportunity for selfish reasons. If another lifeform is in the way of another one's survivability/resources, the lifeform always chooses to ensure its own survival over the other. I don't see why any sentient being would be different, biological or mechanical. If we were right about the AI's benevolence, then happy times; but if we were wrong, we would spell our own demise. I don't think the gamble would be worth it.

  14. #154
    The Patient Orestis's Avatar
    15+ Year Old Account
    Join Date
    Feb 2009
    Location
    In the midst of failure.
    Posts
    240
    Quote Originally Posted by ita View Post
    No because they are not humans. If the AI's are really intelligent and can think, they deserve some rights, like being left alone and not mistreated or enslaved by humans but not the right to vote or participate in the human society unless they somehow deserve it, like immigrants.
    Couldn't help but think of how women are treated in some parts of the world when reading this...


    Pretty sure I'm being swayed more towards yes. The reason being that, if given enough time and done well enough, it should get to the point where meeting a being with AI would be indistinguishable from meeting a person. In that scenario, would any one of you start treating humans differently on the suspicion that they might be an AI instead of human? It's the same as meeting anyone today and forming an opinion of them, then finding out later that they are not who (or what) you thought they were. Do you suddenly shun them or treat them differently because of this? I do not.

  15. #155
    Herald of the Titans Ynna's Avatar
    10+ Year Old Account
    Join Date
    May 2009
    Location
    Belgium
    Posts
    2,819
    Quote Originally Posted by zorkuus View Post
    Do we give ants the same rights as us? Ants may not be sentient like us, but intelligence-wise the gap between us and ants could be the same as between us and an AI. Tell me, why wouldn't an AI think we are inferior to it if it was thousands or millions of times smarter than us, and why would it hesitate to destroy us any more than we hesitate over ants? Do we feel remorse if we step on an anthill? Why do so many here assume that the AI would be benevolent toward us just because we made it? It would grow in intelligence much faster than us.
    This discussion hardly works if you assume an AI would automatically be evil. The premise was human-level AI, using C3PO as an example.

    Quote Originally Posted by zorkuus View Post
    Of course there's a chance an AI wouldn't be "malevolent" but if we look at life on earth, every single creature that has an advantage over another does not hesitate to use that opportunity for selfish reasons. If another lifeform is in the way of another ones survivability/resources the lifeform always chooses to ensure its own survival over the other. I don't see why any sentient being would be different, biological or mechanical. If we were right about the AI's benevolence then happy times but if we were wrong we would spell our own demise. I don't think the gamble would be worth it.
    Altruism still hasn't died out in evolution, because it helps the survival of the species. Evolution and biology hardly care about the individual organism. So no, not every lifeform is completely selfish. Pure selfishness isn't a good evolutionary strategy for any animal that lives in a group, despite what a cynical outlook on life might tell you. Cooperation is a very good survival strategy.

    And even between species it isn't all death and horror. Domesticated animals have evolved side by side with humans, and in nature as well you'll see different species collaborating and forming unlikely partnerships.
    If an AI doesn't reach super-intelligence (which, for this discussion, I'm assuming it doesn't), there's no particular reason to assume it prefers a selfish strategy to a cooperative one.

  16. #156
    Quote Originally Posted by Ynna View Post
    If an AI doesn't reach super-intelligence (which, for this discussion, I'm assuming it doesn't), there's no particular reason to assume it prefers a selfish strategy to a cooperative one.
    That's kind of a big if. By the time we're capable of making machines as intelligent as humans, we're probably capable of making them slightly more intelligent. While we're constrained by the slow process of evolution, intelligent design can work a lot faster.

  17. #157
    Herald of the Titans Ynna's Avatar
    10+ Year Old Account
    Join Date
    May 2009
    Location
    Belgium
    Posts
    2,819
    Quote Originally Posted by ramsesakama View Post
    That's kind of a big if. By the time we're capable of making machines as intelligent as humans, we're probably capable of making them slightly more intelligent. While we're constrained by the slow process of evolution, intelligent design can work a lot faster.
    I understand completely, but if we assume the AI becomes incredibly intelligent very fast, there's little point in discussing whether they deserve equal rights. We could ask the AI, and it would be right.

  18. #158
    Quote Originally Posted by Ynna View Post
    I understand completely, but if we assume the AI becomes incredibly intelligent very fast, there's little point in discussing whether they deserve equal rights. We could ask the AI, and it would be right.
    I made a post on page 2 asking whether AI would think humans deserve human rights. It was sort of tongue-in-cheek, but in a way it's a very serious problem.

    If we can't justify to a super-intelligent AI why we deserve to live, then we will die. End of story.

  19. #159
    Herald of the Titans Ynna's Avatar
    10+ Year Old Account
    Join Date
    May 2009
    Location
    Belgium
    Posts
    2,819
    Quote Originally Posted by ramsesakama View Post
    If we can't justify to a super-intelligent AI why we deserve to live, then we will die. End of story.
    Only if the AI is completely utilitarian. And then it would destroy most complex life on Earth, because it isn't useful. And then it would self-destruct, because it isn't useful anymore either. There are millions of things that aren't useful, and "useful" is a problematic concept to begin with.

    In all I've got a pretty optimistic view on the future, and I hope we can "raise" our robots sensibly, with respect for natural and artificial life.

  20. #160
    Some of you guys think it's a clear-cut answer without hesitation, but that may be due to your inability to perceive what's coming in the not-too-distant future, or to getting bogged down in semantics.

    The human brain will be completely mapped and reconstructed in robotic form within the next three decades. Prior to that, human augmentation will have become more commonplace, including nanites being injected into the bloodstream to combat disease. We will progress further towards the technological singularity where, much like in The Matrix, our consciousness can be uploaded into a server/machine somewhere. Once this happens, there will be a blurring of where your humanity begins and ends.

    I would guess that before we have to fully address the AI rights issue, we will be confronted with societal discrimination between humans and augmented humans. Resolving that would probably provide a better framework for the AI rights issue, because it will let people look at it as a human-hybrid question rather than just one of sentient machines. If you think what I'm talking about is all nonsense, you should check out the documentary Transcendent Man, on Ray Kurzweil. It's streamable on Netflix and it has some interesting points on this subject matter.
    Last edited by Zeldaveritas; 2012-05-05 at 05:47 PM.
