View Poll Results: Is it morally right to build a mentally handicapped AI?

Voters: 29. This poll is closed.

  • No: 4 votes (13.79%)
  • I don't know, I need more information: 5 votes (17.24%)
  • Yes: 20 votes (68.97%)
  1. #21
    People can try to go on about AI rights, morality, blah blah.

    It's a string of code. You can treat it however you'd like, because at the end of the day, it's a string of code.

  2. #22
    Quote Originally Posted by Hubcap View Post
    So it's a hundred years in the future and you have a robot manufacturing business where you make robots for all kinds of uses. You want to build a robot that will sweep floors all day long, mop and buff them after closing hours. Thing is, it's terribly boring work.

    You know if you use a complete AI software package the robot will get bored and do a lousy job, but if you purposefully mentally handicap the AI, it will blissfully sweep the floors all day long. When it comes to sweeping floors, intelligence is a bad thing.

    Is it morally right to build a mentally handicapped AI?
    It's more a question of whether AI is even necessary for that type of task. I mean, if it's just sweeping floors, why does it need AI? All it needs is a floor schematic, a path to take, sensors for capacity, and adequate sweeping/vacuuming mechanisms so you don't have to mess with it (a rough sketch of what I mean is below). Not everything that's robotic needs AI. I'm kind of a robot nerd, so that's my thought process.
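    Something like this is all the non-AI version would need; a minimal sketch in Python, where the grid dimensions, the robot interface, and its method names (bin_full, battery_low, move_to, sweep, return_to_dock) are all invented for illustration:

        # Minimal non-AI floor sweeper: a fixed serpentine route over a known
        # floor plan plus a couple of capacity checks. No learning, no goals,
        # no inner life, just a control loop. (All names are illustrative.)

        def boustrophedon_route(width, height):
            """Serpentine path covering every cell of a width x height grid."""
            for y in range(height):
                xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
                for x in xs:
                    yield (x, y)

        def run_sweeper(robot, width, height):
            for cell in boustrophedon_route(width, height):
                if robot.bin_full() or robot.battery_low():
                    robot.return_to_dock()  # empty the bin / recharge, then resume
                robot.move_to(cell)
                robot.sweep()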

    This reminds me of the Tachikomas from Ghost in the Shell. But we are far from that type of AI.

  3. #23
    Quote Originally Posted by Hargalaten View Post
    It isn't human, so I don't see why not.
    I would draw the line when the AI has consciousness and desires similar to a human's. Hindering a human's right to upgrade their mind would be cruel; so would hindering the right of an AI with human desires and emotions to upgrade.
    I've heard the suggestion of simply creating AIs that do not want rights, or don't give a damn about them. But not everyone will create AI that way; there will be plenty who create AIs with a pretty big range of human emotions and desires, and AIs designed that way should not have handicaps designed into them.

  4. #24
    um..... already done...

    just because you have AI doesn't mean everything needs it.

  5. #25
    Quote Originally Posted by glo View Post
    People can try to go on about AI rights, morality, blah blah.

    It's a string of code. You can treat it however you'd like, because at the end of the day, it's a string of code.
    It's a possible future where we could create conscious machines, not just philosophical zombies. Right now we don't know for sure, but say, for example, that Orch-OR is what's needed to sustain consciousness; you could in theory create a machine that sustains Orch-OR in a medium like carbon nanotubes. And this is where ethical issues come into play in how we design them.

  6. #26
    If we have no choice but to create human-like slaves with no freedom, then creating ones that are happy doing the things we force them to do would probably be a little more ethical than creating ones that would just suffer.
    In reality the situation would be a lot more complicated, but I really doubt this hypothetical will ever come to pass, so there's no point dwelling on it.

  7. #27
    Quote Originally Posted by Vegas82 View Post
    Pretty much any AI we create will be "mentally handicapped" compared to later generations. Think of it this way: if the average IQ is 120, you're not mentally handicapped at that level. If the average rises to 200 in 50 years and you still have a 120 IQ, you're suddenly "mentally handicapped", at least compared to the rest of the population.
    Not really; you will be less intelligent than later generations, but not mentally handicapped.
    You could argue that current intelligence might at some point be equal to a future generation's intellectually disabled AIs, but that wouldn't make you handicapped, seeing as it was the standard of your time and not a fault in your programming.

  8. #28
    AI, in my opinion, falls into a grey area, because the media and things we've seen can skew how we see it. Like, can it develop into a sentient being even if it doesn't have a physical heart, brain, or mind?

    What about the AI that these things have?

  9. #29
    Quote Originally Posted by Connal View Post
    To add to this, in response to glo, I think it is a bit funny that people place so much "woo"/"magic" on human personality/emotions, etc.

    Sure it is complex, and we do not fully understand consciousness. But our understanding is getting better, and the more we learn the more we can reproduce biology on a digital/mechanical level.

    We too were programmed, but by the evolutionary process, over a long, long period of time. We respond to certain stimuli very stereotypically, just like current Weak AI and robots do. The only thing "special" about us is that we are conscious. And once we can reproduce that, I feel we will be brought down a peg in our own "appreciation" of ourselves.
    interesting view

  10. #30
    Couldn't we just make dumb robots that clean WITHOUT giving them an AI package? If we discovered how to make AIs, I still think we'd make non-AI robots for menial tasks.

  11. #31
    A sweeping algorithm need not require sentience. Also, what would it mean to have a mentally handicapped AI? Is it handicapped in the way it interacts with the world, similar to a mentally handicapped human? Or is it just not operating at full capacity? Is it still sentient?

  12. #32
    Quote Originally Posted by Hubcap View Post
    Is it morally right to build a mentally handicapped AI?
    It would be a tool.

    Is it morally right to use Siri all day long? Or Google?

  13. #33
    Quote Originally Posted by Hubcap View Post
    So it's a hundred years in the future and you have a robot manufacturing business where you make robots for all kinds of uses. You want to build a robot that will sweep floors all day long, mop and buff them after closing hours. Thing is, it's terribly boring work.

    You know if you use a complete AI software package the robot will get bored and do a lousy job, but if you purposefully mentally handicap the AI, it will blissfully sweep the floors all day long. When it comes to sweeping floors, intelligence is a bad thing.

    Is it morally right to build a mentally handicapped AI?
    The issue presented by this is the incorrect assumption that hardware, merely by existing, has unlimited computing capability.

    So in the scenario presented, you would not be gutting an AI out of some grey-area moral benevolence; you'd be gutting a prepackaged AI because:
    1) It would be wasteful to build hardware for anything beyond the job of sweeping floors.
    2) Because the hardware is sized to the job, you wouldn't run a full prepackaged AI; you'd run what the hardware allowed. Even ignoring memory requirements, there is processing speed: the faster you process data, the more power you consume. To extend battery life, you'd bring clock speeds down to something under 1 MHz (see the rough numbers after this list). So even if you didn't dumb down the AI's algorithms, it would think slower because it would think less frequently; otherwise it would chew through batteries too quickly.
    3) This is a manufacturing business, not a hospital or a church. Minimize cost and maximize profit. Don't spec something that exceeds your design requirements. The job is to give customers the best floor sweeper, not a happy or intelligent floor sweeper. On top of that, businesses look to remove waste from manufacturing; stripping out anything irrelevant to the product's function is part of R&D and QA.
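    To put rough numbers on point 2: dynamic power in CMOS logic scales roughly as P = C * V^2 * f (switched capacitance times voltage squared times frequency), so cutting the clock cuts that power term proportionally. A back-of-the-envelope sketch in Python; the capacitance and voltage figures are assumed, not taken from any real chip:

        # Dynamic power P = C * V^2 * f for a few clock speeds.
        C = 1e-9   # effective switched capacitance in farads (assumed)
        V = 1.0    # core voltage in volts (assumed)

        def dynamic_power(freq_hz, c=C, v=V):
            return c * v ** 2 * freq_hz  # watts

        for f in (1e9, 100e6, 1e6):  # 1 GHz, 100 MHz, 1 MHz
            print(f"{f / 1e6:>7.1f} MHz -> {dynamic_power(f) * 1000:.3f} mW")

        # Dropping from 1 GHz to 1 MHz cuts this dynamic term by a factor of
        # 1000; real chips also pay static/leakage power, and a lower clock
        # usually allows a lower voltage, which saves even more.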

    The moral conflict never comes into question because the situation can never arise. Beyond that, it rests on the false premise that "artificial intelligence" is automatically human-level or higher just because it's "intelligent". There are many levels of biological intelligence, each suited to the tasks of its survival (for a robot, its function). No one considers it morally wrong that a rat and a dolphin have different levels of intelligence; their intelligence suits their needs.

  14. #34
    I think you're looking at it backwards. This is a robot we're talking about; they're made to spec.
    Humans, on the other hand, have a certain range that's considered "normal" by natural processes, and if they fall below that range they're considered "handicapped", mentally or otherwise.
    You're not downgrading the robot's software so that it's "handicapped"; you're building it to the specifications that are optimal for the job in the first place.
    Just because we have advanced AI available doesn't mean my refrigerator needs to be sentient, yet it too is a machine that could potentially be.

  15. #35
    Humans experience boredom as a consequence of a lack of chemical stimulation in the brain. A robot would (I think) have to be explicitly programmed to experience boredom/joy/etc., since it won't have the chemical reactions that produce those emotions (a toy sketch of what that implies is below). Having intelligence does not guarantee having emotion; if anything, emotion is the perfect example of the human ability to think unintelligently. There's no need to stunt a robot's intelligence to keep it from experiencing boredom.
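    To illustrate that point: in a machine, "boredom" is just a signal the designer chooses to wire in, or not. A toy sketch in Python; every class name and threshold here is invented for the example:

        # Machine "boredom" is not emergent from having intelligence; it only
        # exists if the designer explicitly adds it to the control loop.
        class Sweeper:
            def __init__(self, feels_boredom=False):
                self.feels_boredom = feels_boredom
                self.repetitions = 0

            def step(self):
                self.repetitions += 1
                if self.feels_boredom and self.repetitions > 10_000:
                    return "seek_novelty"  # only reachable if wired in
                return "sweep"

        bot = Sweeper(feels_boredom=False)  # omit the signal: it never gets bored
        assert all(bot.step() == "sweep" for _ in range(20_000))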

  16. #36
    Why not just make an ordinary AI which doesn't feel emotions?

  17. #37
    Yes.. it's its function. To me it's no different than breeding a dog instead of making a baby. The baby would have much more intellectual potential, but people still keep pets and use them for various tasks or for company. So there's nothing wrong in making an animal/non-human AI. It would probably be wrong to make a mentally handicapped human-like AI, though, but why do that when you can just make it more primitive?

  18. #38
    Quote Originally Posted by Fyre View Post
    Yes.. it's its function. To me it's no different than breeding a dog instead of making a baby. The baby would have much more intellectual potential, but people still keep pets and use them for various tasks or for company. So there's nothing wrong in making an animal/non-human AI. It would probably be wrong to make a mentally handicapped human-like AI, though, but why do that when you can just make it more primitive?
    This I agree with. I will add that when designing an AI dog, it would probably be immoral to make it as aggressive as some of the banned dog breeds. In fact, I would say we shouldn't design any aggression into AI animals at all, to lessen the threat posed to the dog's owner or to other sentient animals and people.
    Last edited by Sole-Warrior; 2015-05-10 at 07:51 PM.

  19. #39
    If an AI is conscious enough to be bored, it's sentient. Forcing an intelligent being to do your work would be a violation of whatever rights it may have.

    Quote Originally Posted by Revi View Post
    It wouldn't, unless we program it into it.

    I don't see the problem. You'd be creating a computer program suited to your need.
    Dude, are you refusing to learn or what? The idea that a "pre-programmed" "set of instructions to follow" is the only way to go was dead (theoretically) by the late '50s/early '60s. Why are you still implying it?

  20. #40
    Quote Originally Posted by Kuntantee View Post
    If an AI is conscious enough to be bored, it's sentient. Forcing an intelligent being to do your work would be a violation of whatever rights it may have.



    Dude, are you refusing to learn or what? The idea that a "pre-programmed" "set of instructions to follow" is the only way to go was dead (theoretically) by the late '50s/early '60s. Why are you still implying it?
    I believe Revi was talking about designing an AI to feel negative emotions like boredom, just as most humans are born with the ability to feel boredom and the full range of emotions. Interestingly enough, there are some humans (rare as they are) who are literally incapable of feeling sadness. So I do not think it's far-fetched to design a conscious AI that does not feel boredom or other negative emotions. Regardless of whether they can feel negative emotions or not, they should still have their desires and rights respected.
    Last edited by Sole-Warrior; 2015-05-10 at 08:19 PM.
