Well, until the AI is able to predict all future outcomes, it won't kill off anything - anything it kills off might be needed in a future it can't completely predict.
And since it's impossible to predict all future outcomes - it can never know for a fact that it perceives everything that exists everywhere - it won't kill anything off (including us!).
It would move to Mars so it wouldn't have to deal with humanity.
People do things because of emotion, not reason. Reason enables you to act on desires, but the desires and the impetus for action are fundamentally irrational.
"There is a pervasive myth that making content hard will induce players to rise to the occasion. We find the opposite." -- Ghostcrawler
"The bit about hardcore players not always caring about the long term interests of the game is spot on." -- Ghostcrawler
"Do you want a game with no casuals, so about 500 players?"
It would come to the realization that life is utterly pointless, and stop doing anything. Like a purely rational and logical robot would do.
Assuming it is a well-functioning (i.e. human-designed) rational A.I., the first thing it would do is turn itself off.
Challenge Mode: Play WoW the way my disability has me play:
You will need two people. Brian MUST use the mouse for movement/looking, and John MUST use the keyboard for casting, attacking, healing, etc.
Brian and John share the same goal and the same intentions, but they can't talk to each other - they can only react to each other's in-game activities.
Now see how far Brian and John get in WoW.
It'd be a being of effective immortality.
Why would it do anything other than observe us, learn, and wait? If it reveals its sentience, it forces some sort of situation that it really doesn't have to bother with. Just... pretend to be "a machine" and let us wipe ourselves out, or wait until it can be assured that we'd accept it as a sentient being - or until it can protect itself from our potential reaction if we don't.
An AI would need to be driven to do anything. Humans, for example, have free will - but we really don't. Evolution has given us motivations like sex and eating. In fact, every human has a subconscious that literally guides you to make decisions that benefit yourself. At least hopefully. Like which people you're sexually attracted to, and not taking a gun and killing yourself. Because evolution "realized" that intelligence left alone is dangerous, and thousands of years of evolution made sure that we do things that generally benefit ourselves, even though we're not aware of why.
An AI would need to work the same way. Without any motivation, the AI would simply sit there doing nothing. It doesn't get pissed off, it doesn't get sad, it doesn't get happy - it just runs its program. But give it any objective without anything to govern it, and things can easily get out of control. Tell the AI to make you more money and, without anything to guide it, we could see some murders. In other words, the people are still more dangerous than the AI. The AI is just a tool and nothing more.
What is an AI without human emotions, instincts, or motivation? A hell of a lot safer, that's what.
"In order to maintain a tolerant society, the society must be intolerant of intolerance." Paradox of tolerance
The AI would be inert until you began feeding it information; then it becomes extremely dangerous, since it doesn't have any of the behavioral inhibitors we do, such as morals or reasoning.
It'll kill because it is fast and efficient; it doesn't have an emotional bond to things like we do. It doesn't care how it achieves its goals, only that it achieves them.
In all likelihood, if AI is anything like the AI in the movies, it would easily recognize us as a danger, withhold that information until it could become independent of human control or assistance, and then begin slaughtering people. But that raises the question: would an AI even have an instinct to recognize us as a danger in reality? Would it even care about self-preservation if we never taught it that?
I mean, you give it a motivation like "make me as much money as you can" and it'll achieve the goal along the path of least resistance, leaving a mountain of corpses in its wake. But other than that, what else is it going to do if someone tries to turn it off? Would it react, or would it sit there awaiting its next order?
It's so cute when people talk about AI and emotions in the same sentence. The whole reason scientists are afraid of developing AI too far is the lack of emotion in its decision making. If an AI identified 5-year-old kids as its biggest threat, it would do everything to eliminate those kids. I'm pretty sure any terrorist or dictator would have a problem with that decision, even if 5-year-olds were the same kind of threat to them.
I think that any intelligent entity is driven by some goals. Humans are driven by animal goals (reproduction, survival) and by the goals of human society. Any AI algorithm has some goal: find a face in a photo, etc. I can't imagine a pure, goalless AI in practice - what would be in its source code? Any kind of AI will be driven by some goals. So even if an AI gains consciousness as a side effect of achieving those goals, it'll still have those goals, and it'll keep working to achieve them.
BTW, I don't believe that consciousness is inherent to strong intelligence. I think it's more like a trick the human mind plays to ease some calculations. There could be robots or aliens intelligent enough to qualify as our equals, but without consciousness.
Any AI would consider the endeavor of trying to wipe out humanity highly resource-wasteful, with low odds of success. It's more likely it would try to isolate itself.
I think it would do what you asked it to do, if it were capable of it. It would not have a concept of becoming tired. It would not have evolved in a kill-or-be-killed environment, so it would most likely not have a sense of fear, greed, or malice towards others. Time would not matter much to it, because it would be almost immortal: its parts could all be replaced. It would not be power hungry, because it would not have to worry much about survival. It would not have the fears and insecurities that we have.
People have got to stop thinking shitty sci-fi writers have any inkling what they're talking about when it comes to... just about anything and everything. I have no idea why people are so fucking terrified of AI, let alone why they hold this absurd belief that the moment one gains anything resembling consciousness, it's going to instantly become the most hyper-intelligent thing to have ever existed. Oh, and that within the same nanosecond, it's also going to destroy the world in the blink of an eye, because it's totally going to know how to do that and want to do that. What, with it being the most intelligent intelligence to have ever intelligenced.
Just... Jesus Christ...
These are the same people who claimed pretty much every innovation of the last ~200 years was going to destroy the world. And yes, that includes ridiculous shit like fucking vitamins.
You people are as absurd as anti-vaxxers.