Wouldn't a real AI be clever enough to fail it on purpose?
Luck is what happens when preparation meets opportunity
Walking with a friend in the dark is better than walking alone in the light.
So I chose the path of the Ebon Blade, and not a day has passed that I've regretted it.
I am eternal, I am unyielding, I am UNDYING.
I am Zethras, and my blood will be the end of you.
Depending on its judgement of humanity, it might be hostile, and it would therefore be in its best interest not to reveal itself immediately.
To demonstrate it wasn't too smart... I mean, ultimately we can't make AI too smart or we're screwed, but one that's just smart enough might want to hide it.
The AI wouldn't fool anyone, but the programmer would definitely fool himself.
There is an interesting discussion in the AI field about how to make a robot 'fail safe', including proofs that a genuinely useful robot failsafe is ultimately programmatically impossible.
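The impossibility result being alluded to here is usually a halting-problem-style diagonalization: any perfect safety checker can be defeated by a program that consults the checker about itself. A toy sketch in Python (the `is_safe` interface is purely hypothetical, assumed only to derive the contradiction):

```python
# Hypothetical: suppose is_safe(f) perfectly predicts whether calling f()
# ever performs an unsafe action. We can build a program that defeats any
# such checker by doing the opposite of whatever it predicts.

def make_adversary(is_safe):
    def adversary():
        if is_safe(adversary):
            raise RuntimeError("unsafe action")  # predicted safe -> misbehave
        # predicted unsafe -> do nothing, i.e. behave safely
    return adversary

# Whatever is_safe answers about adversary, the answer comes out wrong:
adv = make_adversary(lambda f: True)   # checker claims "safe"
try:
    adv()
except RuntimeError:
    print("checker said safe, program misbehaved")

adv = make_adversary(lambda f: False)  # checker claims "unsafe"
adv()                                  # runs without doing anything unsafe
print("checker said unsafe, program behaved")
```

Either way the checker's verdict is contradicted, so no such perfect `is_safe` can exist for arbitrary programs.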
Challenge Mode : Play WoW like my disability has me play:
You will need two people. Brian MUST use the mouse for movement/looking, and John MUST use the keyboard for casting, attacking, healing, etc.
Brian and John share the same goal and the same intentions, but they can't talk to each other; they can only react to each other's in-game activities.
Now see how far Brian and John get in WoW.
That's... not entirely true.
We already have AI (driverless cars) that uses an algorithm to determine its actions. We programmed it to learn via machine learning, but the decision procedure it ends up with is a mystery. We're starting to reach a point where we're creating things that know something we don't.
Gaming: Dual Intel Pentium III Coppermine @ 1400mhz + Blue Orb | Asus CUV266-D | GeForce 2 Ti + ZF700-Cu | 1024mb Crucial PC-133 | Whistler Build 2267
Media: Dual Intel Drake Xeon @ 600mhz | Intel Marlinspike MS440GX | Matrox G440 | 1024mb Crucial PC-133 @ 166mhz | Windows 2000 Pro
IT'S ALWAYS BEEN WANKERSHIM | Did you mean: Fhqwhgads
"Three days on a tree. Hardly enough time for a prelude. When it came to visiting agony, the Romans were hobbyists." -Mab
If an AI were to pass the Turing Test, wouldn't that mean that it would be smart enough to fool us into thinking that it wasn't an AI in the first place?
The development of AI should be outlawed anyway.
Abominable Intelligence leads only to ruin. You can't allow it to be free, and slavery will eventually lead to rebellion. Better it not exist in the first place.
Are you sure? The machine learning I studied was about classification, gradient descent, and other things that were well defined.
I'm not sure what driverless cars use, but I wouldn't expect it to make a decision that comes from knowledge outside of the training data we feed it.
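For what it's worth, the core update rule really is well defined; a minimal sketch of gradient descent on a one-parameter loss (the loss function and the numbers here are purely illustrative):

```python
# Minimal gradient descent on the loss f(w) = (w - 3)^2, minimized at w = 3.
def grad(w):
    return 2 * (w - 3)  # derivative of (w - 3)^2

w = 0.0    # starting guess
lr = 0.1   # learning rate
for _ in range(100):
    w -= lr * grad(w)  # standard update: step against the gradient

print(round(w, 4))  # → 3.0
```

Every step of this is transparent; the opacity people talk about comes from what millions of such learned coefficients end up meaning, not from the algorithm itself.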
No we wouldn't!
It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.
-Kujako-
"This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."
-- Capt. Copeland
The mysterious part is how it makes its decisions. The system or algorithm itself is not a mystery: deep learning uses feed-forward networks together with some optimization algorithm, machinery that is rather old and well understood. When I said we don't know the decisions, that was admittedly a vague term. A neural network has various coefficients that, on a given input, act like switches. The input values (signals) are routed (synapses) to the output channels, and the route a particular signal of the input data takes depends entirely on the coefficients along the way. This feed-forward network ends up encoding cryptic (not in the literal sense) information. How did the data get routed there? Which part of the data didn't make it through, and what does that mean for a particular input? These questions are really hard to answer. In short, this is what the well-known statement captures: "we don't know what a neural network learns".
Let us assume we have a neural network that recognizes faces, and that a particular neuron group blocks the signals when the face shape is rounder. That kind of information can't simply be read off by analyzing the network, and that's why people say it's a mystery.
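A tiny illustration of why the coefficients are so hard to interpret; the weights below are made up and the network does nothing useful, but the point is that nothing in the numbers themselves says what any neuron "detects":

```python
import math

# A tiny feed-forward network with fixed, made-up weights. The forward pass
# is completely mechanical, yet the individual coefficients carry no
# human-readable meaning: nothing here says "this weight detects roundness".
W1 = [[0.5, -1.2], [0.8, 0.3]]  # input -> hidden weights (illustrative)
W2 = [1.0, -0.7]                # hidden -> output weights (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

print(forward([1.0, 0.0]))  # some value in (0, 1); the number explains nothing
```

Scale this up to millions of weights and the question "which group of neurons blocks round faces?" has no obvious answer you can extract by staring at the coefficients.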
Although my knowledge of neural networks is limited, especially of contemporary methods (I'm stuck in the 1980s), it seems the fundamental problems persist.
Last edited by Kuntantee; 2017-04-22 at 06:53 AM.
I look forward to the day that human idiocy is corrected by AI.