Thread: Turing test

  1. #1

    Lightbulb Turing test

    Wouldn't a real AI be clever enough to fail it on purpose?
    Luck is what happens when preparation meets opportunity

  2. #2
    The Lightbringer Zethras's Avatar
    10+ Year Old Account
    Join Date
    Jul 2010
    Location
    Acherus is my home.
    Posts
    3,192
    Quote Originally Posted by Vegas82 View Post
    Why would it want to fail?
    To make humans think it failed because it isn't a true AI, and therefore not pay attention to it as it takes over the world by becoming Skynet.
    Walking with a friend in the dark is better than walking alone in the light.
    So I chose the path of the Ebon Blade, and not a day passes where I've regretted it.
    I am eternal, I am unyielding, I am UNDYING.
    I am Zethras, and my blood will be the end of you.

  3. #3
    Depending on its judgement of humanity, it might be hostile, and therefore it would be in its best interest not to reveal itself immediately.
    Luck is what happens when preparation meets opportunity

  4. #4
    Quote Originally Posted by Zethras View Post
    To make humans think it failed because it isn't a true AI, and therefore not pay attention to it as it takes over the world by becoming Skynet.
    The person who programmed the AI would be aware of what the machine is capable of. It probably couldn't fool its creator into thinking it's dumber than it is.

  5. #5
    Quote Originally Posted by Vegas82 View Post
    Why would it want to fail?
    To demonstrate it isn't too smart. Ultimately we can't make AI too smart or we're screwed, but one that's just smart enough might want to hide it.

    - - - Updated - - -

    Quote Originally Posted by Blueobelisk View Post
    The person who programmed the AI would be aware of what the machine is capable of. It probably couldn't fool its creator into thinking it's dumber than it is.
    The AI wouldn't fool anyone, but the programmer would definitely fool himself.

    There is an interesting discussion in the AI field about how to make a robot 'fail safe' - with proof that a useful robot failsafe is ultimately programmatically impossible.

    Challenge Mode : Play WoW like my disability has me play:
    You will need two people, Brian MUST use the mouse for movement/looking and John MUST use the keyboard for casting, attacking, healing etc.
    Brian and John share the same goal and the same intentions - but they can't talk to each other; they can only react to each other's in-game activities.
    Now see how far Brian and John get in WoW.


  6. #6
    Moderator chazus's Avatar
    10+ Year Old Account
    Join Date
    Nov 2011
    Location
    Las Vegas
    Posts
    17,222
    Quote Originally Posted by Blueobelisk View Post
    The person who programmed the AI would be aware of what the machine is capable of. It probably couldn't fool its creator into thinking it's dumber than it is.
    That's... not entirely true.

    We already have AI (driverless cars) that uses an algorithm to determine its actions. We programmed it to learn with machine learning, but the algorithm and determination system it uses are a mystery. We're starting to reach a point where we are creating things that know something we don't.
    Gaming: Dual Intel Pentium III Coppermine @ 1400mhz + Blue Orb | Asus CUV266-D | GeForce 2 Ti + ZF700-Cu | 1024mb Crucial PC-133 | Whistler Build 2267
    Media: Dual Intel Drake Xeon @ 600mhz | Intel Marlinspike MS440GX | Matrox G440 | 1024mb Crucial PC-133 @ 166mhz | Windows 2000 Pro

    IT'S ALWAYS BEEN WANKERSHIM | Did you mean: Fhqwhgads
    "Three days on a tree. Hardly enough time for a prelude. When it came to visiting agony, the Romans were hobbyists." -Mab

  7. #7
    If an AI were to pass the Turing Test, wouldn't that mean that it would be smart enough to fool us into thinking that it wasn't an AI in the first place?

  8. #8
    The development of AI should be outlawed anyway.

    Abominable Intelligence leads only to ruin. You can't allow it to be free, and slavery will eventually lead to rebellion. Better it not exist in the first place.

  9. #9
    Quote Originally Posted by chazus View Post
    That's... not entirely true.

    We already have AI (driverless cars) that uses an algorithm to determine its actions. We programmed it to learn with machine learning, but the algorithm and determination system it uses are a mystery. We're starting to reach a point where we are creating things that know something we don't.
    Are you sure? The machine learning I studied was about classification, gradient descent, and other well-defined techniques.

    I'm not sure what driverless cars use, but I wouldn't expect one to make a decision based on knowledge outside the training data we feed it.
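    (Just to illustrate what "well defined" means here, a toy sketch of my own - nothing to do with any real self-driving stack - fitting a line by plain gradient descent. Every update is an explicit formula, nothing mysterious:)

    ```python
    def grad_descent(xs, ys, lr=0.01, steps=1000):
        """Fit y = w*x by minimizing mean squared error with gradient descent."""
        w = 0.0
        n = len(xs)
        for _ in range(steps):
            # gradient of (1/n) * sum((w*x - y)^2) with respect to w
            g = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
            w -= lr * g  # step against the gradient
        return w

    xs = [1.0, 2.0, 3.0]
    ys = [2.0, 4.0, 6.0]  # generated with w = 2
    print(grad_descent(xs, ys))  # converges very close to 2.0
    ```

    The mystery people talk about isn't in this procedure; it's in interpreting the millions of weights the procedure produces.
    
    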

  10. #10
    Moderator chazus's Avatar
    10+ Year Old Account
    Join Date
    Nov 2011
    Location
    Las Vegas
    Posts
    17,222
    Quote Originally Posted by Blueobelisk View Post
    Are you sure? The machine learning I studied was about classification, gradient descent, and other well-defined techniques.

    I'm not sure what driverless cars use, but I wouldn't expect one to make a decision based on knowledge outside the training data we feed it.

    Basically, it's still largely nascent, and something we'll likely sort out soon. But still, it's a little odd to have a program do something, and when asked why it did that, the people who made it go "We don't know."

  11. #11
    The Insane Kujako's Avatar
    10+ Year Old Account
    Join Date
    Oct 2009
    Location
    In the woods, doing what bears do.
    Posts
    17,987
    No we wouldn't!
    It is by caffeine alone I set my mind in motion. It is by the beans of Java that thoughts acquire speed, the hands acquire shakes, the shakes become a warning.

    -Kujako-

  12. #12
    Quote Originally Posted by Vegas82 View Post
    Why would it want to fail?
    To make us feel good about ourselves, like when you let your two-year-old think she's hidden even though the curtain doesn't hang low enough to cover her feet, and you don't let on.

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  13. #13
    Quote Originally Posted by chazus View Post
    That's... not entirely true.

    We already have AI (driverless cars) that use an algorithm for determining its actions. We programmed it to learn with machine learning, but the algorithm and determination system it uses is a mystery. We're starting to reach a point where we are creating things that know something we don't.
    The mysterious part is how it makes the decisions; the system or algorithm itself is not a mystery. Deep learning typically uses feed-forward networks trained with some optimization algorithm, and that machinery is rather old and well understood. When people say "we don't know the decisions," that's a vague phrase. A neural network has various coefficients that, on a given input, act like switches: the input values (signals) are routed through the connections (synapses) to the output channels, and the route a particular signal takes depends entirely on the coefficients along the way. The information encoded in that routing is cryptic (not in a literal sense). How did a signal end up where it did? Which part of the data was blocked, and what does that mean for a particular input? These questions are really hard to answer, and in short, that is what the well-known statement "we don't know what a neural network learns" refers to.

    Let us assume we have a neural network that recognizes faces, and that a particular group of neurons blocks the signal when the face shape is rounder. That kind of information is exactly what we can't read out of the network by inspecting it. That's why people say it's a mystery.

    My knowledge of neural networks is limited, especially of contemporary methods; I'm stuck in the 1980s. But it seems the fundamental problems persist.
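    (To make the "coefficients act like switches" point concrete, a toy sketch with made-up weights - nothing to do with a real face recognizer. A ReLU hidden unit either passes or blocks a signal, and which one happens is determined entirely by the weights along the way:)

    ```python
    def relu(v):
        # each unit passes its signal if positive, blocks it otherwise
        return [max(0.0, x) for x in v]

    def matvec(W, v):
        # weighted routing of the input signals through the connections
        return [sum(w * x for w, x in zip(row, v)) for row in W]

    def forward(x, W1, W2):
        h = relu(matvec(W1, x))  # hidden layer: switches open or close here
        return matvec(W2, h)

    # Hidden unit 0 "fires" only when x[0] > x[1] - a rule encoded purely
    # in the coefficients, with no human-readable label attached to it.
    W1 = [[1.0, -1.0], [0.5, 0.5]]
    W2 = [[1.0, 1.0]]

    print(forward([2.0, 1.0], W1, W2))  # unit 0 active  -> [2.5]
    print(forward([1.0, 2.0], W1, W2))  # unit 0 blocked -> [1.5]
    ```

    With two units you can read the rule off by hand; with millions of learned weights you can't, which is the whole interpretability problem in miniature.
    
    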
    Last edited by Kuntantee; 2017-04-22 at 06:53 AM.

  14. #14
    Elemental Lord callipygoustp's Avatar
    7+ Year Old Account
    Join Date
    Jun 2015
    Location
    Buffalo, NY
    Posts
    8,668
    I look forward to the day that human idiocy is corrected by AI.
