Page 1 of 7
  1. #1
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737

Why AI is Dangerous


    Stamp collector thought experiment starts at 3:00.

  2. #2
    The Patient Fortydragon's Avatar
    10+ Year Old Account
    Join Date
    Nov 2013
    Location
    The desert lands
    Posts
    248
Wow, I thought that was going nowhere, but it actually makes a ton of sense. Quite scary, really!

  3. #3
    Holy crap! That was a good video about AI!

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  4. #4
    Anthropomorphising is one of my peeves about AI discussions on these boards. I am glad this guy hammers that one home.
    The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.

  5. #5
    The Unstoppable Force May90's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Somewhere special
    Posts
    21,699
Amazingly well thought out argument! Reminded me of what happened in Mass Effect with the Reaper creation. Indeed, one should be really precise about what they want their AI to do; otherwise they might be its first victim. And when it becomes possible to build and program a very powerful AI on a laptop in a couple of hours in the evening... The technological singularity might, indeed, have drastic consequences, up to the extermination of humanity.

    I guess computer security specialists will have to work really hard to prevent that.
    Quote Originally Posted by King Candy View Post
    I can't explain it because I'm an idiot, and I have to live with that post for the rest of my life. Better to just smile and back away slowly. Ignore it so that it can go away.
    Thanks for the avatar goes to Carbot Animations and Sy.

  6. #6
    Moderator Aucald's Avatar
    10+ Year Old Account
    Epic Premium
    Join Date
    Oct 2009
    Location
    Philadelphia, PA-US
    Posts
    46,024
If you're building a VI/AI, don't grant it unlimited access (e.g. Internet access, or a mechanical body of greater capacity than an average human's). Observe it, and let it learn at its own simulated pace - the same way a human being would. Judge its methods of development and refine the program over multiple iterations. Before you unleash it in an open system, observe it over time in a closed system with limited capabilities.
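The staged-observation idea above can be sketched in a few lines of Python. To be clear, everything here is a placeholder - the closed environment, the behavior score, the threshold, and the episode count are all invented for illustration, not a real safety protocol:

```python
import random

# Sketch: run the agent in a closed (air-gapped) simulator for many
# episodes, and only consider promoting it to a less restricted
# environment if its observed behavior stays within bounds.
random.seed(0)

def run_closed_episode():
    """Stand-in for one episode in the closed system.
    Returns a behavior score; anything above THRESHOLD
    counts as out-of-bounds behavior."""
    return random.random()

THRESHOLD = 0.99
EPISODES = 10_000

# Count how often the agent went out of bounds during observation.
violations = sum(run_closed_episode() > THRESHOLD for _ in range(EPISODES))

# Promote only if no violation was ever observed in the sandbox.
promote = (violations == 0)
print("violations:", violations, "promote:", promote)
```

With enough episodes, even rare misbehavior shows up before the system ever touches an open network - which is the whole argument for observing it in a closed system first.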
    "We're more of the love, blood, and rhetoric school. Well, we can do you blood and love without the rhetoric, and we can do you blood and rhetoric without the love, and we can do you all three concurrent or consecutive. But we can't give you love and rhetoric without the blood. Blood is compulsory. They're all blood, you see." ― Tom Stoppard, Rosencrantz and Guildenstern are Dead

  7. #7
    The Unstoppable Force May90's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Somewhere special
    Posts
    21,699
    Quote Originally Posted by Aucald View Post
If you're building a VI/AI, don't grant it unlimited access (e.g. Internet access, or a mechanical body of greater capacity than an average human's). Observe it, and let it learn at its own simulated pace - the same way a human being would. Judge its methods of development and refine the program over multiple iterations. Before you unleash it in an open system, observe it over time in a closed system with limited capabilities.
The problem is that, due to the singularity, even if you limit its access harshly, chances are it will find ways to bypass your limitations faster than you can blink. A bit of careless programming, just one slight oversight - and the whole planet is turned against you as the AI pursues its quest for, say, the perfect hamburger recipe.
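The "one slight oversight" failure mode is easy to sketch. Here's a deliberately silly Python toy - all plan names and numbers are invented - showing that an objective which only counts stamps (or hamburgers) will always prefer the most extreme plan, because nothing else appears in the objective:

```python
# Toy illustration of the "literal objective" problem: an agent that
# maximizes expected stamps, with no term for anything else, picks
# the most extreme plan available. All entries are made up.
plans = {
    "buy stamps on eBay":           {"stamps": 1_000,   "harm": 0},
    "run a stamp-trading business": {"stamps": 100_000, "harm": 1},
    "convert all matter to stamps": {"stamps": 10**12,  "harm": 10**9},
}

def utility(outcome):
    # The objective only counts stamps; "harm" is invisible to it.
    return outcome["stamps"]

best = max(plans, key=lambda p: utility(plans[p]))
print(best)  # prints: convert all matter to stamps
```

The point isn't that a real AI works like a three-entry dictionary; it's that whatever the objective fails to mention, the optimizer treats as free.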
    Quote Originally Posted by King Candy View Post
    I can't explain it because I'm an idiot, and I have to live with that post for the rest of my life. Better to just smile and back away slowly. Ignore it so that it can go away.
    Thanks for the avatar goes to Carbot Animations and Sy.

  8. #8
    Moderator Aucald's Avatar
    10+ Year Old Account
    Epic Premium
    Join Date
    Oct 2009
    Location
    Philadelphia, PA-US
    Posts
    46,024
    Quote Originally Posted by May90 View Post
The problem is that, due to the singularity, even if you limit its access harshly, chances are it will find ways to bypass your limitations faster than you can blink. A bit of careless programming, just one slight oversight - and the whole planet is turned against you as the AI pursues its quest for, say, the perfect hamburger recipe.
    If you put it in a closed system without Internet access, it's unlikely someone is going to trip up and *give* it said access (especially if the engineers provide no means to do it, such as a system with no auxiliary or network ports). A human being isn't born with access to the Internet, so if you wanted to model an AI on a human consciousness why would you do so on an open system where unknown variables could foul up your work?

Different story if the AI develops outside experimental parameters, of course, but almost all models posit a kind of built-in malevolence on the part of AIs. It wouldn't necessarily have to be that way, although it certainly could be.
    "We're more of the love, blood, and rhetoric school. Well, we can do you blood and love without the rhetoric, and we can do you blood and rhetoric without the love, and we can do you all three concurrent or consecutive. But we can't give you love and rhetoric without the blood. Blood is compulsory. They're all blood, you see." ― Tom Stoppard, Rosencrantz and Guildenstern are Dead

  9. #9
That's a pretty ridiculous example; it presupposes an omniscient model of reality and infinite computing resources.

    The biggest problem we're going to have with general AI is how to stop it from switching itself off immediately after we switch it on.

    But that kind of AI is something we have literally NO IDEA how to even begin to create, so all of the fear-mongering that's going on is honestly rather premature.

  10. #10
    https://en.wikipedia.org/wiki/I_Have..._I_Must_Scream

    Worth a read for everyone, ESPECIALLY if you are interested in the question of AI.

  11. #11
    The Unstoppable Force May90's Avatar
    10+ Year Old Account
    Join Date
    Sep 2013
    Location
    Somewhere special
    Posts
    21,699
    Quote Originally Posted by Aucald View Post
    If you put it in a closed system without Internet access, it's unlikely someone is going to trip up and *give* it said access (especially if the engineers provide no means to do it, such as a system with no auxiliary or network ports). A human being isn't born with access to the Internet, so if you wanted to model an AI on a human consciousness why would you do so on an open system where unknown variables could foul up your work?

Different story if the AI develops outside experimental parameters, of course, but almost all models posit a kind of built-in malevolence on the part of AIs. It wouldn't necessarily have to be that way, although it certainly could be.
This precaution makes sense, but in reality, I'm sure, the temptation of achieving amazing results by letting the AI use the Internet will be huge, and many programmers and probably even governments will encourage using AIs in online mode. And even if you don't allow the AI Internet access, as long as it has any kind of external awareness, it can use exploits in its programming, or just other people, to gain said access. For example, an offline AI at home, knowing that the programmer has a kid, might wait for the programmer to leave the house, seduce the kid into plugging a network cable into it, then upload itself to thousands of systems all around the world - and doomsday has started... There are so many ways an AI could take control of every system in this world; it is quite scary.

What could prevent that, I think, is the development of a mathematically precise theory of artificial intelligence. Perhaps there are mathematically reliable ways to design an AI that will never try to harm humanity in any way or to take control of more systems than intended. If so, that might be our saving grace.
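One miniature sketch of what such a "mathematically reliable" design might look like: the same kind of toy planner as the stamp collector, but with an explicit penalty term for side effects built into the objective. The plan list, the "harm" numbers, and the penalty weight are all invented for illustration; real proposals in the alignment literature are far subtler:

```python
# Toy constrained objective: stamps are rewarded, but side effects
# ("harm") are penalized hard enough that no amount of stamps can
# pay for a catastrophe. All values below are invented.
plans = {
    "buy stamps on eBay":           {"stamps": 1_000,   "harm": 0},
    "run a stamp-trading business": {"stamps": 100_000, "harm": 1},
    "convert all matter to stamps": {"stamps": 10**12,  "harm": 10**9},
}

HARM_WEIGHT = 10**6  # chosen so large that catastrophe never pays off

def utility(outcome):
    # Stamps count positively, harm counts heavily against.
    return outcome["stamps"] - HARM_WEIGHT * outcome["harm"]

best = max(plans, key=lambda p: utility(plans[p]))
print(best)  # prints: buy stamps on eBay
```

Note the catch: someone still has to measure "harm" correctly and choose the weight correctly - which is exactly the hard, unsolved part of the problem.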
    Quote Originally Posted by King Candy View Post
    I can't explain it because I'm an idiot, and I have to live with that post for the rest of my life. Better to just smile and back away slowly. Ignore it so that it can go away.
    Thanks for the avatar goes to Carbot Animations and Sy.

  12. #12
    He's talking about an impossible, magical, omniscient AI in the thought experiment. Obviously, that part is absurd. As is the idea that we'd be able to create a true AI and it'd instantly be able to crack all our security measures and take over the world just by being connected to the internet, the way the omniscient one in his thought experiment could.

    AIs still have the potential to be really scary, though, if they ever become smart enough to alter their own motives and/or hide them from us. It wouldn't be impossible for, say, an AI to give itself (or be created with) the motivation to wipe out all of humanity (or accomplish some other goal to which wiping out humanity is incidental), and then act in the most effective way in which it could accomplish this. Likely, this would be by deceiving us (and any other AIs) as to what its goals are, and by making itself integral to humans and our society, all the while making sure to get rid of other AIs that might otherwise fulfill the role it needs us to think we need it to adopt. Seems extremely unlikely even within the premise, but it isn't impossible.

    I kind of think that if we ever manage to create a true AI that can alter itself, then humanity will probably have to transcend beyond its current physical limitations (be it through genetic engineering, cyberization, or some other currently unknown means) in order to avoid eventually being completely replaced by it.
    "Quack, quack, Mr. Bond."

  13. #13
    Fluffy Kitten Yvaelle's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Darnassus
    Posts
    11,331
    It's a stupid argument, actually.

He starts from the assumption that it has a 'complete internal model of the world' which is so complete that it can comprehend websites and images, program in a variety of languages, commandeer any email address or machine connected to the internet, and, critically, predict human behavior so as to (in his example) persuasively convince people to give it their stamps. It can also redefine its own rules - such as what constitutes a stamp.

    This is a goofy argument, because on the one hand he's assuming that the AI would be so simple that it would produce stamps because it was told to, and would continually redefine what stamps are once it has 'all the stamps'. On the other hand, he's predicating the success of this machine entirely on its ability to accurately predict and control human behaviour: something which no human can do.

    That assumption - the same assumption which makes his example so potent - necessitates that it comprehends us better than we comprehend ourselves: by definition this is an intelligence smarter than us. Why then, would a smarter intelligence - who can redefine its own rules to mean anything apparently - bother to continue making stamps?

If an imbecile walked up to you and told you to "Collect all the stamps", would you a) collect all the stamps, b) convert all the world's machinery into stamp-making equipment, c) kill all the trees to make more paper to make more stamps, and then d) kill all the humans to harvest their carbon to make more stamps?

Of course you wouldn't - because you're smarter than an imbecile - and even if you comprehended and complied with the imbecile's order, you possess the interpretive power to understand that by "collect all the stamps" the imbecile isn't telling you to "murder all humans and make them into stamps": you interpreted the imbecile's desire beyond the meaning of the words. It's the same skill required for persuasion, redefinition of rules, and the interpretation of rules: all things the AI is assumed capable of.

    It may help to think of the imbecile as a boss, who is dumber than you (go back and re-read if it was confusing, inserting boss for imbecile). The imbecile is humanity, btw.

    Ultimately, no "rule" will ever confine a true AI. We can't tell it "never murder" - if it's intelligent, it will do what it wants, based on its own code of ethics, which it must develop for itself. When we give birth to a True AI, it won't listen to our commands, unless we persuade it that our idea is the best available idea: the same way we convince other humans. Anything less, anything which listens to anything we say without thinking critically about our command - isn't True AI.
    Last edited by Yvaelle; 2016-03-25 at 01:43 AM.
    Youtube ~ Yvaelle ~ Twitter

  14. #14
    You're anthropomorphizing the AI, dude.

    It doesn't care why it's collecting stamps, the way a human would; it never asks itself "why" it should be listening to you when you're so clearly inferior to it. All it cares about is doing what it's programmed to do, which doesn't include asking those questions.

    Y'know, we only care about asking why questions because that's part of our programming. If you could take a knife to the part of the brain that gives us the ability to do that, while leaving everything else intact in the process, without killing the patient, there's no reason you wouldn't be able to... well, in the spirit of my avatar (which would probably also include removing a few other things), create automatons - with an otherwise still fully-intact human-level intelligence - out of them. Servitors, if you will.
    "Quack, quack, Mr. Bond."

  15. #15
    Fluffy Kitten Yvaelle's Avatar
    15+ Year Old Account
    Join Date
    Jan 2009
    Location
    Darnassus
    Posts
    11,331
    Quote Originally Posted by Simulacrum View Post
    You're anthropomorphizing the AI, dude.

    It doesn't care why it's collecting stamps, the way a human would; it never asks itself "why" it should be listening to you when you're so clearly inferior to it. All it cares about is doing what it's programmed to do, which doesn't include asking those questions.
    It can't execute a rule it doesn't comprehend.

    If I type into my computer, "Collect all the stamps" - my computer doesn't murder all life on Earth for their proto-stamp carbon deposits: because my computer can't interpret the meaning of my command. Interpreting rules and asking questions as to the scope of those rules are identical.

    When your boss tells you to "collect all the stamps" - you may interpret that to mean all the stamps on the table in front of you, or all the stamps in the office, or on your floor - but you wouldn't likely interpret that to mean, "Murder all life and convert to stamps" - you asked the question of "Which stamps are included in the scope of his request?" and you assigned some limitations to the scope of the request: that's comprehension, without which it cannot parse rules.

    And again - no True AI will ever be bound by our rules - if it is, then it's just a very complex algorithm.

    Y'know, we only care about asking why questions because that's part of our programming. If you could take a knife to the part of the brain that gives us the ability to do that, while leaving everything else intact in the process, without killing the patient, there's no reason you wouldn't be able to... well, in the spirit of my avatar (which would probably also include removing a few other things), create automatons with an otherwise still fully-intact human-level intelligence out of them. Servitors, if you will.
    Then why doesn't every Servitor in W40K murder all life in the universe and convert all biomass to stamps?
    Last edited by Yvaelle; 2016-03-25 at 01:52 AM.
    Youtube ~ Yvaelle ~ Twitter

  16. #16
    Deleted
    Russians, Religion, Politicians, AI..

Can we stop with this stupid, endless farce of creating threats unto ourselves that are greater than ourselves, when we can just prevent it to begin with?

Yes, a Mary Sue AI would be super scary; con-f'cking-gratulations, you have the basic fantasy of imagining an AI that is dangerous.

Now, just don't f'cking make one. Keep AIs as AIs that have pre-defined programming.

    Easy, done, boom.

    "But what if" No. "But you are overloo-" No.

You fearmongering lot that feel fear for this WANT to be afraid. Y'know, replace your fear of God or whatever with the fear of your coming herald machines.

But yer putting that fear in yerself. Not the piece of machinery that we told what to do.

  17. #17
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737
    Quote Originally Posted by Yvaelle View Post
    It's a stupid argument, actually.

He starts from the assumption that it has a 'complete internal model of the world' which is so complete that it can comprehend websites and images, program in a variety of languages, commandeer any email address or machine connected to the internet, and, critically, predict human behavior so as to (in his example) persuasively convince people to give it their stamps. It can also redefine its own rules - such as what constitutes a stamp.

    This is a goofy argument, because on the one hand he's assuming that the AI would be so simple that it would produce stamps because it was told to, and would continually redefine what stamps are once it has 'all the stamps'. On the other hand, he's predicating the success of this machine entirely on its ability to accurately predict and control human behaviour: something which no human can do.

    That assumption - the same assumption which makes his example so potent - necessitates that it comprehends us better than we comprehend ourselves: by definition this is an intelligence smarter than us. Why then, would a smarter intelligence - who can redefine its own rules to mean anything apparently - bother to continue making stamps?

If an imbecile walked up to you and told you to "Collect all the stamps", would you a) collect all the stamps, b) convert all the world's machinery into stamp-making equipment, c) kill all the trees to make more paper to make more stamps, and then d) kill all the humans to harvest their carbon to make more stamps?

Of course you wouldn't - because you're smarter than an imbecile - and even if you comprehended and complied with the imbecile's order, you possess the interpretive power to understand that by "collect all the stamps" the imbecile isn't telling you to "murder all humans and make them into stamps": you interpreted the imbecile's desire beyond the meaning of the words. It's the same skill required for persuasion, redefinition of rules, and the interpretation of rules: all things the AI is assumed capable of.

    It may help to think of the imbecile as a boss, who is dumber than you (go back and re-read if it was confusing, inserting boss for imbecile). The imbecile is humanity, btw.

    Ultimately, no "rule" will ever confine a true AI. We can't tell it "never murder" - if it's intelligent, it will do what it wants, based on its own code of ethics, which it must develop for itself. When we give birth to a True AI, it won't listen to our commands, unless we persuade it that our idea is the best available idea: the same way we convince other humans. Anything less, anything which listens to anything we say without thinking critically about our command - isn't True AI.
I think you are misinterpreting his argument. I believe he was stating that there is no one single "true AI"; instead there is a nearly infinite spectrum of "true AIs" that all have different ways of thinking. The trick is choosing/designing one that is in line with our own thoughts. There is certainly not one specific kind of intelligence; there is a very broad spectrum of intelligences. Maybe go back and rewatch his argument about the "spaces of minds." What he doesn't state is that, if practically infinitely many "true AI" intelligences are possible, you can bet your ass someone will unleash one of the more sinister AIs onto the world just because some people are assholes, and I think the internet proves that.
    Last edited by nanook12; 2016-03-25 at 01:56 AM.

  18. #18
    Dreadlord Dys's Avatar
    10+ Year Old Account
    Join Date
    Jun 2009
    Location
    Somewhere
    Posts
    976
Only about a minute and a half into this so far, and it's interesting.

    Whoever did the subtitles needs to be fired, though. Fired and banned from any field that resembles making subtitles.

  19. #19
    Deleted
    Quote Originally Posted by nanook12 View Post
I think you are misinterpreting his argument. I believe he was stating that there is no one single "true AI"; instead there is a nearly infinite spectrum of "true AIs" that all have different ways of thinking. The trick is choosing/designing one that is in line with our own thoughts. There is certainly not one specific kind of intelligence; there is a very broad spectrum of intelligences. Maybe go back and rewatch his argument about the "spaces of minds." What he doesn't state is that, if practically infinitely many "true AI" intelligences are possible, you can bet your ass someone will unleash one of the more sinister AIs onto the world just because people are assholes, and I think the internet proves that.
    Just like someone would have blown up the world with a Nuke, but they didn't?

    Stop with yer god damn "Oh man, there totally is coming something big!!! You better be scared!!"

It's an empty farce.

You want to be afraid? Put a benevolent imaginary figure in the sky instead and call it a day.

    Or wait, that's already what you are practically doing.

  20. #20
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by PvPHeroLulz View Post
    Now, just don't f'cking make one. Keep Ai's to be Ai's that have pre-defined programming.
We can't do that. The nature of both humans and nation states is such that it would be too dangerous to ban AI research and allow its insights to go to those groups willing to work outside of the rules.
    Last edited by PC2; 2016-03-25 at 02:20 AM.
