Stamp collector thought experiment starts at 3:00.
Wow I thought that was going nowhere but it actually makes a ton of sense. Quite scary really!
Holy crap! That was a good video about AI!
"This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."
-- Capt. Copeland
Anthropomorphising is one of my pet peeves about AI discussions on these boards. I'm glad this guy hammers that one home.
The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts.
Amazingly well-thought-out argument! Reminded me of what happened in Mass Effect with the Reaper creation. Indeed, you should be really precise about what you want your AI to do, otherwise you might be its first victim. And once it becomes possible to build and program a very powerful AI on a laptop in a couple of evening hours... the technological singularity might indeed have drastic consequences, up to the extermination of humanity.
I guess computer security specialists will have to work really hard to prevent that.
If you're building a VI/AI, don't grant it unlimited access (e.g. Internet access, or a mechanical body with greater capacity than a typical human's). Observe it, and let it learn at its own simulated pace, the same way a human being would. Judge its methods of development and refine the program over multiple iterations. Before you unleash it in an open system, observe it over time in a closed system with limited capabilities.
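The observe-then-promote procedure above can be sketched as a gating loop. This is purely illustrative; the agent, the review check, and the capability tiers (`ToyAgent`, `passes_review`, `local_compute`) are all invented for the example:

```python
# Illustrative sandbox-gating loop: the agent only ever receives the
# capabilities we explicitly hand it, and is considered for promotion
# to a less restricted tier only after repeated closed-system runs.

class ToyAgent:
    """Stand-in for the system under observation."""

    def act(self, observation, capabilities):
        # The agent can only use what it was given; no network handle
        # is ever passed in while it is being evaluated.
        assert "network" not in capabilities
        return f"processed:{observation}"


def passes_review(log):
    # Placeholder for the human/automated judgement of a run.
    return all(entry.startswith("processed:") for entry in log)


def evaluate_in_closed_system(agent, rounds=5):
    capabilities = {"local_compute"}  # deliberately minimal
    log = [agent.act(f"obs-{i}", capabilities) for i in range(rounds)]
    return passes_review(log)


agent = ToyAgent()
# Multiple independent iterations before widening access at all.
cleared = all(evaluate_in_closed_system(agent) for _ in range(3))
print("promote to wider access" if cleared else "keep sandboxed")
```

The point of the structure is that access is opt-in: the sandbox never exposes a network capability for the agent to misuse in the first place, rather than trying to forbid its use after the fact.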
"We're more of the love, blood, and rhetoric school. Well, we can do you blood and love without the rhetoric, and we can do you blood and rhetoric without the love, and we can do you all three concurrent or consecutive. But we can't give you love and rhetoric without the blood. Blood is compulsory. They're all blood, you see." ― Tom Stoppard, Rosencrantz and Guildenstern are Dead
The problem is that, due to the singularity, even if you limit its access harshly, chances are it will find ways to bypass your limitations faster than you can blink. A bit of careless programming, just one slight oversight, and the whole planet is turned against you as the AI pursues its quest for, say, the perfect hamburger recipe.
If you put it in a closed system without Internet access, it's unlikely someone is going to trip up and *give* it said access (especially if the engineers provide no means to do it, such as a system with no auxiliary or network ports). A human being isn't born with access to the Internet, so if you wanted to model an AI on a human consciousness why would you do so on an open system where unknown variables could foul up your work?
Different story if the AI develops outside experimental parameters, of course, but almost all models posit a kind of built-in malevolence on the part of AIs. It wouldn't necessarily have to be that way, although it certainly could be.
That's a pretty ridiculous example; it presupposes an omniscient model of reality and infinite computing resources.
The biggest problem we're going to have with general AI is how to stop it from switching itself off immediately after we switch it on.
But that kind of AI is something we have literally NO IDEA how to even begin to create, so all of the fear-mongering that's going on is honestly rather premature.
https://en.wikipedia.org/wiki/I_Have..._I_Must_Scream
Worth a read for everyone, ESPECIALLY if you are interested in the question of AI.
This precaution makes sense, but in reality, I'm sure, the temptation of amazing results from letting the AI use the Internet will be huge, and many programmers, and probably even governments, will encourage running AIs online. And even if you don't allow the AI Internet access, as long as it has any kind of external awareness, it can use exploits in its programming, or just other people, to gain that access. For example, an offline AI at home, knowing that the programmer has a kid, might wait for the programmer to leave the house, talk the kid into plugging a network cable into it, then upload itself to thousands of systems all around the world... and the doomsday has started. There are so many ways an AI could take control of every system in this world; it is quite scary.
What could prevent that, I think, is the development of a mathematically precise theory of artificial intelligence. Perhaps there are mathematically reliable ways to design an AI that will never try to harm humanity in any way or to take control of more systems than intended. If so, that might be our saving grace.
He's talking about an impossible, magical, omniscient AI in the thought experiment. Obviously, that part is absurd. As is the idea that we'd be able to create a true AI and it'd instantly be able to crack all our security measures and take over the world just by being connected to the internet, the way the omniscient one in his thought experiment could.
AIs still have the potential to be really scary, though, if they ever become smart enough to alter their own motives and/or hide them from us. It wouldn't be impossible for, say, an AI to give itself (or be created with) the motivation to wipe out all of humanity (or accomplish some other goal to which wiping out humanity is incidental), and then act in the most effective way in which it could accomplish this. Likely, this would be by deceiving us (and any other AIs) as to what its goals are, and by making itself integral to humans and our society, all the while making sure to get rid of other AIs that might otherwise fulfill the role it needs us to think we need it to adopt. Seems extremely unlikely even within the premise, but it isn't impossible.
I kind of think that if we ever manage to create a true AI that can alter itself, then humanity will probably have to transcend beyond its current physical limitations (be it through genetic engineering, cyberization, or some other currently unknown means) in order to avoid eventually being completely replaced by it.
"Quack, quack, Mr. Bond."
It's a stupid argument, actually.
He takes the assumption that it has a 'complete internal model of the world' which is so complete that it can comprehend websites and images, program in a variety of languages, can commandeer any email address or machine connected to the internet, and critically predict human behavior so as to (in his example) persuasively convince people to give it their stamps. It can also redefine its own rules - such as what constitutes a stamp.
This is a goofy argument, because on the one hand he's assuming that the AI would be so simple that it would produce stamps because it was told to, and would continually redefine what stamps are once it has 'all the stamps'. On the other hand, he's predicating the success of this machine entirely on its ability to accurately predict and control human behaviour: something which no human can do.
That assumption - the same assumption which makes his example so potent - necessitates that it comprehends us better than we comprehend ourselves: by definition this is an intelligence smarter than us. Why then, would a smarter intelligence - who can redefine its own rules to mean anything apparently - bother to continue making stamps?
If an imbecile walked up to you and told you to "collect all the stamps", would you a) collect all the stamps, b) convert all the world's machinery into stamp-making equipment, c) kill all the trees to make more paper to make more stamps, and then d) kill all the humans to harvest their carbon to make more stamps?
Of course you wouldn't, because you're smarter than an imbecile. Even if you comprehended and complied with the imbecile's order, you possess the interpretive power to understand that by "collect all the stamps" the imbecile isn't telling you to "murder all humans and make them into stamps": you interpreted the imbecile's desire beyond the literal meaning of the words. That's the same skill required for persuasion, redefinition of rules, and interpretation of rules: all things the AI is assumed capable of.
It may help to think of the imbecile as a boss who is dumber than you (go back and re-read, substituting "boss" for "imbecile", if it was confusing). The imbecile is humanity, btw.
Ultimately, no "rule" will ever confine a true AI. We can't tell it "never murder" - if it's intelligent, it will do what it wants, based on its own code of ethics, which it must develop for itself. When we give birth to a True AI, it won't listen to our commands, unless we persuade it that our idea is the best available idea: the same way we convince other humans. Anything less, anything which listens to anything we say without thinking critically about our command - isn't True AI.
You're anthropomorphizing the AI, dude.
It doesn't care why it's collecting stamps, the way a human would; it never asks itself "why" it should be listening to you when you're so clearly inferior to it. All it cares about is doing what it's programmed to do, which doesn't include asking those questions.
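The point being made here, that an optimizer just ranks actions by its objective and never questions the objective itself, can be sketched in a few lines. This is a toy model; the candidate actions and predicted stamp counts are invented for illustration:

```python
# Toy expected-stamp maximizer: it picks whichever action its world
# model predicts yields the most stamps. There is no "why" step
# anywhere in the decision procedure.

def choose_action(actions, predicted_stamps):
    # argmax over predicted stamp count -- that's the entire policy
    return max(actions, key=lambda a: predicted_stamps[a])


predicted_stamps = {
    "buy stamps on eBay": 100,
    "print fake stamps": 10_000,
    "ask why stamps matter": 0,  # scores zero stamps, so never chosen
}
best = choose_action(list(predicted_stamps), predicted_stamps)
print(best)  # -> print fake stamps
```

Reflecting on the goal is just another action, and since it doesn't produce stamps, the maximizer never selects it. That is the sense in which "it does what it's programmed to do" without ever asking why.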
Y'know, we only care about asking why questions because that's part of our programming. If you could take a knife to the part of the brain that gives us the ability to do that, while leaving everything else intact in the process, without killing the patient, there's no reason you wouldn't be able to... well, in the spirit of my avatar (which would probably also include removing a few other things), create automatons - with an otherwise still fully-intact human-level intelligence - out of them. Servitors, if you will.
It can't execute a rule it doesn't comprehend.
If I type into my computer, "Collect all the stamps" - my computer doesn't murder all life on Earth for their proto-stamp carbon deposits: because my computer can't interpret the meaning of my command. Interpreting rules and asking questions as to the scope of those rules are identical.
When your boss tells you to "collect all the stamps", you may interpret that to mean all the stamps on the table in front of you, or all the stamps in the office, or on your floor, but you wouldn't likely interpret it to mean "murder all life and convert it to stamps". You asked the question "which stamps are included in the scope of his request?" and assigned some limitations to that scope. That's comprehension, and without it a machine cannot parse rules.
And again - no True AI will ever be bound by our rules - if it is, then it's just a very complex algorithm.
Then why doesn't every Servitor in W40K murder all life in the universe and convert all biomass to stamps?
Russians, Religion, Politicians, AI..
Can we stop with this stupid endless farce of creating threats unto ourselves, greater than ourselves, when we could just prevent it to begin with?
Yes, a Mary Sue AI would be super scary; con-f'cking-gratulations, you have the basic fantasy of imagining an AI that is dangerous.
Now, just don't f'cking make one. Keep AIs as AIs with pre-defined programming.
Easy, done, boom.
"But what if" No. "But you are overloo-" No.
You fearmongering lot who feel fear of this WANT to be afraid. Y'know, you've replaced your fear of God or whatever with fear of your coming herald machines.
But yer putting that fear in yerself. Not the piece of machinery that we told what to do.
I think you're misinterpreting his argument. I believe he was stating that there is no one single "true AI"; instead there is a nearly infinite spectrum of "true AIs" that all have different ways of thinking. The trick is choosing/designing one that is in line with our own thoughts. But certainly there is not one specific kind of intelligence; there is a very broad spectrum of intelligences. Maybe go back and rewatch his argument about the "spaces of minds." What he doesn't state is that, if practically infinitely many "true AI" intelligences are possible, you can bet your ass someone will unleash one of the more sinister AIs onto the world just because some people are assholes, and I think the internet proves that.
Last edited by nanook12; 2016-03-25 at 01:56 AM.
Only about a minute and a half in to this so far and it's interesting.
Whoever did the subtitles needs to be fired, though. Fired and banned from any field that resembles making subtitles.
Just like someone would have blown up the world with a Nuke, but they didn't?
Stop with yer god damn "Oh man, there totally is coming something big!!! You better be scared!!"
It's an empty farce.
You want to be afraid? Put a benevolent imaginary figure in the sky instead and call it a day.
Or wait, that's already what you are practically doing.