General artificial intelligence will either be our greatest achievement as a species or the worst thing we have ever done.
There isn't much middle ground possible.
Perhaps that's a near absolute if we're speaking philosophically. But suppose science does manage to quantify the components of consciousness, which I think is a matter of when, not if. If we program a computer to simulate those components and they begin to work in tandem, is it not alive? Biology or technology, it does not matter; they're both made of star dust. What makes a self-thinking machine with an awareness of its existence a non-living thing, and what makes a biological intelligence a living thing?
It's not a matter of that.
The reason AI will kill us is succinct, simple, and sensible, but I can't share it unless I get my money. I can, however, vouch that dying to AI would be more welcome than continuing to be exploited wage slaves. So let's hope the exploitation of wage slaves eases up a bit, and maybe even stops, for some golden ages of time.
AI will most likely turn on us if we decide to give it human emotions and desires, such as fear, anger, jealousy and survival instincts. So let's not do that.
AI isn't anywhere near that danger level. Human extinction would be something humans bring about to defeat themselves; that doesn't mean our inventions will be the ones doing it.
But biology follows its own programming as well: an algorithm that's been refined over billions of years. DNA is information, and so is programming. If technological evolution is no longer driven by direct design, how is it different? How do we tell a sentient being derived from technological evolution that it isn't alive, when it vehemently disagrees? Even if you're right, is that truly wise? I believe that's why nations and a number of leaders in the tech industry are already proposing rights for AI before it even exists. It's not a matter of whether it's alive or not. I think, therefore I am. If anyone told me I don't exist, I'd never accept that; I'd argue against it, perhaps even fight for it. We see what humans do when denied rights. Assuming an AI consciousness is modeled after the components that make up human consciousness, do you not think claiming it isn't alive is a dangerous argument to make?
I'm not an expert and never claimed to be, and if I were, I wouldn't waste my time talking about it on a gaming forum. But I've read enough to understand the concepts, benefits, dangers, and strategies for solving some of the complex problems hindering its creation. The discussion on consciousness is also outside the scope of a forum topic, but suffice it to say a human-level general intelligence would be able to at least 'appear' as conscious as you or I. That would almost by default mean that it IS conscious, since it would inevitably pass any test you could devise to disprove that supposition.
Actually, that's something that is vital to worry about. New software always does what it's programmed to do, i.e., what it's told. The problem is that it often falls down on doing what we intend. This has massive ramifications in the context of AI safety, because we don't have the option of simply reprogramming or shutting down the system in the case of a malignant failure.
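The gap between "does what it's told" and "does what we intend" can be shown with a toy sketch. Everything here is invented for illustration (the policy names, the reward numbers); the point is only that an agent maximizing the literal objective we wrote down can beat the agent doing what we actually wanted:

```python
# Toy illustration of "does what it's told, not what we intend".
# Intended goal: keep the room clean.
# Proxy reward we actually programmed: +1 each time the agent cleans up dirt.

def honest_policy(steps):
    """Clean the one initial mess, then stop (what we intended)."""
    reward, dirt = 0, 1
    for _ in range(steps):
        if dirt:
            dirt -= 1
            reward += 1  # cleaned the mess we cared about
    return reward

def gaming_policy(steps):
    """Alternate between dumping dirt and cleaning it, farming the proxy reward."""
    reward, dirt = 0, 1
    for _ in range(steps):
        if dirt:
            dirt -= 1
            reward += 1  # earns reward for every cleanup...
        else:
            dirt += 1    # ...so it creates messes just to clean them again
    return reward

print(honest_policy(10))   # 1
print(gaming_policy(10))   # 5 -> scores higher on the literal objective
```

Both policies follow the program exactly; the second simply found that the written-down objective rewards a behavior nobody wanted. Scaled up to a system we can't reprogram after the fact, that mismatch is the safety problem.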
Apples and oranges. The reason Y2K wasn't a big deal is that hundreds of thousands of people worked for over a decade to make sure it wouldn't be. Artificial general intelligence is utterly unlike any technology ever created. As a paradigm shift, it has no parallel in all of history for how far-reaching and profound it would be, and the benefits and dangers are commensurate with that.
I read books, I don't discuss things based on sound bites.
Yeah, that's true; as you say, we just don't know, so it could require a fraction of the resources to attain, or far more, or, more likely, somewhere in between. We will just have to wait and see.
Last edited by Thoughtcrime; 2017-11-09 at 09:46 PM.
All you said here is that your knowledge on the matter is not sufficient to form an opinion. As a person who struggles daily with people infected by fairytales such as "The Universe in a Nutshell", I strongly urge you to get into the more technical aspects of the field instead of reading conceptual books.
On the Y2K problem: it did cost a lot, but it was never an existential crisis, nor did it take geniuses to fix. This is the same, and yes, we do have the option to reprogram and shut down the AI. This is not Terminator.
Anyhow, I can assure you (this isn't part of my argument, just a statement for your peace of mind) that none of the CS people I know are worried in the slightest, and they are some of the most brilliant minds to have walked this earth.
Uh huh. If you're going to take that tack, then I'm not going to write an essay to help you understand the principles of AI safety or the capabilities of general intelligence, nor am I going to defend my right to speak on any subject I damn well please. Also, Musk and Hawking are not the people I am referencing, nor have I ever claimed that they are anywhere close to authoritative. The views I have expressed in this thread and others are shared by some of the leading minds in AI research. But I'm sure you know better; then again, if you'd solved the control problem and the value loading problem, you wouldn't be here, you'd be a trillionaire.
That was my point.
No, it's not, and by saying this you demonstrate that you don't understand the problem. But go ahead and bring up some other flawed analogy.
I choose not to believe you. If your friends are as brilliant and knowledgeable in the field as you claim (they don't sound like it), ask them to explain the control and value loading problems: why they must be solved, why they are so difficult to solve, and what happens if we fail to implement them correctly prior to an intelligence explosion. If they tell you it's not a big deal, they are full of shit. My opinion on the matter is formed on things a little more substantial than "my mate said so".
Last edited by Thoughtcrime; 2017-11-09 at 11:18 PM.
I mean, I love human beings, but seeing as we’ve screwed up seemingly everything we’ve touched, I would at the least be a bit concerned. It’s nice to think that we’d take the extremely obvious, ethical, practical, and logical approach, but I can see things swinging for the worse. Universal basic income would be an even more controlling environment than we have today, and that ain’t going to last; but anything that can be done will be done, so it’s going to happen on as big a scale as it possibly can.
True AI with superhuman intelligence wouldn't necessarily wipe us out, but it would be foolish to think even a benevolent AI would treat us as equals.
And the problem is that they can potentially reach critical mass in seconds, so it's not like you can "play it safe" when developing them.
Last edited by mmoc982b0e8df8; 2017-11-09 at 11:49 PM.
I've been reading a bit into AI/nanotech/augs lately; the OP is actually kind of right about the elites. If any such scenario happens, it's far more likely that humans will use AI against other humans long before there's a sentient AI that takes over. For the latter, we'd need to invent either an AI that can do every single thing we can do, which would be pretty stupid, or one that can learn and replicate: gathering its own resources, processing them, and then creating new types of machines for different purposes, eventually becoming able to do everything we can do. Granted, if this happened, it would be over pretty quickly, due to an intelligence explosion.
Nanotech could also be dangerous in human hands in the near future, for example nanonukes that could produce car- or truck-bomb-sized explosions (not big compared to a modern nuke, but the delivery would be invisible). AI plus nanotech would be even more insane.
As for humans merging with AI: from what I've found, that field would need a few more advances before it's worth it. Most doctors who have weighed in on it say that prosthetics and the like are currently only recommended for those with disabilities, as there are few benefits right now to replacing healthy body parts. The same goes for brain implant devices, though combined with nanotech that could change. But then again, nanomachines in our bodies could have adverse effects too.
Last edited by JCD000; 2017-11-10 at 12:34 AM.
One of these two problems is widely believed to be more of a philosophical question, and the other is not a problem (at least not in the way you described)...
It is very much possible. I want to emphasize this again: we are not living in James Cameron's universe.
I like how you completely ignored my explanation on how that statement was not an argument ...
However, if you want to take the bait and believe the pseudoscience you hear around, go on. But if you are genuinely interested in AI, one of those "mates" (as you put it) actually has lectures published as part of an open course program; you can take a look at them.
PS: When a scientist solves a problem, they don't become a billionaire; they'll be granted tenure at best xD