We are currently creating "A.S" - artificial stupidity. A.K.A internet forums
"It’s not the job of the artist to give the audience what the audience wants. If the audience knew what they needed, then they wouldn’t be the audience. They would be the artists. It is the job of artists to give the audience what they need."
— Alan Moore
Interesting story, though it was somewhat spoiled by the fact that her new body looks... well, like that.
Stand-up comedian robots. WTB.
Walking with a friend in the dark is better than walking alone in the light.
So I chose the path of the Ebon Blade, and not a day passes that I've regretted it.
I am eternal, I am unyielding, I am UNDYING.
I am Zethras, and my blood will be the end of you.
I think so, in a way. It won't become anything like a human, though: it won't make decisions based on ever-changing emotional states, it won't be "inspired" or spontaneously get ideas, and its actions and thoughts won't be motivated by dreams, lust, fears, or agendas. It will just be a machine programmed with an incredibly complex system of rules, using the logic that it's been programmed with or has "learned".
I've been studying computer science for years now, and I am no longer sure we are actually intelligent. I suppose it's a matter of definitions. We appear to have qualities that computers don't: we are apparently creative, we have emotions, and we appear to have free will and sentience. However, if we can build a computer system that is creative, expresses emotions, and believes it has free will and sentience too, did we create an artificial intelligence? Or did we show that we are simply machines as well?
And on the subject of free will: you are reading this now, and because of it neurons in your brain are firing in an attempt to comprehend it, and you will form a decision as to whether or not you will read the next sentence. Since apparently you decided that you will, ponder this: neurons in your brain were triggering other neurons, which eventually led to this decision. If we can make a computer program capable of simulating those biological processes, doesn't that mean that we could predict every choice? And if we can predict every choice, doesn't that mean there was never a choice to begin with?
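To make the determinism argument concrete, here is a toy sketch (entirely hypothetical weights, nothing like real biology) of a few threshold "neurons" feeding into a decision. Because nothing in it is random, the same stimulus always yields the same "choice", which is the point being made above:

```python
def fire(inputs, weights, threshold):
    """A neuron fires (returns 1) if its weighted input crosses a threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def decide(stimulus):
    """Two neurons feeding a third; the output is fully determined by the input."""
    n1 = fire(stimulus, [0.6, 0.4], 0.5)
    n2 = fire(stimulus, [0.2, 0.9], 0.5)
    return fire([n1, n2], [0.7, 0.7], 1.0)

# Run it twice with the same stimulus: the "decision" never varies.
print(decide([1, 0]), decide([1, 0]))  # prints: 0 0
print(decide([1, 1]))                  # prints: 1
```

Whether real brains are deterministic in this sense is of course exactly the open question, but any simulation built this way would be.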
I don't think this matters nearly as much as you think it does.
BattleNet Tag: Supremus#2450
CASE: Coolermaster Cosmos 2 / MOBO: Asus Maximus Hero VI / CPU: i7 4770k @ 4.2GHz 1.184v (delidded) / GPU: Gigabyte Windforce 780 3GB 1173MHz, 6589MHz, 1.2v / RAM: G.Skill Trident-X 16GB 2400MHz / PSU: Corsair AX860 Platinum / BOOT: Samsung 840 256GB x2 Raid 0 /
Well, first we'd have to prove that consciousness actually exists in the first place. If you were to make a 1:1 copy of a human, would it have its own, separate consciousness? What if the functions of the brain were emulated perfectly by electronics? Would there be a distinction?
Hard questions to answer. I don't doubt that we'll create an AI that can perform at human levels; we're already working on it and seeing some results (ANNs, neuron-based computer chips, etc.). But it'll be a tough nut to crack whether or not consciousness can even be created, much less proving that it even exists (or doesn't...).
CPU: i7 3770k @ 4.9GHz | GPU: 2x EVGA GTX 670FTW LE 2GB (SLI) | RAM: 16GB Corsair Vengeance LP 1600 MHz | Case: NZXT Switch 810 White | Motherboard: Asus P8Z77-V Pro | CPU Cooler: Corsair H110 CLC | SSD: Intel 520 160GB | HDD: 3xWestern Digital Caviar Black 1TB | PSU: Antec TruePower Quattro 1200W | Displays: 3x BL2710PT IPS | Keyboard: Filco Majestouch 2 Tactile | Mouse: Razer Deathadder/Naga Epic | Car: Andromeda
I wish we weren't stupid enough to do so, but I know we are stupid enough to try. I doubt we will achieve a true artificial consciousness. If we do, it’s all over unless we destroy it before it inevitably gets out of hand and then never make another.
And I looked, and behold a pale horse: and his name that sat on him was Death, and Hell followed with him.
Will mankind ever create a machine that is capable of reasoning?
Yes, we can already do that. Wolfram Alpha is a great example. It analyses a problem (an input string), does pattern matching to find similar problems, bridges gaps of information, formalizes the problem, and forwards it to a specialized code path that is capable of answering it. That's a really impressive feat of engineering!
But however clever this website is, it's of course not intelligent. It's just feeding you answers from a database, by figuring out what your question really is. It's artificial intelligence. By which we mean it's not real intelligence. It's also certainly not learning by itself.
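A crude sketch of that "figure out the question, then feed back a stored answer" idea, with made-up patterns and answers purely for illustration (nothing here reflects how Wolfram Alpha is actually implemented):

```python
import re

# A tiny "knowledge base": each entry pairs a question pattern with a
# specialized responder that produces the answer.
KNOWLEDGE = [
    (re.compile(r"what is (\d+)\s*\+\s*(\d+)"),
     lambda m: str(int(m.group(1)) + int(m.group(2)))),
    (re.compile(r"capital of france"),
     lambda m: "Paris"),
]

def answer(question):
    """Match the input against known patterns and route to a responder."""
    q = question.lower()
    for pattern, responder in KNOWLEDGE:
        m = pattern.search(q)
        if m:
            return responder(m)
    return "I don't know."

print(answer("What is 2 + 3?"))     # prints: 5
print(answer("Capital of France?")) # prints: Paris
```

However large the knowledge base gets, the system only ever recognizes questions it was built to recognize, which is why it reads as clever rather than intelligent.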
Will mankind ever create a machine that is capable of learning?
Yes, we can already do that. Neural networks.
However, learning is again only a part of the puzzle to what makes up an intelligent being.
While we have largely modeled a system that is capable of getting gradually better at whatever it does through feedback, it's a long distance between having such a machine and using it to build something capable of adaptive intelligence. But it's certainly a very important building block.
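The "gradually better through feedback" idea can be shown in miniature with a single perceptron, the simplest building block of a neural network, learning the logical AND function. This is a minimal sketch, not a serious learning system; the weights start at zero and are nudged toward the right answer after every mistake:

```python
# Training data: inputs and the target output of logical AND.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(w, b, x):
    """Fire (1) if the weighted sum of inputs plus bias is positive."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(epochs=20, lr=0.1):
    """Perceptron learning rule: adjust weights by the error signal."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in DATA:
            error = target - predict(w, b, x)  # the feedback
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

w, b = train()
print([predict(w, b, x) for x, _ in DATA])  # prints: [0, 0, 0, 1]
```

The machine ends up "knowing" AND without ever being told the rule, only whether each guess was wrong. That is learning in the narrow sense the post describes: one building block, not intelligence.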
Will mankind ever create a machine that is capable of thinking?
By which it can reason, can learn, and can use this to generate meaningful ideas, hold a conversation, or at least understand what you are saying.
In short, something that can pass the Turing test. We have not gotten to this point yet.
The best kind of intelligence we have created is chat bots like the classic Eliza.
It doesn't take an average human very long to discern that Eliza is not really a human. There is a competition held every year to create a better chat bot, and some of the entries are really good.
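The core trick behind Eliza-style bots is surprisingly small: reflect the user's own words back through pattern-matching rules. Here is a toy sketch in that spirit (a handful of invented rules, not Weizenbaum's original script):

```python
import re

# Each rule pairs a pattern with a response template; {0} is filled
# with whatever the user said after the matched phrase.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def respond(line):
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*m.groups())
    return "Please tell me more."  # generic fallback

print(respond("I need a holiday"))  # prints: Why do you need a holiday?
print(respond("I am tired"))        # prints: How long have you been tired?
```

A few minutes of conversation exposes the trick, which is exactly why an average human sees through Eliza so quickly.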
At some point, humanity will create an artificial intelligence that passes the Turing test. The question is then really whether it is artificial intelligence any longer. A bot like Eliza is obviously artificial. If it takes you a year to figure out that your conversation partner is an artificial intelligence, that's a pretty damned good machine. But if you can no longer tell in a lifetime, does it really matter if it is artificial?
What if we break down the neural paths of a living organism, understand them all, and then reproduce them, simplified or not, inside a constructed machine capable of the same feats as the living organism? Even if we have to model every subatomic particle in those neural paths, we can do so eventually. At some point we will have recreated an intelligent being. This will happen, though probably not in our lifetime. At that point, it's not really artificial intelligence anymore. It's real intelligence.
Now the question is... will we then have created something with a soul?
Will mankind ever create a machine that is capable of caring?
Now that's the question isn't it?
Non-discipline since 2006. Also: fails. "Chakra, when the walls fell"
When we do, one of the companies responsible for producing large-scale AI structures will inevitably decide that it's cheaper to leave out pesky things like ethical programming.
After that, it's only a matter of time.
Yes, eventually we will create AI.
The first forms might already exist and be completely foreign to us, similar to how dolphins might use language but our perceptions differ so greatly that we cannot relate to or comprehend their experience.
What most people think of as AI is strong, human-like artificial intelligence. I believe that is further off, but it will happen before 2050 barring any existential event.
However, the creation of strong, not-human-like AI is one of the greatest existential threats of the 21st century.
It is completely conceivable that before we create a human-like program, we'll create some expert efficiency system without the compassion, empathy, and emotions that make us human. Such programs would be more personally profitable and more easily achieved ventures than a human-like AI, and are thus more likely to precede it.
What we're waiting on isn't the technology. The technology is already there.
We're waiting on someone to make that intuitive leap that transforms the computing world. We're waiting on the Einstein of computer learning.
Brute force techniques will get us there eventually, but someone will be born one day who can formulate intelligence in its most basic, irreducible form. It will be the simplest algorithm for intelligence possible.
It will be the egg from which consciousness grows; a looping computer algorithm that builds a soul.
That'll be a pretty big day.
We've already got technology that makes decisions and choices for us. That's A.I.; the only real thing lacking is emotion.
I don't see it happening, not for real at least. We can make an AI so convincing that we can't tell the difference, but an AI with self-awareness is not something I expect to see, ever.
True AI will be some way off, I think. Not in my lifetime, at least.
That being said, we live in a great time for technology and I can't wait to see what other advancements we can make. The most depressing thing I can think of is the human race reaching its innovation peak and being unable to produce anything that will change everything up again.