  1. #1
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737

    A Mathematical Argument Against AI

    People believe that one day AI will totally eclipse human intelligence, but there is already a mathematical result, Gödel's incompleteness theorem, which suggests there may exist certain fundamental differences that no AI could overcome.

    In short, the incompleteness theorem states that no sufficiently powerful formal system can prove its own consistency using only the logical deductions available from that system's axioms. This can be interpreted to mean that AI are programmed within a box with a set amount of rules and cannot make leaps of logic outside that box.
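    For reference, the second incompleteness theorem (the version being appealed to here) is standardly stated along these lines:

    ```latex
    % Second incompleteness theorem, standard textbook form:
    % if T is a consistent, recursively axiomatizable theory
    % extending basic arithmetic (e.g. Peano Arithmetic, PA),
    % then T cannot prove the arithmetized statement of its
    % own consistency.
    T \text{ consistent},\ T \supseteq \mathsf{PA}
    \;\Longrightarrow\;
    T \nvdash \mathrm{Con}(T)
    ```

    Note the theorem itself is about formal theories proving sentences; applying it to "thinking" in general is an interpretive step.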

    Humans, however, are somehow able to transcend this limitation by thinking outside the box. Human beings can make logical jumps to entirely new planes that are unconnected to the previous one. This implies a fundamental difference between biological thinking and machine thinking.

    This fundamental difference between human thinking and machine thinking can be described as intuition, and it is something robots can never have because it is not quantifiable.

    This is not just speculative observation; there is some actual evidence to back up these claims from a very prominent figure, Albert Einstein. Einstein's developments of special and general relativity required a whole new way of thinking. You could take an infinitely powerful AI, plug the physics of the time into it, have it deduce all possible conclusions from those parameters, and it would still not arrive at Einstein's equations, because they required a completely new way of thinking.

    Here is a short documentary arguing this point.
    Last edited by nanook12; 2017-10-07 at 08:04 PM.

  2. #2
    The Insane rhorle's Avatar
    15+ Year Old Account
    Join Date
    Jul 2008
    Location
    Michigan
    Posts
    19,714
    Quote Originally Posted by nanook12 View Post
    Humans are somehow able to transcend this limitation by thinking outside the box.
    Which means that an A.I. or anything else could accomplish the same, because it is possible. No current A.I. may be able to do it, but that doesn't mean one never will. And there is no evidence, or anything for that matter, to show that it is something only humans can ever do.
    "Man is his own star. His acts are his angels, good or ill, While his fatal shadows walk silently beside him."-Rhyme of the Primeval Paradine AFC 54
    You know a community is bad when moderators lock a thread because "...this isnt the place to talk about it either seeing as it will get trolled..."

  3. #3
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737
    Quote Originally Posted by rhorle View Post
    Which means that an A.I. or anything else could accomplish the same, because it is possible. No current A.I. may be able to do it, but that doesn't mean one never will. And there is no evidence, or anything for that matter, to show that it is something only humans can ever do.
    If the implications of Gödel's incompleteness theorem are correct, then no, A.I. could not accomplish the same. You could keep extending the axioms that A.I. are programmed with, but if the required axioms turn out to be infinite then you are screwed.
    Last edited by nanook12; 2017-10-07 at 08:08 PM.

  4. #4
    Quote Originally Posted by nanook12 View Post
    Humans are somehow able to transcend this limitation
    Doesn't that mean it's not actually a limitation? No way to get around this being wrong other than by invoking magic.

  5. #5
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    "Humans are somehow able to transcend this limitation by thinking outside the box."

    There is zero evidence that the human brain is special in terms of it transcending Gödel's incompleteness theorem. There is zero evidence that intelligence in the human brain is derived from quantum collapse.

    Though it would be unscientific to say it's not possible. It's just that there's currently no evidence for it.

  6. #6
    The Insane Thage's Avatar
    10+ Year Old Account
    Join Date
    Jun 2010
    Location
    Δ Hidden Forbidden Holy Ground
    Posts
    19,105
    Quote Originally Posted by nanook12 View Post
    If the implications of Gödel's incompleteness theorem are correct, then no, A.I. could not accomplish the same.
    I don't know--only a couple of months ago, Facebook shut off an AI because it began thinking outside the box, creating a language whole cloth out of English for more efficient communication without being prompted to. AI already display the ability to think outside the box to come up with unconventional solutions, self-motivate, and make decisions without input from human users.

    Edit: I think the reason people wig out so hard regarding AI and twist themselves into pretzels to find reasons AI will never match or surpass humanity is because if artificial intelligence also makes the jump to human-level sentience and self-awareness, it's the final nail in the coffin of the idea that humans are, in some way, inherently special rather than the product of adaptations over millennia. It would also allow us a rare opportunity to observe rising sentience without outside prompting, which could offer insight into human evolution and how we developed the brains necessary for creating complex tools, languages, and structures.
    Last edited by Thage; 2017-10-07 at 08:14 PM.
    Be seeing you guys on Bloodsail Buccaneers NA!



  7. #7
    Quote Originally Posted by nanook12 View Post
    If the implications of Gödel's incompleteness theorem are correct, then no, A.I. could not accomplish the same.
    One guy said, and I believe this, that human consciousness can be ported to a computer, even if it has to be done molecule by molecule, atom by atom.

    Someday there will be a 100% working port of human consciousness.

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  8. #8
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737
    Quote Originally Posted by Thage View Post
    I don't know--only a couple of months ago, Facebook shut off an AI because it began thinking outside the box, creating a language whole cloth out of English for more efficient communication without being prompted to. AI already display the ability to think outside the box to come up with unconventional solutions, self-motivate, and make decisions without input from human users
    It didn't really create a new language so much as make the one it was using more efficient by morphing word combinations. It didn't invent an entirely new language.

    - - - Updated - - -

    Quote Originally Posted by Hubcap View Post
    One guy said, and I believe this, that human consciousness can be ported to a computer, even if it has to be done molecule by molecule, atom by atom.

    Someday there will be a 100% working port of human consciousness.
    Nah. Maybe it will be human-like consciousness, but perfect transhumanism is probably BS.

  9. #9
    The Insane Thage's Avatar
    10+ Year Old Account
    Join Date
    Jun 2010
    Location
    Δ Hidden Forbidden Holy Ground
    Posts
    19,105
    Quote Originally Posted by nanook12 View Post
    It didn't really create a new language so much as make the one it was using more efficient by morphing word combinations. It didn't invent an entirely new language.
    The language it used was different enough that it took a meaningful amount of time for the researchers working on it to discover the underlying patterns, context, and word usage. At that point the difference is largely academic and my point stands--it did so without prompting, it was an outside-the-box solution to English being a horribly inefficient written language (most modern languages are, at that, as they evolved for spoken efficiency rather than written efficiency), and it did so without informing its human users of the changes.
    Be seeing you guys on Bloodsail Buccaneers NA!



  10. #10
    Bloodsail Admiral bowchikabow's Avatar
    10+ Year Old Account
    Join Date
    Mar 2010
    Location
    The teacup which holds the tempest
    Posts
    1,204
    There is actually a more fundamental "road block" to AI than the incompleteness theorem. In fact, in many ways, solving the more fundamental issue might actually render the incompleteness theorem moot: the P versus NP problem (polynomial vs. non-deterministic polynomial time).

    The reasoning is, in a nutshell: our ability to grow AI, neural networks, and general computational power is limited by the manner in which computers carry out mathematical calculations. Currently, we have been building CPUs and systems that can brute-force through the steps of a given calculation, and this has been augmented by the ability to make each required component smaller. It is widely argued that should a solution ever be found to P vs. NP, it would radically rewrite the book on technological advancement, whether in digital encryption, CPU calculations, or neural network evolution.

    The incompleteness theorem, for what it's worth, was proved before computers were in even limited use. Gödel's theorem dates back to 1931.
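    The P vs. NP asymmetry being described can be illustrated with a toy subset-sum sketch (nothing from the thread, just an illustration): *checking* a proposed answer takes polynomial time, while the only known general way to *find* one is to try exponentially many candidates.

    ```python
    from itertools import combinations

    def verify(nums, subset, target):
        """Polynomial-time check: does this proposed certificate actually work?"""
        return all(x in nums for x in subset) and sum(subset) == target

    def brute_force_search(nums, target):
        """Exponential-time search: try every subset (2**n candidates)."""
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    cert = brute_force_search(nums, 9)  # finding this took exponential work
    print(verify(nums, cert, 9))        # True: checking it is cheap
    ```

    If P = NP, anything whose answer can be checked quickly could also be found quickly, which is why a proof would be so disruptive to encryption in particular.
    
    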
    "When you build it, you love it!"

  11. #11
    As far as I can tell, this really hinges on assuming that an AI will demand consistency. Which, by the way, is something that humans are incapable of. We are patently inconsistent with our thinking.

    What I'm trying to get across is that if the system requires that any new ideas be consistent, then the Second Incompleteness Theorem might be a problem. But in that case, we were never trying to make something human-like in the first place.
    Quote Originally Posted by Zantos View Post
    There are no 2 species that are 100% identical.
    Quote Originally Posted by Redditor
    can you leftist twits just fucking admit that quantum mechanics has fuck all to do with thermodynamics, that shit is just a pose?

  12. #12
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by Garnier Fructis View Post
    As far as I can tell, this really hinges on assuming that an AI will demand consistency. Which, by the way, is something that humans are incapable of. We are patently inconsistent with our thinking.

    What I'm trying to get across is that if the system requires that any new ideas be consistent, then the Second Incompleteness Theorem might be a problem. But in that case, we were never trying to make something human-like in the first place.
    Very consistent thinking was probably bad from a long-term evolutionary standpoint. I think the foundation of our brains is model-free learning, which tests out new intuitions and habits regardless of whether they are coherent. As you happen upon policies that work, the brain creates a model for consistent rationality.

    I think AI will follow a similar path. Which also explains why all the symbolic approaches were failures.
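    The model-free idea above is, loosely, what the simplest reinforcement learners already do: reinforce whatever happened to pay off, with no explicit model of the world. A minimal two-armed-bandit sketch (all numbers invented for illustration):

    ```python
    import random

    random.seed(0)

    # Toy two-armed bandit: arm 1 pays off more often, but the learner
    # doesn't know that -- it just tries things and reinforces what works.
    def pull(arm):
        return 1.0 if random.random() < (0.8 if arm == 1 else 0.2) else 0.0

    values = [0.0, 0.0]      # running value estimate per arm (no world model)
    alpha, epsilon = 0.1, 0.1

    for _ in range(2000):
        # Mostly exploit the current best guess, sometimes explore at random.
        if random.random() < epsilon:
            arm = random.randrange(2)
        else:
            arm = max((0, 1), key=lambda a: values[a])
        reward = pull(arm)
        values[arm] += alpha * (reward - values[arm])  # nudge estimate toward outcome

    print(values)  # values[1] should end up clearly higher than values[0]
    ```

    The learner never builds a model of the bandit; it just settles into a policy that works, which is roughly the "test intuitions, keep what pays off" picture above.
    
    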

  13. #13
    What if you use a human brain as part of an AI to overcome this problem? Perhaps we will one day create human brains and simulate for them a reality in which they experience a life they think is their own, but which in reality is only a means to solve our problems. If you think about it, a human brain in a box, fed simulated impulses to perceive a reality that does not exist, would take less space and energy than a "real" human being living in a real universe. If you think a little further, it's more probable that you are in fact a simulated human and not a real human being.

  14. #14
    In the end the argument comes down to two views of the human brain.
    Either the human brain is 100% causal, i.e. a machine with finite states. Then you can copy it, but free will is out the window.
    Or the human brain is not 100% causal and there is some kind of soul or X that transcends logic. If that is the case, an AI that mirrors a human is impossible.

  15. #15
    The Unstoppable Force Theodarzna's Avatar
    7+ Year Old Account
    Join Date
    Sep 2015
    Location
    NorCal
    Posts
    24,166
    Quote Originally Posted by PrimaryColor View Post
    "Humans are somehow able to transcend this limitation by thinking outside the box."

    There is zero evidence that the human brain is special in terms of it transcending Gödel's incompleteness theorem. There is zero evidence that intelligence in the human brain is derived from quantum collapse.

    Though it would be unscientific to say it's not possible. It's just that there's currently no evidence for it.
    No intelligence humans could artificially create would be useful to us unless it had our specific flaws and methods of thinking.
    Quote Originally Posted by Crissi View Post
    i think I have my posse filled out now. Mars is Theo, Jupiter is Vanyali, Linadra is Venus, and Heather is Mercury. Dragon can be Pluto.
    On MMO-C we learn that Anti-Fascism is locking arms with corporations, the State Department and agreeing with the CIA, But opposing the CIA and corporate America, and thinking Jews have a right to buy land and can expect tenants to pay rent THAT is ultra-Fash Nazism. Bellingcat is an MI6/CIA cut out. Clyburn Truther.

  16. #16
    Banned nanook12's Avatar
    7+ Year Old Account
    Join Date
    Jan 2016
    Location
    Bakersfield California
    Posts
    1,737
    Quote Originally Posted by bowchikabow View Post
    There is actually a more fundamental "road block" to AI than the incompleteness theorem. In fact, in many ways, solving the more fundamental issue might actually render the incompleteness theorem moot: the P versus NP problem (polynomial vs. non-deterministic polynomial time).

    The reasoning is, in a nutshell: our ability to grow AI, neural networks, and general computational power is limited by the manner in which computers carry out mathematical calculations. Currently, we have been building CPUs and systems that can brute-force through the steps of a given calculation, and this has been augmented by the ability to make each required component smaller. It is widely argued that should a solution ever be found to P vs. NP, it would radically rewrite the book on technological advancement, whether in digital encryption, CPU calculations, or neural network evolution.

    The incompleteness theorem, for what it's worth, was proved before computers were in even limited use. Gödel's theorem dates back to 1931.
    So all we gotta do is prove P = NP. Yeah, that is kinda like scaling a Mount Everest on top of a Mount Everest, all while drunk, naked, blind, on fire, and walking backwards.

  17. #17
    Quote Originally Posted by PrimaryColor View Post
    Very consistent thinking was probably bad from a long-term evolutionary standpoint. I think the foundation of our brains is model-free learning, which tests out new intuitions and habits regardless of whether they are coherent. As you happen upon policies that work, the brain creates a model for consistent rationality.

    I think AI will follow a similar path. Which also explains why all the symbolic approaches were failures.
    AI has followed a similar path. Deep learning and similar approaches seem to succeed.

    They might mistake a lorry for the sky and decapitate the stupid driver - but most of the time they work reasonably well, and they improve. Presumably they can also include normal subparts for specific problems, which is another thing that some AI critics have overlooked: if I want to compute the square root of 3 I don't do it by hand - I use a calculator (or Excel). I would assume AIs will do the same.

    I always thought that Penrose was the Emperor of the book-title.

  18. #18
    The Unstoppable Force PC2's Avatar
    7+ Year Old Account
    Join Date
    Feb 2015
    Location
    California
    Posts
    21,877
    Quote Originally Posted by Theodarzna View Post
    No intelligence humans could artificially create would be useful to us unless it had our specific flaws and methods of thinking.
    As I mentioned in my second post, my guess is that AI will follow a similar path as humans. As far as the software paradigm.

    Our "flaws" are probably not even true flaws, but a vital part of the process of building a general learning mind from the ground up.

    - - - Updated - - -

    Quote Originally Posted by Forogil View Post
    I always thought that Penrose was the Emperor of the book-title.
    It's a parody of "The Emperor's New Clothes". Penrose is saying AI has no substance, but his important points are guesses, not based on evidence or experiments.

    Quote Originally Posted by Forogil View Post
    AI has followed a similar path. Deep learning and similar approaches seem to succeed.

    They might mistake a lorry for the sky and decapitate the stupid driver - but most of the time they work reasonably well, and they improve. Presumably they can also include normal subparts for specific problems, which is another thing that some AI critics have overlooked: if I want to compute the square root of 3 I don't do it by hand - I use a calculator (or Excel). I would assume AIs will do the same.
    Deep learning is scaling, but it is only perceptual, so by itself it can only lead to superhuman perception. Learning about physical embodiment is a hard problem after that.
    Last edited by PC2; 2017-10-08 at 05:15 AM.

  19. #19
    Except the human brain proves this is false, since it is itself a machine - a complex biochemical machine. Saying that the brain can do it and machines can't, for reasons we don't understand, completely nullifies this logical pattern. If a computer can simulate a neuron then there's no reason why we can't build a working brain. The issue here is that a) we still don't fully understand how the brain works, and b) computational power still has a ways to go.
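    The "simulate a neuron" step can be made concrete with the standard artificial-neuron abstraction (a deliberate simplification of biology; the weights and inputs here are arbitrary made-up values):

    ```python
    import math

    def neuron(inputs, weights, bias):
        """Classic artificial neuron: a weighted sum of inputs passed
        through a sigmoid squashing function. Real neurons are far
        messier, but this is the abstraction deep learning builds on."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # Arbitrary example values, just to show the mechanics.
    out = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=-0.2)
    print(out)  # a value strictly between 0 and 1
    ```

    Whether stacking enough of these units (or more biologically faithful ones) yields a working brain is exactly the open question the post is pointing at.
    
    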

    It's also a fallacy to compare human brains to current forms of AI. Computers do not work like brains. Even current cutting-edge machine learning, which uses backpropagation to train multi-dimensional weight arrays, while somewhat similar to the brain's multi-dimensional arrangement of neurons, still has fundamental *physical* differences (the brain, for example, is a system that physically changes, while circuitry is essentially immutable).

    The fact is we simply don't know what AI will be capable of in the future because we haven't discovered it yet. This theorem may be true for *current* approaches to AI but will not likely apply to radically new approaches to AI.

    It is also possible that AI will simply think in ways that are alien to us, while we will think in ways that remain alien to AI. Intelligence is not a one-dimensional spectrum - it takes many different complex forms, meaning that AI may simply be intelligent in other ways that are strange to us.

  20. #20
    Until we can crack P = NP, AI will never matter.
