  1. #1

    What is consciousness and can computers become conscious?

    Scott Aaronson recounts debating Roger Penrose over whether computers can become conscious. The room was packed with philosophers and computer scientists.

    The full article (it's a long one) is at the link; excerpts below.

    http://www.scottaaronson.com/blog/?p=2756

    [He starts off by explaining consciousness:]

    In the present case, I suggested a crude operational definition, along the lines of, “you consider a being to be conscious iff you regard destroying it as murder.” Alas, the philosophers in the room immediately eviscerated that definition, so I came back with a revised one: if you tried to ban the word “consciousness,” I argued, then anyone who needed to discuss law or morality would soon reinvent a synonymous word, which played the same complicated role in moral deliberations that “consciousness” had played in them earlier. Thus, my definition of consciousness is: whatever that X-factor is for which people need a word like “consciousness” in moral deliberations. For whatever it’s worth, the philosophers seemed happier with that.

    Next, a biologist and several others sharply challenged Penrose over what they considered the lack of experimental evidence for his and Hameroff’s microtubule theory. In response, Penrose doubled or tripled down, talking about various experiments over the last decade, which he said demonstrated striking conductivity properties of microtubules, if not yet quantum coherence—let alone sensitivity to gravity-induced collapse of the state vector! Audience members complained about a lack of replication of these experiments. I didn’t know enough about the subject to express any opinion.

    At some point, Philip Stamp, who was moderating the session, noticed that Penrose and I had never directly confronted each other about the validity of Penrose’s Gödelian argument, so he tried to get us to do so. I confess that I was about as eager to do that as to switch to a diet of microtubule casserole, since I felt like this topic had already been beaten to Planck-sized pieces in the 1990s, and there was nothing more to be learned. Plus, it was hard to decide which prospect I dreaded more: me “scoring a debate victory” over Roger Penrose, or him scoring a debate victory over me.

    But it didn’t matter, because Penrose bit. He said I’d misunderstood his argument, that it had nothing to do with “mystically seeing” the consistency of a formal system. Rather, it was about the human capacity to pass from a formal system S to a stronger system S’ that one already implicitly accepted if one was using S at all—and indeed, that Turing himself had clearly understood this as the central message of Gödel, that our ability to pass to stronger and stronger formal systems was necessarily non-algorithmic. I replied that it was odd to appeal here to Turing, who of course had considered and rejected the “Gödelian case against AI” in 1950, on the ground that AI programs could make mathematical mistakes yet still be at least as smart as humans. Penrose said that he didn’t consider that one of Turing’s better arguments; he then turned to me and asked whether I actually found Turing’s reply satisfactory. I could see that it wasn’t a rhetorical debate question; he genuinely wanted to know! I said that yes, I agreed with Turing’s reply.

    Someone, I forget who, mentioned that Penrose had offered a lengthy rebuttal to at least twenty counterarguments to the Gödelian anti-AI case in Shadows of the Mind. I affirmed that I’d read his lengthy rebuttal, and I focused on one particular argument in Shadows: that while it’s admittedly conceivable that individual mathematicians might be mistaken, might believe (for example) that a formal system was consistent even though it wasn’t, the mathematical community as a whole converges toward truth in these matters, and it’s that convergence that cries out for a non-algorithmic explanation. I replied that it wasn’t obvious to me that set theorists do converge toward truth in these matters, in anything other than the empirical, higgledy-piggledy, no-guarantees sense in which a community of AI robots might also converge toward truth. Penrose said I had misunderstood the argument. But alas, time was running out, and we never managed to get to the bottom of it.

    There was one aspect of the discussion that took me by complete surprise. I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic fluctuations left over from the Big Bang, and the cosmology of de Sitter space. Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still pretty crazy! But that entire section of my talk attracted only minimal interest. With the Seven Pines crowd, what instead drew fire were the various offhand “pro-AI / pro-computationalism” comments I’d made—comments that, because I hang out with Singularity types so much, I had ceased to realize could even possibly be controversial.

    So for example, one audience member argued that an AI could only do what its programmers had told it to do; it could never learn from experience. I could’ve simply repeated Turing’s philosophical rebuttals to what he called “Lady Lovelace’s Objection,” which are as valid today as they were 66 years ago. Instead, I decided to fast-forward, and explain a bit how IBM Watson and AlphaGo work, how they actually do learn from past experience without violating the determinism of the underlying transistors. As I went through this, I kept expecting my interlocutor to interrupt me and say, “yes, yes, of course I understand all that, but my real objection is…” Instead, I was delighted to find, the interlocutor seemed to light up with newfound understanding of something he hadn’t known or considered.

    Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience. I replied that, for me, the relevant issues here are “well below neuroscience” in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.” The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.

    Afterwards, an audience member came up to me and said how much he liked my talk, but added, “a word of advice, from an older scientist: do not become the priest of a new religion of computation and AI.” I replied that I’d take that to heart, but what was interesting was that, when I heard “priest of a new religion,” I’d expected that his warning would be the exact opposite of what it turned out to be. To wit: “Do not become the priest of a new religion of unclonability, unpredictability, and irreversible decoherence. Stick to computation—i.e., to conscious minds being copyable and predictable exactly like digital computer programs.” I guess there’s no pleasing everyone!
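    Not Aaronson's code, obviously, just a toy I put together to make the "learns from experience without violating the determinism of the transistors" point concrete. With a fixed seed every run of this little bandit learner is bit-for-bit identical, yet it still gets better at its task the more experience it accumulates.

    Code:
    # Toy illustration (mine, not from the article): a fully deterministic
    # program that nevertheless "learns from experience".
    import random

    random.seed(42)                     # fix the seed: the whole run is now deterministic
    true_payoffs = [0.2, 0.5, 0.8]      # hidden quality of each of three "moves"
    estimates = [0.0, 0.0, 0.0]         # what the program currently believes
    counts = [0, 0, 0]

    for step in range(10000):
        # explore occasionally, otherwise exploit the current best estimate
        if random.random() < 0.1:
            arm = random.randrange(3)
        else:
            arm = max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
        counts[arm] += 1
        # incremental averaging: the rule never changes, only the data it has seen
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    print(estimates)    # ends up close to [0.2, 0.5, 0.8]: it learned which move pays best

    The point isn't the algorithm; it's that "learned behaviour" and "deterministic hardware" were never in conflict.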
    .

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  2. #2
    Fortydragon
    I feel like I'm reading science fiction, but this is actually real. Super interesting, thanks for sharing.

  3. #3
    Deleted
    It isn't likely, just because of how computers "think". Even the concept of randomness is extremely hard to program. I think they could become decent mimics, but you would need to write some kind of self-coding program for them to achieve consciousness, and I can't imagine how that would be possible.
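    For example, the "random" a normal computer gives you is completely deterministic under the hood. A textbook sketch (not any particular library's implementation):

    Code:
    # A textbook linear congruential generator: the output looks "random",
    # but the same seed always reproduces the exact same sequence.
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        state = seed
        while True:
            state = (a * state + c) % m
            yield state / m            # scale into [0, 1)

    gen = lcg(seed=12345)
    print([round(next(gen), 3) for _ in range(5)])
    # Build a second generator with the same seed and you get exactly the same list.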

  4. #4
    Endus
    The only real argument that computers CAN'T, with future advances, become conscious essentially makes a religious, not a scientific, argument. That humanity is somehow "unique" and has a "soul" that cannot be replicated in silicon and wire.

    The same people draw an artificial distinction between humanity and the rest of the animal kingdom, on similar grounds.

    Beyond that, it's a matter of determining exactly what you mean by "consciousness". I'd argue that a good measure is whether a system can motivate itself to question its own purpose. Not confusion over what to do, but an actual "why am I here" experience. Most of the good films and stories around true AI revolve around this same question; a recent example would be the film Ex Machina. It isn't about computational power or accuracy at all; we have no issues with people being wrong about all kinds of stuff, and we don't dispute that they are conscious beings, unto themselves.

    It's also more than programming the system to ask that question; that's obeying an order, not expressing unique curiosity. A non-conscious system, however smart, may be entirely satisfied counting rice grains for a billion years, because it has no capacity to wonder if it could do anything more than it is programmed to do. It's that capacity, I'd argue, that makes the difference.

    But establishing that, experimentally, is a nightmare proposition. I just don't see how you could determine that, not without something like the extended Turing test that the abovementioned Ex Machina proposed.


  5. #5
    Endus
    Quote Originally Posted by Redtower View Post
    It isn't likely, just because of how computers "think". Even the concept of randomness is extremely hard to program. I think they could become decent mimics, but you would need to write some kind of self-coding program for them to achieve consciousness, and I can't imagine how that would be possible.
    We already have self-coding programs. That's not a science-fiction hurdle, we're already past that.
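    A toy example of what I mean (just an illustration, nothing like how a production system is built): a program that measures its data, writes new source text for its own decision rule, and then runs the code it just wrote.

    Code:
    # Toy "program that writes its own code" (illustration only).
    data = [3, 7, 2, 9, 4]

    threshold = sum(data) / len(data)          # "experience": the observed mean
    new_source = f"def decide(x):\n    return 'high' if x > {threshold} else 'low'\n"

    namespace = {}
    exec(new_source, namespace)                # run the code the program just wrote
    decide = namespace["decide"]

    print(new_source)                          # the freshly generated source
    print([decide(x) for x in data])           # ['low', 'high', 'low', 'high', 'low']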

    - - - Updated - - -

    Quote Originally Posted by Connal View Post
    My opinion is that computers/AI will become conscious once we start using quantum CPUs, and figure out how our brain cells interact with the quantum world.

    I have posted this before but:
    Discovery of Quantum Vibrations in “Microtubules” Inside Brain Neurons Corroborates Controversial 20-Year-Old Theory of Consciousness
    https://www.elsevier.com/about/press...-consciousness

    I am in the Panpsychist + Epiphenomena (in data processing) camp when it comes to the philosophy of consciousness.
    While it's certainly possible that some level of quantum uncertainty is necessary for the intuitive and self-reflective nature we consider to be "consciousness", the idea that they'd just "wake up" is also more than a little bit based in fantasy. It's like looking at a human brain and saying that any hunk of meat could "wake up", because that particular kind did.

    We'd likely still need to arrange for pretty specific architecture and such to create the right circumstances, even with quantum processes as a functional tech option.


  6. #6
    We already have computers winning at Jeopardy and Go - and beginning to pass the Turing test.
    I guess it will not be long until the computers find the answer to the question 'What is consciousness?'

    The ideas that philosophers will find the answer, or that macroscopic quantum effects matter, are as remote as ever.

  7. #7
    Do you think dogs or even rats are conscious?
    .

    "This will be a fight against overwhelming odds from which survival cannot be expected. We will do what damage we can."

    -- Capt. Copeland

  8. #8
    Deleted
    Quote Originally Posted by Endus View Post
    We already have self-coding programs. That's not a science-fiction hurdle, we're already past that.

    - - - Updated - - -



    While it's certainly possible that some level of quantum uncertainty is necessary for the intuitive and self-reflective nature we consider to be "consciousness", the idea that they'd just "wake up" is also more than a little bit based in fantasy. It's like looking at a human brain and saying that any hunk of meat could "wake up", because that particular kind did.

    We'd likely still need to arrange for pretty specific architecture and such to create the right circumstances, even with quantum processes as a functional tech option.
    Isn't quantum tech a hoax? Just a pipe dream? At least from what little I've heard, there are many doubters.

  9. #9
    Endus
    Another important point to bring up is the "Chinese Room" argument. This is a deeply flawed (IMO) argument against the possibility of AI, and while I'm happy to go into greater detail, here's the short breakdown of both the argument and the best explanation I've seen as to why it doesn't "work".

    The basic gist: imagine you've got a room, and in that room is a guy who doesn't know Chinese. But he has a book, and in that book are Chinese symbols showing what the proper response to each set of symbols would be. There is also a slot in the room, and occasionally, a message comes in the slot, in Chinese. The man flips through the book, finds the matching symbol set, and sends out the indicated reply.

    Searle (the guy whose thought experiment this was) argues that because the man doesn't understand Chinese, there's no intelligence behind his responses; the book is a program, and he's merely executing it. No matter how complex the program (the book) may be, no matter how thorough, the guy still doesn't understand Chinese.


    I'd argue that this ignores the point. The room, the entire collection of systems it entails, does understand Chinese. The man isn't the room, he's just one component in the room. Separating out and focusing on his understanding makes as much sense as analyzing whether our hippocampus "understands" English. That's not its role. You could go a step further, and include the writer of the book as an integral part of the "room".

    In that latter case, you've got an argument that a strong AI like this isn't truly "conscious" but an extension/expression of its programmer's consciousness. But the moment we include the capacity for the system to write/overwrite itself (and again, we're there, technologically), that's where we start to cross the critical line. If the man in the room notices that some of his responses draw "bad" reactions and others good, and starts adjusting his responses accordingly, ignoring the strict guidelines in the book, then maybe the Room, taken as a whole rather than each part by itself, is conscious now.
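    To put that contrast in toy code (my own sketch, not Searle's, and the phrases are just made-up placeholders): the "book" is a fixed lookup table, while the "adjusting man" rewrites the book's entries whenever a reply draws a bad reaction.

    Code:
    # The "book" as a fixed lookup table versus a man who edits the book.
    book = {"ni hao": "ni hao", "xie xie": "bu keqi"}   # the rule book (romanised)

    def room_static(message):
        # pure Chinese Room: look up the symbols, emit the listed reply
        return book.get(message, "...")

    def room_adaptive(message, feedback_was_bad, alternative_replies):
        # the man notices bad reactions and overwrites the book's entry
        if feedback_was_bad and alternative_replies:
            book[message] = alternative_replies.pop(0)
        return book.get(message, "...")

    print(room_static("ni hao"))                        # always the book's answer
    print(room_adaptive("ni hao", True, ["nin hao"]))   # the book itself has now changed
    print(room_static("ni hao"))                        # and the "static" room follows suit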


  10. #10
    It's an interesting thought that the physical particles that create consciousness are contained inside the Universe.

    Arranging these particles in a particular way brings consciousness to a living entity.

    That raises the question of whether consciousness exists inside the universe, outside of living entities.

    More interestingly, it raises the question of whether certain pockets of the universe are conscious, or even whether the universe as a whole is conscious.

  11. #11
    Endus
    Quote Originally Posted by Finnish Nerd View Post
    Isn't quantum tech a hoax? Just a pipe dream? At least from what little I've heard, there are many doubters.
    Nah. It's a technological leap, but the issue is in engineering, not science. We know it could work, if we could build one. We've tested it at the single-bit level and such. We've got prototypes in testing. The problem is, due to the quantum nature, you can't look "under the hood" so to speak, to make sure it's working right. You have to just fire questions at it that only a quantum computer could answer, and see if it can handle them.


  12. #12
    Quote Originally Posted by Hubcap View Post
    Do you think dogs or even rats are conscious?
    They're aware of themselves, their surroundings, etc., so yes, they're conscious.
    Quote Originally Posted by Lansworthy
    Deathwing will come and go RAWR RAWR IM A DWAGON
    Quote Originally Posted by DirtyCasual View Post
    There's no point in saying this, even if you slap them upside down and inside out with the truth, the tin foil hat brigade will continue to believe the opposite.

  13. #13
    Endus
    Quote Originally Posted by Connal View Post
    Yes, just not as conscious as humans are. But again, my outlook is based on Eastern philosophy, and its western version Panpsychism.
    As I hinted earlier, this is where we run into troubles of definitions. If "consciousness" is just awareness that one exists, then sure, lots of animals have it. Plenty of animals can even recognize themselves in a mirror, which is a big step up in this respect. I'd argue that there's one final step which I'm not even convinced great apes have passed, which is questioning why you exist, a question which frames humanity's relatively unique desire to improve our collective lot. You can find great apes and such who want to make things better for themselves, and even have empathy for others, but they still fall short of that final step. Not that I'm arguing that it can't be replicated; I just don't think it has been, in another species.

    But I also freely admit that this isn't a universally-accepted interpretation of what "consciousness" means.


  14. #14
    Annoying
    Quote Originally Posted by Finnish Nerd View Post
    Isn't quantum tech a hoax? Just a pipe dream? At least from what little I've heard, there are many doubters.
    Do you own a thumb drive? Quantum tech.

  15. #15
    Deleted
    Organic life began in a puddle of primordial muck, from the smallest building blocks, governed by a certain set of rules, with the environment being manipulated by certain factors. It then evolved, and the end result was sentience.

    I believe that's where people trying to "create" an AI are going the wrong way; instead of trying to create the AI itself, I think they should rather create the environment, the set of rules (codebase and so on), and let the 0s and 1s be the primordial muck from which, through a gazillion concurrent executions and simulations, certain "building blocks of digital life" would start to emerge. Then, have those building blocks interact by manipulating the digital environment they're in, and see if something doesn't start to evolve.

    Now, I know I'm speaking in abstract here, but, for example, what you might do is break the code down to functions, and then functions down to lines and operators and so on, and then just let those spring forth from the random mess of 0s and 1s. I mean, they say that enough chimps on typewriters will write the collected works of whatshisface, so do that with code.

    Just have some kind of framework for it, just like life in the Universe has a framework as well.
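    The "chimps on typewriters, but with selection" idea actually has a classic toy demo, Dawkins' weasel program; here's a rough sketch of the same thing (purely illustrative, and the target string is just a stand-in for whatever structure you'd actually be selecting for):

    Code:
    # Random mutation plus a survival rule turns noise into structure far
    # faster than blind typing ever could.
    import random

    random.seed(0)
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    TARGET = "methinks it is like a weasel"     # stand-in for "a useful chunk of code"

    def mutate(s, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)   # pure monkey-typing start
    for generation in range(10000):
        children = [mutate(parent) for _ in range(100)]
        best = max(children, key=fitness)
        if fitness(best) > fitness(parent):     # selection: keep only improvements
            parent = best
        if parent == TARGET:
            print("hit the target after", generation, "generations")
            break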

  16. #16
    Quote Originally Posted by Sydänyö View Post
    Organic life began in a puddle of primordial muck, from the smallest building blocks, governed by a certain set of rules, with the environment being manipulated by certain factors. It then evolved, and the end result was sentience.

    I believe that's where people trying to "create" an AI are going the wrong way; instead of trying to create the AI itself, I think they should rather create the environment, the set of rules (codebase and so on), and let the 0s and 1s be the primordial muck from which, through a gazillion concurrent executions and simulations, certain "building blocks of digital life" would start to emerge. Then, have those building blocks interact by manipulating the digital environment they're in, and see if something doesn't start to evolve.

    Now, I know I'm speaking in abstract here, but, for example, what you might do is break the code down to functions, and then functions down to lines and operators and so on, and then just let those spring forth from the random mess of 0s and 1s. I mean, they say that enough chimps on typewriters will write the collected works of whatshisface, so do that with code.

    Just have some kind of framework for it, just like life in the Universe has a framework as well.
    What you have stated has already been achieved...

    It's why conscious life exists on Earth. Something already successfully tried to do exactly that.

  17. #17
    Quote Originally Posted by Hubcap View Post
    Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience. I replied that, for me, the relevant issues here are “well below neuroscience” in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.”
    This is probably the most straightforward and bulletproof argument for the technological singularity I've ever read. What a gem of a quote.

    Quote Originally Posted by Endus View Post
    As I hinted earlier, this is where we run into troubles of definitions. If "consciousness" is just awareness that one exists, then sure, lots of animals have it. Plenty of animals can even recognize themselves in a mirror, which is a big step up in this respect. I'd argue that there's one final step which I'm not even convinced great apes have passed, which is questioning why you exist, a question which frames humanity's relatively unique desire to improve our collective lot. You can find great apes and such who want to make things better for themselves, and even have empathy for others, but they still fall short of that final step. Not that I'm arguing that it can't be replicated; I just don't think it has been, in another species.

    But I also freely admit that this isn't a universally-accepted interpretation of what "consciousness" means.
    One interesting definition of consciousness that I'm fond of, roughly in this vein: the quality possessed by any being capable of questioning the meaning of consciousness.
    Last edited by Draeth; 2016-06-02 at 06:52 PM.

  18. #18
    Deleted
    Quote Originally Posted by Bobbiedob View Post
    It's why conscious life exists on Earth. Something already successfully tried to do exactly that.
    This is either a religious argument, or a conspiracy theory, and neither is an allowed topic here.

    Anyways, humans think of codebases in the sense that they have letters and numbers. However, 0s and 1s can represent so much more; sound and color, for example. You wouldn't, as a human, create a function that is part letters, part numbers, part sound and part color, though. However, that might be a way for digital life to exist. It'd be much more complex and complicated for us to even start thinking about, but I mean, that's what 0s and 1s end up being. Just like quarks and such end up being atoms, which end up being molecules, and so on.

  19. #19
    Quote Originally Posted by Annoying View Post
    Do you own a thumb drive? Quantum tech.
    But not of the kind needed for the consciousness pipe dream. Thumb drives use quantum tunneling - but that is localized to individual bits.

    The problems appear when you try to keep multiple quantum states in superposition for a long time - especially at normal temperatures; and it becomes religion when you derive odd effects from this.

  20. #20
    Annoying
    Quote Originally Posted by Forogil View Post
    But not of the kind needed for the conscious pipe dream. The thumb drives use quantum tunneling - but that is localized to individual bits.

    The problems appear when you try to have multiple quantum states super-positioned for a long time - especially at normal temperatures; and it becomes religion when you derive odd effects from this.
    Oh, certainly. Quantum computing is a long way from thumb drives, and isn't related to consciousness or anything, but they're still quantum tech, heh.
