1. #1

    Question: Supercomputer concept. How much processing power would be required to simulate the entire universe?

    I am no expert on computing power, but I recently came across a concept that's quite interesting:

    The idea is that at some point in the future, perhaps hundreds or even thousands of years from now, we will have computers capable of simulating entire universes: from supernovas to gas giants, down to life and possibly even the consciousness of the beings and animals that live inside the simulation. (We are not 100% sure what creates consciousness, but that may change in the future.) Assuming we keep finding new methods and materials that allow for greater processing power, this may happen. Silicon itself is said to be maxed out by the 2020s; hopefully by then we will have moved beyond silicon to molecular computing, such as graphene-based chips or something similar. According to Michio Kaku, after molecular computing we may go deeper still, into quantum computing.

    Imagine a supercomputer with enough processing power to copy everything: how atoms and their protons and electrons interact with one another, our consciousness and our thoughts, the thoughts of the quadrillions upon quadrillions of animals and beings that exist throughout the universe. This computed universe would have a radius of 13.7 billion light years. According to http://www.universetoday.com/36302/a...-the-universe/ there should be between 10^78 and 10^82 atoms in the visible universe. You would also have to simulate invisible things like dark matter and dark energy, and enforce the laws of thermodynamics, and so on.
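    To get a feel for the scale involved, here is a crude back-of-the-envelope storage count (a sketch only; the bits-per-atom figure is an arbitrary assumption, not a physical result):

```python
import math

# Atom count range for the visible universe cited above, plus an arbitrary guess
# at how many bits a crude per-atom record might take (position, momentum, species...).
atoms_low, atoms_high = 1e78, 1e82
bits_per_atom = 100

for atoms in (atoms_low, atoms_high):
    bits = atoms * bits_per_atom
    print(f"{atoms:.0e} atoms -> roughly 1e{math.log10(bits):.0f} bits for a bare atomic snapshot")
```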

    Processing power was doubling every 2-3 years back in the middle of the 20th century; now it's doubling roughly every 11 months.
    What is the upper limit of Quantum computing?
    Is there any other concept that may have more power than Quantum computing?

    I think this technology may be possible, but it would depend very heavily on whether there is an ultimate limit to processing power in the universe, and if so, what that upper limit is.

    "Now Lloyd has applied the same approach to the Universe as a whole. “The ideas are cosmic but the physics is mundane,” he says, involving mostly well-established laws. The Universe’s energy is locked up primarily in matter according to E = mc2, and is basically constant. Taking rough estimates of the age and energy density of the Universe, and assuming that gravity’s total energy is equal in magnitude to that of matter, he finds that the Universe could have performed 10^120 basic operations, or ops, on all its bits so far.

    Because entropy is closely related to temperature, he also imagines maxing out the entropy by turning all of the Universe’s matter into radiation and using the so-called blackbody radiation formula to get a temperature. Subtotal: 10^90 bits. If gravitational fields can contribute entropy, as theorists believe, the total could be much larger. They deduced years ago that a black hole’s entropy is proportional to its surface area. A more recent conjecture–that the universe itself stores information this way–leads Lloyd to a speculative grand total of 10^120 bits." Assuming we could keep Moore's law going with future technologies, and computing power continues to double every 11 months, it would take until roughly the middle of the 24th century before our laptops acquired that kind of power.
    http://physics.aps.org/story/v9/st27
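    A quick sanity check of that extrapolation (a sketch; the ~10^13 ops/s starting point for a present-day machine and the 11-month doubling period are this thread's assumptions, and it loosely compares a single machine's rate against Lloyd's total operation count):

```python
import math

current_ops_per_sec = 1e13       # assumed throughput of a present-day high-end machine
target_ops = 1e120               # Lloyd's estimate of all ops the universe has performed
doubling_period_years = 11 / 12  # assumed: computing power doubles every 11 months

doublings = math.log2(target_ops / current_ops_per_sec)
years = doublings * doubling_period_years

print(f"doublings needed: {doublings:.0f}")           # ~355
print(f"years of growth:  {years:.0f}")               # ~326
print(f"reached around the year {2013 + years:.0f}")  # mid-24th century, as claimed
```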

    Sources and links greatly appreciated.
    Other thoughts are also encouraged, such as what you would like to do with this technology IF it becomes available. Create a super-realistic game? Create a near-perfect world or universe? A place to extend existing relationships? Or turn imaginary friends into conscious beings you can interact with? Perhaps escape death by uploading our consciousness into it?

  2. #2
    It's the same science fantasy as nano-bots. There have been demonstrations of little wheels and animated structures built from a few molecules, but the problem is that computer programs are combinations of 1s and 0s. It's not just a matter of charge or no charge: there have to be pathways for those bits to travel along, which requires conductive atoms plus insulators to keep them from affecting other conductors. Within processors there are tiny diodes, transistors, resistors, capacitors and so forth that form oscillating circuits, logic circuits and the like. Then there's the matter of the controlling program for that little nano-bot. The nano-bots of games and sci-fi have navigation and recognition software, which would be many megabytes if not gigabytes in size, each byte being 8 bits, i.e. a combination of eight 1s and 0s. That's not even counting the power supply; even a super-efficient thermocouple would still be relatively large. All of that takes up space, making the nano-bot far too large for its intended purpose.

    My overall point is that processing power is limited by the laws of physics. Unless we find some way to shrink atoms, it isn't likely we will ever develop such computing power. Besides, I still don't have the personal jet-pack I was promised in the 1950s.
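    Some rough numbers behind that size argument (purely illustrative; the ~5x10^28 atoms/m^3 density of a typical solid and the optimistic one-atom-per-bit storage are my assumptions, not measurements):

```python
# How much program storage could even fit inside a 100 nm nanobot?
atoms_per_m3 = 5e28          # order-of-magnitude atomic density of a typical solid
bot_side_m = 100e-9          # a generously sized 100 nm cube
atoms_available = atoms_per_m3 * bot_side_m ** 3   # ~5e7 atoms in the entire bot

program_bits = 1 * 8 * 10**9  # a 1 GB navigation/recognition program, as described above

print(f"atoms in a 100 nm cube: {atoms_available:.1e}")
print(f"bits needed for 1 GB:   {program_bits:.1e}")
print(f"shortfall factor:       {program_bits / atoms_available:.0f}x")
```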

  3. #3
    Quote Originally Posted by Mongo42
    Our current methods are definitely limited by the laws of physics, but there are other methods still in their infancy.

    There are alternatives to plain zeros and ones, such as the qubits used in quantum computing.
    Scientists from IBM and Stanford University successfully demonstrated Shor's algorithm on a quantum computer. Shor's algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15, and it correctly deduced that the prime factors are 3 and 5: in effect, a handful of atoms were used to show that 3 times 5 is 15. Quantum computing exploits quantum entanglement.
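    For anyone curious what the 7-qubit machine was actually asked to do, here is a minimal classical sketch of the number theory behind Shor's algorithm for N = 15; the quantum hardware only speeds up the order-finding step, the rest is ordinary arithmetic (the function names are my own, for illustration):

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n) -- the step a quantum computer accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's algorithm for a base a coprime to n."""
    assert gcd(a, n) == 1
    r = find_order(a, n)                              # exponential classically, fast on a QC
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                                   # unlucky base; pick another a
    return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)

print(shor_classical(15, 7))   # (3, 5) -- the same answer the 7-qubit experiment gave
```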

    According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer running at 10 teraflops.
    http://computer.howstuffworks.com/quantum-computer1.htm
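    A rough way to see where figures like that come from: simulating an n-qubit state on a classical machine means tracking 2^n complex amplitudes, which blows up very quickly (a sketch, assuming 16 bytes per complex amplitude):

```python
n_qubits = 30
amplitudes = 2 ** n_qubits        # a 30-qubit state has 2^30 complex amplitudes
bytes_needed = amplitudes * 16    # two 64-bit floats per amplitude

print(f"{amplitudes:,} amplitudes, about {bytes_needed / 2**30:.0f} GiB just to store one state")
# Every extra qubit doubles that figure, which is where the exponential advantage comes from.
```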
    Quantum computing still has many issues to solve, such as decoherence, but it could happen. And it would be useful for nanobots.

    For modern-day computing, there will inevitably come a time when the laws of quantum physics prevent any further shrinkage using conventional methods. That is where atomic-scale computing comes into play, with a fundamentally different approach to the problem: carrying out computer processes in a single molecule or in a surface atomic-scale circuit.
    Joachim, the head of the CEMES Nanoscience and Picotechnology Group (GNS), is currently coordinating a team of researchers from 15 academic and industrial research institutes in Europe whose groundbreaking work aims to develop a molecular replacement for the transistor, and that work has brought the vision of atomic-scale computing closer to reality.
    http://www.sciencedaily.com/releases...1222113532.htm

    Atomic-scale and quantum computers may hold the promise of nanobots in the future.
    As for computers that could simulate entire planets or universes, such machines would have to be as radically different from modern-day computers as our brains are from today's laptops.

  4. #4
    http://rationalwiki.org/wiki/Simulation_argument

    Found some answers here. The absolute complexity, which would be ridiculous to simulate, is 10^120, but we wouldn't need anywhere near that to represent a 93 billion LY universe. Only simulate what is being observed, and only to the level of detail that can actually be observed: only simulate subatomic particles when something is being watched under a microscope. That would cut the 10^120 figure down dramatically.

  5. #5
    Quote Originally Posted by Alasuya
    The problem is that it's not that easy. Stars thousands of light years away can have an impact on our own solar system: an exploding star 100 light years away, aligned the right way, can end life on a garden world. A rogue star or rogue planet can also do incredible damage, and all of those systems affect one another. An exploding star might not affect us directly, but it might send one of its planets careening towards us; a rogue star might hit our star, sending devastating shockwaves through the solar system. Events on that planetary scale likely happen daily, if not hourly, in a 93 billion LY universe.

    Want another example? On a planet on the far side of the Milky Way, intelligent life could evolve, develop interstellar travel, conquer the universe and enslave all other species. We are unaware of this, but it could happen. Or, halfway through their evolution, they could get hit by a rock and go extinct. You just don't know unless you simulate the entire universe, and if you make such things happen randomly, it isn't realistic or evenly distributed.

    And now that we're talking about an alien race: what does "being observed" mean? Does it mean that if any life form looks at something, it has to be simulated? In that case a large fraction of the universe is being observed. And you can't just simulate for one chosen race, because that could produce the technological equivalent of plot holes when they look at things that weren't there before.

    Also, if you want to be accurate, you can't cut corners. People will ask questions if they see weird stuff, like the Earth suddenly turning into a wireframe model.

  6. #6
    Quote Originally Posted by nzall
    I probably should have clarified, my bad: only simulate ENOUGH to convince the observers. For example, to simulate a water bottle when I am the only one looking at it and touching it, the supercomputer does not need to simulate all of its subatomic structure. If I put it under a microscope and zoom in on a 100-nanometre square, then only that 100-nanometre square of the bottle needs its atoms simulated. Compress everything we observe to just enough detail to convince us.
    It's a bit like how some MMOs work: the game only loads enough detail to convince us. Zoom out and the tiny details are compressed; zoom in and you see more detail.

    Here's the scenario I'll give: one alien is affected by a supernova, while another stands 50,000 light years away in a safe place.
    If a conscious being is affected by the supernova, then the part of the supernova around that being has to be simulated realistically, while the rest of it can be compressed.
    The other alien, watching through a telescope, could see the supernova in compressed form, just as we do in real life: we don't see its tiny, specific details.

    In this simulation, which is basically a giant MMORPG, any conscious being, whether human, alien or primitive animal, could be treated as an avatar, and the game would only need to load full detail around those conscious beings.
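    A rough illustration of that MMO-style idea (a sketch only; the detail levels and distance cut-offs are invented for the example): detail is chosen per region based on how closely the nearest conscious observer is looking.

```python
def detail_level(distance_to_nearest_observer_m: float) -> str:
    """Pick how finely to simulate a region, based on how closely it is being observed."""
    if distance_to_nearest_observer_m < 1e-6:     # under a microscope
        return "individual atoms"
    if distance_to_nearest_observer_m < 1e4:      # within everyday sight and touch
        return "bulk objects"
    if distance_to_nearest_observer_m < 9.5e15:   # within roughly a light year
        return "planets and stars"
    return "compressed statistical averages"      # everything else stays cheap

# The water-bottle example from the post:
print(detail_level(5e-8))   # individual atoms (it's under the microscope)
print(detail_level(0.3))    # bulk objects (just being held and looked at)
print(detail_level(1e20))   # compressed statistical averages (a distant supernova)
```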

    - - - Updated - - -

    I would also like to add that it should be possible to have a script that runs for billions of years, rather than looping every few minutes like in MMOs.
    I have recorded the digital waveforms from games and noticed that they loop, and what NPCs say loops too. It may be possible to write a script for a computed universe that covers the big bang, collisions, formations and so on.

  7. #7
    Quote Originally Posted by Alasuya
    That's not what I meant. What I mean is: how will you know that that supernova actually occurred? You will have had to simulate that star from the start, maybe not in full depth, but you still need to simulate the life of that star, which means simulating its birth, which means simulating the dust cloud it formed from. And you can't just make all of that random; there is logic behind what occurs in space, and if something doesn't make sense, people will ask questions. In theory you could ignore that star until it matters, but then you have to do a lot of catch-up math when it suddenly does matter, which could cause momentary delays.
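    Here is a minimal sketch of that "ignore it until it matters" approach (purely illustrative; the toy star model and the seeding scheme are my own). The history is generated lazily the first time anyone observes the star, which is exactly where the catch-up cost appears, but it is seeded by the star's identity, so every observer sees the same, consistent past:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)       # each star's history is computed once, then reused
def star_history(star_id: int):
    """Lazily generate a coarse life history the first time anyone observes this star."""
    rng = random.Random(star_id)      # seeded by identity: the catch-up math is deterministic
    mass = rng.uniform(0.1, 50.0)     # solar masses (toy model)
    events = ["dust cloud collapses", "main sequence begins"]
    events.append("supernova" if mass > 8.0 else "fades to a white dwarf")
    return mass, events

# Nothing is computed until an observer looks; then the whole history appears at once --
# that one-off burst of work is the momentary delay being described above.
print(star_history(42))
print(star_history(42))   # second look is free: served from the cache
```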
