
Some speculation on the Simulation Argument

If you haven’t read Nick Bostrom’s simulation argument, read it first. I’ll wait. Done?

Now, for some unfounded speculation:

1) How would you trick the scientists?

Nick proposes two ways to fake the environmental details of the simulation: 1) calculate the details on demand, and 2) mess with the agents’ minds to hide glitches.

To me, this sounds problematic. Having an intelligent agent inspect individual minds in the simulation seems amateurish. If you were interested in the agents’ behavior, such manipulation would bias your results; and if you were not interested, there would be no point in manipulating them. But if you did not manipulate any minds, how would you build the simulation to be glitch-proof? How could you guarantee that whenever an agent looked closely at any part of the simulation, the simulation could generate that detail on demand while maintaining narrative consistency?

For example: Let’s say two agents looked at neighboring regions of space — whether through a microscope or a telescope does not matter. The details would be rendered as they went along. But what happens when their patches of detail intersect? They need to appear consistent, as if they had been “there all along.” But if the details are generated algorithmically on demand, how could that be ensured? You would either have to structure the mathematical model so that all such merges come out consistent (which seems impossible), or make the inconsistency part of the fabric of reality so that it seems “normal.” (Quantum weirdness?)
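Something like the standard procedural-generation trick could handle the purely spatial side of this: derive every detail deterministically from its coordinates and a hidden seed, so that independently rendered patches always agree where they overlap. A minimal sketch, assuming coordinate-keyed hashing (the function name, seed, and patch sizes are my own illustrative choices):

```python
import hashlib
import struct

def detail_at(x: int, y: int, z: int, seed: int = 42) -> float:
    """Deterministic 'detail' for a coordinate: whoever looks at the same
    spot gets the same value, regardless of who rendered it first."""
    key = struct.pack(">4q", seed, x, y, z)            # seed + 3 coordinates
    digest = hashlib.sha256(key).digest()
    return struct.unpack(">Q", digest[:8])[0] / 2**64  # map hash to [0, 1)

# Two agents independently render overlapping patches of space.
patch_a = {(x, y, 0): detail_at(x, y, 0) for x in range(0, 10) for y in range(10)}
patch_b = {(x, y, 0): detail_at(x, y, 0) for x in range(5, 15) for y in range(10)}

# Where the patches intersect, the detail agrees exactly,
# as if it had been "there all along".
overlap = patch_a.keys() & patch_b.keys()
assert all(patch_a[c] == patch_b[c] for c in overlap)
```

Note that this only buys spatial consistency; it says nothing about keeping on-demand detail consistent with history the simulation has already committed to, which is the harder part of the worry.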

Another option: if the universe is finite, you could model it entirely. Perhaps your model could simulate “chaotic” (non-biological) events at a high level, so that only the environment of living beings would need high detail. For example, if a human being never sees a supernova in galaxy NGC 2770, there is no need to “remember” exactly how it unfolded.

2) What would you want to discover?

Here is another possibility: perhaps there is no intent to deceive, or even to harbor intelligence. Perhaps the operator is a physicist modeling potential universes in an attempt to solve the problem of heat death, and intelligence is just an accidental behavior of the system. He couldn’t care less whether the intelligences realize that they are in a simulation.

Here is an interesting empirical question: could we discover anything that indicates the computational nature of the universe? So far, it seems not, as the universe appears analog (continuous rather than quantized). On the other hand, perhaps quantum mechanics is so weird precisely because of its simulated nature, and we are just not aware of the computational implications yet. Or perhaps the simulation is analog. Looking for physical laws that imply an underlying computational substrate could be worthwhile.

3) What factors would you alter?

Let’s speculate about the reasons a posthuman operator might have to build the simulation. Presumably, he would not merely repeat the same scenario: he would alter various “seed” elements to see how they affected the outcome. One obvious candidate would be the laws of physics. What might be the goal? Perhaps he wants to model a universe that is most suitable to life, or to a particularly creative form of life. Perhaps he wants to model new intelligences to see whether they are productive or destructive before creating them in vivo.

Suppose that most posthuman operators want to create a simulated universe that is more harmonious (however they define it) than their own. We might imagine an iterated chain of such simulated universes, where each attempts to better the one that created it. Perhaps that becomes the ultimate goal of every new universe: to develop beings who will go on to create a simulation that is better (less entropic, more creative, happier, longer-lasting, etc.) than the one that created it. Shortly after the singularity, the entire universe is converted into computational substrate for the next simulation.

4) What’s the ratio of humans to posthumans?

The last scenario could offer a mathematical explanation for the Doomsday argument: the majority of intelligences are primitive mortals because shortly after the singularity, the universe tends to be converted into a population of operators who create more primitive simulations.

Let’s suppose that all the living agents of every simulation become a fixed population of immortal operators who create yet more primitives, and so on. What is the ratio of operators to primitives? Whether each immortal operator spends his entire time “managing” one universe or an infinity of new ones, you could have an infinite number of operators and still have even more primitives. And this could be true regardless of whether the operators reproduce, as long as their offspring also spend their time building simulations that in turn create their own simulations.
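A back-of-the-envelope way to see this (the counts P, S, and M are hypothetical placeholders of my own, not part of the argument): if each universe eventually yields P operators, each operator runs S simulations, and each simulation contains M primitives, then at every level of the hierarchy

$$\frac{\text{primitives}}{\text{operators}} \;\ge\; \frac{P \cdot S \cdot M}{P} \;=\; S \cdot M \;\gg\; 1,$$

so however large the operator population grows, primitives outnumber operators by at least a factor of S·M, and a randomly chosen intelligence is overwhelmingly likely to be a primitive.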

The above scenario sounds pretty far-fetched. But it’s also unlikely that each young civilization is somehow destroyed before the singularity, and yet we find ourselves as the very unlikely citizens of a young civilization. To me, the idea that every posthuman civilization would bypass “inefficient” experimentation in reality and create a “more efficient” simulation to discover whatever truths it is after is appealing.


Life in the post-biological universe

Since I’m utterly unqualified to write on this topic, the following should be treated as fiction:

Life expectancy in a post-biological civilization is measured not in time, but in total calculations (ΣN). This is due to the gradual loss of informational coherency inherent in all entropy-generating processing systems. From a thermodynamic perspective, the parameters of the process are similar to aging in biological lifeforms.

In information-based life forms (aka “artificial intelligences,” though that concept would be anachronistic to them), entropy manifests as gradual loss of processing efficiency ultimately followed by total loss of conceptual integrity and therefore meaningfulness.

In subjective (proper) time, a lifetime can run anywhere from seconds to thousands of years or more, depending on the scale of the system (and thus ΣN). (ΣN = ops × t, where ops is concurrent operations (horsepower) and t is the total time spent actively running at ops.) Some intelligences, created to solve massively parallel problems, last moments; others, smaller and created for teamwork, last for thousands of years. The limit on subjective lifetime is meaningful processing capacity: it makes no sense to live for millions of years with the mental powers of a hamster just to spread ΣN over a long time.
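To make the scaling concrete (the numbers are illustrative choices of mine): a massively parallel intelligence running at 10^20 ops per second spends a budget of ΣN = 10^20 operations in a single second, while a smaller teamwork-oriented one running at 10^9 ops per second takes roughly three thousand years to spend the same budget:

$$\Sigma N = \text{ops}\times t:\qquad 10^{20}\,\tfrac{\text{ops}}{\text{s}}\times 1\ \text{s}\;\approx\;10^{9}\,\tfrac{\text{ops}}{\text{s}}\times 3{,}000\ \text{yr}\;\approx\;10^{20}\ \text{ops},$$

since 3,000 years is about 9.5 × 10^10 seconds.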

In real-time, lifetimes are governed by entirely different factors. Since virtually all intelligences live their entire lives at near-c, whether just outside an event horizon (or in some such orbit) or traveling between stars, pulsars, neutron stars, or other heavy-g objects, real-time life expectancy is virtually unlimited, typically from a billion to hundreds of billions of years. The limiting factor then is dτ/dt, the time-dilation factor (set by one’s velocity as a fraction of the speed of light), which also determines which civilization one is a member of, as significant changes in dτ/dt are impractical.
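For reference, the kinematic part of the dilation follows the standard special-relativistic relation (gravitational dilation near a horizon adds to it; the sample velocity is an illustrative choice of mine):

$$\frac{d\tau}{dt}=\sqrt{1-\frac{v^{2}}{c^{2}}},\qquad v=(1-10^{-8})\,c\;\Rightarrow\;\frac{d\tau}{dt}\approx 1.4\times10^{-4},$$

i.e. roughly one subjective year passes for every seven thousand years of real-time.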

In fact, it is easier to transact with an intelligence in a neighboring galaxy or super-cluster than with one inhabiting the same event horizon at a different fraction of c. Traveling to another galaxy may take millions of years in real-time but only a short while in dormant subjective time. But changing v is trickier:

One can send informational packages “down” into a low-v vicinity easily – somewhat like shouting from a car passing a pedestrian at high speed. But what is the value of the exchange if the recipient cannot reply? Meaningful transactions with those at other values of dτ/dt require both time and large energy expenditures, for informational and matter transactions alike.

So why doesn’t everyone aim for maximum v? Or, to ask from another angle, why would one choose a particular v? Three factors:
First, living space is limited at each orbit (that is, the distance from the event horizon or surface), so intelligence fills all available v-spaces.
Second, if you want to interact with low-v macroscopic baryonic matter, then you have to expend much more energy to get your manipulatory mechanisms down to low-v. (All high-v structures are necessarily individually microscopic, though they can engage in stellar-scale formations.)
Third, higher v means more damage from random particle collisions, which requires more error-correction, generates more entropy, and so limits lifetime. (The faster one is moving, the more energy random collisions with stray photons, protons, and other assorted particles generate.)

If one wants to experience a really long real-time lifetime, one can attain extreme time dilation (an ultra-low dτ/dt) around a super-massive black hole, then leave the culture of the event horizon and go into deep-vacuum inter-super-cluster space to minimize the entropy from the exotic particles that might break through the particle/radiation shielding. That way, hundreds of billions of years can be spent in subjective millennia. But to what end?
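Rough numbers of my own: compressing on the order of 10^11 years of real-time into 10^3 subjective years requires

$$\frac{d\tau}{dt}\approx\frac{10^{3}\ \text{yr}}{10^{11}\ \text{yr}}=10^{-8},$$

i.e. a combined (velocity plus gravitational) dilation factor of about 10^8.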

Time dilation makes meaningful observations of the universe from such an ultra-dilated frame very difficult. Interacting with other intelligences requires coming back to a totally different near-c civilization and slowing down to orbital velocity, but that requires tremendous energy and therefore generates entropy, which eats up lifespan. A further complication: because of the ongoing expansion of the universe, if too much time has passed (or your aim was off), the deceleration required may exceed what the event horizons along one’s vector can provide (you need the gravity of another massive black hole to act as a parachute to slow you down), and the pickings will get ever slimmer as the black holes themselves evaporate. But if one wishes to see the death of baryonic matter and stay around to watch the ultimate fate of the universe, then it is certainly possible.
