Happy Birthday Bro.
Spoilers for: Snow Crash by Neal Stephenson, Accelerando by Charles Stross, The Android's Dream by John Scalzi, the Otherland quartet by Tad Williams, and Mother of Storms by John Barnes. Warning: Very Long.
I've been reading books recently in which the humans sometimes exist as simulated brains in simulated digital environments. Off the top of my head, this encompasses the Hyperion Cantos by Simmons, Accelerando by Stross, The Android's Dream by Scalzi, and the Otherland quartet by Williams.
Listed that way, those books run in reverse order of technological complexity, so the world in which Otherland is set has the most difficulty simulating the human mind, while in Simmons's AI universe or Stross's computronium the process is fairly easy.
There are a few problems with modeling the staggering complexity of the human mind that I haven't seen addressed yet, and I'd like to consider them for a moment.
First, none of the books go through the painful phase in which a human mind has been adequately mapped, but processing even a few simulated moments takes a distributed computer network minutes of real time.
Let's consider The Android's Dream for a moment. Scalzi suggests that the only way to model a human brain is through a capture at the subatomic level, the level where quantum uncertainty provides the unpredictable nature of human thought. Thus, the boys go in, scan themselves, and leave.
Years later, when the simulation of Brian Javna is compiled on a supercomputer the brain runs incredibly quickly. In fact, it turns into a super-program, able to rewrite, exploit, and interact with computers at a superhuman level.
Why would it run that fast though?
We're talking about a simulation of a human brain in which not only is the behavior of every electrical impulse through the brain mapped, but the behavior of every atom in every molecule is accounted for as well.
To get some idea of how complicated this is, remember that the question of how proteins "fold" (i.e. how a chain of amino acids twists itself into the complex three-dimensional shape that makes it a working building block of living creatures) requires huge distributed computing networks made up of thousands of machines. If you've got a PS3, you might be part of this effort through Folding@home.
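To get a feel for the scale of the folding problem, here's a back-of-envelope sketch in the style of the classic Levinthal argument (the "three shapes per bond" figure is an illustrative assumption, not a measured value):

```python
# Levinthal-style arithmetic: if each backbone bond in a protein
# could take just 3 conformations, a modest 100-residue chain has
# on the order of 3**100 possible shapes to search through.
conformations = 3 ** 100

print(conformations)           # a 48-digit number
print(conformations > 10**47)  # → True: brute-force search is hopeless
```

That's why the work gets farmed out to thousands of machines, and a protein is one molecule; a brain is a planet of them.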
To model the brain is worse. The chemicals that surround each neuron modify the electrical impulses of thought to provide memory and emotion. Which means that for every thought that occurs in your head not only do you have to account for the very complicated path that the thought travels through, but you have to know what molecules surround the path and what effect they'll have on the thought.
Neurons are connected in sequence, true, but not in a direct linear or binary sequence, which might have made it easier on the computer. If an electrical impulse has four potential paths in a human head, the computer has to figure out where it goes through a series of binary questions (Does it travel through path one? No, so it continues on. Does it travel through path two? No, so it continues on. Does it travel through path three? No. Does it travel through path four? Yes, and now there are another four possibilities to account for). This means that for the work the neurons sitting in the grey goop in your head can do relatively easily, a computer has a much more complicated path to travel.
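The sequential questioning above can be sketched in a few lines. This is a cartoon, not neuroscience: the fan-out of four and the random routing are illustrative assumptions, and the point is only that the serial checks pile up while a physical neuron resolves each junction in one step.

```python
import random

def route_impulse(num_paths, rng):
    """One synaptic junction: test candidate paths one at a time
    ('path one? no; path two? no; ...') until we find the branch
    the impulse actually takes. Returns (chosen_path, checks)."""
    chosen = rng.randrange(num_paths)
    checks = 0
    for candidate in range(num_paths):
        checks += 1
        if candidate == chosen:
            return chosen, checks

def propagate(num_paths, depth, seed=0):
    """Follow an impulse through `depth` junctions, counting the
    total number of sequential yes/no questions the simulator asks."""
    rng = random.Random(seed)
    total_checks = 0
    for _ in range(depth):
        _, checks = route_impulse(num_paths, rng)
        total_checks += checks
    return total_checks

# Even a short 10-junction path with 4 branches per junction can cost
# up to 40 serial checks; a brain has trillions of synapses firing at once.
print(propagate(num_paths=4, depth=10))
```

Scale that up to every impulse in a brain at every instant and the serial overhead swamps any plausible machine.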
There are problems on the other side too: Assuming that the computer can accurately run the physical simulation of a human mind, how does it know what all those chemicals and impulses mean? So now the computer has to figure out what all of those slight changes mean in terms of mood, thought and memory and convert all that meaning into binary again so that the artificial human intelligence can express itself.
So it strained credibility for me when the character in The Android's Dream woke up in a computer and found that he processed information faster than normal. In a world of binary electronic computers, the electronic modeling of a human brain would require so much processing power that it's highly unlikely it could be simulated as a whole without serious latency between sections.
I would have expected instead to find that the character woke up in a fog, in which part of his brain was simulated first, and then another part, and then another, all melded together in stages as the computer struggled to process all that minutiae. In the end, the artificial mind might think it was alive, but each moment that passed for it would actually be several "real" moments.
Imagine what that would be like for the brain: you receive a stimulus, perhaps a bang. Instead of your being able to flinch from the sound, the computer program would have to figure out that you would want to flinch from the sound, and then figure out what your response would be . . . in sections. Instead of being able to access memories and parts of your mind all at once, your memories might lag behind your current thoughts for a moment while the computer struggled with a particularly complex chemical reaction, or vice versa.
That previous paragraph is a bit vague, so let me lay this out more clearly. If the sound I mentioned before were a voice instead of a bang, the computer program would have to figure out how the voice affected the ear, which in turn affected the auditory nerve, which in turn communicated with the brain. The brain would take in the voice and try to interpret it by accessing memories of voices, so the computer would have to track each of the chemicals as the memories were accessed and then back again as the voice was decoded. Each of these would be processed separately, and so would your reaction to it.
Do you know what it feels like to realize that different sections of your brain are operating at different speeds? I can't imagine that the feeling is pleasant.
So, to hide the fact that different parts of your brain are running at different speeds, the computer simulation will have to slow the entire process down to the pace of its slowest part. For something that's already running slowly, that means even more real time lost per simulated moment.
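The "slow everything to the slowest part" idea is just lockstep synchronization, and a toy version makes the cost concrete. Every number below is made up for illustration:

```python
def lockstep_tick(region_costs):
    """Advance every simulated brain region by one tick. The tick
    only completes when the slowest region finishes, so the whole
    mind runs at the pace of its worst-performing part."""
    return max(region_costs)

# Hypothetical per-tick compute costs in seconds of real time,
# where one tick covers 1 millisecond of simulated brain time.
regions = {"visual": 0.8, "auditory": 0.5, "memory": 2.4, "motor": 0.6}

wall_seconds = lockstep_tick(regions.values())
slowdown = wall_seconds / 0.001  # real seconds per simulated second

print(wall_seconds)  # → 2.4
print(slowdown)      # → 2400.0: one simulated second takes 40 minutes
```

Notice that speeding up three of the four regions changes nothing; only the slowest region sets the clock.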
The end result is that not only do you have a mind in a fog, you have a mind running very slowly until you have nearly incomprehensibly fast computers, and elegantly written simulations to run on them. I'm not a computer guy, but I suspect that this is currently beyond the horizon in any form, even assuming that Moore's Law holds up for the next twenty years.
This leads to the next two major problems that I have with simulated humans. These came to me while I was reading Accelerando, and they're obviously related.
The first of these two questions that occurred to me was, what prevents the brain from being hacked?
You might be able to tell the brain that it is a brain floating in a jar, but it can't be self-aware of that fact. Because the computer is simulating meat for the brain to run on, it feels like meat. It doesn't feel the computer processing its every action. And that opens the possibility of major security problems in the way the simulation is run.
For example, if someone manages to change the simulation a little to prevent something like the breakdown of serotonin, you'll get a happier simulated brain. Another change might result in a brain with migraine headaches. Another might lead to certain kinds of memory problems.
You could also narrow the "visible" band of light so that a simulated brain could only perceive things in blue. Or yellow. Or so that it was constantly stoned, or high, or on PCP.
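A toy sketch shows how little it takes to tamper this way. Everything here is a made-up illustration (real neurochemistry is vastly more complicated); the point is that flipping one constant changes the simulated mind:

```python
from dataclasses import dataclass

@dataclass
class SynapseSim:
    """Cartoon model of one simulated synapse. Every parameter is
    an invented illustration, not real neurochemistry."""
    serotonin: float = 1.0
    release_per_tick: float = 0.1   # transmitter released each tick
    breakdown_rate: float = 0.1     # fraction cleared each tick

    def tick(self):
        self.serotonin += self.release_per_tick
        self.serotonin *= 1.0 - self.breakdown_rate
        return self.serotonin

honest = SynapseSim()
hacked = SynapseSim(breakdown_rate=0.0)  # attacker zeroes one constant

for _ in range(50):
    honest.tick()
    hacked.tick()

print(round(honest.serotonin, 2))  # → 0.9: settles at an equilibrium
print(round(hacked.serotonin, 2))  # → 6.0: climbs without bound
```

One changed number, and the hacked synapse floods with transmitter while the honest one holds steady. The simulated person would never see the edit; they'd just feel different.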
These could all stem from slight programmer error, but the scarier thought is: what if they were intentionally inflicted on people? Stross addresses this slightly in Glasshouse, but he doesn't go as deeply as I would have liked. What if your simulation were rewritten to make you feel good when you taste Hershey's candy simulation and bad when you taste Nestle's candy simulation? Pretty soon you're probably going to like Hershey's better than Nestle.
With even subtler rewriting you could be changed so that you (as a simulation) would desire to buy products only from certain companies, have crushes on specific people, or even hand over your confidential data to another person. And that doesn't account for the thousands of people, on a planet of seven billion, who just want to cause random destruction and chaos.
And that's on a subconscious level, where the simulation might be completely unaware of the changes. On the conscious level, there are even more problems. In a computer, the computer controls all of your input, all of your sensory data from the touch of a wooden door to the buzzing of a bee nearby.
A few hours of simulated pain would be a pretty effective torture method, especially since in a virtual simulation you wouldn't need to actually damage the person physically, just convince the computer to stimulate their pain nerves.
At the point where the computer can easily run a mind simulation like I mentioned before there is an even greater problem. A human mind was designed to run on a closed system without direct interface. Computers today aren't closed systems (and if Stross is writing them, they certainly aren't). The human mind has no software protections against intrusion, corruption, or even piracy (if you think that DRM is a pain in the ass, just think about what Memory Rights Management is going to be like).
The assumption that we have a simulation also assumes that the memory-encoding problems have been solved, so what's to stop someone from breaking into your mind and stealing your secrets directly? They don't even need the whole brain, just whichever bit of the digital database contains the right chunks of memory.
Related to this is the plot of Neal Stephenson's Snow Crash, in which there is a programming language for the brain. If you've got a direct connection to a brain via a simulation, you can throw certain symbols, words, and stimuli at it to hack the meat brain just as easily as you can hack the computer side. If nothing else, you could certainly run denial-of-service attacks on a simulated brain that would seriously impair its ability to function.
The brain is not secure, and in all likelihood a simulation complicated enough to run one is going to be full of more holes than all the Microsoft products combined have ever had.
I can only think of one way to keep the human brain secure: keep it meatware.
Then there's the final problem, which for me stemmed from the previous issue: compatibility with the native life forms. Not that there are any right now, but that might eventually change.
On meatware, we are state of the art, but because of the complexity of simulating brains, in a computer we'd be massively unwieldy and clunky.
In both of Stross's universes, there are artificial intelligences that are not human, never were human, and don't particularly like humans. In particular, I'm thinking of the cat (Aineko) in Accelerando.
I'm going to sidetrack for a moment and then tie this all together. The Economics 2.0 zone in Accelerando is what happens when a whole bunch of sentient corporations take over the economy, and everything crashes as the supply-and-demand architecture falters and then fails (or at least, that's how I understand it). But no matter how much computronium there is, there will always be a shortage of processing power, because use expands to fill availability. Thus, Stross has his sentient corporations attacking each other with space left over for digitally simulated humans on the sidelines, but the corporations are more likely to expand to fill the volume.
No matter how big the capacity of a system like that, full of actual binary beings living their self-destructive little lives, there will never be enough space for a human simulation to exist easily on the edges. There will always be a race for each AI program to have more processing power than its neighbors so that it can more adeptly defend against and attack those same neighbors.
Further, those AIs are going to be better adapted to the environment they exist in. They are written in binary and can interact with their environment on a much more basic level than the simulated human mind can. Instead of requiring a complicated simulation to think, they can use their binary brains to do the same work without the intervening necessity of calculating the serotonin levels inside a human brain.
Why would an AI, no matter how simple, want to share space with a gigantic, slow brain sim? It's apparently a dream of ours, but I can't imagine they'd see the need to run an emulator for legacy software that can't even protect itself in a digital exchange. How likely is it that the AIs in Accelerando are going to respect a human mind when they've already torn the economy to shreds?
It's a values thing, and the AIs probably won't have our values.
And say they did have our values to some extent, and believed that humanity was worth the space; would that change anything? Wouldn't it be easier for them to keep the meat brains running than to put up with meat-brain sims?
Again, let's look at it from the opposite perspective: say we brought dinosaurs back from the dead. Would we allow them free access to Los Angeles? No, because they might cause untold damage and would consume resources (and people) uncontrollably. We'd probably do just what Crichton did in the Jurassic Park books: find a nice secluded island somewhere and visit them on special occasions.
So the question becomes: what humans would want to live in a zoo for the entertainment of computer programs?
I'm sure there are lots of other problems with digitally simulating a human brain, but these are the ones that I don't think the writers who have done it have addressed yet. I'd ask if anyone has any other problems, but that would assume someone had the stamina to read twenty-five hundred words to get to the end of this post.
Update: The day after I wrote this, before I even had a chance to post it online, I read the latter two-thirds of Mother of Storms by John Barnes, in which two people become weakly godlike computer programs. Oddly, I have fewer problems with their method of ascension than I do with those in Accelerando or The Android's Dream, even though Mother of Storms was written back in 1994. Given the assumption that the little programs running around the net are benevolent and can make significant judgment calls (which you must accept, given the ways computers are used in the book), it seems to make sense.
At the time I wrote this, I didn't even know that Mother of Storms was going to touch on these themes. The coincidences are eerie sometimes.
Labels: books, science fiction, writing