Henry Markram Talks Brain Simulation

Artificial intelligence is progressing rapidly, and its impact on our daily lives will only increase. Today, there are still many things humans can do that computers can’t. But will it always be that way? Should we worry about a future in which the capabilities of machines rival those of humans across the board? 

When will we have computers as capable as the brain?

The brain is an infinite-dimensional network of networks of genes, proteins, cells, synapses, and brain regions, all operating in a dynamically changing cocktail of neurochemicals. Our perceptions and movements, thoughts and feelings emerge as electrical, chemical, and mechanical chain reactions explode and weave through these networks. Because there is no scientific evidence that we can ignore any of these reactions, the only way to get close to the capabilities of the brain is to simulate or emulate all of them. When that will happen depends on the level of resolution that we need to capture all these reactions.

Let’s not speculate about when we’ll be able to simulate every single molecule in all its possible states; let’s instead assume that a coarse-grained simulation, in which molecules are simulated in groups, would have enough resolution to capture the brain’s reactions. To simulate the human brain at that resolution, we would need supercomputers on the yottascale, with a million times more computing power than the exascale machines now on the horizon. A mouse brain would need a zettascale computer, and a lobster brain would need exascale. Today’s petascale computers are just enough for a coarse-grained simulation of a simple organism such as a rotifer.
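
As a rough way to anchor these prefixes, here is a minimal back-of-envelope sketch in Python. The organism-to-scale pairings simply restate the estimates above, and the prefix values are the standard SI definitions; nothing here is an independent measurement.

```python
# Rough comparison of the compute scales quoted above (standard SI
# prefixes, in FLOP/s). The organism-to-scale mapping restates the
# estimates in the text; it is not a measured figure.
SCALES = {
    "petascale  (worm)":    1e15,
    "exascale   (lobster)": 1e18,
    "zettascale (mouse)":   1e21,
    "yottascale (human)":   1e24,
}

EXASCALE = 1e18
for name, flops in SCALES.items():
    print(f"{name}: {flops:.0e} FLOP/s, ~{flops / EXASCALE:g}x exascale")

# The yottascale entry comes out to a million times exascale, matching
# the "million times more computing power" figure quoted in the text.
```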

An important factor to keep in mind is that the computers we would need to run high-resolution simulations of the human brain would probably consume the output of a dedicated nuclear power plant. When one considers that the brain needs only a banana’s worth of energy to run even higher-resolution operations, we can see that it will take a while for our technology to catch up with the brain.
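
For a sense of that energy gap, here is a hedged back-of-envelope calculation using commonly cited figures that do not come from the interview itself: a brain running on roughly 20 W, a banana providing about 100 kcal, and a large nuclear plant delivering around 1 GW.

```python
# Back-of-envelope energy comparison. Assumed figures (not from the
# interview): brain ~20 W, banana ~100 kcal, large nuclear plant ~1 GW.
BRAIN_POWER_W = 20.0
BANANA_ENERGY_J = 100 * 4184      # ~100 kcal expressed in joules
PLANT_POWER_W = 1e9               # ~1 GW of electrical output

hours_per_banana = BANANA_ENERGY_J / BRAIN_POWER_W / 3600
power_gap = PLANT_POWER_W / BRAIN_POWER_W

print(f"One banana powers a ~20 W brain for about {hours_per_banana:.1f} hours.")
print(f"A ~1 GW plant supplies roughly {power_gap:.0e} times the brain's power budget.")
```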

If we succeed in abstracting away the detailed molecular reactions while retaining a detailed, cellular-level resolution, human brain simulation comes much closer. In the next few decades, mobile devices will probably reach peta- and exascale computing power—or at least have access to it through the cloud. This means we could have low-resolution digital copies of the human brain on mobile devices before the end of this century. We could also have reasonably high-resolution mouse brain models, and even higher resolution digital copies of the brains of birds, flies, bees, and ants.

If we now assume that the intricate structure of the brain and its cells is unnecessary to recreate the brain’s essential reactions, we can treat cells as simple nodes of integration and collapse the whole brain to a point-neuron network. If this works, then in principle our computing power is already sufficient to simulate the human brain. The problem is that, in order to collapse the complexity down to this resolution through a formal and systematic series of scientific abstractions, we need a high-resolution digital reconstruction of the human brain to work with. Starting to build such a digital copy of the human brain is in principle possible today, but the level of alignment and cooperation among scientists and politicians is still not there.
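
To make the idea of a point-neuron network concrete, here is a minimal toy sketch of a leaky integrate-and-fire network in which each cell is reduced to a single integrating node. It is purely illustrative and is not the formal reconstruction-and-abstraction workflow described above; all sizes and constants are arbitrary.

```python
import numpy as np

# Toy "point neuron" network: every cell is collapsed to a single leaky
# integrate-and-fire node that sums weighted input and spikes when its
# membrane potential crosses a threshold. Illustrative only.
rng = np.random.default_rng(0)

N, STEPS, DT = 100, 500, 1e-3            # neurons, time steps, step size (s)
TAU, V_TH, V_RESET = 20e-3, 1.0, 0.0     # membrane time constant, threshold, reset
W = rng.normal(0.0, 0.1, size=(N, N))    # random synaptic weights
np.fill_diagonal(W, 0.0)                 # no self-connections

v = np.zeros(N)          # membrane potentials
spikes = np.zeros(N)     # spikes emitted in the previous step
total_spikes = 0.0

for _ in range(STEPS):
    drive = rng.normal(1.2, 0.5, size=N)          # noisy external drive
    recurrent = W @ spikes                        # input from last step's spikes
    v += DT / TAU * (-v + drive + recurrent)      # leaky integration
    spikes = (v >= V_TH).astype(float)            # threshold crossing -> spike
    v[spikes > 0] = V_RESET                       # reset the neurons that fired
    total_spikes += spikes.sum()

rate_hz = total_spikes / (N * STEPS * DT)
print(f"Average firing rate across the toy network: {rate_hz:.1f} Hz")
```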

If we could now go further and simply bypass billions of years of iteration in biological design, leaving aside all the detailed biological reactions and mimicking just the input/output transfer functions of the human brain in some kind of deep-learning network, we might be able to achieve brainlike capabilities even earlier. The ifs are big, but the riches and power that this would shower on those who succeed are even bigger; hence the resurgence of artificial intelligence.
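
As a miniature illustration of what mimicking an input/output transfer function in a deep-learning network can mean, the sketch below fits a small feed-forward network to an arbitrary nonlinear function. The target function and the network are stand-ins chosen for illustration, not anything specific to the brain.

```python
import torch
import torch.nn as nn

# Fit a small feed-forward network to an "unknown" input/output transfer
# function. The target function below is an arbitrary stand-in.
torch.manual_seed(0)

x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.tanh(2 * x) + 0.1 * torch.sin(5 * x)    # the transfer function to mimic

model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"Final fit error (MSE): {loss.item():.6f}")
```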

The biggest question today is whether we can outsmart evolution to reach and surpass human brain intelligence. The important thing to realize is that every part of the brain’s machinery is deeply immersed in the body and its physical, social, and cultural environment. Each of the billion or so molecules in each of the body’s trillions of cells dances in concert with the world. We are not observers of the universe; we are inseparable parts of it. The layers of immersion are practically infinite. Some believe that with Buddha as a guide and enough meditation, they can peel away the layers, climb out, and unplug themselves completely from this immersed reality. A seizure, near-death experience, magic mushroom trip, or wild dream may transiently disrupt the immersion, but the immersion is powerful and profound and we generally snap compulsively back to earth.

Human intelligence is not about being able to play chess, Jeopardy!, or Go; to drive a car; or even to make lightning-fast decisions on massive data sets—it is about immersion. The deeper an artificial system is immersed in the world, the more intelligent it becomes. How deep does one have to go?

Every biological species builds its own tailored infinite-dimensional models of the world and simulates the entities in those worlds with as many layers of immersion as it needs to survive and reproduce. Birds swooping down to catch crumbs in the street learn to understand the behavior of people and cars, and which places they may not enter. But they don’t need to go deeper and understand why cars or people or places behave the way they do. Humans immerse far deeper.

Human brains, just like the brains of birds and other animals, become what they are through biological evolution and individual experience with changes cascading down to the orchestration of molecular reactions. But unlike these simpler brains, they are also immersed in a cultural context that offers them practically infinite layers of knowledge and experience. Each brain uses this knowledge in its own unique way; each develops its own model of the world, including models of other brains; each uses these models to simulate, understand, and act in the world it has re-created. It is unclear what a human brain in a vacuum even means. It is these infinitely deeper interactions, I believe, that make human intelligence so unique.

The infinite-dimensional network of the brain has evolved to mimic the infinite-dimensional world, allowing the deepest possible immersion. If AI systems can achieve this kind of immersion, they will indeed match and then surpass human intelligence.

Do you have any qualms about a future in which computers have human-level (or greater) intelligence?

Only now are humans realizing that the human brain, as an organ belonging to an individual, already has superhuman capabilities: Every human brain embodies layers upon layers of knowledge and experience developed by all the other brains, present and past, that have contributed to building our societies, cultures, and physical environment. So even if we create algorithms with higher “IQ” and better problem-solving abilities than any individual, and even if we give them the ability to build ever more intelligent versions of themselves, we will still be far from achieving the superhuman intelligence we observe in individual humans today. Furthermore, in the same way that the human brain embodies the physical and cultural world, it also embodies the technologies we create, including any artificial brains we may build. If artificially intelligent life-forms add to human intelligence, it will be difficult for them to surpass it.

I am not as worried about whether we will succeed in achieving superhuman artificial intelligence as I am about partial successes in our attempts at it; the graveyard of partial evolutionary successes is vast and full of aberrations. To cross the valley of death will pose many dangers for the human race.

To make sure that superintelligent artificial beings don’t become to us what a car is to a bird, or a GPU is to our grandmother, we have to consider how our creations are immersed in society: what tasks we give them, what tasks we reserve for ourselves, how and what we allow them to learn from us, and how and what we manage to learn from them.

More important, if we give them human emotions and desires, we will get the full range of good and bad scenarios we get from Homo sapiens. Fear, for example, short-circuits the most evolved parts of the brain, breaking the deep immersion that gives us our intelligence and allows us to focus on our own immediate survival. This is not how we want superintelligent beings to react, particularly if we have given them critical decision-making powers.

Genuine superhuman artificial intelligence is perhaps less scary. Humans are conscious of only a minuscule part of the knowledge we gather through our immersion in the world. This unconscious knowledge has many layers we cannot process consciously. Most of the time, humans do not really know why they act the way they do and spend a lifetime in self-discovery. Superintelligent artificial systems would not have this problem. Not only will they be able to immerse themselves much deeper than humans do, they will have full access to all the layers they have embodied—they will be fully conscious beings.

If I am right that intelligence is the product of immersion, superhuman artificial beings might then turn out to be far more rational than we are. They will see and understand the deepest meaning even in the simplest of life-forms, and they are also likely to be much more compassionate than we are.

What we need to look out for are the shallower versions of artificial intelligence, complete with human desires and emotions such as fear, but without the deep immersion that can put these desires and emotions in context.
