Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.

Ethics of brain simulation

As I mentioned last month, this weekend I will be attending the annual Singularity Summit; I was invited to provide some neuroscience input. While I remain a strong skeptic with regard to some of the claims of transhumanism, thinking about the singularity and talking with friends and colleagues about the future of neuroscience in the lead-up to the Summit has been really interesting. Sometimes it's fun to wildly speculate.

Generally, when I think about a new topic, a lot of ideas pop into mind. I've made my best attempt to semi-formalize this tendency through an iterative process of writing and culling ideas. This blog post is the result: the only novel idea I've had regarding neuroscience and the singularity that I didn't end up trashing because it sounded stupid or strange a week later.

Now I'm gonna go out on a crazy limb here and say that I think one of the motivating goals of neuroscience (insofar as a "science" can be said to have a "goal") is to understand how the human brain—by which I mean the biological mass of fat, water, proteins, and amino acids—can give rise to what we perceive or interpret as consciousness, self-awareness, intelligence, and emotion (hereafter: "cognition"). And I think that in all likelihood we neuroscientists will eventually have to move beyond the currently-dominant paradigm of event-related behavioral experimentation combined with neuroimaging if we want a true and accurate scientific model of the human brain and cognition.

Okay, with that established: where to from here? How can we proceed? Well, there are researchers already well underway in building digital "brains" using biologically-constrained rules. I've already talked a little bit about the Blue Brain Project, which is one such endeavor (and which I tritely summarized by saying that "thinking that modeling neurons makes a brain is like thinking that soldering together transistors makes a MacBook Pro"... maybe a bit harsh). Another effort is being led by Eugene Izhikevich, who gave an excellent talk at Berkeley a few years ago in which he showed a video of his large-scale neuronal model running for about one second of "real world" time.

So why try to build a digital brain? Well, if we are going to understand how all of these intricate pieces work together, we're going to need a better way to run neuroscientific experiments. Research on animals and humans involves many uncontrolled (and uncontrollable) variables, which is why there are still so many big, open questions in the field (e.g., nature/nurture, the biological basis of psychological disorders, etc.). The best experiment is one in which all variables are known and controlled for. That is currently impossible in human neuroscience, but if we could build a biologically plausible digital brain we might be able to conduct such an experiment.

On the flip side, we might not be able to build such a brain model without bootstrapping it from smaller, more tractable elements. This would be the neuroscientific equivalent of proof by exhaustion in mathematics: we would try all possible combinations of neuronal connectivity, synaptic plasticity, coding models, etc., to see which combination best matches the observed biological findings.

Regardless of how digital brains are used, I believe they pose a potential philosophical problem for researchers: what is our ethical responsibility with regard to activating a digital brain? What about deactivating it? I don't think this is just idle navel-gazing (omphaloskepsis!); I believe that a digital brain will be created (though I'm agnostic about the time-scale). I believe that now is the best time to consider these kinds of ethical issues. Best to be proactive in this regard. After all, scientists are often criticized for allowing technology to exceed our humanity (as Einstein is oft-quoted as having said).

If we assume that our cognition and consciousness reduce to physical properties, and that those physical properties can be digitally represented and recreated, then in theory I could create a digital person. That person could be a copy of someone specific, or it could be a random combination of neurons built in a person-like way. It doesn't matter for this argument.

Currently we can combine such neuronal elements only into crude models that are based on, but unlike, real animal brains. For now there is no ethical problem in simulating millions of interconnected neurons. But what about when these models improve? What about when the models become 90% human? 99%? At what point do we say that the models are human-like enough that by stopping them we're effectively quenching a life?

This issue gets at the root of many philosophical problems: what are life and intelligence? What is consciousness? What are our ethical responsibilities as scientists? This is not unlike the current debate regarding animal research. To paraphrase Bill Crum's correspondence to Nature:

the stronger the evidence that [digital brains] provide excellent experimental models of human cognition, the stronger the moral case against using them for... experiments.


  1. Hey Bradley,

    I'm glad you're skeptical of the transhumanist ideas. I've recently attended some H+ meetings here in the UK (it's fun to wildly speculate) and consider myself quite skeptical too.

    With regard to emulations such as the Blue Brain Project, I extrapolate from Moore's Law that, should it hold for the next 30 years (a possibility), we might scale up their simulation of 10^5 neurons to a human-brain-scale 10^11 neurons some time around 2040.

    Now 2040 is a lot further away than some transhumanists think (i.e. the singularity is near - I'll be 60 in 2040!) but to go any faster than that would be to exceed the rate of doubling predicted by Moore's Law.
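    This back-of-the-envelope extrapolation can be sketched in a few lines. The starting year, starting scale, and 18-month doubling period are illustrative assumptions (the commenter gives only the 10^5 and 10^11 figures and the ~2040 endpoint):

    ```python
    import math

    # Assumed starting point: a Blue-Brain-scale simulation of ~10^5
    # neurons around 2010, growing with hardware per Moore's Law.
    start_year = 2010
    start_neurons = 1e5
    target_neurons = 1e11          # rough human-brain neuron count
    doubling_period_years = 1.5    # classic Moore's Law doubling time

    # Number of doublings needed to grow 10^5 -> 10^11 (a factor of 10^6)
    doublings = math.log2(target_neurons / start_neurons)
    years_needed = doublings * doubling_period_years

    print(f"{doublings:.1f} doublings -> around {start_year + years_needed:.0f}")
    # Roughly 20 doublings, landing around 2040
    ```

    A factor of 10^6 is about 2^20, so the "around 2040" estimate is just twenty doublings at the classic rate.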

    It gets worse though - awesome as a model of the human brain would be, what we'd have would effectively be a baby's brain - and possibly a highly disabled one at that.

    Let's be generous and assume that the flaws are ironed out by 2042. So now we have a non-disabled simulated baby's brain. Assuming this artificial intelligence actually works (and has artificial eyes and ears and a way to communicate with the world), you now have to spend some 25 years educating it until it can produce a PhD. I don't see a singularity on the horizon.

    Some transhumanist thinkers have protested to me that the rate of increase of data processing won't stop at 2040 and that the simulation would keep getting faster. But I counter that it might be impossible for humans to educate a human brain simulation operating at 32 times normal processing speed - could you cope with a 32-speed toddler???

    Ethics is something I hadn't even considered. If you had made a realistic simulation of a human brain, there'd be a bunch of scientists who'd want to perform lesion studies on that simulation. For the study to be 100% valid, the model would be a 1:1 mapping of a human brain, in which case I feel a lesion study would be equivalent to giving a human deliberate brain damage. Turning it off might be equivalent to murder. Hard neuroethics here.

    One final point. At the H+ meetings in the UK, there was a distinct lack of psychologists and neuroscientists, but a lot of claims about AI. I hope you have a great time at the Singularity Summit, but please try to talk some skeptical sense into the people you meet about how damned complicated the human brain actually is!


  2. Tom:

    Actually, in The Singularity is Near, Kurzweil predicts the Singularity "will happen" in 2045. So not too far off from your estimate! (I'm reading the book right now in preparation for this weekend's Summit.)

    Anyway, I completely agree with you about lesioning the model! That strikes me as being quite unethical if we accept the idea that a digital brain is equivalent to a real brain.

    I'll post again once the Summit is over and let you know how it went!

  3. Hi, I have written an open letter to the Human Brain Project because of ethical concerns, see http://asifoscope.org/2012/11/20/an-open-letter-to-the-human-brain-project/