Generally, when I think about a new topic, a lot of ideas pop into my mind. I've made my best attempt to semi-formalize this tendency through an iterative process of writing and culling ideas. This blog post is the result of the only novel idea I've had regarding neuroscience and the singularity that I didn't end up trashing because it sounded stupid or strange a week later.
Now I'm gonna go out on a crazy limb here and say that I think one of the motivating goals of neuroscience (insofar as a "science" can be said to have a "goal") is to understand how the human brain—by which I mean the biological mass of fat, water, proteins, and amino acids—can give rise to what we perceive or interpret as consciousness, self-awareness, intelligence, and emotion (hereafter: "cognition"). And I think that, in all likelihood, we neuroscientists will eventually have to move beyond the currently dominant paradigm of event-related behavioral experimentation combined with neuroimaging if we want a true and accurate scientific model of the human brain and cognition.
Okay, with that established: where to from here? How can we proceed? Well, there are researchers already well underway in building digital "brains" using biologically constrained rules. I've already talked a little about the Blue Brain Project, which is one such endeavor (and which I tritely summarized by saying that "thinking that modeling neurons makes a brain is like thinking that soldering together transistors makes a MacBook Pro"... maybe a bit harsh). Another effort is being led by Eugene Izhikevich, who gave an excellent talk at Berkeley a few years ago in which he showed a video of his large-scale neuronal model running for about one second of "real world" time.
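To give a flavor of what such simulations compute at the single-cell level, here is a minimal sketch of Izhikevich's published "simple model" of a spiking neuron. The parameters a, b, c, d are his standard values for a regular-spiking cortical neuron; the input current and time step are illustrative choices of mine, not taken from any particular simulation:

```python
# Sketch of Izhikevich's simple spiking-neuron model (2003).
# v: membrane potential (mV); u: membrane recovery variable.
# a, b, c, d are Izhikevich's published "regular spiking" values;
# the input current I and time step dt are illustrative choices.

a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0           # start at rest
dt = 0.5                          # integration step (ms)
I = 10.0                          # constant input current

spike_times = []
for step in range(2000):          # 1000 ms of simulated time
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                 # spike peak reached
        spike_times.append(step * dt)
        v, u = c, u + d           # after-spike reset

print(f"{len(spike_times)} spikes in 1 s of simulated time")
```

Izhikevich's large-scale models wire up millions of such units with synaptic dynamics on top; the point of the sketch is just that each unit is two coupled difference equations, cheap enough to scale.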
So why try to build a digital brain? Well, if we are going to understand how all of these intricate pieces work together, we're going to need a better way to run neuroscientific experiments. Research on animals and humans involves many uncontrolled (and uncontrollable) variables, which is why there are still so many big, open questions in the field (e.g., nature vs. nurture, the biological basis of psychological disorders, etc.). The best experiment is one in which all variables are known and controlled. That is currently impossible in human neuroscience, but if we could build a biologically plausible digital brain, we might be able to conduct such experiments.
On the flip side, we might not be able to build such a brain model without bootstrapping it from smaller, more tractable elements. This would be the neuroscientific equivalent of proof by exhaustion in mathematics: we would try all possible combinations of neuronal connectivity, synaptic plasticity, coding models, and so on, to see which combination best matches the observed biological findings.
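Schematically, that exhaustive search is just a sweep over the cross-product of modeling assumptions. In the sketch below, every name (the option lists, `run_model`, the fit score) is a hypothetical stand-in rather than a real simulator API:

```python
# Schematic "proof by exhaustion" over modeling assumptions.
# All names here are hypothetical stand-ins, not a real simulator.
from itertools import product

connectivity_rules = ["random", "small-world", "distance-dependent"]
plasticity_rules = ["none", "hebbian", "stdp"]
coding_models = ["rate", "temporal"]

def run_model(connectivity, plasticity, coding):
    """Stand-in for a full simulation; returns a fit-to-data score,
    higher = better match to biological observations."""
    # Toy scoring so the sketch runs; a real study would compare
    # simulated activity against recorded data here.
    score = {"random": 0.1, "small-world": 0.4, "distance-dependent": 0.3}[connectivity]
    score += {"none": 0.0, "hebbian": 0.2, "stdp": 0.3}[plasticity]
    score += {"rate": 0.1, "temporal": 0.2}[coding]
    return score

# Try every combination and keep the best-fitting one.
best = max(product(connectivity_rules, plasticity_rules, coding_models),
           key=lambda combo: run_model(*combo))
print("Best-fitting combination:", best)
```

In practice the combinatorics explode quickly, which is exactly why this route demands the kind of computational scale the digital-brain projects are building toward.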
Regardless of how digital brains are used, I believe they pose a potential philosophical problem for researchers: what is our ethical responsibility with regard to activating a digital brain? What about deactivating it? I don't think this is just idle navel-gazing (omphaloskepsis!); I believe that a digital brain will be created (though I'm agnostic about the time scale). I believe that now is the best time to consider these kinds of ethical issues; best to be proactive in this regard. After all, scientists are often criticized for allowing our technology to exceed our humanity (as Einstein is oft quoted as having said).
If we assume that our cognition and consciousness reduce to physical properties, and that those physical properties can be digitally represented and recreated, then in theory I could create a digital person. That person could be a copy of someone specific, or it could be a random combination of neurons assembled in a person-like way. It doesn't matter for this argument.
Currently we can combine such neuronal elements only in a crude manner, yielding models that are based on, but quite unlike, real animal brains. For now there is no ethical problem in simulating millions of interconnected neurons. But what about when these models improve? What about when the models become 90% human? 99%? At what point do we say that the models are human-like enough that by stopping them we are effectively extinguishing a life?
This issue gets at the root of many philosophical problems: what are life and intelligence? What is consciousness? What are our ethical responsibilities as scientists? This is not unlike the current debate regarding animal research. To paraphrase Bill Crum's correspondence to Nature:
the stronger the evidence that [digital brains] provide excellent experimental models of human cognition, the stronger the moral case against using them for... experiments.