darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.


10.8.10

Ethics of brain simulation

As I mentioned last month, this weekend I will be attending the annual Singularity Summit. I was invited to attend to provide some neuroscience input. While I remain a strong skeptic with regards to some of the claims of transhumanism, thinking about the singularity and talking with friends and colleagues about the future of neuroscience has been really interesting in the lead-up to the Summit. Sometimes it's fun to wildly speculate.

Generally when I think about a new topic, a lot of ideas pop into mind. I've made my best attempt to semi-formalize this tendency of mine through an iterative process of writing and culling ideas. This blog post is the result of the only novel idea I've had regarding neuroscience and the singularity that I didn't end up trashing because it sounded stupid or strange a week later.

Now I'm gonna go out on a crazy limb here and say that I think one of the motivating goals of neuroscience (insofar as a "science" can be said to have a "goal") is to understand how the human brain—by which I mean the biological mass of fat, water, proteins, and amino acids—can give rise to what we perceive or interpret as consciousness, self-awareness, intelligence, and emotion (hereafter: "cognition"). And I think that in all likelihood we neuroscientists will eventually have to move beyond the currently-dominant paradigm of event-related behavioral experimentation combined with neuroimaging if we want a true and accurate scientific model of the human brain and cognition.

Okay, with that established: Where to from here? How can we proceed? Well, there are researchers who are already well underway in building digital "brains" using biologically-constrained rules. I've already talked a little bit about the Blue Brain Project, which is one such endeavor (and which I tritely summarized by saying that "thinking that modeling neurons makes a brain is like thinking that soldering together transistors makes a MacBook Pro"... maybe a bit harsh). Another effort is being led by Eugene Izhikevich, who gave an excellent talk at Berkeley a few years ago where he showed a video of his large-scale neuronal model running for about one second of "real world" time.
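For a sense of what "biologically-constrained rules" look like in practice, here is a minimal sketch (in Python, and emphatically not Izhikevich's actual large-scale simulation) of the kind of unit such models are assembled from: a single Izhikevich neuron integrated with simple Euler steps. The "regular spiking" parameter values come from his published single-neuron model; the injected current and time step are arbitrary choices for illustration.

```python
import numpy as np

# Izhikevich single-neuron model, regular-spiking parameters.
# A toy illustration of the "simple but biologically constrained" units
# that large-scale brain models are assembled from.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical cell
dt = 0.5                              # integration step (ms); arbitrary choice
T = 1000.0                            # simulate one second of "real world" time
steps = int(T / dt)

v = -65.0          # membrane potential (mV)
u = b * v          # recovery variable
I = 10.0           # constant injected current; arbitrary, for illustration
spike_times = []

for step in range(steps):
    if v >= 30.0:                      # spike: apply the model's reset rule
        spike_times.append(step * dt)
        v, u = c, u + d
    # Euler integration of the two model equations
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)

print(f"{len(spike_times)} spikes in {T:.0f} ms of simulated time")
```

Wire up hundreds of thousands of these units with synapses and conduction delays and you have the flavor of the models mentioned above.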

So why try to build a digital brain? Well, if we are going to understand how all of these intricate pieces work together, we're going to need a better way to run neuroscientific experiments. Research on animals and humans involves many uncontrolled (and uncontrollable) variables, which is why there are still so many big, open questions in the field (e.g., nature vs. nurture, the biological basis of psychological disorders, etc.). The best experiment is one where all variables are known and controlled for. Of course this is currently impossible in human neuroscience, but if we could build a biologically-plausible digital brain we might be able to conduct such an experiment.

On the flip side, we might not be able to build such a brain model without bootstrapping it from smaller, more tractable elements. This would be the neuroscientific equivalent of proof by exhaustion in mathematics: we would try all possible combinations of neuronal connectivity, synaptic plasticity, coding models, etc., to see which combination best matches the observed biological findings.
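To make the exhaustion idea concrete, here's a toy sketch of what such a sweep might look like. Every rule name and the scoring function are hypothetical placeholders; the real version would involve running and evaluating full simulations at each grid point.

```python
from itertools import product

# A toy sketch of the "proof by exhaustion" idea: sweep every combination of
# modeling choices and keep whichever best matches observed biology. The rule
# names and the scoring function below are made up purely for illustration.
connectivity_rules = ["random", "small_world", "distance_dependent"]
plasticity_rules = ["none", "hebbian", "stdp"]
coding_schemes = ["rate", "temporal"]

def simulate_and_score(connectivity, plasticity, coding):
    """Placeholder: run a model built from these choices and return how well
    its behavior matches the observed biological data (higher = better)."""
    return len(connectivity) + 2 * len(plasticity) + 3 * len(coding)  # stand-in

best = max(product(connectivity_rules, plasticity_rules, coding_schemes),
           key=lambda combo: simulate_and_score(*combo))
print("Best-matching combination:", best)
```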

Regardless of how digital brains are used, I believe they pose a potential philosophical problem for researchers: what is our ethical responsibility with regards to activating a digital brain? What about deactivating it? I don't think this is just idle navel-gazing (omphaloskepsis!); I believe that a digital brain will be created (though I'm agnostic with regards to the time-scale of when). I believe that now is the best time to consider these kinds of ethical issues. Best to be proactive in this regard. After all, scientists are often criticized for allowing technology to exceed our humanity (as Einstein is oft-quoted as having said).

If we assume that our cognition and consciousness reduce to physical properties, and that those physical properties can be digitally represented and recreated, then in theory I could create a digital person. That person could be a copy of someone specific or it could be a random combination of neurons built in a person-like way. It doesn't matter for this argument.

Currently we can combine such neuronal elements only into crude models that are based on—but are unlike—real animal brains. For now, there is no ethical problem in simulating millions of interconnected neurons. But what about when these models improve? What about when they become 90% human? 99%? At what point do we say the models are human-like enough that stopping them effectively extinguishes a life?

This issue gets at the root of many philosophical problems: what are life and intelligence? What is consciousness? What are our ethical responsibilities as scientists? This is not unlike the current debate regarding animal research. To paraphrase Bill Crum's correspondence to Nature:

the stronger the evidence that [digital brains] provide excellent experimental models of human cognition, the stronger the moral case against using them for... experiments.

4.8.10

Santiago Ramón y Cajal

So this has turned out to be neuroscientific biography week for me, what with two whole biographic pieces now! But after my last post about Hans Berger I just couldn't pass up writing about my favorite neuroscientific forebear, Santiago Ramón y Cajal.

While I had come across Cajal's name prior to graduate school, I didn't really become familiar with him until the beginning of the second year of my PhD, in the fall of 2005. That was the first semester in which I was supposed to work as a TA (or, as Berkeley refers to them, a Graduate Student Instructor (GSI)). Because I have a very strong love for neuroanatomy (after all, neuroanatomy provides the rules that constrain our brains), I opted to teach the laboratory section of MCB 163: Mammalian Neuroanatomy. This was an undergraduate course through the Department of Molecular and Cell Biology—a field in which I had never even taken a class prior to teaching that lab—and it was known as one of the most difficult courses available.

The professor for the course, the late Jeffrey Winer, was an excellent teacher; we spent many hours together with the other GSI preparing for the upcoming lab sections. (Ultimately I ended up receiving the Outstanding GSI Award for my work in this course, which I credit to two things: Dr. Winer's fantastic guidance and the increased respect I got from my students when they found out that the reason I had ditched my first week of teaching was so that I could go to Burning Man...)

Anyway, over the course of the semester Dr. Winer and I had many interesting conversations that quickly led him to introduce me to the work of Cajal. Dr. Winer lent me one of Cajal's autobiographies, Recollections of My Life (Recuerdos de Mi Vida). Although I delayed reading it for several months, I was eventually drawn in (so much so that, when I eventually created the neuroscience wiki for the Berkeley PhD students, I dubbed the site "Cajal"). I later went on to read Cajal's Advice for a Young Investigator, which contains advice that I believe is still relevant for any modern researcher.

Mind you, Cajal was an unrepentant misogynist and Spanish nationalist. There were many times where I would read passages out loud to my wife in amusement and astonishment. I think I was most amazed by how rarely he spoke of his wife and five children in his autobiography; just a few pages here and there.

But damn if the guy wasn't interesting and talented. As a child Cajal was kicked out of most of his schools. He was sent off to various academies, and he would often sneak away. When he was 11 years old he was thrown in jail for a night because he destroyed his town's gate—by building his own cannon and using it to blow the gate apart.

As an adult he was drafted into the Spanish army and served as a military surgeon in Cuba, where he contracted malaria and tuberculosis. He was so severely ill that he was sent back to Spain, where he eventually resumed his research. His work as a neuroanatomist involved many hours spent in front of a microscope examining different types of neurons and the patterns they form. Cajal is most famous for the work that ultimately won him the 1906 Nobel Prize: demonstrating the "neuron doctrine". He shared this prize with his intellectual opponent Camillo Golgi.

Golgi received the prize for discovering what is now known as Golgi's method, a technique for staining neurons—including their axons and dendrites—rendering them visible under the microscope and available for study. The method stains only approximately 5-10% of the neurons in a prepared sample, and though the mechanism by which it works is still unknown, it remains widely used. This is the method that Cajal adopted and improved upon to show that neurons are not actually contiguous, but have gaps (synapses) between them. This is the neuron doctrine.

What's amusing is that Golgi still believed that neurons were contiguous and all physically connected (a view now known as reticular theory, though previously called link theory, among other names), and he said as much in his acceptance speech. Golgi gave his speech first, then Cajal went up to give his. In his Nobel lecture Cajal took several jabs at Golgi's theory, making for what was apparently quite an amusing talk. For example, he says:

...[l]ike many scientific errors professed in good faith by distinguished scientists the link theory is the result of two conditions: one subjective, and the other objective. The first is the regrettable but inevitable tendency of certain impatient minds, to reject the use of elective methods, such as those of Golgi and of Ehrlich which do not lend themselves easily to improvisation...

His flair for the intellectual smack-down aside, Cajal had quite a way with words. Another example from his Nobel speech: "[u]nfortunately, nature seems unaware of our intellectual need for convenience and unity, and very often takes delight in complication and diversity."

As I said, the Golgi method for staining neurons is still used today (among many other methods, of course). What's striking to me is how accurate and intricate Cajal's drawings were. The example at the left is a drawing he made of the chick retina. In modern neuroanatomy such drawings are captured digitally, or the images are projected onto the table next to the microscope for manual tracing. In Cajal's day such methods weren't available. In order to create such accurate, intricate representations of the neuronal fields, Cajal would stare into the microscope for long stretches of time. Once he felt he had the images well memorized he would head down to a local cafe, where he would drink absinthe while drawing the cells from memory. His reproductions were so accurate that they are hardly distinguishable from the images used in modern texts (aside from their artistic superiority).

I suppose my reasons for liking Cajal are manifold. Misogyny and nationalism aside, the man expertly melded art and science, both in his drawings and in his writings. In reading over older scientific manuscripts I can't help but want a little more narrative in modern scientific writing. The dry, technical, academic style that pervades the peer-review system hides the actual process of science: one which is often messy, difficult, and frustrating, but ultimately a wonderfully rewarding experience. I think that adding back a little of that narrative focus on the process would not only improve the readability of the science, but perhaps also engage the public more.

2.8.10

EEG, Hans Berger, and psychic phenomena

Hans Berger

Because electroencephalography (EEG) forms the backbone of much of my neuroscientific research, during my first and second years of graduate school I studied a lot about the history and nature of EEG recording and EEG signals. This research is what led me to Hans Berger, the inventor of EEG. There is an excellent historical treatment of Hans Berger by David Millett, published in Perspectives in Biology and Medicine, from which a lot of this information comes. But Berger's story is so interesting that I wanted to tell a little piece of it here.

Currently I'm interested in the role that neuronal oscillations play in cognition, so EEG tends to be my research method of choice when I do any neuroimaging. (For a more in-depth treatment of what EEG signals mean, check out the Scholarpedia entry or my briefer explanation.) My research involves work with normal, non-invasive (that is, non-surgical) scalp EEG as well as invasive EEG (referred to variously as iEEG, ECoG, or ICE). (For an explanation as to why someone might have electrodes surgically implanted in their brain, check out this part of one of my talks.)

What does all this have to do with Berger? Well, that starts with why Berger was trying to record electrical signals from the brain at all. It turns out that Berger was a big believer in psychic phenomena, namely telepathy. He believed that there was an underlying physical basis for mental phenomena, and that these mental processes—being physical in nature—could be transmitted between people. Thus, in order to show that psychic phenomena exist, Berger sought to uncover the nature of the underlying physical processes of thoughts and emotions.

He initially studied blood flow, using it as an index of the "P-energy" (psychic energy) he believed was associated with mentation and feelings. Of course, this being prior to the advent of neuroimaging, there was no way to directly measure cerebral blood flow in a person. So Berger made a leap. The brain receives so much blood from the heart (about 20% of cardiac output) that it pulses with each heartbeat (you can check out a video of the human brain pulsing here). Parents of newborns might even be able to notice this phenomenon if they lightly touch the soft spot at the top of their baby's head.

After brain surgery, some patients are left with large openings where parts of their skull are missing, and it was through these surgical openings that Berger first measured blood flow from the active human brain, by way of the pulsation pressure. These patients provided Berger with an opportunity to measure the brain's pulsations from a behaving person! Berger continued working with such patients and demonstrated that changes in blood flow were associated with different mental states (fear vs. pleasure, for example), but he wanted to press further. Thus he began his experiments on the electrical nature of brain functioning, and in 1924 he recorded the first human EEG, from a 17-year-old boy with electrodes placed over a large surgical skull defect:

...only when the two clay electrodes were placed 4 cm apart in the vicinity of a scar running vertically from above downwards through the middle of the enlarged trephine opening, was it possible with large magnifications to obtain continuous oscillations of the galvanometer string.

Very early on, Berger noticed fairly regular brain oscillations in the human EEG, such as the one seen below (the top trace is the EEG; the bottom is a constant 10 Hz timing signal).

Hans Berger EEG

These oscillations—now known as alpha waves—are very strong and can be seen quite clearly in the ongoing EEG activity of many people.
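As an aside for the quantitatively inclined: these rhythms are easy to pull out with a power spectrum. Here's a small sketch using synthetic data (not real EEG) showing how a ~10 Hz oscillation buried in noise pops out as a spectral peak; the sampling rate, recording length, and noise level are all arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import welch

# Synthetic stand-in for an eyes-closed EEG trace: a ~10 Hz "alpha"
# oscillation buried in broadband noise (illustrative only, not real data).
fs = 256                                   # sampling rate (Hz); arbitrary
t = np.arange(0, 30, 1 / fs)               # 30 seconds of "recording"
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t)        # the 10 Hz rhythm Berger described
signal += 2 * rng.standard_normal(t.size)  # broadband noise

# Welch power spectrum: the rhythm shows up as a clear peak near 10 Hz
freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
alpha_band = (freqs >= 8) & (freqs <= 12)
alpha_peak = freqs[np.argmax(psd * alpha_band)]
print(f"Strongest 8-12 Hz component at ~{alpha_peak:.1f} Hz")
```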

But what's cool about Berger's first EEG is how he was able to see any signal at all. The patients' surgical skull openings allowed Berger to get better signals because the skull—which attenuates and blurs the electrical activity—wasn't in the way of his recordings. In fact, for several years Berger only recorded from such patients. Coming back to my work: qualitatively, the EEG and iEEG signals are quite obviously different when a researcher compares them, but I became curious about quantifying these differences. That's where my hemicraniectomy study comes into play. As I wrote in my post about that paper:

Working with these patients gave us a unique opportunity as cognitive neuroscientists... One of the things about EEG is that you can't accurately locate where in the brain something is happening, but you can know when it happens with excellent accuracy. However, because these patients literally have a window onto the brain we can get a much better idea of where the signal we're recording is coming from.

One of my favorite anecdotes that my advisor told me when we first started this study was that when he was doing EEG in the 1970s on people who had previously had brain surgery, he noticed that the signals from any electrodes over the small holes in their skulls were abnormally large. So he would just move the electrodes off those sites to avoid this "artifact" (an observation that he says he's kicking himself for not taking advantage of 30 years sooner). Or, as Wired amusingly quoted him, the signals "looked really weird because they were freakishly strong".

So the only reason Berger saw the EEG signal in the first place was that he was working with the same patients whose brain pulsations he had been trying to record. And the only reason he was interested in those brain pulsations was to tie cerebral activity (blood flow) to mental states, to show that thoughts have a physical basis. And the main reason he cared about that was to provide a theoretical framework through which psychic phenomena could operate!

Thus, I owe a good part of my career to Dr. Hans Berger and his unfailing effort to prove telepathy. And my work with people who had undergone a surgical hemicraniectomy—which let us perform cognitive neuroscience and brain-computer interfacing experiments—was published more than 85 years after Berger was recording from the very same types of patients! What an awesome scientific lineage to follow.