Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.



Voytek Journal of Cognitive Neuroscience paper: "Hemicraniectomy: A new model for human electrophysiology with high spatio-temporal resolution"

(Note: this is a repost of my original post from 2009 Dec. I'm reposting some old posts to work within the ResearchBlogging.org framework.)

[Image: hemicraniectomy patient]

This paper grew out of an interesting collaboration with some physicians at the University of California, San Francisco and San Francisco General Hospital, initially through a meeting between Dr. Geoffrey Manley, Dr. Robert Knight, and me. Dr. Manley has recently published several papers on the clinical benefits of performing a decompressive hemicraniectomy on people who have had some kind of head trauma. By way of background, a decompressive hemicraniectomy is a surgical procedure in which the surgeon actually removes a large part of the skull (see the picture at the right) after someone has had head trauma that has caused the pressure inside the skull to increase. This can happen in a few ways, but basically, because the head is an enclosed system, if the brain swells, arteries can get pressed closed. This can cut off the blood supply to different brain areas. The swelling can also cause the brain to press down onto the brainstem, which can lead to coma or death.

These folks go without a big piece of their skull for several months. If you're paying attention, that means... yes... there's not a whole lot protecting their brains. As you can imagine, most of them wear helmets during this time. Also, some of them actually have the piece of their own skull surgically placed inside their abdomen so that the skull tissues can be kept alive before they get their skull surgically put back in!

Working with these patients gave us a unique opportunity as cognitive neuroscientists. Most of my research uses EEG to examine attention and memory processes. One of the things about EEG is that you can't accurately locate where in the brain something is happening, but you can know when it happens with excellent accuracy. However, because these patients literally have a window onto the brain we can get a much better idea of where the signal we're recording is coming from. And, for a variety of reasons, the signal quality is better over this window.

Our lab does a lot of work with humans who have had electrodes surgically implanted directly onto their brains (I'll write a more in-depth post about this topic in the near future when one of my papers on this topic is published). Because of this I see a lot of really clean data from the intracranial recordings that looks much better than the data we see in normal scalp EEG. So we decided to try and quantify these differences to a certain extent, and thus ran this study.

So in this paper we set out to quantify how the brain signals we record in EEG are different between the side of the head with the skull and the side of the head without. And because the signal quality is better, we can do a few cool things with it... like predicting when a person squeezes their hand just by looking at the brain signal.
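The actual prediction in the paper came from real electrophysiology, but as a purely illustrative sketch (simulated data, made-up sampling rate and thresholds, not the paper's method), detecting movement-locked bursts in a power trace might look something like this:

```python
import numpy as np

# Purely illustrative: simulate a noisy "high-frequency power" trace with
# bursts around four hand squeezes, then detect them by thresholding.
# All numbers here (rates, thresholds, burst sizes) are made up.
rng = np.random.default_rng(0)
fs = 100                        # sampling rate in Hz (hypothetical)
t = np.arange(0, 60, 1 / fs)    # one minute of "recording"

squeeze_times = [10, 25, 40, 52]            # seconds
power = rng.normal(0, 1, t.size)            # baseline noise
for s in squeeze_times:
    power[(t > s) & (t < s + 0.5)] += 5.0   # power burst around each squeeze

# Smooth, z-score, and find upward threshold crossings
smooth = np.convolve(power, np.ones(25) / 25, mode="same")
z = (smooth - smooth.mean()) / smooth.std()
onsets = t[np.flatnonzero(np.diff((z > 3).astype(int)) == 1)]

# Merge crossings less than 1 s apart into single detections
detected = [float(onsets[0])]
for o in onsets[1:]:
    if o - detected[-1] > 1.0:
        detected.append(float(o))
print(detected)
```

With a clean enough signal, the detected onsets line up with the simulated squeezes; that's the basic intuition behind using the better signal quality over the craniectomy site for prediction.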

It was a fun project, but a bit tricky to run.

Voytek B, Secundo L, Bidet-Caulet A, Scabini D, Stiver SI, Gean AD, Manley GT, & Knight RT (2010). Hemicraniectomy: a new model for human electrophysiology with high spatio-temporal resolution. Journal of Cognitive Neuroscience, 22 (11), 2491-2502 PMID: 19925193


Quick note: ResearchBlogging.org

So I've decided to try out ResearchBlogging.org, which is a site that aggregates any member's blog posts that cite peer-reviewed research. It's a neat, simple system that works via RSS tracking, and apparently the PLoS family of journals uses blog posts made via ResearchBlogging.org as a measure of impact.

Because half of my posts are about peer-reviewed research (although much of it is about papers more than 100 years old!), I've decided to give it a shot.

What this means to my readers is that I'll be re-posting my older posts on peer-reviewed research so that it will get posted via RSS. I apologize if this is at all annoying to anyone.


Cargo cults of the brain

During World War II, Japanese and American troops operated in large sections of Melanesia. Both sides brought in huge amounts of food, equipment, and supplies for their troops, though by the end of the war the US occupied this region. These islands were inhabited by small indigenous tribes who had never seen such abundance.

After World War II the US abandoned their posts, stopped bringing in supplies, and left the islands. Over time, certain members of the tribe began mimicking the behaviors of the soldiers they had previously seen bringing in vast amounts of wealth. According to Wikipedia,

Cult behaviors usually involved mimicking the day to day activities and dress styles of US soldiers, such as performing parade ground drills with wooden or salvaged rifles. The islanders carved headphones from wood and wore them while sitting in fabricated control towers. They waved the landing signals while standing on the runways. They lit signal fires and torches to light up runways and lighthouses.

In a form of sympathetic magic, many built life-size replicas of airplanes out of straw and cut new military-style landing strips out of the jungle, hoping to attract more airplanes. The cult members thought that the foreigners had some special connection to the deities and ancestors of the natives, who were the only beings powerful enough to produce such riches.

[Image: cargo cult straw airplane]

The tribes didn't understand how the US troops had access to so much wealth. To try and get at that wealth themselves, they simply mimicked the behavior of the troops. Obviously this is a fallacy: mimicking the behavior of a thing is not the same as understanding the thing. To put it another way:

The map is not the territory.

In 2010, we have major neuroscientific endeavors such as the Blue Brain Project. According to this group,

The facility has been used to build the first model of the neocortical column, which consists of 10,000 3D digitizations of real neurons that are populated with model ion channels constrained by the genetic makeup of over 200 different types of neurons. A parallel supercomputer is used to build the model and perform the experiments so that the behavior of the tissue can be predicted through simulations.

[Image: Blue Brain Project]

Maps? Territories? As I said in my old post on the subject,

To think that modeling a bunch of neurons digitally is akin to a thinking, evolved, conscious, aware human brain is like thinking that soldering together a couple of million transistors in an "Apple-like fashion" will give you a working MacBook Pro.

Now, I swear I'm not picking specifically on Blue Brain. I love Blue Brain. I want Blue Brain to succeed. I'm a huge sci-fi nerd! I want my cool brain-computer interfaces, AI, etc. When I wrote a skeptical post about the Blue Brain Project back in July, there were a few challenging comments written in response. However, my critiques of this project are certainly not the only ones.

When the project first started up in 2005, Nature ran a brief news piece on the Blue Brain Project wherein they discussed some of the very same concerns I voiced in my post.

"This is an ambitious project that is bound to fail," says Terry Sejnowski of the Salk Institute for Biological Studies in San Diego, California. "We are still far from understanding enough about the brain to build a detailed realistic model."

Neuroscientists say that too little is known about the structure of the network connecting cortical cells, for example. They add that a truly realistic model would have to incorporate molecular activity in the regions where neurons connect, a level of detail that is currently beyond the Blue Brain Project.

Right. Well, at the time of my posting, I hadn't read that, but that was pretty much what I said:

...[A]t this point we honestly just don't know enough about how all the pieces play together to give rise to our cognition. I think the neurosciences right now are where physics was in the early 1900s. A bunch of people thought Newtonian mechanics could explain everything. Turns out, the physical universe is much more complicated than that....

...[W]e know a lot about the biology of the neuron. Similarly, computational modeling has gotten very sophisticated. When researchers build computational models incorporating known biology, they call it a "biologically-plausible" model. I think we're still stuck in the Newtonian mechanics period of neuroscience, and we're just now segueing into the more complicated "oh my god this stuff is harder than we thought!" part of our science.

However, people seem to think that the fact that I don't believe that the Blue Brain Project, as currently instantiated, will give us a human brain model somehow translates to me thinking that the Project shouldn't be done. As I said in the comments to my post,

Blue Brain's an excellent step in the right direction. However the people selling it are over-hyping what we do know in neuroscience.

There's a difference between the practice of science and the salesmanship of science. For the former, failure is a critical component!

This month, in Nature, Melanie Stefan wrote an excellent piece on how individual scientists should emphasize their failures, a practice which I myself have put into play in my own CV!

I embrace failure! And Blue Brain will fail! And then we scientists will address those failures, highlight and repair the faults, embrace the successes, and iterate forward. And future Blue Brain may very well succeed!

But that's not my issue.

My main issue is the salesmanship. The grantsmanship. On their own site, Blue Brain claims that it is, "A Novel Tool for Drug Discovery for Brain Disorders" that will, "...provide a concrete foundation to explore the cellular and synaptic bases of a wide spectrum of neurological and psychiatric diseases."

This is called "over-hyping", and it happens all the time. Of course, this isn't a problem just with Blue Brain, or even in science. Anyone who has interviewed someone for a job, been on a date, or basically lived in the western world will be quite familiar with this phenomenon.

But, when it comes to science especially, over-hyping needs to be reined in. It's a problem that is endemic to the very system in which modern science operates.

Now the new hotness is "connectomics", a research path for which I am a very strong advocate! As my friend Josh says, neuroanatomy is the RULES! If you don't know the anatomy then you can't say much about the brain!

[Image: Human Connectome Project]

Well, this week, Nature Neuroscience takes the over-hyping of the Human Connectome Project to task. (By the way, tons of love for the Nature Publishing Group in this post, apparently. Of course, being NPG, all of their links will be paywalled and thus inaccessible to many readers... ::sigh::)

Anyway, the piece, "A critical look at connectomics", astutely points out that,

Local connections in brain regions, which are roughly 80% of the connections in the cerebral cortex, are invisible to [current] imaging methods. Thus... the Human Connectome Project will necessarily provide us with partial and probabilistic data.

This is similar to the issue with the Blue Brain Project wherein they're currently (I believe) only modeling neocortical columns (for now). This represents fewer than half of the neurons in the human brain, fewer than 5% of the total cells in the brain, and who knows what portion of cognition (if it even makes sense to talk about "cognition" outside of a full brain). Of course, given my love for subcortical brain regions (see my PNAS paper), I'm a bit biased, but non-cortical brain areas really can't be ignored when you're talking about understanding brain disorders!

The editors at Nature Neuroscience go on to say,

It's tempting to sell the Human Connectome Project, and connectomics in general, as directly relevant to disease, particularly given the public money invested. However, given the challenges that this field is facing, it seems ill-advised to present connectomics as providing immediate answers for disease when it is clear that this is a long-term goal that will require the continued support and collaboration of the neuroscience community and the tax-paying public.

To translate: as scientists, the hyping that we do to get grants commits us to a message that may very well be detrimental to the very scientific endeavors that we love so much that we've dedicated our lives to pursuing them. Blue Brain does this. I've done this. Anyone who's written a grant has done this. And we need to stop.

So this post is to affirm my commitment to reining in the hype. We can be exciting and relevant, without blowing smoke up the public's ass.

[EDIT: A friend pointed out to me that this post is similar in concept to Feynman's "cargo cult science" idea. While I wasn't consciously aware of this prior to writing this post, I still feel compelled to point that out in case I was unconsciously building off of it. Plus, Feynman is awesome and I love that phrase now and will be using it often, I'm sure, much to the annoyance of my wife, friends, and colleagues.]


Literary neuroscience: “Unseeing” in China Miéville’s The City & The City

As everyone is well aware by now, last month was quite the zombie neuroscience month for me. There was zomBcon, my interview of George Romero, and the National Geographic special, "The Truth Behind Zombies".

Well, I don't want to be a one-trick zombie pony. So I'm branching out with the whole neuroscience and science-fiction thing.

A few weeks ago my friend and colleague Roby Duncan told me that he was submitting an abstract to the 32nd annual International Conference on the Fantastic in the Arts in Orlando, Florida. He told me this the day before the deadline while I was at the annual Society for Neuroscience conference in San Diego. Yeah; I was working on a tight schedule...

Yesterday, I found out that my abstract was accepted! Thus, I will have 20 minutes to "read my paper" at the conference in March. That last bit is in quotation marks because that's a social science/humanities phrase that I think means give a talk, but I'm honestly not quite sure and I need to sort that out. This is my first non-scientific conference presentation. It should be an interesting conference. After nearly a decade of scientific conferences, I'm curious to see how the other half of academia approaches things. I expect about the same: with beer.

Anyway, for those interested, I'll be talking about China Miéville’s book, The City & The City, and the unique perceptual/awareness habits of its citizens. It's an excellent book, and Miéville is one of the best contemporary science-fiction/fantasy writers, in my opinion. The full abstract, as accepted, is below.

Breach in the mind: The hypothetical neuroanatomy subserving the process of “unseeing” in China Miéville’s The City & The City

In China Miéville’s The City & The City, citizens of the grosstopically overlapping cities of Besźel and Ul Qoma are taught from birth to “unsee” the architecture, people, events, and surroundings of the other city. Despite the terminology, unseeing is not just limited to the sense of vision, but to all other senses as well, and as such citizens must also “unhear” and “unsmell” stimuli from the other city. The consequences of failing to unsee are dire and possibly life threatening, as the semi-mystical force of “Breach” is charged with removing any offenders who willfully or accidentally notice the other city. In areas where the cities are cross-hatched, citizens of each city must carefully and selectively unsee their surroundings, even for houses neighboring theirs, for cars sharing the same roads, and for people walking the same streets. All the while they must unsee while noticing just enough to avoid running into their forbidden neighbors.

Although Miéville uses the process of unseeing for great narrative effect in a fictional setting, there is a rich neuroscientific literature surrounding the neuroanatomical bases for attention and awareness, perception, directed forgetting, sensory adaptation, repetition suppression, and other associated processes. In this presentation I will provide an introductory discussion on the neuroanatomical basis of attention and perception. From that foundation I will then provide a “hypothetical neuroanatomy” of what the brain of a person raised in a culture of unseeing might look like such that they could consciously and willfully unsee.

According to ironic process theory, the human brain fares quite poorly at avoiding certain thoughts when deliberately trying to suppress them. Thus I propose that a Besz or Ul Qoman citizen’s brain must develop differently when raised in an unseeing society to allow for such directed forgetting. Such goal-directed behaviors are mediated by an area at the front of the human brain known as the prefrontal cortex. The prefrontal cortex exerts control over sensory processes during normal perception, memory, and cognition. This process is referred to as “top-down” control.

I incorporate into my hypothetical neuroanatomy information from literature on patients with focal brain lesions, from neuroimaging, and from neural development to provide a hypothetical account for unseeing. Specifically, I will cite evidence from the brain lesion literature that shows that damage to specific brain regions affects the ability to attend to, remember, or be aware of certain stimuli, as well as brain imaging studies on attentional “blinks” and the role of ongoing brain activity in awareness, perception, and memory. Finally, I will discuss the physiological mechanisms behind sensory adaptation and how such mechanisms may subserve unseeing.

This science-nonfiction evidence can provide an understanding of the science-fiction of the Besz and Ul Qoman brain. I believe that the neuroanatomical plausibility of Miéville’s unseeing is what lends such strong credibility and interest to the story itself, as his narrative device of unseeing remains fantastic enough to differentiate from the real while being grounded enough in fact to remain comprehensible and relatable.


Updating university education

There was a question over on Quora a while back: "Will lecture-style teaching at universities become obsolete? If so, what do you think will replace lectures?"

I gave a somewhat off-the-cuff answer based on some thought processes that had been kicking around in the back of my head for a while. I'm curious to hear what people think.

From a purely cost-benefit point of view, it always struck me as wastefully redundant to have college professors and lecturers, many of whom are sub-par teachers but who may be excellent researchers, teaching the same basic math, literature, biology, physics, chemistry, etc. courses. There are more than 4000 colleges in the US alone. At 10 hours of work per week of lecturing, prep, grading, etc., that’s more than 1 million hours per year in redundant work by highly trained specialists. Per course.
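A quick back-of-the-envelope check of that number (the count of teaching weeks per year is my assumption, as is treating all 4000+ colleges as equivalent):

```python
# Back-of-the-envelope check of the "million hours" claim above.
# ~26 teaching weeks per year is an assumption for illustration only.
colleges = 4000
hours_per_week = 10   # lecturing, prep, and grading for one course
teaching_weeks = 26
total_hours = colleges * hours_per_week * teaching_weeks
print(total_hours)    # over 1 million redundant hours per year, per course
```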

Why are we having a bunch of people who are trained to do research (and often not trained to teach), teach the same redundant information that takes time away from their research? Why not just broadcast lectures by the best of the best via some education syndication to consolidate the actual lecturing and have other professors and lecturers on-hand to supplement the information?

That said, homogeneity of education strikes me as a less-than-ideal solution...


zomBcon interview with George Romero: fast vs. slow zombies

[Image: fast vs. slow zombies]

Some videos are now online!

As I mentioned previously, at this year's first annual zomBcon, much to my surprise, I found myself moderating and leading a panel with George Romero.

Ostensibly, the panel was meant to consist of George, myself, and two other panelists. The topic of conversation was slated as "fast vs. slow zombies". If you're unfamiliar with this terminology, the fast/slow zombie issue is a big one in zombie cinema.

The original Romero zombies (or, as he calls them, the "living dead") from his Night of the Living Dead were very slow moving. As he explained, this is because... well... they were dead. Rigor mortis and all. Of course they would be slow.

For many, the draw to the zombie genre is the very fact that no one zombie is necessarily scary. They're easy to kill, easy to outrun, and easy to outwit. The horror in zombie movies is the sheer numbers involved. There are hundreds, if not thousands, of these dumb, slow moving, relentless creatures that have no desire and no will, just the need to feed. National Geographic's The Truth Behind Zombies (with yours truly!) goes into all of this stuff in some detail.

Now, in 1985, Dan O'Bannon (writer on Alien and Heavy Metal, effects specialist for Star Wars) made Return of the Living Dead. This somewhat horror, somewhat (now) comedy introduced the slightly more intelligent, faster, more coordinated zombie. While this movie is often most famous for Linnea Quigley's dance scene (googling this is NSFW), it also co-stars MST3K favorite, Clu Gulager (Clu Gulager alert!).

Back to the panel. What ended up happening was that George showed up a bit late while a few hundred people waited and I paced around up front. The other panelists never arrived, so it was just me on stage with Mr. Zombie himself. In the end I pulled up the rest of the zombie "brain trust" to help me out: Tim Verstynen and Steve Schlozman. We talked a bit about zombie neuroscience, video games, cinema, etc. It was very informal, and a lot of fun.

Looks like someone filmed the whole thing, too! The audio quality isn't great, but if you turn up the volume you can hear it decently well.

Here's the first video:

along with parts two, three, four, and five.


Voytek Frontiers in Human Neuroscience paper: "Shifts in gamma phase-amplitude coupling frequency from theta to alpha over posterior cortex during visual tasks"

This post is about my latest paper published in the (open access!) journal Frontiers in Human Neuroscience. This paper was actually an invited submission (even though it was invited, it was still peer-reviewed!). It was part of a special topic issue, "Origins and consequences of rhythmic cortical activity".

The Frontiers journals are, simply put, amazing. For those of you unfamiliar with the peer-review process, Penny Arcade sums up the experience pretty nicely (satirized here).

In all seriousness, here's how it works:

1. As a reviewer you often see the names of every author of the paper you're reviewing, but you never get to know who your reviewers are. The counter argument against changing this and making the system double-blind is often that, "people can often tell who wrote a paper anyway based upon the content, methods, etc."

To deal with this, the Frontiers journals start off single-blind, but after all the reviews are completed the reviewers are unblinded and everyone knows who everyone is. This has led to substantially nicer, more helpful reviews, in my opinion. All reviewer names are published along with the paper which means that the reviewers are held somewhat publicly responsible as well.

2. Speed and nature of communication. Reviewers often take weeks or months to review a paper. And then the author takes several weeks to respond. And then the reviewers go back and review the responses, etc. This can lead to a several-months long review process. Again, the Frontiers journals have addressed this nicely. After the first round of reviews the editor initiates an interactive online forum where the editor, reviewers, and authors can interact at a more rapid pace.

3. Editors and "novelty". Often papers get rejected before ever being reviewed because an editor deems the paper to be not "novel" enough to warrant publication in their journal. This tends to be a problem for the "high impact" journals. The Public Library of Science has addressed this by introducing PLoS ONE, which publishes nearly any paper deemed scientifically and methodologically sound, regardless of "novelty".

So the review process was actually quite interesting and quick. It's nice to see some publishers embracing technology a bit and allowing for rapid, forum-style communication between the authors and reviewers.

As for the paper itself, the idea grew out of a pretty simple follow-up based on an awesome paper by my friend, colleague, groomsman, and co-author, Ryan Canolty. In 2006, Ryan published a paper in Science: "High gamma power is phase-locked to theta oscillations in human neocortex". As the title implies, they found that oscillations in the human neocortex form nested rhythms across frequency bands. They showed that the phase (how "peak-like" or "trough-like" the sinusoid is) of low frequency "theta band" activity (4-8Hz) modulates the amplitude of high frequency "gamma band" activity (80-150Hz).

[Figure: toy example of theta-phase/gamma-amplitude coupling]

More simply: when the theta wave is at its lowest point, the trough, power in the gamma band is highest. You can see a toy example of this in the image above.
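A minimal simulation of what's being described (synthetic data, not the paper's analysis; the 6 Hz and 80 Hz center frequencies, the band edges, and the Canolty-style modulation index are just for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Synthetic nested oscillation: an 80 Hz "gamma" whose amplitude is largest
# at the trough of a 6 Hz "theta" wave. All parameters are toy values.
fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 - theta) / 2 * np.sin(2 * np.pi * 80 * t)  # envelope peaks at trough
signal = theta + 0.5 * gamma

# Theta phase and gamma amplitude via band-pass filtering + Hilbert transform
b_th, a_th = butter(4, [4, 8], btype="bandpass", fs=fs)
b_g, a_g = butter(4, [70, 90], btype="bandpass", fs=fs)
theta_phase = np.angle(hilbert(filtfilt(b_th, a_th, signal)))
gamma_amp = np.abs(hilbert(filtfilt(b_g, a_g, signal)))

# Canolty-style modulation index: amplitude-weighted mean phase vector.
# Its angle is the theta phase where gamma is strongest (~±pi = the trough).
coupling = np.mean(gamma_amp * np.exp(1j * theta_phase))
print(round(np.abs(coupling), 3), round(np.angle(coupling), 2))
```

The magnitude of that vector tells you how strongly gamma amplitude is locked to theta phase, and its angle tells you where in the theta cycle gamma is biggest; here, by construction, at the trough.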

Great! But why do we care?

First, gamma band power correlates with single-unit (neuronal) spiking activity, which in turn correlates with the fMRI BOLD signal. That is, all of these different signals that we measure might be a way of getting at neuronal activity more directly.

Second, oscillations in low frequency rhythms are probably reflecting (sub-threshold) changes in the extracellular membrane potential. For neurons to "fire" an action potential, ion channels in the cells themselves must open to allow ions (and thus charge) to flow.

Third, low frequency oscillations may help coordinate long-distance communication between brain regions by "shaping" which neuron groups are more likely to respond to a stimulus by biasing the statistical probability of action potentials occurring. Thus, these nested brain rhythms might reflect a mechanism of connecting single-unit activity with huge brain networks, and may reflect the way that the brain "works".

That was really dense. Let's unpack that.

We don't know how different brain areas communicate to give rise to cognition. There's a complicated code that we don't understand. This nested oscillations idea might connect the really low-level physiology of the brain with high-level cognition that requires communication between a lot of brain regions. And it ties it all nicely together into a cool communication system where different low frequencies could act as "switches" to bias information flow between brain regions.

I've talked about oscillations here before: in my post about the paper "Endogenous Electric Fields May Guide Neocortical Network Activity", in my post on neuroimaging, "What can we measure using neuroimaging techniques?", and in the post about my paper in the Journal of Cognitive Neuroscience, "Hemicraniectomy: A new model for human electrophysiology with high spatio-temporal resolution".

In this paper we recorded data from two human patients with implanted subdural electrodes. This technique—known alternately as "electrocorticography" (ECoG), "intracranial EEG" (iEEG), or "intra-cranial electrophysiology" (ICE)—is a surgical procedure done as a treatment for (usually) epilepsy. I've talked about this stuff before (see the above links); it's a staple of my research. (For a more detailed explanation as to why someone might get electrodes surgically implanted into their brains, check out this part of one of my talks).

The first step was to recreate Ryan's findings that gamma amplitude couples to theta phase.

[Figure: theta/gamma phase-amplitude cross-frequency coupling during non-visual tasks]

When we recreate the conditions of Ryan's experiments (auditory tasks, frontal electrodes) we see really nice theta/gamma coupling, as can be seen in the image above. When the subjects are performing non-visual tasks, theta/gamma coupling is strong across most electrodes. The more red the electrode, the stronger the theta/gamma coupling. In the comodulogram (colorful thingy on the left) you can see the average theta wave in the specific highlighted electrode. You can also see the red stripes above it that occur during the trough of the theta. The more red those stripes, the higher the gamma amplitude. Nice coupling.

[Figure: alpha/gamma phase-amplitude cross-frequency coupling during visual tasks]

Now, in contrast, when the subjects perform visual tasks, we see that electrodes over the posterior (visual) parts of the brain begin to exhibit coupling between gamma power and a different low-frequency band: alpha. Alpha is a "visual" brain rhythm that is strongly modulated by visual attention. When subjects are visually engaged, we find that phase-amplitude coupling over the posterior cortex shifts to an alpha/gamma pairing.

This paper is the first time anyone has shown that the phase frequency in phase-amplitude coupling is selectively modulated by behavioral state.
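To make that logic concrete, here's a hedged sketch (synthetic signals, arbitrary band edges, a simple Canolty-style modulation index rather than the paper's actual statistics) of scanning two candidate phase bands to see which one's phase modulates gamma amplitude:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Synthetic stand-ins for the two task states: gamma (80 Hz) amplitude driven
# by a 6 Hz "theta" phase in one, and by a 10 Hz "alpha" phase in the other.
fs = 1000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

def coupled_signal(f_low):
    low = np.sin(2 * np.pi * f_low * t)
    gamma = (1 - low) / 2 * np.sin(2 * np.pi * 80 * t)
    return low + 0.5 * gamma + 0.3 * rng.normal(size=t.size)

def modulation_index(x, phase_band):
    b_p, a_p = butter(4, phase_band, btype="bandpass", fs=fs)
    b_g, a_g = butter(4, [60, 100], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b_p, a_p, x)))
    amp = np.abs(hilbert(filtfilt(b_g, a_g, x)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

winners = {}
for label, f_low in [("theta-coupled", 6), ("alpha-coupled", 10)]:
    x = coupled_signal(f_low)
    mi = {"theta": modulation_index(x, [4, 8]),
          "alpha": modulation_index(x, [8, 12])}
    winners[label] = max(mi, key=mi.get)
print(winners)
```

The band whose phase wins the comparison flips with the coupling frequency of the simulated signal, which is the toy version of the theta-to-alpha shift we report over posterior cortex.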

Good times!

This work was financially supported by the (sadly defunct) American Psychological Association Diversity Program in Neuroscience grant 5-T32-MH18882 (to B.V.) and the National Institute of Neurological Disorders and Stroke grants NS21135, NS21135-22S1, and PO40813 (to B.V. and R.T.K.).

Voytek B, Canolty RT, Shestyuk A, Crone N, Parvizi J, and Knight RT (2010). Shifts in gamma phase-amplitude coupling frequency from theta to alpha over posterior cortex during visual tasks. Front Hum Neurosci.


What's in a signal? Self-paralysis for neuroscience!

It's been a while since I've had a "fun" post because I've been so busy with work stuff. There was my Japan trip, zomBcon, and I'll be giving a talk at the annual Society for Neuroscience conference next week. In between I've been writing a lot about my recent work and associated press.

Time to tone down the narcissism and approval-seeking and talk about some Crazy Science History!

I've already talked about Henry Head's self-mutilation and penis experimentation as well as Brown-Séquard sperm injections. As some of you can tell, I'm fascinated by this kind of work. So when my PhD advisor pointed out this paper to me, I knew I had to cover it here.

The paper title seems tame enough: "Nature of average evoked potentials to sound and other stimuli in man" by Reginald Bickford, James Jacobson, and D. Thane R. Cody from 1964. Sounds innocuous, but wow it's weird!

But first: context!

As I've mentioned, scalp EEG has some pretty serious limitations, even though it's my technique of choice. EEG has a long history in cognitive neuroscience but it's not without some controversy, still.

Within the EEG community, there's a now classic paper (and somewhat controversial in its own right) by Yuval-Greenberg out of the lab of my colleague Leon Deouell titled, "Transient induced gamma-band response in EEG as a manifestation of miniature saccades". As I said in my earlier post on EEG, "...the EEG signal is (generally) dominated by surface cortical signals." Note the parenthetical. Generally. It turns out we might be confounding a lot of "brain" signal with what's really some other physiological signal not coming from the brain, which Yuval-Greenberg and colleagues elegantly demonstrated in their work.

By the way, all imaging techniques are susceptible to these non-brain physiological confounds, which is one reason I work with stroke patients: it certainly provides stronger causal brain/behavior evidence than imaging methods.

Anyone who's worked with EEG knows that signal artifacts from eye movements are a major potential source of noise. So is muscle activity. In fact, because I was so curious about my method of choice, I worked hard to quantify some of these artifacts in my hemicraniectomy paper.

Try this: with your finger, touch the very front top of your ear, then move your finger directly from there to your scalp. Now move straight up about 1-2 inches from there, press your finger hard against your head, and then bite down.

Feel that? That's your temporalis muscle. It's big.

Bradley Voytek

Electrical activity from the muscles can be picked up quite easily. This is a technique known as electromyography. It's pretty straightforward: your muscles contract because of electrical signals, and those signals can be detected. Similarly, using a technique called electroretinography you can record signals from your retina.

What does this mean for EEG? Well, muscle and eye activity is very strong compared to brain activity recorded from the scalp. This is why electrocorticography is so nice, and why at-home EEG devices (that shall not be named here) that get a lot of buzz for being "brain-controlled" are so fraught with issues.

It turns out that, if you're running an EEG experiment and you get a nice big "brain" effect, what you might actually be seeing is an artifact of muscle or eye activity. Think about it this way: say you're running an experiment to look at the neural basis of attention. To do this you play a bunch of auditory pure tones, and 10% of the time the tone is louder and the subject is supposed to respond. When you play those louder tones, you see a big EEG response. Well, what if the loud tones cause the person to flinch or blink ever so slightly? Because EEG signals are hugely amplified in order to detect brain activity, these slight contractions look really big, and it would appear that you have just seen a big neural response, when in reality the person might just be flinching!
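One common way to screen for this kind of contamination is to look at where the spectral power lives: scalp EEG power is concentrated at lower frequencies, while EMG is broadband and dominates above roughly 30 Hz. Here's a minimal sketch on simulated data (the 30 Hz cutoff, the noise levels, and the function names are illustrative assumptions of mine, not the method from any of the papers above):

```python
import numpy as np
from scipy.signal import welch

def high_freq_ratio(signal, fs, cutoff=30.0):
    """Fraction of spectral power above `cutoff` Hz.

    EMG is broadband and dominates above ~30 Hz, while scalp EEG power
    sits mostly below it, so a high ratio hints at muscle contamination.
    (The 30 Hz cutoff is an illustrative choice, not a published standard.)
    """
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    return psd[freqs > cutoff].sum() / psd.sum()

fs = 500                      # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)   # two seconds of data
rng = np.random.default_rng(0)

# "Clean" epoch: a 10 Hz alpha oscillation plus a little sensor noise.
clean = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# "Contaminated" epoch: same oscillation plus strong broadband noise
# standing in for a muscle twitch.
contaminated = clean + 1.5 * rng.standard_normal(t.size)

print(high_freq_ratio(clean, fs))         # low: power is mostly alpha
print(high_freq_ratio(contaminated, fs))  # much higher: broadband "EMG"
```

In practice people use smarter approaches (ICA, reference EMG channels), but the basic intuition is the same: muscle shows up where brain mostly doesn't.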

How do you disentangle these phenomena!? (What will really twist your noodle is that the flinching is also controlled by the brain!)

This is where we come to the Bickford paper (at last!)

What did Bickford do? He had his colleagues inject him with the neurotoxin curare, commonly known for its use in arrows by South American hunters.

Curare is a strong muscle relaxant. So strong, that at the doses used in the experiment, Bickford had to be placed on an artificial respirator, or else he would have asphyxiated due to the paralysis of his breathing muscles!

What they found was that the EEG responses they had been finding in response to the auditory tones were completely abolished once all of Bickford's muscles had been paralyzed. They interpreted this as demonstrating that the auditory tone responses seen in EEG in this case aren't due to brain activity, but rather due to muscle activity from slight head movements in response to the tones.

This is not to say that auditory tones don't have a neural component, of course, just that a researcher must take care of confounds when conducting experiments.

It's amazing that this kind of work is still going on. Here's a paper from an Australian group from 2008 in which they use a very similar design, going so far as to paralyze four of their subjects! Thinking activates EMG in scalp electrical recordings. (It's unclear whether the authors were the paralyzed subjects in this experiment.)

As an aside, my academic grand-father, Robert Galambos, was very involved in this kind of auditory work. His research proved that bats use hearing to echolocate, and led to the development of the auditory brainstem response exam that's given to almost every newborn. In fact... he deserves his own post....


Voytek Neuron paper: "Dynamic Neuroplasticity after Human Prefrontal Cortex Damage"

This post is about the second of my most recent publications, "Dynamic neuroplasticity after human prefrontal cortex damage", published in the journal Neuron. This paper seemed to be the next logical extension of my recent PNAS paper (open access!), "Prefrontal cortex and basal ganglia contributions to visual working memory".

(EDIT: Here is the official press release and some media.)

While the peer-review process for the PNAS paper was genuinely a pleasant experience, this one was quite a lot more difficult. For starters, this paper was actually outright rejected from Neuron. Twice. And we appealed. Twice. No one can say I'm not adamant, at least.

So as I showed in the PNAS paper, patients with unilateral prefrontal cortex (PFC) lesions do just fine when visual stimuli enter their "good" (non-lesioned) hemisphere, but they show behavioral deficits when the stimuli enter the "bad" (lesioned) hemisphere. As I explained in my post on the PNAS paper, this research was an extension of work published in Nature Neuroscience by my advisor Robert Knight and colleague Francisco Barceló (both coauthors on this Neuron paper). So unilateral PFC lesions cause attention and memory deficits for contralesional visual stimuli.

However, these deficits are unlike those from lesions to the motor cortex, for example. If you have a motor cortex lesion, chances are you'll be paralyzed on the opposite side of your body (this is called hemiparesis, since it's paralysis on only half the body). It's not like the people with unilateral PFC lesions have no working memory or attention. They're just subtly worse at these things than people without lesions. Why is there such a big difference between the frontal lobe lesions that affect movement and the lesions that affect cognitive functions?

It's all about connectivity and distributivity!

Think about it this way: if you cut your internet cord, you're not going to be able to access google.com, because you've removed your final, most important connection to the internet. But if you cut a hundred cords at Google, you'll still be able to access google.com, because google.com isn't centralized; it's distributed across many thousands of machines. Those 100 cords weren't doing nothing, though, so you should be able to measure how the information in the network rerouted to get around those 100 missing connections.

That's the idea behind this paper. Sure, PFC lesions cause problems, but how are the people with those lesions doing as well as they are? I mean, if you stick 100 people in an fMRI machine and have them do a visual attention or working memory task, their PFC will "light-up" in a task-dependent manner. So why can people with huge amounts of damage to that region still do the task? This kind of indicates that hey, maybe cognition isn't so nicely localizable.

So we built off previous work on language and motor recovery that showed that recovery was associated with task-specific increases in the homologous brain regions in the non-damaged hemisphere.

So I re-analyzed the data from my PNAS paper, as well as some older PFC attention data from our lab.

Bradley Voytek Neuron 2010 Dynamic neuroplasticity after human prefrontal cortex damage

Just like the figure from my PNAS paper post, the figure above shows the average of the two patient groups where the color represents the number of patients with a lesion in that exact brain area.

Bradley Voytek Neuron 2010 Dynamic neuroplasticity after human prefrontal cortex damage

As you can see in the figure above, when we increased the memory load for our subjects with PFC lesions they showed increasing activity over the undamaged PFC. The greater the memory load, the harder the intact PFC seemed to be working.

What was new in our study was that we showed that this compensation occurs for cognitive tasks, and it occurs very rapidly (within 600 milliseconds) and only as needed. That is, the intact PFC seems to be "recruited" when the task is too hard.

Bradley Voytek Neuron 2010 Dynamic neuroplasticity after human prefrontal cortex damage

We tried to show this "recruitment" using a very rudimentary connectivity analysis where we showed that early activity in the visual cortex (green circles at the back of the brain) was strongly correlated with activity over the intact PFC only when the lesioned hemisphere was challenged.
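To give a flavor of what a rudimentary connectivity analysis looks like, here's a minimal sketch that correlates simulated single-trial response amplitudes from two regions (the data, numbers, and function names are all made up for illustration; the paper's actual analysis is more involved than this):

```python
import numpy as np

def trial_correlation(region_a, region_b):
    """Pearson correlation between two regions' single-trial amplitudes.

    A strong correlation is (weak) evidence that activity in one region
    tracks activity in the other; it says nothing about direction.
    """
    return np.corrcoef(region_a, region_b)[0, 1]

rng = np.random.default_rng(1)
n_trials = 100

# Simulated early visual-cortex response amplitudes, one per trial.
visual = rng.standard_normal(n_trials)

# "Coupled" condition: PFC amplitude tracks the visual response.
pfc_coupled = 0.8 * visual + 0.3 * rng.standard_normal(n_trials)

# "Uncoupled" condition: PFC amplitude is unrelated to it.
pfc_uncoupled = rng.standard_normal(n_trials)

print(trial_correlation(visual, pfc_coupled))    # strong positive
print(trial_correlation(visual, pfc_uncoupled))  # near zero
```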

This work was financially supported by the (sadly defunct) American Psychological Association Diversity Program in Neuroscience grant 5-T32-MH18882 (to B.V.) and the National Institute of Neurological Disorders and Stroke grants NS21135, NS21135-22S1, and PO40813 (to B.V. and R.T.K.).

Voytek B, Davis M, Yago E, Barcelo F, Vogel E, and Knight RT (2010). Dynamic Neuroplasticity after Human Prefrontal Cortex Damage. Neuron.


Neuroscience world tour! Japan, zomBcon, University of Washington

What a hell of a few weeks!

Just about two weeks ago I left for a trip to Japan. I was invited to give a lecture at a workshop titled "Neuronal oscillations in multi-scale brain networks" at the 2010 International Congress of Clinical Neurophysiology. My presentation was related to my recent paper in Frontiers in Human Neuroscience, "Shifts in gamma phase-amplitude coupling frequency from theta to alpha over posterior cortex during visual tasks".

Our hostess was Dr. Noriko Tsuru; she was amazingly generous and a very interesting researcher. I couldn't have been more pleased to meet her. And she was kind enough to introduce me to Japan's Science Minister, with whom I got to share a bit of sake! I also got to meet Dr. Ryuta Kawashima (of Brain Age fame) very briefly!

Bradley Voytek ICCN

The trip was amazing and I got to meet some brilliant people. Japan was beautiful, the food was excellent, and overall it was an interesting personal and cultural experience. I would love to go back for a more personal vacation some day. We saw and did so much, that I really can't capture it all here. Though, I do have time for one quick historical anecdote that I loved. We visited a castle that was a major stronghold during the Edo period. Some rival clans came to siege the castle and were completely stopped by the huge moat surrounding the compound. Instead of giving up, the attacking general spent several months re-routing the local river, flooded the moat, and proceeded to sail in ships to bombard the castle walls. Friggin' AWESOME.

My wife and I also visited a super interesting place in Osaka. It was a 9-story entertainment... place. It had three stories of bowling, two for gambling, two for video games, a bar, darts, billiards, karaoke... everything. Check it out:

This week I also applied for a few faculty positions at UCLA and Stanford. This has been an exciting process, because I'm doing it much earlier than I was initially expecting (considering I'm only a few months into my post-doc). But I've had a pretty good publishing run, and several faculty members have suggested I give it a shot. The worst-case scenario is that I continue doing my post-doc as planned, with my excellent mentor(s).

By the way, I think I should get extra "definitely prepared for academia" points for putting the final touches on my application essays while riding the ferry across Osaka Bay from Kobe (the site of the conference) to Kansai Airport...

Now, instead of flying directly home from Japan, as any sane man should have done, I stopped instead in Seattle for a long weekend of presenting both real and fake science! Hooray!

As a member of the advisory board for the Zombie Research Society, and one of the world's experts on the zombie brain, I was asked to attend the first annual zomBcon!

Here, along with my colleague Tim, I gave a presentation on the anatomy of the zombie brain. The presentation went over very well, and people really seemed interested in the actual science! One guy even told us that he was planning on going back to school to get his bachelors in psychology after hearing us talk. That's really, honestly, the best kind of reaction I can hope for from these kinds of outreach efforts.

Coincident with zomBcon was the release of National Geographic's TV show "The Truth Behind Zombies" starring yours truly, and other board members of the ZRS. I haven't even seen it yet, but I hear it's not too bad.

One of the most exciting parts of zomBcon was when I found myself suddenly moderating an hour-long interview with the legend, George Romero, director of the original, 1968, Night of the Living Dead. We sat up on a stage in front of a few hundred zombie enthusiasts shooting the shit about zombies, psychology, neuroscience, and sociology, as well as his personal history and interest in film.

Bradley Voytek George Romero Steven Schlozman zombies zombcon

To round it out, Romero is also now an official member of the ZRS advisory board! So chances are, we'll get to hang out again.

Another zomBcon highlight was the fact that I got to sit in on a 5-person panel with Fight Club author Chuck Palahniuk! We talked about zombie infection, and why we think zombies are so culturally fascinating and hot right now. We got to talk about memetics, neuroscience, film, etc. Apparently he's also writing a piece for Rolling Stone about zomBcon. Can't wait to read it.

Bradley Voytek Chuck Palahniuk zombies zombcon

(That's Palahniuk on the far right)

Finally, I gave a scientific talk at the University of Washington Monday about my latest research. Attending the talk was a neurosurgeon who is a descendant of legendary psychologist and neuroscientist Donald Hebb. And I got to have a nice, private dinner hosted at my friend Kai's place with a member of the Ojemann family, son of neurosurgeon George Ojemann (who trained with Wilder Penfield). These guys have some crazy neuroscientific lineages!

Like I said, hell of a few weeks! And even better, I've got some big news coming in the next few days!