Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.



Endogenous Electric Fields May Guide Neocortical Network Activity

I’ve been geeking out about this paper for a week or so now, so I finally decided to put together a post explaining why I think it’s so awesome.

I’ve been thinking about the foundations of electrophysiological research in neuroscience. The earliest experiments on the electrical properties of neurons were performed on giant squid axons because the axons in these animals are quite large and thus easy to record from. That is, the axons are visible to the naked eye, which makes them very easy to insert recording electrodes into. The earliest experiments by Hodgkin and Huxley in the 1930s and ’40s (work that later won them a Nobel Prize) showed that neurons communicate via the transmission of electrical “all or none” action potentials. Decades of biochemistry and electrophysiology in the intervening years have shed a lot of light on the biological and biophysical mechanisms that give rise to these action potentials.

Of course, one of the major outstanding questions in neuroscience is: how do you go from millions of individual neurons firing these rapid, near-impulse electrical potentials to a unified behavioral and cognitive experience?

Over the last decade there has been an explosion of research into the role that endogenous electrophysiological oscillations play in cognition. To unpack that a little bit: back in the 1920s, Hans Berger (about whom I need to write a whole post) found that if you use sensitive recording electrodes attached to the scalp, you can pick up the electrical activity of the brain. As an interesting aside, these first experiments were performed on patients with small holes in their skulls because the electrical signals were stronger. I conducted an entire experiment just looking at this phenomenon. Again, decades of research have shown that these electrical fields probably represent the summed activity of millions of synaptic electrical potentials. That is, in order for an action potential to fire, ion channels open in each neuron, changing the flow of electrical charge into and out of the cell. Millions of these charges sum together (it’s complicated) and these summed charges can be picked up using EEG.
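
To make the “summed activity” idea concrete, here’s a toy sketch in Python (all numbers made up; this is an illustration, not a biophysical model): thousands of weak, loosely synchronized oscillators, each individually buried in noise, sum to a clear rhythm.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 250                              # sampling rate (arbitrary)
t = np.arange(0, 2, 1 / fs)           # 2 s of simulated time

# Toy model: thousands of "neurons," each contributing a weak 10 Hz
# rhythm with its own random phase jitter, plus heavy independent noise.
n_cells = 5000
phases = rng.normal(0, 0.5, n_cells)              # loosely synchronized
signals = np.sin(2 * np.pi * 10 * t[None, :] + phases[:, None])
noise = rng.normal(0, 20, size=(n_cells, t.size))

# Any one cell's trace is buried in noise, but the sum across cells
# shows a clear 10 Hz oscillation: roughly what an electrode "sees."
eeg = (signals + noise).sum(axis=0)
```

The point of the sketch is just that coherent signals add linearly while independent noise mostly cancels, which is why millions of tiny synaptic currents become visible at the scalp.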

With EEG (recorded either from the scalp or from electrodes implanted inside the skull, directly on the brain), really clear oscillations can be observed. In fact, these oscillations are so obvious that Hans Berger noticed them early on.

Hans Berger EEG

It turns out that these oscillations are modulated by cognitive tasks: sometimes they get faster or larger in amplitude, sometimes slower or weaker. Using math (and science!) we can easily show that different parts of the brain seem to have “preferred” oscillations. No one is sure why. Using even more math, my friend and colleague Ryan Canolty (among others) showed that when slow oscillations are at their lowest points, you’re more likely to see an increase in neuronal activity. I’ve got a paper coming out soon (that I’ll surely write about here) showing that this effect depends on the frequency of the slow oscillation, where in the brain it’s measured, and what the person is doing.
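
Here’s a toy version of that phase/activity relationship in Python, with simulated spikes (real analyses band-pass filter the data and extract phase with the Hilbert transform; a pure cosine lets us skip that):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1 / fs)          # 10 s of simulated recording
slow = np.cos(2 * np.pi * 4 * t)      # a 4 Hz "slow" oscillation

# Make spikes most likely at the oscillation's trough (slow == -1),
# mimicking the phase/firing relationship described above.
spike_prob = 0.02 * (1 - slow) / 2
spikes = rng.random(t.size) < spike_prob

# Count spikes near troughs vs. near peaks of the slow oscillation
n_trough = int(np.sum(spikes & (slow < -0.5)))
n_peak = int(np.sum(spikes & (slow > 0.5)))
```

Binning real spike (or high-frequency amplitude) data by oscillation phase in exactly this way is how trough-locked activity shows up in recordings.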

Anyway, the slow oscillation is often interpreted as reflecting the extracellular field potential. That is, the space in between neurons has a charge, and if this charge is a little lower (the trough of the slow oscillation) then neurons are more likely to fire (more activity). If the charge is higher (the peak) then neurons are less likely to fire (less activity).

So what’s really been twisting my noodle is that maybe this interpretation is wrong. Maybe across millions of years of divergent evolution, the axon of a giant squid has evolved to perform computations necessary for the survival of giant squids, and maybe mammalian neurons have evolved somewhat differently. Maybe individual action potentials are important in humans, but maybe they’re not the only things doing “computing” in the brain. Maybe these oscillations are also playing an important computational role, not just acting as epiphenomena of action potentials. Maybe there’s a complex feedback system between action potentials and oscillations.

And that’s just what Flavio Fröhlich and David McCormick have shown. And that’s why this paper is awesome. I’m pretty sure that the more research that’s done on this topic, the more it will be shown that oscillations are key players in this whole consciousness and cognition thing.


Free Will

Recently there was a post on the New York Times blog by Dr. Galen Strawson about Free Will.

The main argument of this post is as follows: because who we are is based partly on our biology and partly on our environment, and because we are responsible for neither our biology nor our initial environment, we are ultimately not responsible for what we do, because what we do is based upon who we are.

More formally, the author states that:

...you can’t at any later stage of life hope to acquire true or ultimate moral responsibility for the way you are by trying to change the way you already are as a result of genetic inheritance and previous experience.

Why not? Because both the particular ways in which you try to change yourself, and the amount of success you have when trying to change yourself, will be determined by how you already are as a result of your genetic inheritance and previous experience.

My problem with this reasoning is that we—as Deterministic individuals—exist in an environment with other such individuals. Many of them. Our interactions result in emergent phenomena that cannot be explained by the actions of any one person alone.

Our biology (our brains) is affected by our decisions, which then of course affects our future decisions, which in turn affect our brains, and so on. We're a complex, iterative, dynamic feedback system. We don't live in a vacuum; rather, we are products of our social environment, society, culture, and so on.

This topic has been bugging me more and more lately because of a fallacy similar to one that arises in neuroscience. Neuroscientists talk about functional localization in the brain as if "functions" are things that can be placed on a map. Hell, I just wrote a book chapter about why I think that this is not the right way to talk about these problems. Similarly, philosophers talk about Free Will as though it is a binary either/or.

Free will and brain functioning are active, dynamic processes! It may very well be True that a baby or child cannot be held responsible for its actions; that it is not possessed of Free Will but is driven by genetic and environmental factors; that it is Deterministic. But consider a set of particles placed in an enclosed sphere. If you start with one hundred particles all moving away from one another, away from the center, you can precisely map their locations. Soon, though, their collisions become so complicated (chaotic) that tracking them becomes a computationally exhausting endeavor. Scale that up to a thousand, a million, or a billion particles and eventually the problem becomes intractable.
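
The intractability point is really a point about chaos: even a fully deterministic one-line system loses all practical predictability. A minimal sketch using the logistic map (nothing to do with particles specifically; it's just the simplest well-known chaotic system):

```python
# Two runs of the (fully deterministic) logistic map, started
# one part in ten billion apart.
x, y = 0.4, 0.4 + 1e-10
r = 3.9                       # parameter value in the chaotic regime
max_gap = 0.0
for step in range(100):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
# max_gap grows from 1e-10 to order 1: determinism without predictability
```

Any uncertainty in the starting conditions, however tiny, swamps the prediction within a few dozen steps, which is the sense in which a deterministic system can still be untraceable from its starting condition.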

Early in our lives we lack Freedom and Will, and though our initial course may be constrained by genetics and early experience, we can shape our surroundings to alter that course. And though those initial choices may themselves be constrained, as time goes on our end point becomes impossible to trace from our starting condition because of the huge number of options with which we are faced.

We exist in a chaotic environment and there is a symbiotic feedback between our physical brains and our environments that provides a route through which the seemingly non-deterministic aspects of free will might arise.


Blue Brain Project

We know a lot about the biology of the neuron. It's quite amazing to see what information we have amassed these last few decades. Given my recent introduction to the whole Singularity way of thinking via the upcoming Singularity Summit, I've been thinking about brain-emulation, brain-computer interfacing, thought digitization, etc. more lately.

And of course, this has led me to the Blue Brain Project. For those who don't know about it, this project, funded by the Government of Switzerland and individual benefactors (edited to correct my error, see comments below), aims to "reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations," and, "[u]ltimately, given additional resources, the facility can be extended to permit whole brain modeling, simulation and experimentation."

In his TED talk and in the associated BBC story, Henry Markram, the director of the Blue Brain Project claimed that "[i]t is not impossible to build a human brain and we can do it in 10 years."

So here's what I've been thinking: No.

I mean, not to be the super-skeptic naysayer who ends up looking really dumb for making a woefully ignorant sci/tech forecast, but seriously: no. That's not to say it won't ever happen, but at this point we honestly just don't know enough about how all the pieces play together to give rise to our cognition. I think the neurosciences right now are where physics was in the early 1900s. A bunch of people thought Newtonian mechanics could explain everything. Turns out, the physical universe is much more complicated than that. Now we have quantum mechanics, relativity, the multiverse, dark energy, etc.

Like I said at the beginning of this post, we know a lot about the biology of the neuron. Similarly, computational modeling has gotten very sophisticated. When researchers build computational models incorporating known biology, they call it a "biologically-plausible" model. I think we're still stuck in the Newtonian mechanics period of neuroscience, and we're just now segueing into the more complicated "oh my god this stuff is harder than we thought!" part of our science.

To think that digitally modeling a bunch of neurons is akin to a thinking, evolved, conscious, aware human brain is like thinking that soldering together a couple of million transistors in an "Apple-like fashion" will give you a working MacBook Pro.


What can we measure using neuroimaging techniques?

Recently I've been using new internets social thingy 2.0, Quora, which, according to Wikipedia, is "a social networking website that aggregates questions and answers to many topics and allows users to collaborate on them." So let's go with that.

Anyway, recently someone asked the question, "What can we measure using neuroimaging techniques?"

This is a good question. So I started to write an answer. An hour later, it had become a monster, and I thought it might be worth sharing here. Here it is:


Because this is a very broad topic, I will do my best to address the most common aspects of neuroimaging without going into too much detail. Please feel free to add more to this list; I recognize I am being neither complete nor exhaustive.

Broadly speaking, I would say there are three categories of neuroimaging: structural, functional, and chemical. These can then be subdivided into non-invasive, semi-invasive, and invasive, which delineate the degree of physical invasiveness involved in the imaging method. That is, cutting open the skull and implanting electrodes would be considered invasive, whereas putting the electrodes on the head (such as in scalp EEG) is non-invasive. Because I'm not proficient in animal imaging methods I will focus on human studies, most of which are non- or semi-invasive, with a few exceptions.

Structural Neuroimaging
Any technique that images structures of the brain. This would include CT (Computed Tomography), MRI (Magnetic Resonance Imaging), and DTI (Diffusion Tensor Imaging).

CT scanning is non-invasive and uses x-rays to image tissue density. It is very rapid and can detect cerebral hemorrhaging in the early (acute) stage. It is therefore most often used for medical purposes.

Structural MRI is non-invasive and often provides better contrast resolution than CT with similar (and again, often better) spatial resolution. Unlike CT, structural MRI provides excellent tissue delineation, allowing users to visualize boundaries between grey and white matter in the brain, for example. Structural MRI is often used in neuroimaging to calculate volume of different brain regions or to define regions of brain damage or tumor.

DTI is non-invasive and can be done on most research MRI scanners. It involves using a special scanning and reconstruction sequence to image the flow (or, more specifically, constraints in the flow) of water through the brain. Because water flow is constrained by the axons (white matter) in the brain, it can be used to image large axonal connections between brain regions.

Functional Neuroimaging
Any technique that quantifies some metric of brain activity. This would include EEG (ElectroEncephaloGraphy), MEG (MagnetoEncephaloGraphy), fMRI (functional MRI), PET/SPECT (Positron Emission Tomography/Single Photon Emission Computed Tomography), NIRS (Near-InfraRed Spectroscopy), and, to a certain extent, TMS (Transcranial Magnetic Stimulation) and TDCS (Transcranial Direct Current Stimulation), along with several others.

For all functional imaging methods, researchers often make use of the cognitive subtraction method originally established in reaction time studies by Franciscus Donders. The underlying assumption in these studies is that activity in brain networks alters in a task-dependent manner that becomes evident after averaging many event-related responses and comparing those against a baseline condition. Deviations from this baseline reflect a change in the neuronal processing demands required to perform the task of interest.

For example, if you want to study the effects of thumb movement on brain activity, you would have a person move their thumb many dozens or hundreds of times while recording brain activity. You would then call the time of each thumb movement time zero, take a set time window around each movement, and average the brain activity across those many trials. Because brain activity is usually very "noisy", it's hard to detect specific, thumb-movement-related activity at any given time. But averaging across all of these windows highlights thumb-specific brain activity.
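
That trial-averaging logic (the event-related response) is simple enough to sketch in a few lines of Python, again with simulated numbers rather than real data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
win = np.arange(-50, 101)     # samples around "time zero" (the movement)

# A small, fixed brain response peaking shortly after each movement,
# buried under trial-to-trial noise twice its size (all numbers made up).
response = np.exp(-((win - 20) ** 2) / 200.0)
trials = response + rng.normal(0, 2, size=(n_trials, win.size))

# Any single trial is dominated by noise; averaging across trials
# shrinks the noise by ~1/sqrt(n_trials), so the response pops out.
erp = trials.mean(axis=0)
```

In the averaged trace the bump around `win == 20` stands out clearly, while any single row of `trials` looks like noise; that 1/sqrt(n) noise reduction is the entire trick behind event-related averaging.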

EEG is most commonly non-invasive, although it can be invasive. Invasive EEG can be referred to as ECoG (ElectroCorticoGraphy) when electrodes are placed specifically on the cortex of the brain or, more generally, as iEEG (intracranial EEG) or ICE (IntraCranial Electrophysiology). Because neurons communicate via electrical action potentials generated by ionic current flow, each neuron acts as a little source and sink. The integrated electrical activity of millions of neurons is what is picked up in EEG. Because the power of the electrical signal drops off as a function of distance from the source, the EEG signal is (generally) dominated by surface cortical signals.

Scalp EEG is characterized by its generally low spatial resolution but excellent temporal resolution. To unpack that: when the electrodes are placed non-invasively on the scalp, the signal being recorded is far away from the brain source. Also, because the skull is not transparent to the brain's electrical signals, the signals get spatially smoothed and smeared (I've published on this before with an interesting patient group with parts of their skulls surgically removed; see Voytek et al., 2009 for details). This smearing makes localizing the signal's source an ill-posed problem, though there are very good methods of constraining the search space to improve localization. However, because the electrical signal can be sampled at a rate limited only by the amplifier, brain activity can be measured on a sub-millisecond timescale.

iEEG is an invasive method that involves implanting recording electrodes directly onto or into the brain. This is done for medical and surgical reasons (usually to pinpoint the source of epileptic activity). Researchers collaborate with patients who have had the procedure, asking their permission to record from the implanted electrodes, since the opportunity is so unique and rare. This method is so valuable because, like EEG, the temporal resolution is excellent, but because the researcher knows exactly where the electrodes are placed, the spatial resolution is vastly superior to scalp EEG.

MEG is similar to EEG; however, it measures changes in the magnetic fields associated with the neurons' electrical potentials. Because the skull is transparent to magnetic fields, the spatial resolution of MEG is said to rival that of fMRI while maintaining the superior temporal resolution of EEG. However, like EEG, the signal is biased toward cortical sources.

fMRI is different from EEG and MEG in that it does not measure neuronal activity directly. Rather, what is measured is the hemodynamic response, or activity-related changes in blood flow. This is usually measured as blood-oxygen level dependent (BOLD) activity. When neurons are active they use metabolic resources such as glucose and oxygen, so the assumption in fMRI is that task-related changes in BOLD reflect changes in the brain's oxygen use in any given region. Because I'm not an fMRI researcher, I will defer to an excellent post on Neuroskeptic, "fMRI in 1000 words".

PET and SPECT are similar to fMRI in that, depending on the radiotracer used, they indirectly measure neuronal activity through blood flow or cerebral metabolism. As with fMRI, the assumption is that changes in metabolites reflect task-related changes in brain activity in a specific region. PET is semi-invasive in that it requires an injection or inhalation of a radioactive substance. To measure brain activity, radioactively labeled water or glucose is injected into a person. As these radioactive molecules diffuse through the bloodstream they are deposited in certain regions. As the radioactive atoms decay they emit positrons, which annihilate with nearby electrons, giving off pairs of gamma rays. Through coincidence detection, these gamma ray sources can be localized and activity maps reconstructed. Regions with a higher density of emitted gamma rays are assumed to represent regions of greater neuronal activity.

NIRS is another method of functional neuroimaging; unfortunately, I'm not very familiar with it, so I defer to Wikipedia, though I cannot vouch for its accuracy.

TMS and TDCS can be utilized for neuroimaging in a sense. For example, repetitive TMS (rTMS) can be used to alter cortical activity in a small patch of cortex. rTMS is referred to as applying a "virtual lesion" in that the small stimulated region of cortex stops working as efficiently for a few minutes. If there are behavioral changes associated with the application of rTMS then researchers can infer function associated with that region.

Chemical Neuroimaging
Any technique that can measure the concentration, usage, or flow of a specific neurochemical would fall under the heading of chemical neuroimaging.

PET can also be used in chemical neuroimaging. Rather than injecting radioactive versions of neuronal metabolites, researchers can inject radioactively labeled neurotransmitters or related compounds. For example, to specifically examine functional changes in dopamine activity, a radioactively labeled tracer that targets the dopamine system can be injected.

Various methods of MRI are also capable of chemical neuroimaging, though I am less familiar with those as well.


Zombies and Singularity Summit

What an odd week it's been!

So once again my talk at TEDxBerkeley has opened a few interesting doors for me. A few weeks after my talk I had lunch with John Chisholm. Apparently he's a colleague of Michael Vassar, the President of the Singularity Institute for Artificial Intelligence. In August they're hosting the Singularity Summit.

The connection here is that John Chisholm saw my talk, which led to our lunch. He mentioned my talk to Michael Vassar, who emailed me on Thursday to offer me admission to the Singularity Summit; he'd like my neuroscientific input. This is a unique opportunity; I'm pretty curious to see what the various views on the Singularity are. One of the keynote speakers will of course be Ray Kurzweil, but I'm very happy to see that James Randi is the other keynote. It should be an interesting compare-and-contrast between the Singularity proselytizer and the great skeptic, and should make for excellent conversation.

Last week, MindHacks posted a very nice story about my TEDx talk. After this was posted, I received an email from Matt Mogk, the CEO of the Zombie Research Society. He heard me mention my love for "geek stuff" in my TEDx talk and emailed me to ask if I was interested in zombies at all.

If you know me at all, you know Jess and I are huge zombie fans!

So after speaking with him for a bit over the phone, he asked if I'd be willing to join their advisory board at some point in the future. Also, the first zomBcon will be held at the end of this October, and he asked me to be on a panel discussing the anatomy of zombies. Because of his intense love for zombies, I pulled in my buddy and fellow neuroscientist Tim Verstynen to help me write a bit about the neuroscience of the zombie brain.

This is just awesome. Matt Mogk was an extremely nice guy, and I'm really excited about his approach with the ZRS. I see it as an excellent way to teach complicated neuroscience topics to the public using a really fun, amusing popular culture icon.

Anyway, this has all been quite a lot again. I'm still fighting my ever-present inner voice that wonders why in the hell this is all happening for me, and the fear of the other shoe dropping, but for now I'm trying to just enjoy the ride. As Jess said, these kinds of things aren't just luck. I've been working hard to make as many opportunities for myself as I can. Now I just need to take advantage of the opportunities I've been given.