darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.


26.12.11

A New Model for Scientific Publishing

There's a new paper out in Frontiers in Computational Neuroscience that is relevant to my interests. The paper is by Dwight Kravitz and Chris Baker from the NIMH and is titled "Toward a new model of scientific publishing: discussion and a proposal".

About two weeks ago Dwight emailed me his paper saying that he'd read a post I'd written last month for the Scientific American guest blog called "What is peer review for?" (you can check out my interview on Skeptically Speaking on this topic as well).

After reading Dwight's paper I want to make sure it gets as much exposure as possible. I can't do it justice here because it's so well-written and clear. But before I shuffle you off to read it I wanted to highlight their proposed system and ask you all what you think.

What are the barriers to instantiating their proposed system?

In the SciAm piece I concluded by saying:

But the current system of journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on is an outdated system built before technology provided better, more dynamic alternatives.

Why do scientists--the heralds of exploration and new ideas in our society--settle for such a sub-optimal system that is nearly 350 years old?

We can--we should--do better.

Well, it looks like Kravitz and Baker have put a lot more thought into this problem than I have, and they've come up with an incredibly novel alternative to the current peer review system.

They nail it. There's almost nothing that I disagree with.

I love this paper.

They succeed here not because of their criticisms--which abound in the sciences--but rather because of their inclusion of a viable, creative, intelligent solution that addresses problems of motivation, utility, practicality, and even finance for an alternative model for peer review and scientific publication.

They begin by describing the current peer review system in the context of the neurosciences. They have an amusing graph that highlights the 17 levels of hell that is the peer review process loop. These guys crack me up.

"In the case of a rejection the Authors generally proceed to submit the paper to a different journal, beginning a journal loop bounded only by the number of journals available and the dignity of the Authors."


Before I outline their alternative proposal, I want to highlight some of the costs and problems of the current system that Kravitz and Baker identify.

...what is striking is less the average amount of time [it takes to publish a paper], which is quite long, but more its unpredictability. In total, each paper was under review for an average of 122 days but with a minimum of 31 days and a maximum of 321. The average time between the first submission and acceptance, including time for revisions by the authors was 221 days (range: 31–533)...

...Beyond the costs of actually performing the research and preparing the first draft of the manuscript, it costs the field of neuroscience, and ultimately the funding agencies, approximately $4370 per paper and $9.2 million over the approximately 2100 neuroscience papers published last year. This excludes the substantial expense of the journal subscriptions required to actually read the research the field produces and the unquantifiable cost of the publishing lag (221 days) and the uncertainty incurred by that delay...

...Authors are incentivized to highlight the novelty of a result, often to the detriment of linking it with the previous literature or overarching theoretical frameworks. Worse still, the novelty constraint disincentivizes even performing incremental research or replications, as they cost just as much as running novel studies and will likely not be published in high-tier journals.

Okay, so what is their alternative model?


It breaks down like this:

* No more editors as gate-keepers ("Their purpose is to serve the interests of the journal as a business and not the interests of Authors").
* Publication is guaranteed, so no more concern over "novelty".
* Editors instead coordinate the process and ensure that double-blind anonymity is maintained.
* Reviews are passed on to the authors as part of a "pre-reception" process which allows the authors to revise or retract their work before making it publicly available.
* Once public, the post-publication review process begins.
* An elected Editorial Board acts as an initial rating and classification service to put the paper in context.
* Members of the Board are financially incentivized... but the money doesn't go into their pockets, rather it can be put into their own research fund coffers.
* Papers are put into a forum wherein members of that forum can ask questions and offer follow-up suggestions.
* Forums provide a more living, dynamic quality to papers, as well as metrics for each manuscript.
* With better metadata for papers, ads could be more targeted by paper topic (no more ads for PCRs in cog neuro papers, for example).
* Kravitz and Baker note that something like a Facebook page for each paper could serve this purpose.

The Kravitz-Baker system saves money and time wasted by the needless "walking down the impact factor ladder" that usually occurs.

They address a lot more issues beyond what I've described above. They note that a common counter-argument to double-blind review, for example, is that a reviewer can "often guess" who the authors of the paper they're reviewing are because sub-fields are so small. But Kravitz and Baker point out that, while "the identity of the Authors might be guessed by the Reviewers, any ambiguity should act to reduce this bias". This seems so obvious.

The Kravitz-Baker system seems really well thought out, and it's one I'd love to see in place. But I'm worried I'm missing some critical fault here.

Anyone see any glaring issues?

Kravitz, D., & Baker, C. (2011). Toward a new model of scientific publishing: discussion and a proposal. Frontiers in Computational Neuroscience, 5. DOI: 10.3389/fncom.2011.00055

23.12.11

Am I a scientist?

Over on Quora, someone asked me to answer the question "Am I a scientist?"

They gave the following details:

I am in the process of getting my PhD. I spend every day doing research. I "do" science. Can I put on my business card, "John Smith, Scientist"? Do I have to wait until I have the PhD in hand? Or until I'm a candidate?

I took a quick crack at it, but I'd love to hear what other people think. Here's my answer in full:

*****

There are two issues here: the first is one of credentials and the second is one of societal interpretation.

"Back off man..."

In the case of credentials, there is no exam, or class, or quiz, or whatever that one needs to pass in order to become a "scientist". There is no "science" credentialing system. Certainly if you are a PhD researcher working at a scientific research facility then you are a scientist. But so are all of that person's subordinates, who may or may not hold a PhD or even a degree at all!

Are Diederik Stapel and Marc Hauser still scientists? They hold PhDs and conducted research, but both were caught falsifying and/or fabricating data. That's certainly not scientific!

The second issue is one of societal interpretation. If you put on your business card "SCIENTIST", that gives the person reading the card the impression that you are currently a practicing researcher or theoretician. If you are not such, then you are being duplicitous and should not "advertise" yourself as a scientist. Not because you're not a scientist, but because you're sending a signal that isn't entirely true.

In your case, you are a PhD student. You are doing research (I presume); therefore you are a scientist.

That said, I believe you'd be better off putting "John Smith / Ph.D. candidate, awesomeology / University of Very Impressive" on your business card: it is accurate, it sends a signal that you're a "dedicated" scientist (i.e., working toward a PhD), and all of that will carry the baggage of "being a scientist" for you without the need to explicitly state it.

Now for my totally biased opinion: just calling yourself a "scientist" (without explicating what field of science you're specializing in) comes across as a little sleazy, like you're taking advantage of a title to pull one over on people. It's like people who constantly refer to themselves as "doctor"...

(image source from an older post of mine)

8.12.11

Are toes pretty or ugly?

Another answer to a Quora question. Best. Blog-fodder. Ever.

What might cause the split between people who think toes are ugly and toes are pretty?

People have different experiences that lead to different behaviors. But it's not like my past experience in thinking some women are ugly causes me to think all women are ugly. Why should that be the case for any aesthetic experience?

Thankfully the questioner gave me an out by asking what might cause some people to find toes pretty while others think they're ugly.

I'm going to use that to my advantage to give a completely unsubstantiated, just-so answer that's too cool for me to ignore. If more scientific evidence comes out on this topic I'll try to remember to adjust my answer accordingly.

Short answer: toes and penii might be closely related (neurologically speaking). For more science (SCIENCE!) read on.

In my answer to Why can't I control my individual toes? (two toe-neuroscience answers?! o_O I'm not even into toes!) I introduced the motor homunculus:


This guy's body parts are distorted such that the size of a body part is proportional to the area in the primary motor cortex that is dedicated to representing that part. This was first determined by Wilder Penfield by stimulating people's brains and mapping the motor responses of the body.

Just behind the primary motor cortex (blue in the figure below) is the primary somatosensory cortex (red). The somatosensory cortex is the final common pathway for all incoming touch sensations of the body (pain, light touch, etc.).


The representation here is similar to the motor cortex and mirrors it quite closely. Save one pretty striking exception.

Meet the (male) somatosensory homunculus:


If you're up for it, here's the uncensored, possibly NOT WORK SAFE version (if your workplace hates science).

The first thing you'll notice is that the representation of our toes is much bigger on the somatosensory compared to the motor homunculus. (That was the first thing you noticed, right?) This means our toes are given a lot of brain area in the somatosensory cortex, which means we have relatively more sensitivity in our toes than, say, an equivalent area on our shins.

So let's take a look at the somatosensory map to see what the layout of body parts looks like on the brain:


Check out the locations of the toes there on the right. At the top, where the butt is, is the top of the brain. This image represents only one half of the brain, so the butt is actually at the top center, and right across from it, in the other half of the brain, the other half of your butt is represented.

This means that the toes are actually represented on the medial surface, squished between the two hemispheres of the brain (with each hemisphere having a representation of the toes on the opposite side of the body).

See what body part is represented right next to the toes, though!?

GENITALIA! Yay! I'm so close to actually answering the question now!

(See Why does the writing style of most PhDs on Quora appear to be long-winded and poorly structured?, bro.)

Here's the theory put forth by UCSD neuroscience rockstar VS Ramachandran:

In some people, neurons coding for or representing genital sensation are "cross-wired" with neurons representing toes and feet. This cross-talk may give rise to the sexual associations of toes and feet.

This is far from proved, but it makes for a nice story. Ramachandran has done some clever experiments to test his theories about how neuronal "cross-wiring" gives rise to certain behavioral phenomena, such as synesthesia, so it's not a totally out-there hypothesis.

Ramachandran relates an amusing story about one of his patients on this topic:

The next day the phone rang again. This time it was an engineer from Arkansas.

"Is this Dr. Ramachandran?"

"Yes."

"You know, I read about your work in the newspaper, and it's really exciting. I lost my leg below the knee about two months ago but there's still something I don't understand. I'd like your advice."

"What's that?"

"Well, I feel a little embarrassed to tell you this."

"Doctor, every time I have sexual intercourse, I experience sensations in my phantom foot. How do you explain that? My doctor said it doesn't make sense."

"Look," I said. "One possibility is that the genitals are right next to the foot in the body's brain maps. Don't worry about it."

He laughed nervously. "All that's fine, doctor. But you still don't understand. You see, I actually experience my orgasm in my foot. And therefore it's much bigger than it used to be because it's no longer confined to my genitals."

It may be that neuroplasticity after this patient's limb loss induced communication between his foot and genital sensory neurons. This observation lends some support to Ramachandran's toe/brain/penis hypothesis.

For more on cool stuff related to neuroplasticity, see my answers to Can brain trauma cause cognitive enhancement? and When parts of the brain are removed during surgery, is it possible for the remaining brain tissue to expand into the available space?

For more reading on sex, brains, and homunculi, check out The Neurocritic, who's covered recent neuroscience research looking at the somatosensory representation of circumcised vs. uncircumcised male penises, the representation of the female homunculus, and the neuroscientific attempt to find the clitoris.

Hubbard, E.M., Brang, D., & Ramachandran, V.S. (2011). The cross-activation theory at 10. Journal of Neuropsychology, 5(2), 152-177. PMID: 21923784

7.12.11

SciPle.org interview

Lots of interviews this week!

Recently I presented some of the latest developments in my brainSCANr project at the Society for Neuroscience conference.

At my poster I was approached by SciPle.org, and Leonardo Restivo asked if I would do an interview with them. Well the interview's been posted and my answers are below. They're more personal than I was expecting, but I try not to shy away from personal questions (even if the answers can be uncomfortable).

Check Sci.Ple out; follow them on Twitter. They've got some really cool ideas that I'd love to see work out.

Sci.Ple: What is your background?
Technically I began my undergraduate career at the University of Southern California as a physics major. I grew up in San Diego with a lot of clear night skies. I wanted to be an astrophysicist. But that was not to be my path, and I ended up doing a degree in psychology while taking a host of philosophy and cognitive and computer science courses. I then worked as a research associate at UCLA for Edythe London; I was the PET scanner operator and radioactive pee cleaner (that's another story). In 2010 I completed my PhD in neuroscience at UC Berkeley under the mentorship of Robert Knight, the institute's director at the time. I'm currently a post-doctoral fellow at UCSF working with Adam Gazzaley.

Sci.Ple: Among your published papers, which one is your favorite?
Okay, I'm cheating a bit since it's not actually published yet, but it's close so I'm just going to act like it's published and hope that talking about it doesn't jinx the whole process. My favorite paper is one I co-wrote with my wife Jessica and is titled "Automated Cognome Construction and Semi-automated Hypothesis Generation". If I play by the rules of the question, then I'll go with "Hemicraniectomy: A new model for human electrophysiology with high spatio-temporal resolution" (Voytek et al., J Cogn Neurosci 2010).

Sci.Ple: Why is it your favorite?
Honestly I believe that the "Semi-automated Hypothesis Generation" paper is a Big Deal. We're text-mining the abstracts of millions of peer-reviewed neuroscience papers to try and make some sense out of how neuroscientific concepts interrelate. Then we're going one step further to see if we can find statistical "holes" in the literature… we're literally trying to (semi)automate one aspect of the scientific method: hypothesis generation.
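(For the curious, here's a minimal sketch of the kind of concept co-occurrence counting I'm describing, in Python. This is not the actual brainSCANr code, and the term list below is just a hypothetical placeholder.)

    from itertools import combinations
    from collections import Counter

    TERMS = ["hippocampus", "dopamine", "memory", "striatum"]  # hypothetical term list

    def cooccurrence(abstracts, terms=TERMS):
        # Count how often each pair of terms appears together in the same abstract.
        counts = Counter()
        for text in abstracts:
            text = text.lower()
            present = sorted(t for t in terms if t in text)
            for a, b in combinations(present, 2):
                counts[(a, b)] += 1
        return counts

    # Term pairs whose co-occurrence count is unexpectedly low, given how strongly
    # each term connects to the rest of the network, are candidate "holes" in the
    # literature -- places where a hypothesis might be waiting.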

Sci.Ple: What was the most challenging part of this paper?
The first spark of the idea for this paper came about during a panel I was on at a cognitive science conference in 2010. In response to a question from an audience member, I said something like, "the scientific literature is smarter than we are; many basic facts about brain function are probably already known, but we suck at synthesizing it all." Trying to prove that statement was the foundation for this paper. In order to write the paper I had to learn about text-mining, some graph theory, etc. It's totally outside of my comfort zone, but I think it's too cool to stress out about it not being "perfect".

Sci.Ple: What drives you in your day-to-day job?
Honestly? People literally pay me to think about cool shit. That's my whole job. I get to say "I wonder if anyone has done this before," and then go out, run some experiments, and then possibly learn something that no one has ever known before. I've worked on a loading dock carrying heavy things onto trucks for up to 16 hours at a time. I've worked at a motel where my job was to go room to room and collect all of the... soiled... linens. So I guess the simplest answer to what drives me is a mix of awe and perspective. That's what keeps me going to work every day.
 
Sci.Ple: What is the most exciting part of your job?
Talking to other people. Collaborating and combining the accumulated knowledge of multiple brains in new ways to tackle hard problems. That's amazing. It's humbling and inspiring to see brilliant people's minds work.

Sci.Ple: The least exciting?
Paper formatting and data munging.
 
Sci.Ple: Name a scientist whose research inspires you
Reed Richards. That guy seems to be able to produce an endless stream of amazing, breakthrough ideas. Oh. Seriously? I've got a huge amount of respect for pre-digital scientists. Lots of them were going out and just trying everything. Sever the nerves in your own arm? Why not?! Cover yourself in varnish to prove that the skin does something important? Who cares if you almost died? You were right! I don't think I have the… fortitude… to do a lot of what those folks did--from the crazy self-experimentation to the drudgery of pre-digital writing and researching--but I can't help but respect it.

Sci.Ple: What are the next frontiers in neuroscience?
Information integration. How do you go from a neuron giving off an action potential to a thought? From a neurotransmitter binding to a receptor to art? Imagine a Venn diagram that contains medicine, biology, genetics, psychology, philosophy, engineering, computer science, mathematics. That's neuroscience. Knowledge in each of these fields is being slowly accumulated (almost always separately) by doctors, biologists, geneticists, psychologists, philosophers, engineers, computer scientists, and mathematicians. We need more cross-disciplinary work.
 
Sci.Ple: Why science?
Why anything? My life, like almost everyone else's, has consisted of a series of half-blind stumbling steps. I got lucky. That said, I'll take the minor frustration of getting a paper rejected or having someone disagree with me over "soiled" towels and sheets any day.

Sci.Ple: If not science?
I'd work as a bartender at a pub. Or try and run my own. I love meeting and talking with people, and few places are better for that than a nice local watering hole.

Sci.Ple: Why?
"What happens to a werewolf on the moon?" I read that online the other day and it still cracks me up.

4.12.11

Skeptically Speaking radio interview

Last month I wrote a piece about peer review for Scientific American.

Shortly thereafter I was contacted by Desiree Schell from the Skeptically Speaking radio show about doing a short interview on the topic. I love this podcast (it's one of four I listen to, after Science Friday, Radiolab, and TED talks), so I was pretty excited.

And--according to my wife--it was actually interesting. So yay to not being a boring knob. The interview starts with a discussion about faster than light neutrinos; I pop in around minute 43.

Here's the episode.

Anyway, if you haven't checked this show out, please do so. Their archives have great interviews with scientists and general science geeks such as Adam Savage, XKCD's Randall Munroe, and many others.

29.11.11

Is the female brain innately inferior?

Did the title get your attention? It sure as hell caught mine when USC grad student Rohan Aurora sent me the link.

Recently Stanford's Clayman Institute for Gender Research published an interview with a Stanford neuroscientist about male vs. female brains.

I'm not going to lie: when I clicked the link I was expecting the usual attention-grabbing, rabble-rousing crap I've come to expect when I see a headline like that.


I was prepared for some serious eye rolling and frustrated groaning in response to the inevitable logical errors, overreaching conclusions, and other neuro-nonsense. Especially given how much gender has been on my mind lately (what with my recent foray into fatherhood and the whole #womanspace thing).

This certainly wouldn't have been the first time a neuroscientist has railed against silly neuroscientific claims in gender research.

Instead I got a well thought out, sane, clear-headed, and scientifically sound interview. I was especially pleased because the neuroscientist in question is a friend, colleague, and collaborator of mine: Prof. Josef Parvizi (co-author on one of my most-cited first-author papers). I'm especially proud to call him a collaborator after reading this. Seriously, this is a great interview and Josef just nails it.

The whole piece is short and to the point, and a great reference to counter some of the more common neuro-gender crap. They tackle three myths; I've quoted my favorite parts below.

Gender Brain Myth #1: Brain size matters
...if absolute brain size were all that mattered, whales and elephants, both of which have much larger brains than humans, would outwit men and women.

Gender Brain Myth #2: Women and men have different brains due to estrogen and testosterone
...if estrogen and testosterone did shape the brain in different ways, it is an unsubstantiated, logical leap to conclude that such differences cause, "...men to occupy top academic positions in the sciences and engineering or top positions of political or social power, while women are hopelessly ill-equipped for such offices."

Gender Brain Myth #3: Men are naturally better at math
According to Parvizi, this logic is flawed: "Differences seen in cognitive tests do not necessarily provide direct evidence that those differences are in fact innate."

Finally, I love the following. It should be copy-and-pasted into any online argument. Or shouted repeatedly at any TV/magazine puff-piece writer.

"...if we are to entertain the idea that humans 'experience' life differently, and that different experiences mold the brain function differently, then we must also seriously consider that gender (along with class, ethnicity, age, and many other factors) would also contribute to this experience, and that they will contribute to molding of the brain...

So if women and men have systematically different life experiences and face dissimilar expectations from birth, then we would expect that their brains would become different (even if they are not innately dissimilar), through these different life experiences. Even if neuroscientists see differences in the brains of grown men and women, it does not follow that these differences are innate and unchangeable.

For instance, if girls are expected to be more adept at language, and are placed in more situations that require communication with others, it follows that the networks of the brain associated with language could become more efficient in women. Conversely, if boys receive more toy trucks and Lego's, are given greater encouragement in math and engineering classes, and eventually take more STEM (science, technology, engineering and math) courses, it follows that the sections of the brain associated with mathematics could become more efficient in men...

The tricky part is that we do not make the mistake of taking account of these differences as evidence for biological determinism."

18.11.11

Ranking biomedical retractions

This past week I was in Washington, DC for the 30,000+ attendee Society for Neuroscience conference. Recently I wrote about (my interpretations of) the purpose of peer review. Well, this prompted a group of us to get together to discuss open science and the future of scientific publishing.

Obviously science and the process of science have been on my mind.

My fellow neuroscientists Jason Snyder and Björn Brembs and I got together for lunch to talk about... lots of stuff, actually... but I came away with a few questions that seemed to have empirical answers.

Before I jump in though, if you haven't seen Björn's talk What's wrong with scholarly publishing today? check out the slides at the end of this post. They're packed with some mind boggling data about the business of peer-review.

At some point during our lunch, Retraction Watch (which is an amazing site) came up, and ultimately inspired two questions:
  • Which journals have the most retractions?
  • Which biomedical fields have the most retractions?
I did a quick-and-dirty check of this using PubMed's API, because they have a nice "search by retraction" option:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&field=word&term=hasretractionin&retmax=10000&usehistory=y
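For anyone who wants to reproduce this, here's roughly what that query looks like as a script; a minimal sketch in Python (standard library only), not my exact analysis code. It just pulls back the PubMed IDs of the retracted papers; tallying journal names per ID (via the esummary E-utility) would be the next step.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": "pubmed",
        "field": "word",
        "term": "hasretractionin",
        "retmax": "10000",
        "usehistory": "y",
    }

    # Fetch the esearch XML and pull out every retracted-paper PubMed ID.
    with urllib.request.urlopen(ESEARCH + "?" + urllib.parse.urlencode(params)) as resp:
        tree = ET.parse(resp)
    pmids = [e.text for e in tree.findall(".//IdList/Id")]
    print(len(pmids), "retracted papers indexed in PubMed")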

There was one issue: every article--regardless of scientific field--in the general science journals (Science, Nature, PNAS) is indexed in PubMed. So if an article (or a dozen) about semiconductors (see: Jan Hendrik Schön) was retracted from Science, it would still show up in this analysis. The result was inflated biomedical retraction counts for those journals, so I had to manually adjust the counts down by removing non-biomedical retractions (just to put everything on par, since PubMed doesn't index non-biomedical peer-reviewed journals).

Here are the results for the 1922 retractions across 796 journals:
(Figure: retraction counts by journal)

PNAS (59 retractions) and Science (52) lead the pack, followed by J Biol Chem (40), J Immunol (33), and Nature (31).

Next I counted the words that appeared in the titles of the retracted articles to get a feel for what kinds of papers are being retracted. Here are all the words that appear at least 50 times in paper titles (a rough sketch of the counting code follows the list):
  • cells (189)
  • activity (154)
  • effects (152)
  • human (148)
  • patients (136)
  • protein (108)
  • factor (104)
  • gene (103)
  • expression (102)
  • receptor (96)
  • study (81)
  • cancer (70)
  • treatment (57)
  • surgery (54)
  • disease (54)
  • DNA (53)
  • virus (50)
(Similar words were grouped using TagCrowd)
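Here's roughly what that tally looks like in code; a minimal sketch, not my exact script. It assumes the retracted-paper titles have already been fetched (e.g., via the esummary E-utility for the PMIDs found above), and the stop-word list is a made-up placeholder; the grouping of similar words above was done with TagCrowd, not in code.

    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "of", "in", "and", "on", "for", "by", "with", "to"}

    def title_word_counts(titles):
        # Tally every alphabetic word across all retracted-paper titles,
        # lowercased and with common stop words dropped.
        counts = Counter()
        for title in titles:
            for word in re.findall(r"[a-z]+", title.lower()):
                if word not in STOP_WORDS:
                    counts[word] += 1
        return counts

    # e.g., words appearing at least 50 times:
    # [(w, n) for w, n in title_word_counts(titles).most_common() if n >= 50]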

At first blush, it looks like cell/molecular/micro biology represents a big chunk of the retractions (cells, protein, factor, gene, expression, receptor, DNA, virus), but human patient research isn't much better off... (human, patients, surgery).

I've heard the argument before (sorry, can't remember where) that fields where the data is more difficult to collect and replicate are more prone to shady research practices... I'm not sure if that's exactly being reflected here, but the exercise was an interesting one.

Thoughts?

12.11.11

...the rest is just details

(This is cross-posted from my piece at Nature)

Meet the electric brain. A pinnacle of modern science! This marvel comes complete with a "centrencephalic system", eyes, ears, medial and lateral geniculate, corpora quadrigemina, and visual cortex.


The text reads:
A giant electrified model of the human brain's control system is demonstrated by Dr. A.G. Macleod, at the meeting of the American Medical Association in New York, on June 26, 1961. The maze of twisting tubes and blinking lights traces the way the brain receives information and turns it into thought and then action.

It's a cheap journalistic trick to pull out a single example of hubris from the past at which to laugh and to highlight our own "progress". But where did the Electric Brain fail? Claims to understanding or modeling the brain have almost certainly been made countless times over the course of human thinking.

Hell, in moments of excitement with colleagues over a pint (or two) I've been known to shout, "I've figured out the brain!" But, of course, I have always been wrong.

So here we are, exactly 50 years post Electric Brain, and I find myself once again at the annual Society for Neuroscience conference (SfN). Each year we push back the curtain of ignorance and, just as I have every year since I began my neuroscience career in 2003, I find myself surrounded by 30-40,000 fellow brain nerds.

How do the latest and greatest theories and findings on display at SfN compare to the Electric Brain? One would like to think that, with this much brain power (har, har), surely we must be close to "understanding the brain" (whatever that might mean). Although any model of the human brain feels like an act of hubris, what good are countless scientific facts without an integrated model or framework in which to test them?

The Electric Brain is an example of a connectionist model in which the brain is composed of a collection of connected, communicating units. Thus, the brain can be modeled by the interconnections between all the subregions of it; behavior is thought to be an emergent property of the complexities of these interconnected networks of units.

A "unit" in the Electric Brain appears to be a whole region, whose computations are presumably modeled by a simple input/output function. The modern incarnations of this movement are seen in the rapidly maturing field with the sexy name of connectomics, the underlying belief of which is that if we could model how every neuron connects to every other neuron, we would understand the brain.

With advancements in computational power, we've moved beyond simplified models of entire brain regions and toward attempts to model whole neurons, such as this model of 10^11 neurons by Izhikevich and Edelman (Large-Scale Model of Mammalian Thalamocortical Systems, PNAS 2008).



There are also attempts to model the brain at the molecular level, such as with the Blue Brain Project.

But as Stephen Larson, a PhD candidate at UCSD, astutely noted on Quora,

To give you a sense of the challenge here, consider the case of a simple organism with 302 neurons, the C. elegans. There has been a published connectome available since 1986... however, there is still no working model of that connectome that explains the behavior of the C. elegans.

One issue with this approach is that the brain is dynamic, and it is from this dynamism that behavior arises (and the complexities of which are hidden in a static wiring diagram). Wilder Penfield summed it up nicely, "Consciousness exists only in association with the passage of impulses through ever changing circuits of the brainstem and cortex. One cannot say that consciousness is here or there" (Wilder Penfield: his legacy to neurology. The centrencephalic system, Can Med Assoc J 1977).

In my most cynical moments, connectionist approaches feel like cargo cult thinking, whereby aping the general structure will give rise to the behavior of interest. But how can we learn anything without first understanding the neuroanatomy? After all, our anatomy determines the rules by which we are biologically constrained.

This year I spoke at the 3rd International Workshop on Advances in Electrocorticography. Across two days there were a total of 23 lectures on cutting-edge methods in human and animal electrophysiology. Last year I attended a day-long, pre-SfN workshop titled "Analysis and Function of Large-Scale Brain Networks".

During both of these sessions I sometimes found it difficult to restrain my optimism and enthusiasm (yes, even after all these years "in the trenches"). And while wandering through a Kinko's goldmine of posters and caffeinating myself through a series of lectures, occasionally I forget my skepticism and cynicism and think, "wow, they're really on to something."

And that's what I love about this conference: every year it gives me a respite from cynicism and skepticism to see so many scientists who are so passionate about their work. Sure, it might be hubris to think that we can model a brain, but so what? When the models fail, scientists will learn, the field will iterate, and the process will advance.

That's what we do, and that's why I keep coming back. The Society for Neuroscience conference is my nerd Disneyland.

Izhikevich, E.M., & Edelman, G.M. (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105(9), 3593-3598. PMID: 18292226
Jasper, H.H. (1977). Wilder Penfield: his legacy to neurology. The centrencephalic system. Canadian Medical Association Journal, 116(12), 1371-1372. PMID: 324600

2.11.11

What is peer-review for?

(This is re-posted from the Scientific American Guest Blog)

There is a lot of back and forth right now amongst the academic technorati about the "future of peer review". The more I read about this, the more I've begun to step back and ask, in all seriousness:

What is scientific peer-review for?

This is, I believe, a damn important question to have answered. To put my money where my mouth is I'm going to answer my own question, in my own words:

The scientific peer-review process increases the probability that the scientific literature converges--at long time scales--upon scientific truth via distributed fact-checking, replication, and validation by other scientists. Peer review publication gives the scientific process "memory".

(Cover of the first scientific journal, Philosophical Transactions of the Royal Society, 1665-1666. Source: Wikipedia)

Note that publication of methods and results in a manner that they can be peer-reviewed is a critical component here. Given that, let's take part in a hypothetical regarding my field (neuroscience) for a moment.

In some far-distant future where humanity has learned every detail there is to know about the brain, what does the scientific literature look like in that world? Is it scattered across millions of closed, pay-walled, static manuscripts as much of it is now? Does such a system maximize truth-seeking?

And, given such a system, who is the megamind that manages to read through all of those biased (or incomplete, or incorrect) individual interpretations to extract the scientific truths to distill a correct model of the human brain and behavior (whatever that might entail)?

I am hard-pressed to imagine that this future scientific literature looks like what we currently possess. In fact, there are many data-mining efforts underway designed to overcome some of the limitations introduced by the current system (such as, for cognitive neuroscience, NeuroSynth and my own brainSCANr).

The peer-review system we have now is incomplete.

I'm not attacking peer-review; I'm attacking peer-review based on journal editors hand-picking one to three scientists who then read a biased presentation of the data without being given the actual data used to generate the conclusions. Note that although I am only a post-doc, I am not unfamiliar with the peer-review process, as I have "a healthy publication record" and have acted as a reviewer for a dozen "top journals".

To gain a better perspective, I read this peer-review debate published in Nature in 2006.

In it, there were two articles of particular interest, titled:

* What is it for?
* The true purpose of peer review

These articles, in my opinion, are lacking answers to the questions that are their titles.

Main points from the first article:

* "For authors and funding organizations, peer review provides an important veneer of respectability."
* "For editors, peer review can help inform the decision-making process... Prestigious journals base their reputation on their exclusivity, and take a 'top-down' approach, creaming off a tiny proportion of the articles they receive."
* "For readers with limited time, peer review can act as a filter, reducing the amount they think they ought to read to stay abreast."

The first two points are issues of reputation management, which ideally have nothing to do with actual science (note, I say ideally...). The second point presupposes that publishing results in journals is somehow the critical component, rather than the experiments, methods, and results themselves. The final point may have been more important before the advent of digital databases, but text-based searches lessen its impact.

Notably, none of these mention anything about science, fact-finding, or statements about converging upon truth. (Note, in the past I've gone so far as to suggest that even the process of citing specific papers is biased and flawed, and that we would be better off giving aggregate citations of whole swathes of the literature.)

The second article takes an almost entirely economic, cost-benefit perspective on peer-review, again focused on publishing results in journals. Only toward the end does the author directly address peer-review's purpose in science by saying:

...[T]he most important question is how accurately the peer review system predicts the longer-term judgments of the scientific community... A tentative answer to this last question is suggested by a pilot study carried out by my former colleagues at Nature Neuroscience, who examined the assessments produced by Faculty of 1000 (F1000), a website that seeks to identify and rank interesting papers based on the votes of handpicked expert 'faculty members'. For a sample of 2,500 neuroscience papers listed on F1000, there was a strong correlation between the paper's F1000 factor and the impact factor of the journal in which it appeared. This finding, albeit preliminary, should give pause to anyone who believes that the current peer review system is fundamentally flawed or that a more distributed method of assessment would give different results.

I strongly disagree with his final conclusion here. A perfectly plausible explanation for this result would be that scientists rate papers in "better" journals higher because they're published in journals perceived to be better. This would appear to be a source of bias and a major flaw of the current peer-review system. Rather than giving me pause as to whether the system is flawed, one could easily interpret that result as proof of the flaw.

The most common response that I encounter when speaking with other scientists about what they think peer-review is for, however, is some form of the following:

Peer-review improves the quality of published papers.

I'm about to get very meta here, but post-doc astronomer Sarah Kendrew recently wrote a piece in The Guardian titled, "Brian Cox is wrong: blogging your research is not a recipe for disaster".

This was followed by a counter post in Wired by Brian Romans titled "Why I Won’t Blog Unpublished Results". In that piece, Brian also says that peer-review improves papers:

First of all, the formal peer-review process has definitely improved my submitted papers. Reviewers and associate editors can catch errors that elude even a team of co-authors. Sometimes these are relatively minor issues, in other cases it may be a significant oversight. Reviewers typically offer constructive commentary about the formulation of the scientific argument, the presentation of the data and results, and, importantly, the significance of the conclusions within the context of that particular journal. Sure, I might not agree with every single comment from all three or four reviewers but, collectively, the review improves the science. Some might respond with ‘Why can’t we do this on blogs! Wouldn’t that be great! Internets FTW!.’ Perhaps someday. For now, it’s difficult to imagine deep and thorough reviews in the comment thread of a blog.
(emphases mine)

Although Brian concedes (but dismisses) the fact that none of these aspects of peer-review need be done in formal journals, he argues that because his field doesn't use arXiv and there is currently no equivalent for it, journals are still necessary.

We also see an argument in there about how reviewers guide statements of significance for a particular journal, and the conclusion that somehow these things "improve the science". But even the narrative that peer-review improves papers can be called into question:

Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med, 99(4); 2006
Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Rothwell PM and Martyn CN. Reproducibility of peer review in clinical neuroscience. Brain, 123(9); 2000
Peer review is central to the process of modern science. It influences which projects get funded and where research is published. Although there is evidence that peer review improves the quality of reporting of the results of research, it is susceptible to several biases, and some have argued that it actually inhibits the dissemination of new ideas.

To reiterate: peer-review should be to maximize the probability that we converge on scientific truths.

This need not happen in journals, nor even require a "paper" that needs improvement by reviewers. Papers are static snapshots of one researcher's or research team's views and interpretations of the results of their experiments.

Why are we even working from these static documents anyway?

Why--if I want to replicate or advance an experiment--should I not have access to the original data and analysis code off which to build? These two changes would drastically speed up the scientific process. Almost any argument against implementing a more dynamic system seems to return to "credit" or "reputation". To be trite about it, if everyone has access to everything, however will they know how clever I am? Some day I expect a Nobel Prize for my cleverness!

But a "Github for Science" would alleviate even these issues. Version tracking would allow ideas to be traced back to the idea's originator with citations inherently built into the system.

I'm not saying publishing papers is bad. Synthesis of ideas allows us to publicly establish hypotheses for other scientists to attempt to disprove. But most results that are published are minor extensions of current understanding that don't merit long-form manuscripts.

But the current system of journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on is an outdated system built before technology provided better, more dynamic alternatives.

Why do scientists--the heralds of exploration and new ideas in our society--settle for such a sub-optimal system that is nearly 350 years old?

We can--we should--do better.

Wager, E. (2006). What is it for? Analysing the purpose of peer review. Nature. DOI: 10.1038/nature04990
Jennings, C. (2006). Quality and value: the true purpose of peer review. Nature. DOI: 10.1038/nature05032
Editors (2005). Revolutionizing peer review? Nature Neuroscience, 8(4), 397. DOI: 10.1038/nn0405-397
Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178-182. DOI: 10.1258/jrsm.99.4.178
Rothwell, P.M., & Martyn, C.N. (2000). Reproducibility of peer review in clinical neuroscience: is agreement between reviewers any greater than would be expected by chance alone? Brain, 123(9), 1964-1969. DOI: 10.1093/brain/123.9.1964

31.10.11

The Zombie Brain: Conclusions

This post is the final installment of our collaborative venture on exploring the Zombie Brain. We hope you’ve enjoyed the ride.

Sincerely, Bradley Voytek Ph.D. & Tim Verstynen Ph.D.


Bringing it all together: The Zombie Brain



Over the last ten days we’ve laid out our vision of the Zombie Brain. To recap, we’ve shown that zombies:

1) Have an over-active aggression circuit
2) Show cerebellar dysfunction
3) Suffer from long-term memory loss due to damage to the hippocampus
4) Present with global aphasia (i.e., can’t speak or understand language)
5) Suffer from a variant of Capgras-Delusion
6) Have impaired pain perception
7) Cannot attend to more than one thing at a time
8) Exhibit addictive responses to eating flesh
9) Have an insatiable appetite

Together, these symptoms and their neurological roots reveal a striking picture of the zombie brain.

Based on the behavioral profile of the standard zombie, we conclude that the zombie brain would have massive atrophy of the “association areas” of the brain: i.e., those areas that are responsible for the higher-order cognitive functions. Given the clear cognitive and memory deficits, we would also expect significant portions of the frontal and parietal lobes, and nearly the entire temporal lobe, to exhibit massive degeneration. As such, the hippocampuses of both hemispheres would be massively atrophied (resulting in memory deficits), along with most of the cerebellum (resulting in a loss of coordinated movements).

In contrast, we would expect that large portions of the primary cortices would remain intact. Behavioral observations lead us to conclude that vision, most of somatosensation (i.e., touch), and hearing are likely unimpaired. We also hypothesize that gustation and olfaction would also remain largely unaffected. We must further conclude that large sections of the thalamus and midbrain, brainstem, and spinal cord are all likely functioning normally or are in a hyper-active state.

Putting these elements together, we have reconstructed a plausible model for what the zombie brain would look like.

Overlay (yellow is zombie, gray is human)


It is interesting to point out, from an historical standpoint, that many of the regions we hypothesize to be damaged in the zombie brain are part of what is generally referred to as the Papez circuit. James Papez first identified this circuit in 1937. Much like our current study, Papez was trying to unify a cluster of behavioral phenomena he had observed into a neuroanatomical model of the brain. He wondered why emotion and memory are so strongly linked. Thus, he hypothesized that emotional and memory brain regions must be tightly interconnected.

To test this theory, he injected the rabies virus into the brains of cats to watch how it spread and he made note of which brain regions were destroyed as a result of these injections. He observed that the hippocampus (important for memory formation) connects to the orbitofrontal cortex (social cognition and self-control), the hypothalamus (hunger regulation, among other things), the amygdala (emotional regulation), and so on. These experiments, conducted almost three-quarters of a century ago, may shed some insight into the nature of the zombie disorder today. We’re not suggesting that some super, brain-eating rabies virus is responsible for zombies. We’re just saying that it’s not not possible.

The profile of damage we have outlined corroborates the behavioral observations we have made from zombie films. From a subjective standpoint, this pattern of cerebral atrophy represents a most heinous form of injury unparalleled in the scientific literature. It would lead to a pattern of violence and social apathy; patients thus affected would represent a grievous harm to society, with little chance of rehabilitation. The only recommendation is immediate quarantine and isolation of the subject.

However, as we learned in GI Joe, “knowing is half the battle.” Based on our observations, we leave you with a few strategies to maximize survival in the event of a zombie encounter.

1) Outrun them: Climb to a high point or some other place they will have trouble reaching. Practice parkour. The slow zombie variant can’t catch up with a healthy adult human.

2) Don’t fight them: They can’t feel pain and aren’t afraid of dying, so they’ve got the edge in close combat. If you can simply outrun them, why risk the bite?

3) Keep quiet and wait: The zombie memory is so terrible that if you can hide long enough, it will mill around only until something else captures its attention.

4) Distraction, distraction, distraction: Throw something behind the zombie to capture its attention. Set off a flare, use a flashbang, or do whatever you need to do to distract it and get away.

5) If you can’t beat ’em, join ’em: If you can’t outrun them (or are around the fast zombie variant), take advantage of their self-other delusion and act like one of them.

There you have it folks... scientifically validated safety tips for surviving the zombie apocalypse. Use them wisely the next time you come face-to-face with the living dead.

30.10.11

The Zombie Brain: Insatiable Hunger

This is the last symptom of the multi-day series on The Zombie Brain between Oscillatory Thoughts and The Cognitive Axon.

Be sure to check out our last post tomorrow in which we wrap everything up.

Symptom 9: Insatiable Hunger

What drives the zombie’s insatiable hunger for human flesh? In the last post we discussed the role of addiction in the zombie’s craving for your skin, but why are they never satisfied? Why, after having eaten your entire family, will the zombie continue on to consume you as well? How can they keep eating?

(Image: zombies from Shaun of the Dead)

While we cannot ascertain for certain the etiology of zombie hunger, we posit two possible neural mechanisms that underlie this disorder.

First and most probable is the alteration of the hunger/satiety circuit in the zombie brain. In a previous post we outlined the nature of the zombie impulsive-reactive aggression. In that post we demonstrated that damage to the zombie orbitofrontal cortex would lead to alterations in the top-down control over subcortical nuclei such as the hypothalamus.

The hypothalamus plays an important role in maintaining control over autonomic functions such as regulating body temperature and the sleep/wake cycle, as well as modulating thirst and hunger. Dysfunction of this region due to abnormal activity--either from the loss of or damage to connecting regions, or possibly via unknown neurochemical changes--would severely affect zombie satiety and sleep.

(Figure: the zombie hypothalamus)

For example, we are often taught that the ventromedial nucleus of the hypothalamus regulates feelings of satiety while the lateral nucleus regulates feeding (quick mnemonic: "stim the ven, get thin; stim the lat, get fat"). Thus, damage to the zombie ventromedial nucleus may result in a loss of satiety.

In other words: zombies will always be hungry. For you.

(Though, as with all things neuroscience, this may be more complicated than we are taught.)

The second possible mechanism for zombie fleshlust may be the result of a form of Klüver–Bucy syndrome.

This interesting disorder is caused by damage to both temporal lobes and is associated with an array of disorders including hyperphagia (overeating) as well as pica (eating strange objects).

Sound familiar? I would say eating flesh counts as eating a "strange object".

This is a very rare disorder that usually only occurs in experimental settings where researchers intentionally remove the temporal lobes of an animal. However, it is also associated with other symptoms including placidity, emotional dysregulation, and hypersexuality. See exhibit A below (from the Annals of Mad Science, circa 1930):

(Figure: Klüver–Bucy syndrome)

So while Klüver–Bucy would account for the eating patterns seen in zombies, few would associate placidity or sexuality with the walking dead.

In fact, hyperorality, pica, hypersexuality, and strange emotional responses sound more like symptoms of sparklevampires.

And insatiable hunger is also associated with other Awesome Beings, so these phenomena may underlie the abnormal behavioral patterns observed in other, more rare, non-zombie subjects...

(Image: Galactus)

King, B.M. (2006). The rise, fall, and resurrection of the ventromedial hypothalamus in the regulation of feeding behavior and body weight. Physiology & Behavior, 87(2), 221-244. PMID: 16412483

28.10.11

The Zombie Brain: Stimulus-locked Attention

This is part eight(!) of our multi-day series on The Zombie Brain.

Be sure to visit The Cognitive Axon tomorrow for symptom 8!

Symptom 7: Stimulus-locked attention

If zombies are anything, they're at least highly distractible. Don't want them to see that you're looting an old corner store for supplies? Throw up a few fireworks. The walking dead will be occupied as long as the show lasts.

Of course, this type of attention can be deadly if the critters set their sights on you...

This type of stimulus-bound attention reflects another rare clinical disorder called simultanagnosia. Simultanagnosia, particularly the "dorsal form", is the inability to attend to more than one thing at a time and is often seen in patients suffering from Balint's Syndrome.

(Figure: the zombie parietal cortex)

The only thing these patients perceive is whatever has grabbed their attention. This originates when both the left and right parietal cortex, the back and top part of your neocortex, are lesioned. If one is intact, spatial attention is still somewhat impaired, but the impenetrable stimulus-locking does not occur.

Damage to this parietal network may also underlie some of the issues we previously described with the mirror neuron network, language regions, and pain sensory regions.

So, along with a good chunk of the frontal lobe, it's safe to assume that the dorsal parietal lobe is lesioned in the zombie brain as well. This could happen if the neurons themselves are destroyed, or if the axons connecting them (called "callosal fibers" because they pass through the corpus callosum) are damaged.

(Figure: the zombie corpus callosum)

Only careful, clinical trials can tell us the true root cause of this damage in the zombie brain. Nevertheless, by arming ourselves with the knowledge of the zombie’s simultanagnosia, we can devise further survival strategies. More on this later.

26.10.11

The Zombie Brain: Self/Other Delusion

This is part six of our multi-day series on The Zombie Brain.

Be sure to visit The Cognitive Axon tomorrow for symptom 6!

Symptom 5: Self/Other Delusion

So what exactly did we do to piss off the living dead? Oh sure, you probably had a grudge or two with someone who's risen from the grave, but most of the walkers coming after you have no idea who you are.

Zombies are just plain delusional; they see us as something else, something they can't make sense of. While modern neuroscience is still just beginning to understand the neural underpinnings of delusional disorders, this particular type of self/other association delusion has been linked to at least two distinct circuits in the brain.

First, the inability of a zombie to recognize that they're chowing down on their bowling partner mirrors a very rare neurological disorder called the Capgras delusion.

Individuals with Capgras are convinced that the people they know well (e.g., a close friend or spouse) have been replaced by an imposter. It's like an alien came down and replaced your teammate with a lookalike that wasn't him. Think Invasion of the Body Snatchers.

Now we don't know precisely what causes the Capgras delusion, but let's suffice it to say that the zombie brain most likely has it! One hypothesis is that the brain regions that recognize faces (the fusiform face area) and the regions that assign emotional content to experience (the amygdala), which are normally communicating nicely, are somehow disrupted.

You can see Ramachandran talk about it on this TED video:



This means that people with Capgras can still recognize people as people but that they don't have any emotional ties to them. Couple that with the orbitofrontal aggression and control issues we described earlier, and you could see how this would get out of hand!
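(For the computationally inclined, here's a toy cartoon of that disconnection idea. The function name and the single fusiform_to_amygdala_intact flag are stand-ins we made up to illustrate the hypothesis, nothing more.)

def perceive_face(face, fusiform_to_amygdala_intact=True):
    """Cartoon of the Capgras hypothesis: the fusiform face area still
    identifies the face, but without the link to the amygdala no emotional
    warmth gets attached, so the familiar person 'must be' an imposter."""
    identity = face  # recognition itself is intact
    familiarity = 1.0 if fusiform_to_amygdala_intact else 0.0
    if familiarity > 0.5:
        return "That's " + identity + "!"
    return "That looks exactly like " + identity + "... but it isn't really them."

print(perceive_face("your bowling partner", fusiform_to_amygdala_intact=False))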

Therefore, the zombie is after you not only because they're pissed off and looking for a food source, but because they literally don't recognize you as the person they once loved.

Second, this ability to distinguish "self" from "other" may also be reflected in a set of frontal and parietal neurons called mirror neurons. These cells turn on when you perform an action (say, picking up an ax) and when you see someone else perform the same action.

Some scientists believe that mirror neurons play an intimate role in social bonding and empathy, both behaviors that zombies clearly lack. Now, some have said that zombies lack mirror neurons altogether. However, that doesn't seem reasonable, since zombies tend to imitate what they see (particularly things other zombies do).

[Image: the zombie brain model, mirror neurons (Shaun of the Dead)]

More likely, the response properties of these cells have changed. If you were to stick an electrode in a zombie's inferior frontal cortex, you'd probably observe mirror neurons that only respond when a zombie performs an action or sees another zombie performing the same action.
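(Here's a minimal sketch of that hypothetical change in tuning. The response rule and its parameters are ours, purely for illustration, not data from any actual recording.)

def mirror_neuron_response(observer_acts, observed_actor=None, zombified=False):
    """Toy mirror neuron: it fires when its owner performs the action, or when
    the owner watches someone else perform it. In the hypothetical zombie
    version, the 'watching' response only occurs for fellow zombies."""
    if observer_acts:
        return True  # fires for one's own action
    if observed_actor is None:
        return False  # nothing to mirror
    if not zombified:
        return True  # healthy brain: mirrors anyone
    return observed_actor == "zombie"  # zombie brain: mirrors only zombies

# A zombie watching a human swing an ax: no mirror response.
print(mirror_neuron_response(False, observed_actor="human", zombified=True))   # False
# Watching another zombie (or a survivor shambling convincingly): response.
print(mirror_neuron_response(False, observed_actor="zombie", zombified=True))  # True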

Both brain dysfunctions point out one key survival skill: if you want a zombie to not come after you... start acting like a zombie. They'll think you’re one of them and might even follow you into a well-devised trap. This demonstrates that, by understanding the neural basis of the zombie disorder, we can begin building protocols that will maximize our survival.

24.10.11

The Zombie Brain: Long-term Memory Loss

This is part four of our multi-day series on The Zombie Brain.

Be sure to visit The Cognitive Axon tomorrow for symptom 4!

Symptom 3: Long-term memory loss

Why is it that it only takes you a few seconds to hide from zombies before they get distracted, forget their prey, and move on after some other helpless chap? Yet they flock in droves to places like malls and churches that they remember from their pre-zombie past?

We contend that zombies are incapable of storing long-term memories as a result of a disorder called anterograde amnesia. Anyone who's seen the movie Memento will know the symptoms well. Immediate events are available for only a few minutes at a time, at most, before the flow of conscious memories is disrupted.

[Image: the zombie brain model, hippocampus]

Once distracted, someone afflicted with anterograde amnesia will lose those memories as if they never existed at all. However, memories that were gained before the amnesia-inducing brain damage will be retained as clearly as if they had happened yesterday.

This phenomenon arises from damage to a very specific area of the brain called the hippocampus. This region sits right behind the amygdala (which we talked about earlier) and is nestled deep within the temporal lobe. For such severe amnestic symptoms, our hypothetical zombie subjects would have to lose both their left and right hippocampuses.

[Image: the zombie brain model, hippocampus]

This is a rare condition that has so far come about almost exclusively for surgical reasons and doesn't tend to happen naturally. However, there are cases in which severe vitamin deficiency can lead to memory issues and anterograde amnesia. This is because the mammillary bodies, a brain region heavily interconnected with the hippocampus, are particularly susceptible to degeneration from vitamin deficiency.

This leads to a disorder referred to as Wernicke-Korsakoff syndrome, characterized by memory disruptions, possibly because the mammillary bodies are involved in regulating the hippocampus. Whether a virus has destroyed the hippocampus, or zombies are simply suffering from a severe vitamin deficiency, it's safe to say that somehow they've lost their hippocampuses for good, as well as any ability to form memories of the new un-life.
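(To make the "old memories stay, new memories never form" distinction concrete, here's a tiny cartoon of a memory store whose writes fail once the hippocampus is gone. The class and its methods are obviously a toy we invented, not a model of real hippocampal function.)

class ZombieMemory:
    """Cartoon of anterograde amnesia: everything consolidated before the
    hippocampus was lost is still retrievable, but nothing new ever sticks."""

    def __init__(self, pre_infection_memories):
        self.long_term = set(pre_infection_memories)  # laid down before the damage
        self.hippocampus_intact = False  # bilateral loss

    def experience(self, event):
        if self.hippocampus_intact:
            self.long_term.add(event)  # normal consolidation
        # otherwise the event never makes it into long-term storage

    def remembers(self, thing):
        return thing in self.long_term

z = ZombieMemory(["the mall", "the church"])
z.experience("the survivor hiding behind the counter")
print(z.remembers("the mall"))                                # True
print(z.remembers("the survivor hiding behind the counter"))  # False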

22.10.11

The Zombie Brain: Impulsive-Reactive Aggression

This is part two of our multi-day series on The Zombie Brain.

Be sure to visit The Cognitive Axon tomorrow for symptom 2!

Symptom 1: Impulsive-reactive aggression

It’s pretty much a given that zombies are constantly pissed off and they want to eat you. The snarls, the teeth, the guttural howls as they close in on their prey... these creatures are enough of a public health danger and menace to warrant serious research funding from the National Institutes of Health!

The adrenaline-infused rage of thousands of these beasts is unmistakable.

So what does this uncontrolled, rampant rage tell us about the zombie brain? First, the type of rage that zombies exhibit is of a very primal form known as impulsive-reactive aggression. This is more like the aggression you see when two drunks get in a fight.

It differs from the cold and calculated rage seen, for example, in a school shooting. The zombies will direct their anger at anyone and everyone simply because they're human. This type of rage has its roots in the more “primitive” (i.e., phylogenetically older) parts of the brain and reflects the engagement of the “fight-or-flight” circuitry that all mammals have. Steve Schlozman has referred to this circuit as the "crocodile brain".

[Image: the zombie brain model, orbitofrontal cortex]

Normally, these everyday anger impulses are suppressed by signals that originate in the lower part of the frontal lobe: the orbitofrontal cortex. This area sends inhibitory signals to the medial amygdala, a little almond-shaped area that sits at the front of your temporal lobe. If left uncontrolled, this tiny region would ramp up signals to the hypothalamus and thalamus that trigger the adrenal responses you feel when angry and frightened. But since most of us have an intact orbitofrontal cortex, the little amygdala is turned down except in rare cases.

[Image: the zombie brain model, orbitofrontal cortex]

Studies of violent, pathological criminals have found that functional abnormalities of the dorsal and ventral prefrontal cortices and the amygdala may underlie some anti-social and violent behaviors. Furthermore, people with damage to the orbitofrontal cortex often have issues with social cognition, understanding and adhering to social norms and mores, as well as moral decision-making.

Some of you may have heard of the famous case of Phineas Gage, who had a rod shot through his brain and went from mild-mannered middle manager to uninhibited risk-taker and speaker of all things inappropriate. Well, that's because he lost his orbitofrontal cortex.

Certainly the zombie doesn’t care about social norms or morality!

[Image: the zombie brain model, orbitofrontal cortex]

Given the impulsive and aggressive behavior exhibited by zombies, it's safe to say that they lack a properly functioning orbitofrontal cortex. So we've modeled the zombie brain such that the orbitofrontal cortex is more or less obliterated. As a result, the zombie amygdala, hypothalamus, and thalamus (specifically the bed nuclei of the stria terminalis) should be constantly overactive. These changes would easily produce a hair-trigger adrenal response unlike anything seen in normal humans!
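(A back-of-the-envelope sketch of that disinhibition idea, for the curious: the numbers and the simple linear "rate" rule below are made up for illustration, not fit to any data.)

def adrenal_drive(threat, ofc_intact=True):
    """Toy circuit: the amygdala scales with threat, and the orbitofrontal
    cortex normally subtracts an inhibitory signal before the amygdala can
    drive the hypothalamic/adrenal response. No orbitofrontal cortex, no brake."""
    amygdala = threat  # bottom-up threat signal, 0 to 1
    ofc_inhibition = 0.8 * amygdala if ofc_intact else 0.0
    return max(0.0, amygdala - ofc_inhibition)

for threat in (0.2, 0.5, 1.0):
    print(threat, adrenal_drive(threat), adrenal_drive(threat, ofc_intact=False))
# Intact brain: small, regulated responses. Zombie brain: the full hair-trigger
# response to even minor provocations.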

21.10.11

The Zombie Brain

The Living Dead Brain: What Forensic Neuroscience Can Tell Us about the Zombie Brain
Dr. Timothy Verstynen & Dr. Bradley Voytek
Zombie Research Society

This is a cross-post between Oscillatory Thoughts and Cognitive Axon. Stay tuned to both sites over the following days leading up to Halloween for updates on our model of the zombie brain.



What can neuroscience teach us about surviving the zombie apocalypse?

What makes a zombie a zombie, or, more importantly, what makes a zombie not a human? Philosophers contend that a zombie lacks the qualia of experience that underlie normal consciousness.

However, this is a less-than-satisfying explanation for why the lumbering, flesh-eating creatures are pounding outside the door of your country farmhouse.

Beyond the (currently) immeasurable idea of consciousness or the whole supernatural “living dead” theory, zombies are characterized primarily by their highly abnormal but stereotyped behaviors. This is particularly true in more modern manifestations of the zombie genre wherein zombies are portrayed not as the reanimated dead, but rather as living humans infected by biological pathogens. They are alive, but they are certainly not like us.

Neuroscience has shown that all thoughts and behaviors are associated with neural activity within the brain. Therefore, it should not be surprising that the zombie brain would look and function differently than the gray matter contained in your skull. Yet, how would one know what a zombie brain looks like?

Luckily, the rich repertoire of behavioral symptoms shown in cinema gives the astute neuroscientist or neurologist clues as to the anatomical and physiological underpinnings of zombie behavior. By taking a forensic neuroscience approach, we can piece together a hypothetical picture of the zombie brain.

Over the course of the next week, Oscillatory Thoughts and Cognitive Axon will team up to show our hypothetical model of the zombie brain. Each day we will present a new "symptom" associated with a zombie behavior and show its neural correlates in our simulated zombie brain.

This entire endeavor is partly an academic "what if" exercise for us and partly a tongue-in-cheek critique of the methods of our profession of cognitive neuroscience. We’ll be breaking up the workload and alternating days (hey... we gotta work our real jobs too) so be sure to check both places for the newest updates on zombie neuroscience.

[Image: the zombie brain model (Verstynen & Voytek, Zombie Research Society)]

DISCLAIMER: We need to be very clear on one point. While we sometimes compare certain symptoms in zombies to real neurological patient populations, we are in no way implying that patients with these other disorders are in some way “part zombie”. Neurological disorders have provided critical insights into how the brain gives rise to behavior and we bring them up for the sake of illustration only. Their reference in this context is in no way meant to diminish the devastating impact that neurological diseases can have on patients and their caregivers.

19.10.11

Forbes "Edge Thinkers"

On Monday my interview with Forbes went live:

Neuroscientist Bradley Voytek is Bringing the Silicon Valley Ethos into Academia


This was part of their "Working The Edge" series.

Recently Forbes managing editor Bruce Upbin put out a call for "Edge Thinkers":

Forbes just launched a new section on our Tech channel to showcase intriguing people operating at the fringes of science, business, education, government, healthcare and the arts. It’s called Working The Edge. It’s a way to cover the kinds of people who innovate by merging seemingly unrelated disciplines. The kinds of people with three degrees, or else are entirely self-taught. They are the associative thinkers mixing biology and architecture, chemistry and marketing, charity and capitalism, Shakespeare and data mining, and food and physics. They’re not superheroes or brainiacs, just ordinary people who work hard and passionately at exploring a big new idea. Okay, sometimes they’re brainiacs.

Last week I got an email from Forbes writer Alex Knapp (who writes the Robot Overlords blog) saying that my name was on their list.

A few days later we did an hour-long phone interview.

We talked about zombies (of course), Uber and my "startup sabbatical", my TEDxBerkeley talk about my grandfather's Parkinsonism, brainSCANr, my PhD research, and how my wife can kick my ass at writing code.

It was a lot of fun (though it did hit some of my "what the hell am I doing?" anxiety buttons) and, I think, a good interview.

So yeah, check it out.