Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.




Brain Log? Collaborative academic neuroscience blog

Earlier today on Twitter I mentioned that I feel that academic neuroscience could use a collaborative, professional blog such as Language Log. This seemed to resonate with several people, but 140 characters isn't enough to explain why I feel this way.

First and foremost: for those of you not familiar with Language Log, it's a blog run by academic linguists and other language professionals, including several journalists who cover language.

Off the top of my head, here are a few reasons why I think neuroscience could use a "Brain Log":

  • Language Log is collaborative, meaning it doesn't "belong" to any one person.
  • This minimizes one issue with a lot of personal blogs: personal "branding" and link-bait-style posts. The focus is on solving problems rather than general interestingness.
  • Many neuroscience blogs are fantastic, and have higher level discussions (e.g., Neuroskeptic), but they're run on a professional site and thus are for-pay content. And they're still "branded" by their individual ownership, leaving further discussions at the discretion of the owner.
  • The scope of Language Log is more technical, which allows the people "in the trenches" to dig into details that the general public may not find very interesting, but young researchers, journalists, etc. may find very useful as a resource.
  • A collaborative blog allows for better two-way discussion between the primary scientists and journalists covering neuroscience (a very hot topic in the public interest right now).
  • This would allow journalists better access to broader scientific opinions, and give them an easy way to reach out to academics.


An intuitive explanation of over-fitting

A question over on Quora piqued my interest: What is an intuitive explanation of over-fitting? I use this blog and public writing to try and explain neuroscience topics, but I don't think I've ever taken a whack at explaining statistics. Which now strikes me as strange considering how much of my research is computational and methods-focused. So I gave it a go. Let me know if this makes sense.

Imagine you've got two normally distributed random variables, x and y. Here is their relationship:

When you know all of the data, it should be clear that knowledge of the value of x provides little information about y. That is, there is no clear predictive relationship. Certainly not a linear one.

We can test this by trying to fit a linear model (y = ax + b) to the data (the red line). This procedure shows us that the variance in x accounts for less than 1% of the variance in y.

But let's imagine you can't collect all of the data. Instead your sample includes only two of those data points. Again you try and fit a linear model and...
Holy crap! Your linear model (y = ax + b) explains 100% of the variance! Now you might say, "my linear model is perfect and completely explains the relationship between x and y". But wait... that fitting estimate doesn't look the same as the linear fit when we've got the full data:
In fact, our "perfect" model is pretty terrible!

That is overfitting. When your model has too many parameters relative to the number of data points, you're prone to overestimate the utility of your model.

We can keep going with this:
When your sample contains three data points, our linear model (y = ax + b) once again performs poorly (~7% of the variance) but our new model, a quadratic model (y = ax^2 + bx + c) explains 100% of the variance.

Again, sampling one more data point:
Well now our linear and quadratic models both perform poorly, but our new new model, a cubic model (y = ax^3 + bx^2 + cx + d), explains 100% of the variance.

And so on.
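The whole progression can be reproduced in a few lines of NumPy. This is an illustrative sketch (the seed and sample sizes are my choices, not the original figures): a polynomial with as many free parameters as data points will always "explain" 100% of the variance, no matter how unrelated x and y actually are.

```python
import numpy as np

rng = np.random.default_rng(42)

def r_squared(y, y_hat):
    """Fraction of variance in y 'explained' by the predictions y_hat."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Two independent normal variables: x carries (almost) no information about y.
x = rng.normal(size=1000)
y = rng.normal(size=1000)

# A linear fit on the full data explains ~0% of the variance...
full_fit = np.polyfit(x, y, deg=1)
print(r_squared(y, np.polyval(full_fit, x)))  # close to 0

# ...but sample only n points and a degree-(n-1) polynomial (line, quadratic,
# cubic, ...) passes through every point exactly: R^2 = 1, up to float error.
for n in (2, 3, 4):
    xs, ys = x[:n], y[:n]
    fit = np.polyfit(xs, ys, deg=n - 1)
    print(n, r_squared(ys, np.polyval(fit, xs)))
```

The point of the sketch is the contrast: the "perfect" small-sample models are artifacts of having as many parameters as observations, which is exactly the overfitting described above.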


DIY Brain Stimulation

A few months ago I wrote a piece for the BBC on the foc.us do-it-yourself, consumer-based brain stimulator. The full text follows:

For centuries people have sought quick fixes and miracle cures to enhance our vitality and make us faster, stronger, smarter, younger. Historically speaking, many of these "fixes" have been pharmacological. Freud used cocaine to improve his mood and treat his patients while Queen Victoria was known to enjoy a wine infused with the drug. In fact it's only relatively recently – the late 1960s – that the sale of amphetamines was made illegal.

Now it seems the next form of mental stimulant could come in the shape of a piece of technology, and not a drug. Looking like something Star Trek's Borg might wear, the foc.us device promises to "make your synapses fire faster" with the application of a small electrical current to the brain for 30 minutes. The promised result is that you'll have sharpened reflexes and faster reactions. As a gamer, I can see the attraction. Those extra split seconds can make all the difference. It's certainly a shrewd business decision too: video games were a £43 billion ($67 billion) industry in 2012, and professional gaming draws in sponsorship and viewing figures that rival physical sports. The company's bold claims may draw in others looking for a mental boost, and not just people who like to play video games.

Unfortunately as a scientist I'm sceptical. The technique being used by the foc.us headset is known as transcranial direct current stimulation (tDCS). In the scientific world, studies using it have actually been quite promising. In carefully controlled experiments, low-level electrical excitation of just the right brain regions has provided a small boost to task performance. After a quick zap, volunteers have gotten slightly better at noticing if a square changed colour (after an hour of watching flashing coloured squares). But could this really extend to a task as complex as playing a video game?

The general idea is that if a certain brain region is involved in cognition – attention for example – then stimulating that area with an electrical pulse (priming it) should improve your ability to attend to what's happening around you. We also know that when you're engaged in a complex cognitive task (like playing Call Of Duty: Modern Warfare), certain neurones in specific brain areas fire more quickly.

So, I imagine the thinking at foc.us is that if we can speed up the rate of neuronal firing we should be able to enhance gaming ability. Unfortunately, that's a little like saying, "I can boost my computer's memory by jolting its hard drive with a battery!" – your IT department won't be happy.

Yes, sometimes tDCS provides a cognitive benefit; the data is indisputable. I myself make use of this technology in my lab. But its capabilities are too non-specific. In the brain, it matters which neurones are excited; alcohol excites a lot of neurones, but no one would claim that drinking four pints sharpens your reflexes.

And while stimulation methods have been proven to be safe for use in controlled experimental situations over the short-term, we don't know about long-term effects. We've been down this road before. Over the short term the data was very clear: cocaine and amphetamines were very effective at enhancing energy and mood, but we didn't know about long-term negative consequences. I will never say that we shouldn't test new drugs or new technology, but when companies promise to "excite your prefrontal cortex" to give you "the edge in online gaming" and "let the force of electricity excite your neurones into firing faster", well... buyer beware.


SfN 2013

Quick post about my time at the Society for Neuroscience conference this year.

First, if you're an undergrad looking to apply to PhD programs, or a grad student wondering what to do after the PhD, or a post-doc on the job market, please feel free to come chat with me. I'll be at the conference every day, and you can easily grab me after my two talks.

Professional Development

I'm speaking at the "Careers Beyond the Bench" panel Saturday morning at 9am. I'll be talking about my work with Uber and my "startup sabbatical", options for technically-oriented neuroscientists looking to leave academia (see my notes on data munging and "big data"), and why I chose to stay within the academy (and how I ended up at UCSD).

Here's the information about the panel:

Date & Time: Saturday, November 9, 2013 9am - 11am
Location: 31C
Panelists: Andrew Bean, PhD; Katja Brose, PhD; Joe Hardy, PhD; Bradley Voytek, PhD

The workshop will discuss career trajectories of individuals in non-academic settings. An emphasis will be placed on providing tips to prepare for a career shift, tools for networking, and strategies to build your resume. Panelists will address the following questions:

1. What career trajectories are available with an MS vs. a PhD in neuroscience?
2. What are the most effective strategies for transitioning into a non-academic research career?
3. What qualifications, skills, and traits are needed for a career in science communication?
4. What skills are 'tech' companies looking for, and how can social and real-life networking help make a career transition?

Research Presentation

I'm also giving a presentation on some of my latest research. This is a very short talk, but it's part of a much larger theme that will be a major part of my lab at UCSD. If any of the topics in the talk (aging, ECoG, neural simulation, etc.) are interesting to you, come talk with me!

Presentation Title: Aging increases neural noise in humans
Presentation time: Sunday, Nov 10, 2013, 10:00 AM -10:15 AM
Location: 25A
Abstract: With aging we are faced with the likelihood that our cognitive faculties will decline. Our neural and behavioral response times will be slower, our memories less certain, and our attention less focused. These behavioral phenomena are hypothesized to be a consequence of increased neural noise in aging; however, because neural noise is difficult to quantify, there is no physiological evidence for age-related increases in neuronal noise in humans.

At the neuronal level aging is associated with a loss of auditory cortical parvalbumin inhibitory interneurons. These interneurons, when stimulated, suppress spiking of excitatory neocortical neurons and set up gamma oscillations via feedback from excitatory pyramidal cells. This, in turn, reduces pyramidal cell noise and enhances network precision. We hypothesize that the loss of these neurons with age results in age-related increases in temporally decorrelated neural noise, a secondary consequence of which would result in decreased phase/amplitude coupling (PAC) between low-frequency theta (4-8 Hz) phase and high gamma (80-150 Hz) power.

Here we analysed electrocorticography (ECoG) data from 16 participants spanning 41 years of age performing auditory tasks. Intracranial ECoG provides researchers with invaluable data on mesoscale neuronal population activity. High gamma power in ECoG correlates with neuronal spiking. While task-related increases in local population spike rate result in an overall broadband increase in high gamma power, temporally decorrelated neural noise affects the slope of the high gamma power spectrum, providing an estimate of neural noise.

Consistent with the neural noise hypothesis of aging, we observed a significant correlation between participants' age and the slope of their high gamma power spectrum, such that older adults show a slope closer to zero (flatter) than younger adults (r = 0.61, p = 0.013). We also observed a significant negative correlation between age and theta/HG PAC (r = -0.66, p = 0.005).
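The core method here, estimating noise from the slope of the power spectrum, can be sketched in toy form. This is illustrative only: simulated 1/f noise stands in for real ECoG, and the parameters are my own choices, not the study's. The fit is a straight line to log-power vs. log-frequency over the 80-150 Hz high gamma range mentioned in the abstract.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

# Simulate 1/f-like ("pink") noise by shaping white noise in the frequency domain.
fs, n = 1000, 60 * 1000                    # 60 s sampled at 1 kHz
white = rng.normal(size=n)
freqs = np.fft.rfftfreq(n, d=1 / fs)
shape = np.ones_like(freqs)
shape[1:] = freqs[1:] ** -0.5              # amplitude ~ f^-0.5 -> power ~ 1/f
pink = np.fft.irfft(np.fft.rfft(white) * shape, n=n)

# Welch power spectral density, then a linear fit to log-power vs. log-frequency
# over the 80-150 Hz high gamma band.
f, psd = signal.welch(pink, fs=fs, nperseg=2 * fs)
band = (f >= 80) & (f <= 150)
slope, intercept = np.polyfit(np.log10(f[band]), np.log10(psd[band]), deg=1)

print(f"fitted spectral slope: {slope:.2f}")  # near -1 for 1/f noise
```

In this toy, "noisier" (more decorrelated) signals have spectra closer to white noise, so the fitted slope moves toward zero, the flattening the abstract reports in older adults.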


Laziness and being an Academic Batman

Batman sucks compared to the rest of the Justice League: Superman could vaporize him before he could even think to pull out his anti-Superman spray; the Flash could pummel him at light speed; Wonder Woman could Lasso of Truth him. But Batman's gone toe-to-toe with each of them and won many times.

(Seriously if you haven't read Tower of Babel, stop reading my drivel right now and go do so.)

Other than the fact that it's all made up, how can this happen?

Batman's strength is in his planning. He thinks ahead and prepares for every contingency, no matter how small. There's a very important lesson here. It's okay if you're not the smartest, fastest, or strongest as long as you're mindful of your limitations and can admit your own weaknesses. To own them.

When I give academic talks, I've got 20-30 extra slides tacked onto the end that contain data and results from several side experiments. Not only is it thorough, but it shows you've thought hard about the problems you're addressing. It's amazing the kinds of reactions you get from the crowd when someone asks a pointed scientific question and you can say, "well according to this followup experiment we found that..."

It's no secret that I was a really poor undergraduate student. I only vaguely knew what I wanted to do and I couldn't admit that I didn't love where I was at the time.

I hated busy work. I hated the social pressure to be seen doing something as opposed to actually doing something. That's what I felt like college was.

After I graduated my two roommates and I spent most of that first post-college summer locked away in a bedroom playing video games until sunrise. All day, every day. We moved all of our desks into one bedroom so we could sit side-by-side-by-side and play Counterstrike and Team Fortress, Civ III, Starcraft, and whatever else grabbed our attention. This gaming marathon was punctuated by me heading out to party on the weekends.

My girlfriend of 4 months at the time (and now, by some stroke of luck I'll never fathom, my wife) would stop by the apartment on her way to work in the morning to say hello. We'd all still be sleeping, of course, because we'd just gone to bed after another marathon night of gaming.

From the outside this must have looked like incredible laziness. But it was largely apathy and paralysis induced by the unknown. I just didn't know how to get a job, how to be a scientist, or how to change my situation.

Since those days I've caught a lot of lucky breaks to help me get to where I am today.

But a big part of what helped my early scientific career was my laziness. Or rather, my disdain of busy work.

I'm a fan of "constructive laziness" and being too lazy to fail. That is, I try to take the time to think through the entirety of a problem before trying to attack it. This can take you a long way (though it can also hinder you if you're not careful...)

Algorithms are laziness!

Laziness is undervalued and "putting in your hours" is overvalued.

Wait, that's not quite right... maybe a better way for me to say that is: the power of constructive, focused laziness is undervalued and people put an unwarranted emphasis on "being seen at work". Think back to the story about Gauss's trick for summing the first 100 integers. It was busy work and he solved it by being smart (and possibly a smart ass), not by just chugging through it like everyone else.

I recognize that what I'm about to say comes from the perspective of a certain kind of privilege, but I've worked manual labor jobs and there is something to be said for a career that you can leave at work. Science isn't like that. There's a cost associated with it. When I was driving a forklift and working the loading dock, I sure as hell didn't lie in bed at night wondering how to do it better. But now I spend a lot of my time "on" and thinking about my research. I have to make time to turn off.

But that's a cost I accept because working the loading dock wasn't what I wanted to be doing forever.

Without my deep, internal drive to want to socialize, hang out, shoot the shit, and do nothing, I wouldn't be where I am today. I value spending time with my wife, son, and friends, and I value a career that allows me to do that.

"Being privileged to work hard for long hours at something you think is worth doing is the best kind of play." - Robert Heinlein, Lazarus Long, Time Enough for Love


Charles Trippy brain tumor resection video

We the Kings bassist and YouTube celebrity Charles Trippy recently uploaded to YouTube a video of his brain surgery.

This is an amazingly powerful thing to watch, and if you have any interest in the brain and/or medicine I recommend taking the time to watch it. It's... not an easy viewing though, so I warn you.

I came across this video on reddit, and I took a shot at explaining the procedure and details over there. Most of what I've learned is from my research (e.g., A, B, C) in working with neurosurgeons, neurologists, and the incredible patients who undergo these surgeries.

The full text of that discussion is below.

Okay I'm gonna take a whack at explaining what's going on here for people who are interested. Note that I'm not a medical doctor, but I'm pretty sure most of what I'm about to say is correct to a first approximation. I'll fix any errors as I or others spot them.

Why's he awake? Why are they asking him to move his hands? Stuff like that.

Before I explain anything though, can I just say how damn amazing this is? Like, let's all take a moment to step back and recognize that a person, who would have been much worse off otherwise, put his life in the hands of a few highly skilled people and some amazing technology and let them open his skull and remove something from his brain. I've seen this many, many times, but it's good to just remember how amazing of a time we live in even though we don't have flying cars or jet packs or whatever. Every day people's lives are saved by medical technology like this.

Also, Mr. Trippy's a pretty amazing guy for volunteering to share this with the world. I don't think people appreciate this stuff enough. Good on you man. I'd love to use this video when I teach.

On to the details. From this image here you can see that the tumor is in his right frontal lobe:

The image I've circled shows the tumor-related hyperintensity (bright whiteness) on a T2-weighted MRI, which allows you to differentiate tissues from fluids. In T2-weighted MRI, water is brighter. This particular kind of tumor makes the blood-brain barrier more permeable and therefore the region around it has more water and thus is brighter.

Note that I say the tumor is in the right half of his brain, but the tumor in the MRI image looks like it's in the left. That's because neurologists "flip" the MRIs so that left is right and vice versa. Supposedly it's mirrored because that's what the doctor sees when they're looking at the patient.

Anyway, the fact that the tumor is in the back part of his right frontal lobe means that it's close to the parts of the brain that control the muscles in the left half of his body. That is, the right motor cortex part of your brain innervates the muscles on the left side of your body and vice versa.

The reason the doctors keep saying stuff like "critical to move right now" and "how’s your hand doing there?" is because they want to minimize the possibility of cutting healthy brain tissue in his motor cortex. Neurosurgeons have a specific term for parts of the brain dealing with movement, speech, vision, and sensation. They call these regions "eloquent cortex", and they're very worried about damaging them... otherwise people could end up paralyzed, blind, or unable to speak or understand speech.

So they're really watching him very carefully, and that's why he's awake. They're making sure he's moving his hand just fine.

Given the location of the tumor, prior to his surgery they probably did what's called a "Wada test" to see if his language functions are localized to the left or right hemisphere of his brain. If he's right handed, as a male he's got a 99% or so probability of language functions being in his left hemisphere, but usually surgeons check to make absolutely sure (right-handed women are about 95% left-hemisphere language dominant). Functional brain imaging (EEG, fMRI, MEG) just isn't quite good enough on a single person for surgeons to really be 100% positive. Whereas the effects of the Wada test are... well... pretty conclusive.

The Wada test essentially uses a barbiturate, injected into the carotid artery, to put half the brain to sleep. If they inject the barbiturate into the left hemisphere of your brain, and you're left-hemisphere language dominant, then... well, you'll feel funky for a while and have some language issues.

I've only half-jokingly asked my neurosurg friends to do this to me just to see what it feels like. Apparently "do no harm" and ethics and stuff, so no go so far.

Oh! Right at the beginning of the video a woman says "you'll feel a pinch and burn". I'm pretty sure she's doing a local anesthetic to the skin to numb it so they can screw the stereotactic frame into his skull. This lets them keep all of their instruments correctly positioned and still even though he's awake, talking, and moving around a little.

That's all I can think of for now. If anyone notices anything that I didn't address and they're curious about, ask me. If I don't know, I'll bug my neurosurg friends and update this post with answers.

Voytek B, D'Esposito M, Crone N, & Knight RT (2013). A method for event-related phase/amplitude coupling. NeuroImage, 64, 416-24 PMID: 22986076
Voytek B, Canolty RT, Shestyuk A, Crone NE, Parvizi J, & Knight RT (2010). Shifts in gamma phase-amplitude coupling frequency from theta to alpha over posterior cortex during visual tasks. Frontiers in human neuroscience, 4 PMID: 21060716
Voytek B, Secundo L, Bidet-Caulet A, Scabini D, Stiver SI, Gean AD, Manley GT, & Knight RT (2010). Hemicraniectomy: a new model for human electrophysiology with high spatio-temporal resolution. Journal of cognitive neuroscience, 22 (11), 2491-502 PMID: 19925193


What does it feel like to hold a human brain in your hands?

Heavier than I expected.

I feel like the answer should be that it was profound the first time. Enlightening. Humbling. But I just remember thinking how heavy it was.

The existential uppercut came later.

For years I'd heard that the human brain weighs just around 1.3 kg. But the thing I was holding was much heavier than that. It turns out that the human brain is very fragile. It has a consistency somewhat like jello: soft and squishy.

Without preservation and chemical hardening you couldn't pick a brain up. Couldn't dissect it. But this process adds significant weight.

Over the years I've handled a lot of brains at a lot of events--from teaching neuroanatomy at UC Berkeley for three semesters to public speaking engagements--and my perspective has shifted.

I've watched a person lie awake while their brain is operated on.

I've seen a brain extracted from the skull and cut apart to determine neuropathology. I've sat in a room having a chat with a neuropath colleague when a nurse came running in with a slice of tissue from a patient currently undergoing surgery. My colleague excused himself while he diagnosed their glioblastoma.

As with many people in neuroscience, I have a deeply personal first-hand knowledge of the vicissitudes of some neurons doing something wrong in a loved one's brain.

It's hard for me not to stand there, mass of tissue in hand, the somewhat sickening odor from the solution wafting up into my glomeruli, and get hit with the gravity of what's actually happening. The realization that in my hands I hold what just a few weeks before was a person's everything. Every petty jealousy. Every insecurity and fear. Every hope and joy and pleasure.

But I was taught gallows humor by the one with the noose around his neck, and I've grown to appreciate that. So though I may joke and speak lightly and do really goofy crap (like the zombie brain stuff), those are all part of my process.

For that reason, before launching into my lectures and jokes and interesting factoids, I always remind my students or the crowd of what they're actually seeing and touching.

My job is amazing, but there is an existential weight that is easily forgotten on a day-to-day basis, but whose shadow is always there.

So I guess not much has changed since I held that first human brain after all; they're still much heavier than I'd first imagined, but in a very different way.

(From my answer on Quora)


What's "big data" good for?

First, what is "big data", other than literally just a lot of data? While it's more of a marketing term than anything, the implication of "big data" is usually that you have so much data that you can't analyze it all at once: holding it in memory (RAM) to process and analyze it would take more memory than you have available.

This means that analyses usually have to be done on random segments of data, which allows models to be built to compare against other parts of the data.

To break that down in simple words, let's say that Facebook wants to know which ads work best for people with college degrees. Let's say there are 200,000,000 Facebook users with college degrees, and they have each been served 100 ads. That's 20,000,000,000 events of interest, and each "event" (an ad being served) contains several data points (features) about the ad: what was the ad for? Did it have a picture in it? Was there a man or woman in the ad? How big was the ad? What was the most prominent color? Let's say for each ad there are 50 "features". This means you have 1,000,000,000,000 (one trillion) pieces of data to sort through. If each "piece" of data was only 100 bytes, you'd have roughly 91 TiB (about 100 terabytes) of data to parse. That's genuinely big, and you get the idea.
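That back-of-the-envelope arithmetic is easy to check (all the input numbers are the made-up ones from the example above):

```python
users = 200_000_000          # college-degree Facebook users (made-up number)
ads_per_user = 100
features_per_ad = 50
bytes_per_piece = 100        # assumed size of one feature value

events = users * ads_per_user            # ad impressions
pieces = events * features_per_ad        # individual feature values
total_bytes = pieces * bytes_per_piece

print(f"{events:,} events")
print(f"{pieces:,} pieces of data")
print(f"~{total_bytes / 1024**4:.0f} TiB")
```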

Your goal is to figure out which features are most effective in getting college grads to click ads. Maybe your first-pass model on a random sample of 1,000,000 users finds that ads with people in them that are 200x200 pixels big and about food get the most clicks. Now you have a "prediction model" for what college grads want, and you can then test that to see how well your prediction (based on the 1,000,000 college grads) holds up when you compare it to the other 199,000,000 college grads.
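The sample-then-validate step is the core of that workflow, and it fits in a few lines. A minimal sketch, with one made-up feature ("the ad has a person in it") and invented click rates, on a much smaller population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 1,000,000 ad impressions, each with one feature
# and a click outcome. Ads with people get clicked 8% of the time, ads
# without only 2% (made-up numbers for illustration).
n = 1_000_000
has_person = rng.random(n) < 0.5
clicked = rng.random(n) < np.where(has_person, 0.08, 0.02)

# Step 1: build the "prediction model" on a small random sample (1% of the data).
sample = rng.choice(n, size=10_000, replace=False)
rate_person = clicked[sample][has_person[sample]].mean()
rate_no_person = clicked[sample][~has_person[sample]].mean()

# Step 2: validate the sample-based estimate against the held-out 99%.
rest = np.setdiff1d(np.arange(n), sample)
true_rate_person = clicked[rest][has_person[rest]].mean()

print(f"sample estimate: {rate_person:.3f}")
print(f"held-out truth:  {true_rate_person:.3f}")
```

The sample-based estimate lands close to the held-out truth, which is exactly what makes fitting on a random segment and validating on the rest a workable substitute for loading everything into memory at once.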

Now, for what it can do in "daily life", well, pretty much any company with a significant tech group (Google, Twitter, Facebook, any bank or financial institution, any communications and mobile service, energy, etc.) are doing this kind of thing. To serve ads, to improve their services, to predict future growth and demand needs, whatever.

But what about other uses?

Google famously showed that they could predict flu outbreaks based upon when and where people were searching for flu-related terms:

There's the famous story about how Target's algorithms discovered a girl was pregnant.

Researchers are using Facebook statuses to look at how gender and age affect language use:

Doctors can look at what patients are writing about in online disease forums to try and get an idea of how off-label drug use affects certain diseases.

We can look at the evolution of language:

or the suppression of ideas:

We can look at how people move based on their cell phone use:

How money physically moves:

Or, like my work with Uber, their actual travel, and how various real world events (like the 2013 U.S. Federal Government Shutdown) affect the way people move around:

These are only the tip of the iceberg. 90% of the world's digital data was created in the last two years so we're just starting to figure out the possibilities. Note that in my cognition research I'm using a ton of data on peoples' behavior to try and infer how age, location, education, etc. affect our cognitive abilities. But those data aren't published or peer-reviewed yet, so it's not really appropriate to discuss quite yet. But the results are fascinating.

(This was originally a question on Quora.)

Schwartz HA, Eichstaedt JC, Kern ML, Dziurzynski L, Ramones SM, Agrawal M, Shah A, Kosinski M, Stillwell D, Seligman ME, & Ungar LH (2013). Personality, gender, and age in the language of social media: the open-vocabulary approach. PloS one, 8 (9) PMID: 24086296
Wicks P, Vaughan TE, Massagli MP, & Heywood J (2011). Accelerated clinical discovery using self-reported patient data collected online and a patient-matching algorithm. Nature biotechnology, 29 (5), 411-4 PMID: 21516084
Michel JB, Shen YK, Aiden AP, Veres A, Gray MK, Google Books Team, Pickett JP, Hoiberg D, Clancy D, Norvig P, Orwant J, Pinker S, Nowak MA, & Aiden EL (2011). Quantitative analysis of culture using millions of digitized books. Science (New York, N.Y.), 331 (6014), 176-82 PMID: 21163965
Song C, Qu Z, Blumm N, & Barabási AL (2010). Limits of predictability in human mobility. Science (New York, N.Y.), 327 (5968), 1018-21 PMID: 20167789
C. Thiemann, F. Theis, D. Grady, R. Brune, & D. Brockmann (2010). The structure of borders in a small world PLoS ONE arXiv: 1001.0943v1


Does North Korea publish peer-reviewed science?

To answer this I looked to Pubmed, the peer-reviewed biomedical research search engine run by the National Library of Medicine at the US National Institutes of Health.

Pubmed's search capabilities allow you to search by authors' professional or university affiliations.

I limited my search query to Pyongyang, North Korea, DPR Korea, and/or DPRK, which (correctly) yielded six published peer-reviewed research publications, none before 2005:

  • Chae MH, Krull F, Lorenzen S, Knapp EW. Predicting protein complex geometries with a neural network. Proteins. 2010 Mar; 78(4):1026-39.
  • Yu SC, Borchert A, Kuhn H, Ivanov I. Synthesis of a new seleninic acid anhydride and mechanistic studies into its glutathione peroxidase activity. Chemistry. 2008; 14(23):7066-71.
  • Rim H, Kim S, Sim B, Gang H, Kim H, Kim Y, Kim R, Yang M, Kim S. Effect of iron fortification of nursery complementary food on iron status of infants in the DPRKorea. Asia Pac J Clin Nutr. 2008; 17(2):264-9.
  • Kim YS, Xiao HZ, Du EQ, Cai GS, Lu SY, Qi YP. Identification and functional analysis of LsMNPV anti-apoptosis genes. J Biochem Mol Biol. 2007 Jul 31; 40(4):571-6.
  • Choe CU, Flunkert V, Hövel P, Benner H, Schöll E. Conversion of stability in systems close to a Hopf bifurcation by time-delayed coupling. Phys Rev E Stat Nonlin Soft Matter Phys. 2007 Apr; 75(4 Pt 2):046206.
  • Chol PT, Suwannapong N, Howteerakul N. Evaluation of a malaria control project in DPR Korea, 2001-2003. Southeast Asian J Trop Med Public Health. 2005 May; 36(3):565-71.

What's intriguing to me are the non-Korean names attached to some of these papers. I'm curious what the process of collaboration is like between international scientists when one of them is from North Korea (which has well-known external communication restrictions).
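For anyone who wants to repeat the search, PubMed's affiliation field can also be queried programmatically through NCBI's E-utilities. This sketch only builds the query URL rather than fetching it (running the search needs network access, and the hit count will have changed since this was written; the function name is mine):

```python
from urllib.parse import urlencode

def pubmed_affiliation_url(affiliations):
    """Build an NCBI E-utilities esearch URL matching any of the given
    author affiliations, using PubMed's [Affiliation] field tag."""
    term = " OR ".join(f"{a}[Affiliation]" for a in affiliations)
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    return base + "?" + urlencode({"db": "pubmed", "term": term})

url = pubmed_affiliation_url(["Pyongyang", "DPR Korea", "DPRK"])
print(url)
```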

(question originally answered on Quora)



The Prodigy Effect

This post might turn into a collaborative research project. If you're a data person, or an experimental psychologist interested in this, get in touch. I'd love to see if we can't formalize this idea.

Anyway, a recent (excellent) Radiolab episode about the strange, unproven ideas of Henry Heimlich (of Heimlich maneuver fame)--along with this Atlantic article about Linus Pauling's ideas about vitamin mega-dosing as a cancer cure--got me thinking about the "Nobel Disease" and the effect that early successes could have on people.

According to that link:

The Nobel disease is a term used to describe a phenomenon in which Nobel Prize-winning scientists endorse or perform "research" in pseudoscientific areas in their later years. In reality, this "disease" most likely demonstrates that even the most brilliant people are not immune to crank ideas and belief in such ideas will persist to some degree even among Nobelists. It also makes for a convenient argument from authority for lesser cranks, because if a Nobel Prize-winning scientist says it, it must be true!
As we know, we humans are susceptible to misconceptions, logical fallacies, and biases.

What if people who are exceptionally successful from an early age are biased into thinking that they're less fallible, to the exclusion of taking advice and criticism, while their incredible success simultaneously paralyzes other people from offering that criticism?

To put that another way, what I'm calling the "Prodigy Effect" is a two-way cognitive bias wherein a person gives their own innate abilities undue positive weight with regard to their early success in a domain (business, art, science, etc.), while that same success makes others reluctant to offer criticism. This seems like a mix of the hot-hand fallacy, some kind of "beginner's luck" bias, and the gambler's fallacy, with a dose of self-serving bias and the spiral of silence.

The hot-hand fallacy was made famous by the 1985 Cognitive Psychology paper by Gilovich, Vallone, and Tversky, "The hot hand in basketball: On the misperception of random sequences". Basically, this fallacy is the belief that a run of early successes signals a greater chance of success at future attempts, even when the underlying outcomes are essentially random.
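The statistical core of the fallacy is easy to see in simulation: in a sequence of independent fair coin flips, the hit rate right after a streak of hits is no better than the overall hit rate. This little Python sketch (my own illustration, not taken from the Gilovich et al. paper) makes the point:

```python
import random

def streak_stats(n_flips=100_000, seed=42):
    """Simulate fair coin flips and compare the overall hit rate to the
    hit rate immediately following a streak of three hits. Under
    independence the two should match."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]
    overall = sum(flips) / n_flips
    after_streak = [flips[i] for i in range(3, n_flips)
                    if flips[i - 3] and flips[i - 2] and flips[i - 1]]
    return overall, sum(after_streak) / len(after_streak)

overall, after = streak_stats()
# Both rates hover around 0.5: a "hot" streak carries no information
# about the next flip.
```

Streaks still *happen* all the time in random data, which is exactly why we misread them as meaningful.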

One of the first things that came to mind about this idea was this chart of M. Night Shyamalan's movie ratings over time. Each successive movie is essentially rated more poorly than the last:

Or this review of Star Wars: The Phantom Menace (jump to 4:55) about people not calling out George Lucas on some bad ideas:

Both Shyamalan and Lucas had crazy successes with The Sixth Sense and Star Wars: A New Hope, respectively. But the subsequent movies that they directed were widely regarded as pretty awful.

But of course, these are anecdotes, and what counts is data.

My first idea was to take a look at the list of those who have fallen prey to the "Nobel Disease" and to look at their ages compared to the average age of other Nobelists. The hypothesis is that those who are "Nobel Diseased" would be, on average, much younger when they won their first Nobel than the average Nobelist.

Thankfully the Nobel Prize website has an API that provides these data, which I've cleaned and organized here. That file also includes the MATLAB script I used to analyze the data. For people with multiple Nobel prizes, I took their age at their first win.

What I found was that those 14 people who are classified as having the "Nobel Disease" were, on average, 8.4 years younger when they won their first Nobel Prize compared to "normal" Nobelists.

This difference is significant using a two-sample t-test (p = 0.011) and using resampling methods (1,000 surrogate group reassignments, p = 0.012).
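The MATLAB script is in the linked file, but the resampling logic is simple enough to sketch in a few lines of Python. Note that the ages below are made-up placeholders for illustration, not the actual Nobel data:

```python
import random

def permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Two-sided permutation test on the difference of group means:
    shuffle the group labels n_perm times and count how often the
    shuffled difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Placeholder ages at first Nobel win (NOT the real data):
diseased_ages = [44, 47, 51, 39, 42, 45, 48, 40, 43, 46, 41, 50, 38, 49]
normal_ages = [55, 58, 52, 60, 57, 54, 61, 53, 59, 56, 62, 51, 63, 50]
p = permutation_test(diseased_ages, normal_ages)
```

The nice thing about the permutation approach is that it makes no normality assumptions, which matters with a group as small as 14.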

I'd love to do the same analysis on the ratings difference between a director's first film and subsequent films, with the idea that directors who have an early huge hit would then have more poorly-rated future films, but I don't have time to dig into the Rotten Tomatoes API.

This hypothesis could also be tested in an experimental psychology setup, though I don't quite have an idea for the experimental design yet.

Anyway, interesting stuff regardless, I think.

If you'd like to help, let me know!

ResearchBlogging.org Gilovich T, Vallone R, & Tversky A (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology. DOI: 10.1016/0010-0285(85)90010-6


The neuroscience tenure-track

The point of this post is to tell my story and explain, as best I can and in as much detail as possible, my path toward accepting a tenure-track position at UCSD. I will attempt to give as many factual and quantitative details as possible (where appropriate) for people who might be looking to "compare" in their own job hunt.

Brief aside: I'm going to shy away from discussing issues of gender, sex, nationality, class, parenthood, and race because--while I know those factors play a huge role in academia--they're too complex for me to be able to adequately address. However I would like to happily note that the chair of my new home department of Cognitive Science, the chair of the search committee, and the two other computational faculty are all women from a diversity of backgrounds. For those who are curious, I myself am of mixed race from a working class family, but that is as much as I will say on the matter.

We all know that the academic path is variable and uncertain. When I was on the job hunt I remember looking at other people's CVs to see how mine stacked up. I looked at where other post-docs (my competition) published their papers. I looked at their h-indices and grants and conference proceedings. I remember thinking, "what do I have that could possibly make me stand out compared to these amazing scientists?"

We encounter this uncertainty repeatedly in academia:
  • Will I pass my qualifying exams?
  • Can I come up with a novel hypothesis and run an experiment, from start to finish, to test it properly?
  • Do I know enough to do so?
  • Can I successfully defend my thesis?
  • Find a post-doc?
  • Run experiments over and over again?
  • Delegate work?
  • Get a grant?
  • Get a tenure-track job?
  • Run a lab?
Do I even want to keep doing this or just go and make real money? Academia isn't the only job for us. If you don't really want it, you should seriously ask yourself why you're pursuing it.

I've struggled at every step along the way. There's been doubt, for sure, but I've talked about all these things in detail on here already and tried to give advice where possible. I'm also trying to do my part to mitigate the doubt of future scientists (and this post is yet another attempt at that) while being as honest as possible. That's important to me because I feel like there's a lack of clarity in the academic job process.

So let's get to the details (skip to the very end if you just want to compare numbers and CVs).

I first tried applying for faculty jobs in Oct/Nov 2010. I applied to psychology jobs at both Stanford and UCLA. This was very early; I'd just finished my PhD, but three of my thesis projects were published that year, one each in Neuron, PNAS, and Frontiers in Human Neuroscience, each cited (by now) at least 30 times (so they had a decent scientific impact). It was a big year, so I thought I'd "dip my toe in the water" and strike while the iron was hot, so to speak.

Note that no one--no friends, faculty who wrote my letters of recommendation, etc.--actually thought I'd get a job, but I wanted the practice in applying.

(Ah, the academic life before parenthood... I finished writing my job applications on a ferry leaving Kobe, Japan after giving an invited presentation at the International Congress of Clinical Neurophysiology, which was an amazing trip for my wife and me.)

In response to this round of applications I got nothing back but form letter rejections around Feb/Mar the following year. But I honestly didn't mind because I had a good post-doc job and I wasn't expecting anything anyway. I really wasn't emotionally invested (by intent).

Pause here: I want to note that the applications I sent weren't great. The research statements were weak and I didn't do a good job explicitly stating my overall research theme and goals. This is a skill I often fail at, but one whose importance I cannot emphasize enough:

You know your work better than anyone which makes it easy to forget to be explicit and connect all the dots, but you need to connect those dots and frame your work clearly within the bigger picture.

The committee reading your application won't know your subfield like you do. If you apply to a department that has a broad mission like Cognitive Science--which ranges from computer science to anthropology--you need to make sure you explain your work in a way that everyone can understand and appreciate.

Lesson learned.

When the 2011 application deadline came around I didn't bother applying for any jobs because I'd just had my first child a few months beforehand. Furthermore I'd just started my current post-doc and I was really excited to get those projects going. I wasn't in a rush.

For the 2012 applications (again, Nov/Dec) I decided to make a second tentative attempt. My post-doc grant still had a few more years of funding, but several jobs opened that just "felt right". Stanford had a vision science position that sounded like a good fit, UCLA had an opening in psychology, and UCSD had a position in the Department of Cognitive Science for a computational cognitive scientist doing work in data-mining, BCI, etc.

(Can you tell I really wanted to stay in California?)

Given that my brainSCANr paper had just been published and some of the work I had under review at the time was data-mining, UCSD seemed like an excellent, rare fit. Again, it was a bit early because none of my post-doc work was published and, again, I wasn't emotionally committed (meaning I was steeled against failure), but I'd have kicked myself if I didn't try. The cost of entry is very low.

My applications were much stronger this time. While my CV wasn't phenomenally better, by this point I'd begun running my own mini-lab inside my post-doc lab and I had two really strong papers under review at "good journals" (bleeeeech I hate saying that).

In late January I heard that I wasn't short-listed for the UCLA job. Apparently I was "strongly considered", and much-discussed, but in the end they wanted a behavioral neuroscientist (not a cognitive/computational person).

As opposed to the first time I tried applying, this time I got personal letters back from faculty on both the UCLA and Stanford committees ("I decided to write you to thank you personally for your application... I am a real fan of the work you are doing...") Honestly I wasn't even upset; those folks seriously know how to make someone feel good about being rejected!

The night I got the rejection from UCLA, as I sat at home with my wife nursing my wounds (i.e., watching Game of Thrones or whatever), I received an email from UCSD. They asked me if I was still interested in the position to which I had applied. I re-read that email several times to make sure I wasn't misunderstanding. I had my wife read it, too, to verify. I was pleasantly surprised, to say the least. So I set up a phone meeting with the UCSD cognitive science search committee chair and we arranged a date for me to fly down for my job talk.

There were six weeks between that phone call and my interview. Now, I've given a lot of scientific talks, both for large audiences and small, for lay-people and specialists. When I gave my first practice talk at my lab's meeting a few weeks before my job talk, it was okay, but far from great. I made four mistakes that you should avoid:

  1. I did a poor job explaining the narrative flow linking one experiment to the next.
  2. I tried to fit in too much (the "everything plus the kitchen sink" approach). I'm excited about research--my research in particular (hence why I want to stay in academia)--but you have to know which parts to cut. Less really is more.
  3. I didn't leave people with a strong feeling of the overarching theme of my work...
  4. Nor did I adequately leave them with a sense of excitement about the future of my work.

These are hard problems to tackle! But the solution to several of them is essentially: be explicit. The narrative flow is an easy problem to address, because it essentially recapitulates what really happened during the course of your research career: papers n and n+1 are extensions of papers n-1, n-2, ..., n-i.

So talk about them that way.

I practiced my talk in front of other people two more times: once with my PhD lab and once for my wife and a friend. I did a final practice run on my own, speaking slowly to make sure I wouldn't go over time.

Make sure you do not go over time.

Everyone hates that. Don't make everyone hate you because of a dumb issue like amateur time mismanagement. Everyone's time is valuable; don't hold your audience hostage. It's rude.

My visit to San Diego was not without some frustrations. It was scheduled for Wednesday and Thursday, with my talk on Wednesday at noon. The visit was to begin with an 8am breakfast, followed by meetings with the search committee chair and departmental chair, and then my talk. After the talk there were more meetings, dinner with some faculty, and then another full day of one-on-one meetings with faculty.

Unwisely (in hindsight) I decided to fly down late the night before my talk so that I could help my wife with getting our son from daycare, fed, bathed, etc. My flight was scheduled to arrive in San Diego at 10:40pm. I assumed I'd grab a taxi to my hotel near UCSD's campus and be in bed by midnight. No problem.

Unfortunately, as we were landing the pilot suddenly kicked the engines into high gear (or whatever the aeronautical equivalent term is) and began to ascend. After circling San Diego's airport for about 30-40 minutes we were informed that, because of dense fog, we were being diverted to Los Angeles. We finally landed at LAX at around 11:30pm, after which we were to wait around for buses that would then drive us to the San Diego airport.

Now I was all carry-on so I was ready to go as soon as we landed. But I did the quick math: by the time all the other passengers got their luggage, the mob figured out what to do, boarded the buses, and finally got on the road, it would be about 1am. Add a 2-hour drive to the San Diego airport, and then another 30-40 minutes back north to UCSD... that would get me into my hotel room at around 3-4am. I opted out of that arrangement.

After calling a few friends in LA to see if I could catch a late-night ride to San Diego (and offer up my hotel room to my generous driver for a night) I was reminded that the following morning (the morning of my job talk) was my good friend's memorial service. A close friend of ours had recently, unexpectedly, died, so all of my closest friends in LA were attending his services (which I had to miss for the job talk... ugh).

Thankfully I also work for a car company so I requested an Uber to come pick me up. I explained to the poor guy who came to get me that this wasn't a quick 20-minute ride (long drive, late night, job interview, etc.). I told him I didn't expect him to do such a long trip so late at night in the middle of the week. Amazingly the guy agreed to drive me anyway and was a god damn champ. As I apologized for the bumper-to-bumper 1am Thursday LA traffic he just kept saying, "my job is to get you to your job interview well-rested, so don't worry."

By the time I got checked into my hotel it was about 2:00am. I figured I'd get up at 7:30, take a quick shower and shave, throw on my suit, and meet for breakfast in the lobby at 8am. But whoever stayed in the hotel room before me had different plans, so, surprise: the alarm in my room went off at 6:15am (I was using my phone's alarm plus a wake-up call; I hadn't checked the room's alarm clock). Once I figured out how to turn the damn thing off I was awake. Too excited from the cortisol/adrenaline rush to go back to sleep now!

So I took my time, got ready, had some coffee, checked my email, and went for a walk around the hotel (which was beautiful, by the way).

Needless to say I had no problem getting to my 8am breakfast on time. But I was looking at a non-stop 13-hour day of interviews, job talk, meetings, and socializing on about 3-3.5 hours of sleep (and knowing all of my closest friends were grieving for another very close friend).

When I first did that simple math I suddenly became very thankful for the last 18 months of parenthood for teaching me how to cope with sleep deprivation.

Surprisingly I was so thrilled to be there that I don't really remember feeling overly tired. Thankfully the department had a lot of water (and a nice welcome bag with UCSD CogSci schwag) waiting for me, so I just kept hydrated and kept talking!

If you're curious, here are the slides from my talk (not including all the extras I tack onto the end in case I get follow-up, clarification, or methodological questions... remember to always be a step or two ahead!) You'll note that I'm very text-light in my talks. I know my slides well so I use the figures and images on the pages to prompt me (no written or digital notes). I'm also a one-figure-per-page speaker when I can get away with it.

If you want to hear me give a very rough version of this talk, here's a video from a talk I did at Berkeley City College about 6 months beforehand:

As a point of comparison for you bean counters, here's my CV, frozen at the time I got the job offer from UCSD. My h-index was 8. I didn't have any Nature or Science papers (but I did have a PNAS and Neuron paper) nor any fancy grants (no K99, NRSA, etc.) My most highly-cited primary research paper was an open-access paper published in Frontiers in Human Neuroscience. I also had a few methods papers, including the more... non-standard... brainSCANr paper.

The other unusual thing I included was my "crowdsourced letter of recommendation", which definitely seemed to have gotten people's attention. My extracurricular activities (zombie neuroscience, blogging, outreach, etc.) weren't really mentioned (in the positive or the negative) so I'm not sure what the committee thought of all that.

There you go.

I hope this helps give some idea of what this process is like. Remember that there really is a lot of luck involved. The departments need to be looking for someone who fills a niche they're lacking. While you can't control your academic fate because of this "luck" factor, you can do a few things to tip the balance in your favor:
  1. (Obviously) do good science in a field you love.
  2. Network and get your name out there: speak at conferences, run symposia, reach out to faculty at universities when you're traveling to see if you can give a department colloquium or lab meeting. But I think physical travel is becoming less important as social media and email grow ever more pervasive (invasive?).
  3. Make sure other people know about your research (send PDFs of your papers to colleagues who've inspired your research).
  4. Keep your shit together and remember that this is a job and not your life.


Hello San Diego!

I've got a big announcement. It's an "I've been wanting to say this for months," kind of announcement.

This will soon be my new office:

Negotiations are finally completed. The offer has been accepted.

Beginning in 2014 I will be joining the University of California, San Diego as a tenure-track Assistant Professor of Computational Cognitive Science in the Department of Cognitive Science and the Neurosciences Graduate Program.

When I started this blog on 2009 December 22, I told my wife "I think I'm gonna start a blog". That's it. I just wanted a place (outside of the bar) to semi-formally exercise my neuroscientific thoughts. A place where I could talk about science freely and openly, using my real identity, to try out weird ideas with fewer restrictions compared to peer-review.

I wanted a place where I could share--in however small a way--the love, joy, and excitement for science and for the job that brings me so much happiness (and its fair share of frustrations...)

At the time I was still working toward finishing my PhD. Now, nearly four years later, I'm still as in love with this career as I was then.

In the intervening years I've written around 150 posts here... about one every 10 days or so. Not a lot but, as I like to say, I'm a "scientist who blogs" not a "science blogger". Subtle, but important, difference.

Since I've been here I've gotten my PhD, done my "startup sabbatical", fulfilled my genetic duties, published a few papers, made brainSCANr, done the zombie brain thing, and done two year-long post-docs, one each at Berkeley and UCSF.

UCSD is part of my academic heritage and the founding home of cognitive science. I am extremely proud to be calling it my new home.

I've made jokes about academia before: the tenure track is fraught with uncertainty, but academic science is where my heart is and I can't wait to begin this new part of my career.

This blog has put me in touch with a lot of new people. One of my favorite things is when people awkwardly come up to me at conferences to tell me they've read a post of mine. On the internet, everyone can hear you scream, but you're never sure if anyone actually gives a shit (as they like to say). So it's nice to know that cool people are interacting with my words sometimes.

I'd like to make that interaction less one-sided. I'm going to need some awesome people to help me set up a lab!

If you're a student reading this and thinking about doing a PhD in cognitive science or neuroscience, or if you're a PhD student finishing up and looking for future post-doc opportunities, shoot me an email or grab me at a conference (I'll be at SfN in San Diego this November). San Diego is an amazing city and UCSD is one of the top cognitive science and neuroscience research facilities in the world.

My pubmed/Google Scholar citations will give you a decent idea about my research, but I've been moving into new worlds of data mining and analytics, and I would be happy to share the drafts of the four(!) papers currently under review.

Check out my CV for some idea of the other "stuff" I do.

So unless you hate amazing weather and working within walking distance of beautiful beaches, consider working with me! Obviously you'll get bonus points for name-dropping my blog; now that I'm a professor I need to cultivate an inflated ego :D

In all seriousness, thanks to everyone who made this academic process seem less uncertain and lonely; and a huge thank you to everyone who contributed to my crowdsourced letter of recommendation. As I told the search committee:
Much of my outreach and education efforts exist in an invisible space where metrics and assessments cannot easily reach. To try and give an index of my extracurricular outreach, education, and science communication efforts I reached out to my digital network of people who read my blog, watch my videos, and follow my writings on twitter and other social networks. I asked them to submit to me a statement of what—if anything—my blogging, public speaking, etc. has meant to them.
I received 22 letters in support of my job application; it went over very well. It's impossible for me to honestly express how overwhelmingly awesome it was to get letters and statements from you all.

I'm going to follow this announcement up with a more detailed story about the actual search process for those interested in the fine details.

But look at me still talking when there's Science to do. I've got experiments to run and there is research to be done...


Non-linear systems

(Another answer of mine from Quora)

Non-linear dynamics are fascinating, if for no other reason than that so many statistical models are linear, so testing for non-linearity often requires a stronger theoretical foundation.

I'll discuss these topics in rough order of my personal favorites.

Oscillatory Dynamics

Oscillatory dynamics are the best. The simplest to understand is a pendulum, but add a second pendulum to the end of the first and suddenly the non-linearities lead to some crazy complexity. Check it out:

I love this because oscillations play an important role in neural communication, perception, etc. This is the focus of my neuroscience research and is, of course, the name of this blog. As an example, this figure from a paper of mine currently under review shows a model wherein neuronal oscillations coordinate communication between brain regions. This is nice because it allows for "cleaner" communication in a noisy environment (think about a radio):



Resonance is closely related to oscillatory effects, and it can nicely demonstrate emergence, in contrast to some of the chaotic effects of other non-linear systems. Check out this simple demonstration of resonance and emergence:


Simulating 10^6 neurons sending signals to one another can also lead to interesting emergent "waves" of activity caused by the non-linearities of interneuronal communication:
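As a toy version of that kind of coupled-oscillator simulation, here's a sketch of the classic Kuramoto model (a standard textbook model, not the model from my paper): oscillators with different natural frequencies, weakly pulled toward the population's mean phase, spontaneously synchronize once the coupling is strong enough.

```python
import math
import random

def kuramoto_sync(n=50, coupling=2.0, dt=0.01, steps=5000, seed=1):
    """Euler-integrate the Kuramoto model. Each oscillator's phase
    advances at its own natural frequency plus a mean-field coupling
    term pulling it toward the population's mean phase. Returns the
    final order parameter r in [0, 1] (1 = perfect synchrony)."""
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # Mean field: r * e^(i*psi) = average of e^(i*theta_j)
        re = sum(math.cos(p) for p in phases) / n
        im = sum(math.sin(p) for p in phases) / n
        r, psi = math.hypot(re, im), math.atan2(im, re)
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

# With coupling above the critical value, r climbs toward 1 (synchrony);
# with coupling = 0 the phases stay scattered and r stays small.
```

The abrupt onset of synchrony as you dial up the coupling is itself a non-linear (phase-transition-like) effect, which is part of why this model shows up so often in neuroscience.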

Power Laws

Social networks are so hot right now. Social networks follow a power law (think the six degrees of Kevin Bacon effect): some people (influencers) know a lot of people, whereas most people know relatively few. This is the infamous "long tail" effect.

Without getting into too many details, power laws are "scale-free" in that they have no preferred length scale. This (roughly) means you can "zoom in" on any portion and retain the original complexity. By definition, this is a fundamental feature of fractals.

I find fractals and power laws super interesting not only because of how prevalent they are in neuroscience, but also their real world consequences are fascinating. The classic example is the question of How Long Is the Coast of Britain?
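Mandelbrot's answer fits in one formula: for a fractal boundary of dimension D measured with ruler length G, the measured length follows Richardson's empirical law L(G) = M * G^(1-D), so for D > 1 the coast gets longer without bound as the ruler shrinks. Here's a sketch using the Koch curve's known dimension (D = log 4 / log 3 ≈ 1.26) as a stand-in coastline:

```python
import math

KOCH_DIM = math.log(4) / math.log(3)  # ≈ 1.26, a fractal dimension > 1

def measured_length(ruler, prefactor=1.0, dim=KOCH_DIM):
    """Richardson's empirical law: L(G) = M * G**(1 - D)."""
    return prefactor * ruler ** (1 - dim)

lengths = [measured_length(g) for g in (1.0, 0.1, 0.01, 0.001)]
# Each 10x finer ruler multiplies the measured length by
# 10**(D - 1), roughly 1.83: the "length" never converges.
```

For a smooth (D = 1) boundary the exponent vanishes and the measured length settles down, which is exactly the difference between a circle and a coastline.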

The famous Zipf's Law is a power law distribution concerning word frequency. Some words appear a lot in language, while many words are very infrequently used. Here's a plot of the English language Wikipedia's word frequency:

Another distribution that's received a lot of attention is Benford's Law, which states that the leading digits of numbers in many data sets are not evenly distributed, but instead follow a logarithmic curve. For example, in base 10, the digit one should, if all nine leading digits were equally likely, appear about 11% of the time; in many real data sources it appears around 30% of the time. This fact is actually used to help detect fraud in, for example, tax returns.
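Concretely, Benford's law says the leading digit d appears with probability log10(1 + 1/d), which is ≈ 30.1% for d = 1. A minimal sketch checking a scale-spanning series (powers of 2, which are known to follow Benford closely) against that prediction:

```python
import math
from collections import Counter

def benford_prob(d):
    """Benford's predicted frequency for leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def leading_digit_freqs(values):
    """Empirical frequency of each leading digit for positive integers."""
    counts = Counter(int(str(v)[0]) for v in values)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Powers of 2 span many orders of magnitude and follow Benford closely:
freqs = leading_digit_freqs([2 ** k for k in range(1, 1001)])
# freqs[1] comes out near benford_prob(1) = log10(2) ≈ 0.301, i.e. ~30%,
# not the ~11% you'd expect if all nine leading digits were equally likely.
```

A fraud screen works the same way in reverse: compare the empirical leading-digit frequencies of, say, line items in a tax return against benford_prob and flag large deviations.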

One of my favorite answers on Quora, by Michael Wolfe to Engineering Management: Why are software development task estimations regularly off by a factor of 2-3?, embodies power laws in human perception very nicely. We tend to subjectively experience time as the log of objective, actual time, meaning we're really, really bad at estimating how long something took, or how much time has passed. Check out the results from this paper:

In the words of the authors, "subjective estimates of future time horizon change less than the corresponding change in objective time." Here's the same data, but log-transformed to highlight the power law effect:

U-shaped curves

These are ubiquitous in learning, economics, and pharmacology. The classic is the Kuznets curve:

U-shaped (or inverted-U-shaped) curves, such as the one above, are also highly prevalent in pharmacology. This is captured by the idea of hormesis: "It is conjectured that low doses of toxins or other stressors might activate the repair mechanisms of the body. The repair process fixes not only the damage caused by the toxin, but also other low-level damage that might have accumulated before without triggering the repair mechanism.":

Other non-linearities

Non-linear relationships can be modelled using any number of non-linear equations. For example, I've got data suggesting that cognitive slowing is not a linear function of age, but rather accelerates with age:
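One quick way to check "accelerating rather than linear" in data like this is to look at discrete second differences: they're zero for a straight line and positive throughout for an accelerating (convex) curve. A sketch on made-up reaction-time numbers (not my actual aging data):

```python
def second_differences(ys):
    """Discrete second differences of an evenly spaced series:
    all zero for a straight line, all positive for a convex
    (accelerating) curve."""
    return [ys[i + 1] - 2 * ys[i] + ys[i - 1] for i in range(1, len(ys) - 1)]

ages = list(range(20, 81, 10))
# Made-up reaction times (ms): one series slows linearly with age,
# the other accelerates (quadratic in age).
rt_linear = [350 + 2.0 * (a - 20) for a in ages]
rt_accelerating = [350 + 0.05 * (a - 20) ** 2 for a in ages]

# second_differences(rt_linear) is all zeros;
# second_differences(rt_accelerating) is all positive.
```

With real (noisy) data you'd fit linear and quadratic models and compare them formally, but the second-difference check is a good first eyeball test.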

Non-linear systems are, uhhh, awesome.

ResearchBlogging.org
Mandelbrot B (1967). How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension. Science. DOI: 10.1126/science.156.3775.636
Zauberman G, Kim BK, Malkoc SA, & Bettman JR (2008). Discounting Time and Time Discounting: Subjective Time Perception and Intertemporal Preferences. Journal of Marketing Research. DOI: 10.1509/jmkr.46.4.543
Calabrese EJ (2008). U-shaped dose response in behavioral pharmacology: historical foundations. Critical Reviews in Toxicology, 38(7), 591-8. PMID: 18709567