Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.



Science Creates Wonder

Why do we have children?

I'm generally not the kind of person to make a "think of the children" argument because it's an argument from emotion that can subvert logical thinking.

However, that's exactly why I'm using it here: I believe that's the language our House of Representatives speaks. According to a report from Science yesterday,
...over the course of two contentious hearings, the new chairman of the House of Representatives Committee on Science, Space, and Technology floated the idea of having every [National Science Foundation] grant application include a statement of how the research, if funded, "would directly benefit the American people."
Chairman Smith, I ask you:

Why do we have children? Why do you have children?

Is it because they "directly benefit" us? Because honestly they're expensive. They're a drain on personal and national resources. They're exhausting. They're time consuming and prevent us from being optimal workers.

Sure, some of them grow up to be "productive members" of society, but some of them will also grow up to become murderers or rapists.

Why would we, as a country, take such a chance? Viewed from a myopic cost/benefit analysis perspective, child-bearing certainly doesn't seem to fit any criteria for "direct benefit" that I can imagine.

But Mr. Chairman, not every decision we make as a society should be based on immediate perceived gains. When I pick up my son in the evening--exhausted and after a long day of work--I'm not thinking about how much his daycare costs, or how I might not get much sleep tonight because he's teething. When we're running around together in the park laughing and playing funny toddler games the benefits I receive are intangible. What he adds to my life cannot be defined in a line-item budget by some committee looking for "direct benefits".

Science isn't about making money or immediately improving our GDP. Sure, the NIH annual budget is $30 billion. Research costs our country a lot of money. But in return the United States has a life sciences industry that employs 7 million people and returns $69 billion annually. The United States is the country it is today because for the last several decades it has led the world in research spending (although that claim will soon no longer hold true).

Of course science as practiced by people has its flaws, but those human flaws get smoothed out and corrected over time. Science outlasts our egos and pettiness. Even those of our politicians.

Science is a cultural endeavor. It provides us as a nation and society with so much more than any poorly-defined "direct benefits to the American people". Science gives us hope and excitement; we've cured horrific diseases while creating amazing technologies. It has saved lives and enriched them.

Science gives us wonder. Every day there is another groundbreaking scientific finding that propels us as a species ever farther toward truth and understanding. Headline after headline expounds the great strides being made in neuroscience, genetics, cosmology, and so many other scientific fields.

Even the most "frivolous"-seeming scientific projects may hold the key to unlocking the mysteries of the brain, of life, and of our universe. If you don't believe me, allow me to offer but a few examples from a very long list:
  • Studying monkey social behaviors and eating habits led to insights into HIV. (Radiolab: Patient Zero)
  • Research into how algae move toward light paved the way for optogenetics, a method that uses lasers to control brain cells. (Nature 2010 Method of the Year)
  • Black hole research gave us WiFi. (ICRAR award)
  • Optometry informs architecture and saved lives on 9/11. (APA Monitor)
  • The Search for Extraterrestrial Intelligence's (SETI) distributed-computing project SETI@home paved the way for citizen science and recent breakthroughs in protein folding. (Popular Science)
  • Studies about slime mold informed better designs for city planning and infrastructure. (Nature)
I daresay a governmental panel charged with deciding whether or not those basic scientific findings "would directly benefit the American people" would be hard pressed to answer "yes", yet the costs to our culture, health, and happiness would be so high if they were not funded.

When I, a father and a scientist, read about how you and your Congressional colleagues, including House Majority Leader Eric Cantor, are trying to restrict scientific funding by creating goofy websites that ask non-scientists to submit "questionable" NSF grants that should lose their funding, I cannot help but think that you do not truly believe that a shortsighted accounting of returns is what motivates us scientists in our work.

Please, next time you consider cutting research spending or creating yet another layer of oversight, think of your children and what budgetary items they add to your life.

Because sir, you cannot legislate innovation and you cannot democratize a breakthrough. You can, however, guide the system to maximize the probability that we scientists can make breakthroughs occur. We scientists stand on the shoulders of giants and gaze out in wonder. The more research you fund, the more wonder we uncover. Because the role of science in a society, just like the role of children in our lives, is to enrich our experience in this world through long-term awe and wonder.

Short-term costs be damned.


Primer on Sleep

This is my answer to the question over on Quora, What is the neurological definition of sleep?

Sleep has a common, folk definition (the one we all know) which is roughly a state of reversible unconsciousness.

Neurologically speaking, sleep is generally defined by macroscale features of the surface EEG. In other words, doctors put passive recording electrodes on your head to record the average electrical activity of millions of neurons to see if there's neural evidence for "wakefulness", drowsiness, or the various stages of sleep. While this can be done algorithmically via a computer, these features are usually pretty easy to spot by eye.
(source: NIH NINDS - Brain Basics: Understanding Sleep)

As sleep progresses the brain is said to increasingly "synchronize", meaning large groups of neurons begin firing at around the same times in a periodic fashion, which causes those oscillating peaks/troughs seen in Stage 4 sleep above.
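
To make "synchronization" a bit more concrete, here's a toy sketch in Python. This is not real EEG: the two traces are just simulated sine waves plus noise with made-up amplitudes, and `band_power` is a hypothetical helper. It shows why the large, slow delta oscillations of deep sleep dominate the spectrum relative to waking alpha:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power between lo and hi Hz, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 250                        # sampling rate (Hz)
t = np.arange(0, 30, 1.0 / fs)  # one 30-second scoring epoch
rng = np.random.default_rng(0)

# "Awake": low-amplitude ~10 Hz alpha plus background noise
awake = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)
# "Deep sleep": large, synchronized ~2 Hz delta waves plus noise
deep = 4.0 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 1.0, t.size)

for label, sig in [("awake", awake), ("deep sleep", deep)]:
    delta = band_power(sig, fs, 0.5, 4)  # delta band
    alpha = band_power(sig, fs, 8, 12)   # alpha band
    print(label, "delta/alpha power ratio:", round(delta / alpha, 1))
```

Real sleep scoring uses more frequency bands, more channels, and EMG/EOG besides, but the delta/alpha ratio captures the gist of why Stage 4 looks so different from wakefulness.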

This occurs up to the point of rapid eye movement (REM) sleep, the stage of "active" sleep that we associate with dreaming. While we're unconscious during the other stages of sleep, we're debatably not unconscious during REM, though our bodies are atonic. "Atonic" means "without tone": a lack of muscle tone, meaning our bodies have gone limp. In fact, in sleep research scientists record muscle activity along with neural activity to help differentiate between wakefulness and REM sleep, because the rough EEG looks fairly similar between the two states.

(source: Scholarpedia - Neurobiology of sleep and wakefulness)

During a standard night's sleep we move back and forth between these different stages many times, often briefly revisiting an awake stage (though we may not remember it).

(source: Scholarpedia - Neurobiology of sleep and wakefulness)

There are some who argue that our society's current habit of sleeping in one 6-10 hour stretch is a relatively modern construction.
One of the first signs that the emphasis on a straight eight-hour sleep had outlived its usefulness arose in the early 1990s, thanks to a history professor at Virginia Tech named A. Roger Ekirch, who spent hours investigating the history of the night and began to notice strange references to sleep. A character in the “Canterbury Tales,” for instance, decides to go back to bed after her “firste sleep.” A doctor in England wrote that the time between the “first sleep” and the “second sleep” was the best time for study and reflection. And one 16th-century French physician concluded that laborers were able to conceive more children because they waited until after their “first sleep” to make love. Professor Ekirch soon learned that he wasn’t the only one who was on to the historical existence of alternate sleep cycles. In a fluke of history, Thomas A. Wehr, a psychiatrist then working at the National Institute of Mental Health in Bethesda, Md., was conducting an experiment in which subjects were deprived of artificial light. Without the illumination and distraction from light bulbs, televisions or computers, the subjects slept through the night, at least at first. But, after a while, Dr. Wehr noticed that subjects began to wake up a little after midnight, lie awake for a couple of hours, and then drift back to sleep again, in the same pattern of segmented sleep that Professor Ekirch saw referenced in historical records and early works of literature.
(source: New York Times - Rethinking Sleep)

The fact that our sleep/wake cycle seems to occur on a fairly regular 24-hour cycle led many to suppose that over evolution the light/dark cycle caused by the sun shaped our biology. In the 1970s scientists discovered a gene in fruit flies (Drosophila melanogaster, a common organism used in genetic research) dubbed "Period", which is thought to be the gene controlling the underlying molecular mechanism for this 24-hour cycle (called the circadian rhythm). An analogous gene called CLOCK was then discovered in mammals in the 1990s.

Okay, but what's happening in the brain during sleep? Unsurprisingly it's complicated, but very simplistically: neurons in the brainstem that form what's called the "ascending activating system" actively maintain the neural systems that support consciousness (which is why a blow or damage to the brainstem can cause unconsciousness or coma).

(source: Scholarpedia - Neurobiology of sleep and wakefulness)

During sleep, activity in this activating system slows and consciousness-supporting neural activity decreases.
(source: Scholarpedia - Neurobiology of sleep and wakefulness)

All that said, even though our neural systems do not support consciousness during deep sleep, our brains are not inactive! In fact, recent work by Daniel Bendor and Matt Wilson, "Biasing the content of hippocampal replay during sleep", published in Nature Neuroscience in 2012, shows quite the opposite:
The hippocampus is essential for encoding self-experienced events into memory. During sleep, neural activity in the hippocampus related to a recent experience has been observed to spontaneously reoccur, and this 'replay' has been postulated to be important for memory consolidation.
Our brains are never totally quiet! Amazingly, certain animals, such as cetaceans (like dolphins), never seem to be "fully" asleep. These animals have to continually resurface in order to breathe. But they don't forgo sleep entirely either... they just appear to "rest" half their brains at a time.

Wow after writing this summary, I kind of regret not getting into sleep research. This stuff is really cool.

Bendor D, & Wilson MA (2012). Biasing the content of hippocampal replay during sleep. Nature neuroscience, 15 (10), 1439-44 PMID: 22941111


How does my online presence impact my citations?

Someone over on Quora asked this question about me a while back; I think my readers here would appreciate the answer: "Has Bradley Voytek's online presence had an impact on his number of citations?"

Because n = 1 and time is unidirectional, there's no way for me to run an experiment to answer this question. However, based upon inferences and converging evidence, I would weakly conclude that my "online presence" has probably resulted in an increase in my citation count.

As of right now I have about 17 peer-reviewed journal publications, which have been moderately cited (about 300 total citations). My four "main" publications, on which I was the primary author, about which I wrote lay-level blog posts that I shared on social media, and which were published at least two years ago, have been cited a total of 109 times, for an average of 27.25 citations per article. According to an analysis by Times Higher Education of articles published from 2008-2010, in this two-year period a neuroscience article will on average be cited 8.09 times. On the surface this means my papers show a more than three-fold higher citation rate than the average neuroscience article.
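
For transparency, the back-of-the-envelope arithmetic is just:

```python
# The numbers from the post, recomputed.
my_citations = 109    # total citations to the four "main" papers
n_papers = 4
field_average = 8.09  # mean citations for a 2008-2010 neuroscience article

per_paper = my_citations / n_papers        # 27.25 citations per paper
fold_increase = per_paper / field_average  # ratio to the field average
print(per_paper, round(fold_increase, 2))
```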

Keep in mind that citation rate is not normally distributed and has a heavy tail, and only about 50-70% of peer-reviewed papers are ever cited at all.

The second piece of evidence that suggests my online presence has improved my citation count comes from an informal analysis I did at the journal level. This analysis shows that the number of Facebook page "likes" for a journal, as well as the number of Twitter followers the journal has, correlates with the total number of citations the journal receives. Thus, while the causality can certainly be questioned here, one could infer that an individual with more Twitter followers would also receive more citations.
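
As an illustration of that kind of journal-level correlation (with entirely made-up toy numbers, not the actual data from my analysis), the computation might look like:

```python
import numpy as np

# Made-up follower and citation counts for five hypothetical journals.
followers = np.array([1_000, 5_000, 20_000, 60_000, 150_000])
citations = np.array([800, 2_500, 9_000, 21_000, 60_000])

# Correlate on log scales, since both quantities are heavy-tailed.
r = np.corrcoef(np.log10(followers), np.log10(citations))[0, 1]
print(round(r, 2))
```

A high r here would say the two quantities rise together, but, as noted above, it says nothing about which direction the causality runs.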

The final piece of evidence is from a recent paper by Jason Priem, Heather Piwowar, and Bradley Hemminger, "Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact". Among many of the fascinating findings in this report is that article-level citation count correlates with "social reference saves" on sites such as Mendeley and CiteULike.

The authors also note that, "Articles in the most-tweeted quartile after one week were eleven times more likely to be in the most-cited quartile after two years."

To reiterate, there is no experiment I can run on myself to truly know the answer to this question, but the data converge upon a weak suggestion that yes, my online presence has probably increased my citation count.


Colbert gets fake EEG'd

Hot on the heels of the huge push this week about the NIH BRAIN project, Francis Collins, Director of the massive NIH, made an appearance on the always-awesome Colbert Report (which several friends pointed me to).

During the interview Collins has Colbert place an EEG cap on his head and shows off what's supposed to be Colbert's brain's electrical activity.

Except it's clearly not. At least, not really happening live anyway.

I know a thing or two about EEG; over the years I've personally collected EEG data from more than 100 people. I'm willing to give 5:1 odds that the data being shown are pre-recorded (and probably not from Colbert).

This isn't a big deal. But I find it amusing because, remember, Collins is the head of an institute that doles out $30.9 billion each year in biomedical research funding and just announced with the White House a huge $100M project specifically geared toward understanding the brain.

And with all that money Collins still can't get good enough tech to do an honest real-time EEG demonstration.

Here's Colbert with the cap on:

See that thing hanging by his eye? That's an external electrode used to record eye movements and to use as a signal reference. Not hooked up. So what's the signal being referenced to? Maybe another electrode?

Here's a data trace:

The big fluctuations behind Colbert's head are visual alpha (see here for a primer on alpha). The high-frequency activity behind Collins (in the lower left) is muscle artifact, probably picked up by the frontal electrodes; remember, muscles work off electrical activity too, and our face muscles are much closer to the EEG electrodes than the brain is, so muscle activity shows up as a huge artifact in EEG.

In the upper left you can see some lower-frequency (maybe 2 Hz?) shifts. While I can't see a time scale, knowing that the activity behind Colbert is about a 10 Hz oscillation (because that's what the visual cortex does) gives me a rough time scale. This low-frequency activity tells me the data aren't being filtered below at least 2 Hz.
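
That time-scale trick can be sketched numerically. This is purely illustrative (simulated traces, not the show's data): if one trace's dominant rhythm is known to be ~10 Hz alpha, it calibrates the clock, and the FFT peak of any other trace recorded on the same clock gives its frequency:

```python
import numpy as np

fs = 250                        # assumed sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
alpha_trace = np.sin(2 * np.pi * 10 * t)  # the ~10 Hz "reference" squiggle
slow_trace = np.sin(2 * np.pi * 2 * t)    # the mystery low-frequency shifts

def peak_freq(signal, fs):
    """Frequency of the largest FFT peak in a signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal))
    return freqs[spectrum.argmax()]

print(peak_freq(alpha_trace, fs))  # 10.0
print(peak_freq(slow_trace, fs))   # 2.0
```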

This all tells me several things:
  1. If this were real data, Collins is boring the shit out of Colbert given how high that visual alpha activity is (visual alpha activity increases when someone is drowsy or has their eyes closed).
  2. The data are not being bandpass filtered aggressively, so high-frequency artifacts (muscle and movement) and low-frequency artifacts (eye movements and blinks) should be visible.
  3. The lack of crazy data noise when Colbert turns his head tells me these data aren't real. When you move your head like Colbert the physical movement of the electrodes and wires, along with all the muscles used to move your head, shows up as a huge artifact.
  4. The lack of any eye blinks or movement artifacts picked up by the frontal electrodes (near the eyes) tells me these data aren't real.
Pretty wild what you can infer from a bunch of squiggles once you know what to look for, huh?


Why I Chose Academia

A little over a week ago I ran two panels at a conference called Beyond Academia organized by a group of UC Berkeley PhD students and post-docs.

This was a great conference, particularly because this is the right time, and the Bay Area is the right place, for people with strong quantitative skills looking for opportunities outside of academia. The startup culture, the density of exciting technical work, and a highly educated populace offer a lot of options for those looking.

The desire to "jump ship" is further compounded by the terribly poor pay for post-docs and grad students. Most of our pay is set nationally by the NIH and is not adjusted for cost-of-living differences, which means that NIH-funded post-docs in San Francisco (with a median rent of $1363/mo) get paid the same as post-docs in Iowa City (with a median rent of $734/mo).

After however many years of education for a PhD, my UCSF take-home pay after federal and state taxes, etc., is about $2800/mo. I'm a father; if I wanted to use UCSF daycare and live in UCSF post-doc housing I would be paying $1998/mo for daycare and at least $1099/mo for a studio. Imagine if I were a single parent: that would make my net take-home pay negative $297/mo.
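
The grim arithmetic, using the post's numbers:

```python
# The monthly post-doc budget from the post, in one place.
take_home = 2800    # UCSF post-doc take-home pay after taxes ($/mo)
daycare = 1998      # UCSF daycare ($/mo)
studio_rent = 1099  # cheapest UCSF post-doc studio ($/mo)

net = take_home - daycare - studio_rent
print(net)  # -297
```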

Ivory Towers indeed.

You can see why this was such a highly-attended conference and why programs like Insight Data Science are exploding in popularity right now (disclosure: Jake Klamka of Insight is a friend of mine).

As some of you may know, I've done some work outside of academia with a fantastic, exciting company. My work with Uber has been fascinating: they're working on hard problems, they've got a lot of cool data for me to play with, and I really believe in the utility of the product and what it provides. Furthermore, unlike most startups they have a strong revenue stream; I was offered a lot of stock options and an incredibly high salary (for academics) to come on board.

Nevertheless, for a variety of reasons I chose not to leave academia to work with them full-time (though I continue to work with them, albeit in a different capacity now). My primary reason for staying in academia was simple: I love neuroscience.

Seriously, I love my job. I love science. I love discovery and uncertainty and failure and the challenges and joys that go along with it.

The decision not to join Uber full-time was made with full knowledge of just how much money I would be sacrificing (I know what their revenue graphs look like), both in terms of salary and long-term stock performance.

This is partly, of course, a decision from privilege: how many people even have the chance to forgo riches in favor of their passions? I'm very grateful to my wife, and to Uber's continued financial support, that I even get to make a decision like this. It was a very hard choice, largely because of how not-well-off my family is. They're not going to be able to retire for a long time (and not because they haven't saved enough to pay off their SUV or whatever other luxuries the US upper middle-class confuses for necessities). My father was incredulous (but supportive).

My choice was a gamble. What if I don't get the faculty job I want? Do I branch out on my own to try my hand at the start-up world?

What if all of the interesting neuroscience problems stop being in academia and move to industry? Like Ian Sommerville said in this fantastic post:
In the 1970s, there is no doubt that the most exciting work in practical computer science and software engineering was going on in universities and research labs. Working in industry mostly meant programming in FORTRAN or COBOL or doing ‘systems analysis’ – I was never very sure what that meant. Now, the challenging software engineering problems all stem from scale – dealing with vast number of computers, building systems with thousands of distributed components and so on. Universities, unfortunately, simply cannot afford to create such large-scale experimental environments and most of the leading-edge work concerned with scale has moved to companies such as Google and Amazon.
That could happen. I could be backing the wrong horse on this one, but I've made my decision for now and few decisions in life are final.

But at the end of the day I get to spend my evenings in the park with my son, work on problems very few people can, and get to work with a lot of freedom.

And I now know, down to the exact dollar, how much those freedoms are worth to me.