darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.


8.8.14

The language of science

The Washington Post has a headline that reads, verbatim, "A toddler squeezed through the White House gate and caused a security alert. Seriously."


Isn't the Washington Post a "real" newspaper with, like, journalistic standards and stuff? Does the headline really need the "Seriously." part, just in case we all thought they were kiddingsies?

Given how little actually annoys or bothers me, I'm surprised at my own internal response to this. I'm all for the evolution of language, but this seems weirdly out of place.

If I tried to write a scientific paper titled "Oscillations are fucking rad and you wouldn't believe the four behaviors they control!" it might more accurately capture my personal feelings and excitement, but I wouldn't do it because it's such a culturally narrow, biased way of talking about the topic.

What I mean is that the language we use conveys information not just through the words themselves, but through the combinations of words, their structures, and so on, all of which provide context about when something was written and its emotional content.

Science papers are often (rightly) criticized for being dry, but that "blandness" is a cultural artifact of an attempt at impartiality and a recitation of facts with minimal emotional bias.

The fact that major news outlets are dropping even the pretense of this is what bothers me, I guess?

27.6.14

Biologists, stop trying to make "moon shot" a thing

Kennedy's "moon shot" was a huge success with regards to throwing a ton of government money at a problem to facilitate and expedite a solution. Getting humans on the moon--and then getting them back home, all using decades-old computer technology--was an incredible feat of engineering, cooperation, and technology development.


But it was a well-defined problem with a clear goal. Here's how it works:

"Hey look, there's the moon. It's pretty far away. Let's go to there."
"Cool! We can get stuff into space. Let's see if we can't make that stuff better so that humans can go, too."

But in biology, the problems are never so well-defined, and sometimes the goals aren't, either! Yet the "moon shot" metaphor is pervasive, especially now with regard to Obama's BRAIN Initiative. That's unfortunate, because invoking the metaphor to drum up public support also sets public expectations; if the project fails to meet those expectations, public trust in large-scale, government-supported research endeavors will erode.

Metaphors are powerful; they carry a lot of meaning and emotional weight, and they should not be called upon lightly.

In biology, there is no clear goal, nor even a well-defined problem. For the moon landing, we could easily envision what a solution would look like. For neuroscience, we don't know the scope of the problem, nor do we even know what form a solution to "understanding the brain" would take.

Is "understanding the brain" something done at the cellular or sub-cellular level? What about brain/body interactions? What about emergent phenomena that only arise when individual neurons are all wired up and placed in a complex, dynamic electrochemical environment such as the brain?

We just don't know.

Sadly, this isn't the first time this metaphor has been called upon in biology.

In 1996, the Human Genome Project was the "moonshot":
"If the human sequence is biology's moonshot, then having the yeast sequence is like John Glenn orbiting the earth a few times," said Francis Collins, M.D., Ph.D., director of the National Institutes of Health's National Center for Human Genome Research... (source)
The Human Genome Project was a great success in terms of having a clear goal (sequence the human genome) and reaching it. But of course, as we all know, some of the promises about what that information would provide in terms of treating disease, especially mental illness, have fallen short.
Collins noted that most diseases have both genetic and behavioral components that jointly shape the course of disease and the body's response to particular drug treatments. Many such diseases are of interest to psychologists, including bipolar illness, schizophrenia, autism and attention-deficit and hyperactivity disorder. Medical scientists will soon be able to examine which of the 0.1 percent of the genome that varies across humans correlates with which of these diseases, Collins said. (source)
Today, for Collins, the moon shot is the BRAIN Initiative:
“While these estimates are provisional and subject to congressional appropriations, they represent a realistic estimate of what will be required for this moon-shot initiative,” Collins said. “As the Human Genome Project did with precision medicine, the BRAIN Initiative promises to transform the way we prevent and treat devastating brain diseases and disorders while also spurring economic development.” (source)
This moon shot metaphor appears to be a major talking point, but as always I'm concerned about how leveraging such metaphors erodes public support in the long term when yet another major effort fails to find effective treatments or cures for major neurological and psychiatric disorders.

9.6.14

A Neuroscientist Walks Into a Startup (redux)

Someone just pointed out to me that a video of the talk I gave at the 2013 Society for Neuroscience conference on careers beyond the bench is online.


This talk is informal and conversational (which is how I normally talk). Distilling out all my rambling, the main points I touch on are:

  • Why networking?
  • Social networking as an alternative to travel (good for parents like me!)
  • What skills do we have as PhDs?
  • Science communication.
  • Doing something versus knowing something.
  • Breaking into data science.
  • The importance of side projects.

I've said a few of these things before (my time with Uber and my decision to stay in academia) but this is the most comprehensive collection of what little I've learned along the way.

8.6.14

The Language of Peer Review

When I sent off the draft of my first paper to my PhD advisor I really felt like all the hard work was finished. I'd spent years getting my project started: getting IRB approval, identifying subjects, collecting data, and so on. Then I spent many more months analyzing the data, poring over every detail. Then I spent many more months putting together figures and writing the draft.

But all of that?

That's only the tip of the iceberg. Let's get into my personal statistics (since those are the data from which I have to draw) to show you the language of peer review.

Over the past few years I've been a co-author on 15 research manuscripts, 7 of which I was the main author on. I currently have 3 more first-author manuscripts that have undergone at least one round of peer review. These 10 first-author manuscripts have collectively undergone about 20 rounds of review, each consisting of 2-4 reviewers.

More than 16,000 words have been written by reviewers about these 10 papers. In response to them, I have written over 14,000 more.

Of note, the total word length for all 10 of these papers comes in at around 40,000 (not including references). This means that, for every paper I write, I will probably have to write around 35% more than what I consider to be the "final" version in order to justify its publication to reviewers.

Keep that in mind next time you try to estimate how much longer it will take for you to publish your manuscript. I often forget this.

Now, on the flip side, I've performed approximately 25 reviews for some of the most prestigious journals in cognitive neuroscience, including Nature, PNAS, Nature Neuroscience, Neuron, Journal of Neuroscience, Neurology, NeuroImage, Journal of Cognitive Neuroscience, Cerebral Cortex, and so on.

For these 25 reviews I have written over 14,000 words. That may seem light at only ~560 words per review, but my personal reviewing philosophy is to be concise but thorough. It is much easier to critique than to create!
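(For the curious, here's the back-of-the-envelope arithmetic behind those two numbers as a quick Python sketch, using my rounded word counts from above; nothing fancy.)

```python
# Quick sanity check of the figures above, using rounded word counts.
response_words = 14_000  # words written in responses to reviewers (my papers)
paper_words = 40_000     # total length of the 10 first-author papers, sans references
review_words = 14_000    # words written across the reviews I've performed
n_reviews = 25           # number of reviews performed

print(f"Extra writing per paper: {response_words / paper_words:.0%}")  # ~35%
print(f"Average words per review: {review_words / n_reviews:.0f}")     # ~560
```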

If you have not experienced it, peer review is a strange affair. It's sort of like a masquerade ball in that people's identities are unknown (at least in one direction), but you know that you'll probably have to see these people unmasked at some point, so you'd better give the appearance of propriety.

There's a lot of weird and interesting language play and kowtowing, with phrases such as "we thank the reviewers for their insightful comments" and "this is a very interesting manuscript, but..."

The word of the day is "asteism": "Polite irony; a genteel and ingenious manner of deriding another."

This generally sums up the peer-review process.

Just out of curiosity I decided to run all my reviews, the comments I received from reviewers, and my responses to reviewers through a tag cloud generator (thank you, tagcrowd) just to see what it looked like. Check it out; I think it gives a decent, quick insight into the language of peer review.
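(If you'd rather roll your own than use tagcrowd, the core of a tag cloud is just a stop-word-filtered word frequency count. Here's a minimal Python sketch of that idea; the filenames are hypothetical placeholders for wherever you've dumped your review text.)

```python
# Minimal word-frequency counter, roughly what a tag cloud generator does
# under the hood. The filenames below are hypothetical placeholders.
import re
from collections import Counter

STOP_WORDS = {
    "the", "and", "of", "to", "a", "in", "that", "is", "for", "we",
    "this", "with", "as", "are", "be", "on", "not", "have", "our", "these",
}

def top_words(path, n=25):
    """Return the n most common non-stop-words in a text file."""
    with open(path) as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(n)

for name in ["my_reviews.txt", "comments_from_reviewers.txt", "responses_to_reviewers.txt"]:
    print(name, top_words(name, n=10))
```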

My Reviews
You'll notice right away that certain keywords appear that represent the general class of manuscripts I'm asked to review: "EEG", "coupling", "gamma", "patients", and so on. The appearance of words such as "addressed", "important", and "interesting" reflects the kind of language I use (I generally do find most papers I review to be genuinely interesting, by the way).

Comments Received from Reviewers

Here again you see "interesting" appear. Everyone is interesting! We're all special snowflakes!

This is a good exercise for me, though, because I can see that reviewers have a tendency to use words like "specific" and "literature" against me. When I write I tend to "jump ahead" and just assume that people will follow my logic without my "showing my work". This is sloppy on my part, and I struggle with the need to explicitly connect my thoughts.

My wife has to remind me of this fact for every. Single. Paper. That I write. For every talk I give. You'd think I'd have learned my lesson by now.

Response to Reviewers

This is great. You see how "reviewer", "correct", "thank", and "suggested" show up a lot in my responses to reviewers? This is another interesting aspect of peer review: the deferential language that scientists use in responding to their peers. It represents all the times I've said "the reviewer is correct" and "we thank the reviewers for their suggestions" and the like.

Anyway, this was my attempt to peel back the curtain on peer review a bit if you don't have a lot of experience with it.

I don't have a clever or insightful ending for this post, so, uh, I'd like to thank the readers for their valuable time and for the intelligent, thought-provoking comments that are sure to follow.

21.3.14

A decade of reverse-engineering the brain

Salesmanship trumps science. Every. Single. Time.

The big news in the tech world today is the superstar team-up of Elon Musk, Mark Zuckerberg, and Ashton Kutcher investing $40M in Vicarious, whose aim is to "[t]ranslate the neocortex into computer code". Because then “you have a computer that thinks like a person," according to Vicarious co-founder Scott Phoenix. “Except it doesn’t have to eat or sleep.”

I took a look at this mystery team of neuroscientists who've secretly reverse-engineered how the human brain works and, according to the Vicarious team page, the scientific talent (and, I assume, the lead) is Dileep George.

George was formerly the CTO of Numenta, the company that was spun out of Palm founder Jeff Hawkins' book On Intelligence (which is a fine book with a neat theory, by the way).

Hawkins founded the Redwood Neuroscience Institute which eventually was absorbed into UC Berkeley as the Redwood Center for Theoretical Neuroscience. This was all happening right when I began my PhD at Berkeley.

In 2004.

George gave a talk at the Accelerating Change conference in 2005, the abstract of which reads:
We are at a juncture where great progress has been made in the understanding of the workings of the human neocortex. This gives us a unique opportunity to convert this knowledge into a technology that will solve important problems in computer vision, artificial intelligence, robotics and machine learning. In this talk, based on joint work with Jeff Hawkins, I will describe the state of our understanding of neocortical function and the role Numenta is playing in the development of a new technology modeled after the neocortex.
My question is, how is Vicarious different? What's changed in the last 9 or 10 years? Because the high-level press release stuff sounds exactly the same as the Numenta stuff from a decade ago.

What happened to Numenta's lofty aims?

They're now called "Grok" and, according to their about page:
Grok, formerly known as Numenta, builds solutions that help companies automatically and intelligently act on their data. Grok’s technology and product platform are based on biologically inspired machine learning technology first described in co-founder Jeff Hawkin's book, On Intelligence. Grok ingests data streams and creates actionable predictions in real time. Grok's automated modeling and continuous learning capabilities makes it uniquely suited to drive intelligent action from fast data.
George did some amazing computational neuroscience research at Numenta. But for all the talk about how slow academia is, you'd think that after ten years and tens (hundreds?) of millions of dollars spent in the fast-paced world of private industry, the sales pitch would have changed by now.

The Blue Brain Project is nearing the end of its first decade as well. And, again, there's some great work coming out of these places, but I cannot overstate my frustration at the hype-to-deliverables ratio of these organizations.

Granted, I wasn't in the meetings. Maybe a lot has changed, but none of that change is making its way out to where the rest of us can see it.

Having watched this stuff for a decade now, I can say the grand promises have not been delivered on. It's clear to me that VCs need some skeptics on their advisory teams. Any neuroscientist and/or machine learning researcher in that meeting would certainly ask:

"What's different?"