darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.


29.11.11

Is the female brain innately inferior?

Did the title get your attention? It sure as hell caught mine when USC grad student Rohan Aurora sent me the link.

Recently Stanford's Clayman Institute for Gender Research published an interview with a Stanford neuroscientist about male vs. female brains.

I'm not going to lie: when I clicked the link I was expecting the usual attention-grabbing, rabble-rousing crap I've come to expect when I see a headline like that.


I was prepared for some serious eye rolling and frustrated groaning in response to the inevitable logical errors, overreaching conclusions, and other neuro-nonsense. Especially given how much gender has been on my mind lately (what with my recent foray into fatherhood and the whole #womanspace thing).

This certainly wouldn't have been the first time a neuroscientist has railed against silly neuroscientific claims in gender research.

Instead I got a well thought out, sane, clear-headed, and scientifically sound interview. I was especially pleased because the neuroscientist in question is a friend, colleague, and collaborator of mine: Prof. Josef Parvizi (co-author on one of my most-cited first-author papers). I'm especially proud to call him a collaborator after reading this. Seriously, this is a great interview and Josef just nails it.

The whole piece is short and to the point, and a great reference for countering some of the more common neuro-gender crap. They tackle three myths; I've quoted my favorite parts below.

Gender Brain Myth #1: Brain size matters
...if absolute brain size were all that mattered, whales and elephants, both of which have much larger brains than humans, would outwit men and women.

Gender Brain Myth #2: Women and men have different brains due to estrogen and testosterone
...if estrogen and testosterone did shape the brain in different ways, it is an unsubstantiated, logical leap to conclude that such differences cause, "...men to occupy top academic positions in the sciences and engineering or top positions of political or social power, while women are hopelessly ill-equipped for such offices."

Gender Brain Myth #3: Men are naturally better at math
According to Parvizi, this logic is flawed: "Differences seen in cognitive tests do not necessarily provide direct evidence that those differences are in fact innate."

Finally, I love the following. It should be copy-and-pasted into any online argument. Or shouted repeatedly at any TV/magazine puff-piece writer.

"...if we are to entertain the idea that humans 'experience' life differently, and that different experiences mold the brain function differently, then we must also seriously consider that gender (along with class, ethnicity, age, and many other factors) would also contribute to this experience, and that they will contribute to molding of the brain...

So if women and men have systematically different life experiences and face dissimilar expectations from birth, then we would expect that their brains would become different (even if they are not innately dissimilar), through these different life experiences. Even if neuroscientists see differences in the brains of grown men and women, it does not follow that these differences are innate and unchangeable.

For instance, if girls are expected to be more adept at language, and are placed in more situations that require communication with others, it follows that the networks of the brain associated with language could become more efficient in women. Conversely, if boys receive more toy trucks and Lego's, are given greater encouragement in math and engineering classes, and eventually take more STEM (science, technology, engineering and math) courses, it follows that the sections of the brain associated with mathematics could become more efficient in men...

The tricky part is that we do not make the mistake of taking account of these differences as evidence for biological determinism."

18.11.11

Ranking biomedical retractions

This past week I was in Washington, DC for the Society for Neuroscience conference (30,000+ attendees). Recently I wrote about (my interpretations of) the purpose of peer review. Well, that post prompted a group of us to get together to discuss open science and the future of scientific publishing.

Obviously science and the process of science have been on my mind.

My fellow neuroscientists Jason Snyder and Björn Brembs and I got together for lunch to talk about... lots of stuff, actually... but I came away with a few questions that seemed to have empirical answers.

Before I jump in though, if you haven't seen Björn's talk What's wrong with scholarly publishing today?, check out the slides at the end of this post. They're packed with some mind-boggling data about the business of peer review.

At some point during our lunch, Retraction Watch (which is an amazing site) came up, and it ultimately inspired two questions:
  • Which journals have the most retractions?
  • Which biomedical fields have the most retractions?
I did a quick-and-dirty check of this using PubMed's API, because they have a nice "search by retraction" option:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&field=word&term=hasretractionin&retmax=10000&usehistory=y

There was one issue: every article--regardless of scientific field--in the general science journals (Science, Nature, PNAS) is indexed in PubMed. So if an article (or a dozen) about semiconductors (see: Jan Hendrik Schön) was retracted from Science, it would still show up in this analysis. That inflated the biomedical retraction counts for those journals, so I manually adjusted the counts down by removing non-biomedical retractions (just to put everything on par, since PubMed doesn't index non-biomedical peer-reviewed journals).
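For the curious, here's roughly what that quick-and-dirty check looks like in Python. To be clear, this is a reconstruction rather than my original script: it reuses the esearch query above, pulls journal names and titles with esummary, and tallies the journals; the chunk size and pause are arbitrary choices, and the manual pruning of non-biomedical retractions from the general science journals still has to happen by hand afterward.

```python
# A rough reconstruction of the quick-and-dirty PubMed check described above.
# It reuses the esearch query from the post and pulls journal names and titles
# via esummary. Chunk size and sleep interval are arbitrary, and the manual
# removal of non-biomedical Science/Nature/PNAS retractions still has to
# happen by hand afterward.
import time
from collections import Counter

import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# 1. Get PMIDs for everything flagged as having a retraction.
search = requests.get(f"{EUTILS}/esearch.fcgi", params={
    "db": "pubmed", "field": "word", "term": "hasretractionin",
    "retmax": 10000, "retmode": "json",
}).json()
pmids = search["esearchresult"]["idlist"]

# 2. Pull summaries in chunks; tally the journal and keep the title of each paper.
journal_counts = Counter()
titles = []
for i in range(0, len(pmids), 200):
    chunk = pmids[i:i + 200]
    result = requests.get(f"{EUTILS}/esummary.fcgi", params={
        "db": "pubmed", "id": ",".join(chunk), "retmode": "json",
    }).json()["result"]
    for pmid in result["uids"]:
        doc = result[pmid]
        journal_counts[doc.get("source", "unknown")] += 1
        titles.append(doc.get("title", ""))
    time.sleep(0.4)  # stay polite to NCBI's rate limits

for journal, n in journal_counts.most_common(10):
    print(f"{n:4d}  {journal}")
```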

Here are the results for the 1922 retractions across 796 journals:
[Figure: retraction counts by journal, with Science, Nature, and PNAS highlighted]

PNAS (59 retractions) and Science (52) lead the pack, followed by J Biol Chem (40), J Immunol (33), and Nature (31).

Next I counted the words that appeared in the titles of the retracted articles to get a feel for what kinds of papers are being retracted. Here are all the words that appear at least 50 times in paper titles:
  • cells (189)
  • activity (154)
  • effects (152)
  • human (148)
  • patients (136)
  • protein (108)
  • factor (104)
  • gene (103)
  • expression (102)
  • receptor (96)
  • study (81)
  • cancer (70)
  • treatment (57)
  • surgery (54)
  • disease (54)
  • DNA (53)
  • virus (50)
(Similar words were grouped using TagCrowd)
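If you'd rather skip TagCrowd, something like the following sketch gets you most of the way to the same tally (the stopword list is ad hoc, and the three titles are placeholders standing in for the titles collected in the PubMed sketch above):

```python
# A minimal sketch of the title word tally (TagCrowd did the real grouping of
# similar words; here I just lowercase, split, and drop a small ad hoc stopword
# list). In practice `titles` would be the list collected in the PubMed sketch
# above; the three titles below are placeholders so the snippet runs on its own.
import re
from collections import Counter

titles = [
    "Expression of a novel receptor in human cancer cells",
    "Effects of gene expression on protein activity in patients",
    "DNA virus factor expression in human cells",
]

stopwords = {"a", "an", "and", "by", "for", "in", "of", "on", "the", "with"}

word_counts = Counter(
    word
    for title in titles
    for word in re.findall(r"[a-z]+", title.lower())
    if word not in stopwords
)

for word, n in word_counts.most_common(20):
    print(f"{n:4d}  {word}")
```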

At first blush, it looks like cell/molecular/microbiology represents a big chunk of the retractions (cells, protein, factor, gene, expression, receptor, DNA, virus), but human patient research isn't much better off... (human, patients, surgery).

I've heard the argument before (sorry, can't remember where) that fields where the data is more difficult to collect and replicate are more prone to shady research practices... I'm not sure if that's exactly being reflected here, but the exercise was an interesting one.

Thoughts?

12.11.11

...the rest is just details

(This is cross-posted from my piece at Nature)

Meet the electric brain. A pinnacle of modern science! This marvel comes complete with a "centrencephalic system", eyes, ears, medial and lateral geniculate, corpora quadrigemina, and visual cortex.


The text reads:
A giant electrified model of the human brain's control system is demonstrated by Dr. A.G. Macleod, at the meeting of the American Medical Association in New York, on June 26, 1961. The maze of twisting tubes and blinking lights traces the way the brain receives information and turns it into thought and then action.

It's a cheap journalistic trick to pull out a single example of hubris from the past at which to laugh and to highlight our own "progress". But where did the Electric Brain fail? Claims to understanding or modeling the brain have almost certainly been made countless times over the course of human thinking.

Hell, in moments of excitement with colleagues over a pint (or two) I've been known to shout, "I've figured out the brain!" But, of course, I have always been wrong.

So here we are, exactly 50 years after the Electric Brain, and I find myself once again at the annual Society for Neuroscience conference (SfN). Each year we push back the curtain of ignorance and, just as every year since I began my neuroscience career in 2003, I find myself surrounded by 30,000-40,000 fellow brain nerds.

How do the latest and greatest theories and findings on display at SfN compare to the Electric Brain? One would like to think that, with this much brain power (har, har), surely we must be close to "understanding the brain" (whatever that might mean). Although any model of the human brain feels like an act of hubris, what good are countless scientific facts without an integrated model or framework in which to test them?

The Electric Brain is an example of a connectionist model, in which the brain is composed of a collection of connected, communicating units. Thus, the brain can be modeled by the interconnections between all of its subregions; behavior is thought to be an emergent property of the complexities of these interconnected networks of units.

A "unit" in the Electric Brain appears to be a whole region, whose computations are presumably modeled by a simple input/output function. The modern incarnations of this movement are seen in the rapidly maturing field with the sexy name of connectomics, the underlying belief of which is that if we could model how every neuron connects to every other neuron, we would understand the brain.

With advancements in computational power, we've moved beyond simplified models of entire brain regions and toward attempts to model whole neurons, such as this model of 10^11 neurons by Izhikevich and Edelman (Large-Scale Model of Mammalian Thalamocortical Systems, PNAS 2008).



There are also attempts to model the brain at the molecular level, such as with the Blue Brain Project.

But as Stephen Larson, a PhD candidate at UCSD, astutely noted on Quora:

To give you a sense of the challenge here, consider the case of a simple organism with 302 neurons, the C. elegans. There has been a published connectome available since 1986... however, there is still no working model of that connectome that explains the behavior of the C. elegans.

One issue with this approach is that the brain is dynamic, and it is from this dynamism that behavior arises (complexities that are hidden in a static wiring diagram). Wilder Penfield summed it up nicely: "Consciousness exists only in association with the passage of impulses through ever changing circuits of the brainstem and cortex. One cannot say that consciousness is here or there" (Wilder Penfield: his legacy to neurology. The centrencephalic system, Can Med Assoc J 1977).

In my most cynical moments, connectionist approaches feel like cargo cult thinking: the hope that aping the general structure will somehow give rise to the behavior of interest. But how can we learn anything without first understanding the neuroanatomy? After all, our anatomy determines the rules by which we are biologically constrained.

This year I spoke at the 3rd International Workshop on Advances in Electrocorticography. Across two days there were a total of 23 lectures on cutting-edge methods in human and animal electrophysiology. Last year I attended a day-long, pre-SfN workshop titled "Analysis and Function of Large-Scale Brain Networks".

During both of these sessions I sometimes found it difficult to restrain my optimism and enthusiasm (yes, even after all these years "in the trenches"). And while wandering through a Kinko's goldmine of posters and caffeinating myself through a series of lectures, occasionally I forget my skepticism and cynicism and think, "wow, they're really on to something."

And that's what I love about this conference: every year it gives me a respite from cynicism and skepticism to see so many scientists who are so passionate about their work. Sure, it might be hubris to think that we can model a brain, but so what? When the models fail, scientists will learn, the field will iterate, and the process will advance.

That's what we do, and that's why I keep coming back. The Society for Neuroscience conference is my nerd Disneyland.

Izhikevich EM & Edelman GM (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105 (9), 3593-3598. PMID: 18292226
Jasper HH (1977). Wilder Penfield: his legacy to neurology. The centrencephalic system. Canadian Medical Association Journal, 116 (12), 1371-1372. PMID: 324600

2.11.11

What is peer-review for?

(This is re-posted from the Scientific American Guest Blog)

There is a lot of back and forth right now amongst the academic technorati about the "future of peer review". The more I read about this, the more I've begun to step back and ask, in all seriousness:

What is scientific peer-review for?

This is, I believe, a damn important question to have answered. To put my money where my mouth is I'm going to answer my own question, in my own words:

The scientific peer-review process increases the probability that the scientific literature converges--over long time scales--upon scientific truth via distributed fact-checking, replication, and validation by other scientists. Peer-reviewed publication gives the scientific process "memory".

(Cover of the first scientific journal, Philosophical Transactions of the Royal Society, 1665-1666. Source: Wikipedia)

Note that publication of methods and results in a manner that they can be peer-reviewed is a critical component here. Given that, let's take part in a hypothetical regarding my field (neuroscience) for a moment.

In some far-distant future where humanity has learned every detail there is to know about the brain, what does the scientific literature look like in that world? Is it scattered across millions of closed, pay-walled, static manuscripts as much of it is now? Does such a system maximize truth-seeking?

And, given such a system, who is the megamind that manages to read through all of those biased (or incomplete, or incorrect) individual interpretations to extract the scientific truths to distill a correct model of the human brain and behavior (whatever that might entail)?

I am hard-pressed to imagine that this future scientific literature looks like what we currently possess. In fact, there are many data-mining efforts underway designed to overcome some of the limitations introduced by the current system (such as, for cognitive neuroscience, NeuroSynth and my own brainSCANr).

The peer-review system we have now is incomplete.

I'm not attacking peer review itself; I'm attacking peer review as it's currently practiced: journal editors hand-picking one to three scientists who then read a biased presentation of the data without being given the actual data used to generate the conclusions. Note that although I am only a post-doc, I am not unfamiliar with the peer-review process, as I have "a healthy publication record" and have acted as a reviewer for a dozen "top journals".

To gain a better perspective, I read this peer-review debate published in Nature in 2006.

In it, there were two articles of particular interest, titled:

* What is it for?
* The true purpose of peer review

These articles, in my opinion, are lacking answers to the questions that are their titles.

Main points from the first article:

* "For authors and funding organizations, peer review provides an important veneer of respectability."
* "For editors, peer review can help inform the decision-making process... Prestigious journals base their reputation on their exclusivity, and take a 'top-down' approach, creaming off a tiny proportion of the articles they receive."
* "For readers with limited time, peer review can act as a filter, reducing the amount they think they ought to read to stay abreast."

The first two points are issues of reputation management, which ideally have nothing to do with actual science (note, I say ideally...). The second point also presupposes that publishing results in journals is somehow the critical component, rather than the experiments, methods, and results themselves. The final point may have been more important before the advent of digital databases, but text-based search lessens its impact.

Notably, none of these mention anything about science, fact-finding, or statements about converging upon truth. (Note, in the past I've gone so far as to suggest that even the process of citing specific papers is biased and flawed, and that we would be better off giving aggregate citations of whole swathes of the literature.)

The second article takes an almost entirely economic, cost-benefit perspective on peer review, again focused on publishing results in journals. Only toward the end does the author directly address peer review's purpose in science by saying:

...[T]he most important question is how accurately the peer review system predicts the longer-term judgments of the scientific community... A tentative answer to this last question is suggested by a pilot study carried out by my former colleagues at Nature Neuroscience, who examined the assessments produced by Faculty of 1000 (F1000), a website that seeks to identify and rank interesting papers based on the votes of handpicked expert 'faculty members'. For a sample of 2,500 neuroscience papers listed on F1000, there was a strong correlation between the paper's F1000 factor and the impact factor of the journal in which it appeared. This finding, albeit preliminary, should give pause to anyone who believes that the current peer review system is fundamentally flawed or that a more distributed method of assessment would give different results.

I strongly disagree with his final conclusion here. A perfectly plausible explanation for this result is that scientists rate papers in "better" journals more highly simply because they're published in journals perceived to be better. That would be a source of bias and a major flaw of the current peer-review system. Rather than giving me pause as to whether the system is flawed, that result could just as easily be read as proof of the flaw.

The most common response that I encounter when speaking with other scientists about what they think peer review is for, however, is some form of the following:

Peer-review improves the quality of published papers.

I'm about to get very meta here, but post-doc astronomer Sarah Kendrew recently wrote a piece in The Guardian titled, "Brian Cox is wrong: blogging your research is not a recipe for disaster".

This was followed by a counter post in Wired by Brian Romans titled "Why I Won’t Blog Unpublished Results". In that piece, Brian also says that peer-review improves papers:

First of all, the formal peer-review process has definitely improved my submitted papers. Reviewers and associate editors can catch errors that elude even a team of co-authors. Sometimes these are relatively minor issues, in other cases it may be a significant oversight. Reviewers typically offer constructive commentary about the formulation of the scientific argument, the presentation of the data and results, and, importantly, the significance of the conclusions within the context of that particular journal. Sure, I might not agree with every single comment from all three or four reviewers but, collectively, the review improves the science. Some might respond with ‘Why can’t we do this on blogs! Wouldn’t that be great! Internets FTW!.’ Perhaps someday. For now, it’s difficult to imagine deep and thorough reviews in the comment thread of a blog.
(emphases mine)

Although Brian concedes (but dismisses) the fact that none of these aspects of peer review need be done in formal journals, he argues that, because his field doesn't use arXiv and there is currently no equivalent for it, journals are still necessary.

We also see an argument in there about how reviewers guide statements of significance for a particular journal, and the conclusion that somehow these things "improve the science". But even the narrative that peer-review improves papers can be called into question:

Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med, 99(4); 2006
Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Rothwell PM and Martyn CN. Reproducibility of peer review in clinical neuroscience. Brain, 123(9); 2000
Peer review is central to the process of modern science. It influences which projects get funded and where research is published. Although there is evidence that peer review improves the quality of reporting of the results of research, it is susceptible to several biases, and some have argued that it actually inhibits the dissemination of new ideas.

To reiterate: peer-review should be to maximize the probability that we converge on scientific truths.

This need not happen in journals, nor even require a "paper" that needs improvement by reviewers. Papers are static snapshots of one researcher's or research team's views and interpretations of the results of their experiments.

Why are we even working from these static documents anyway?

Why--if I want to replicate or advance an experiment--should I not have access to the original data and analysis code on which to build? These two changes would drastically speed up the scientific process. Almost any argument against implementing a more dynamic system seems to return to "credit" or "reputation". To be trite about it: if everyone has access to everything, however will they know how clever I am? Some day I expect a Nobel Prize for my cleverness!

But a "Github for Science" would alleviate even these issues. Version tracking would allow ideas to be traced back to the idea's originator with citations inherently built into the system.

I'm not saying publishing papers is bad. Synthesis of ideas allows us to publicly establish hypotheses for other scientists to attempt to disprove. But most results that are published are minor extensions of current understanding that don't merit long-form manuscripts.

But the current system--journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on--is outdated, built before technology provided better, more dynamic alternatives.

Why do scientists--the heralds of exploration and new ideas in our society--settle for such a sub-optimal system that is nearly 350 years old?

We can--we should--do better.

Wager, E. (2006). What is it for? Analysing the purpose of peer review. Nature. DOI: 10.1038/nature04990
Jennings, C. (2006). Quality and value: The true purpose of peer review. Nature. DOI: 10.1038/nature05032
Editors (2005). Revolutionizing peer review? Nature Neuroscience, 8 (4), 397. DOI: 10.1038/nn0405-397
Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99 (4), 178-182. DOI: 10.1258/jrsm.99.4.178
Rothwell, P. (2000). Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone? Brain, 123 (9), 1964-1969. DOI: 10.1093/brain/123.9.1964