8.1.13

#overlyhonestmethods for neuroimaging

So there's a popular hashtag on Twitter that I'm enjoying reading. #overlyhonestmethods really is brutally honest. My only contribution made me think of many more that just don't fit into tweets. So here's my take on an entire overly honest methods section.
No one I know actually does (or admits to) any of the things below, but over the years these are the things I've heard people bitch about. Honestly, I (naively?) believe most of the offenses below are ghost stories that we tell each other to keep one another in check.


Methods
The methods section is difficult to write because the data on which this paper is based were collected 4 years ago by a roton (Author Two) and dumped on Author One because he had no idea what else to do for his PhD and this is, quite frankly, his last hope.

Data were collected from 35 subjects (our friends), but the 20 that we presented in the paper are the real subjects. All participants gave written informed consent to participate in the study in accordance with the review board of the relevant institution, but no one actually read through the consent forms.

Functional magnetic resonance imaging (fMRI) data were collected to justify the massive funds in the R01 so that the senior author can keep bringing in money to the university and make good with the Dean and Departmental Chair. Data were preprocessed 8 different times, tweaking lots of settings because Author One kept screwing up or the data kept "looking weird" with significant activation in the ventricles and stuff, which can't be right.

We're pretty sure motion correction was performed, but the data are so old that no one really remembers. The senior grad student in the lab (Author Four) is pretty sure she remembers hearing from Author Two that it was. We didn't really look at the raw data, though, so it doesn't really matter.

Data were smoothed to all hell, so when we show regions of significant activation, take them with a huge grain of salt.
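
(If you're curious what 'smoothed to all hell' cashes out to, here's a minimal sketch on a synthetic volume; the 12 mm FWHM and 3 mm voxel size are numbers I made up for illustration, not anyone's actual protocol.)

```python
# A toy look at heavy spatial smoothing on a synthetic "activation"
# volume. The 12 mm FWHM and 3 mm voxels are invented illustration
# values, not anyone's actual protocol.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
vol = rng.standard_normal((64, 64, 40))  # fake noise volume
vol[30:34, 30:34, 18:22] += 3.0          # a small "activation" blob

fwhm_mm, voxel_mm = 12.0, 3.0            # smoothed to all hell
sigma = (fwhm_mm / voxel_mm) / (2 * np.sqrt(2 * np.log(2)))
smoothed = gaussian_filter(vol, sigma=sigma)

# The blob's peak drops while its apparent extent balloons,
# hence the huge grain of salt.
print(f"peak before: {vol.max():.2f}, after: {smoothed.max():.2f}")
```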

Oh, and we collected behavioral data but don't really care, except that it lets us hunt for a behavioral correlate, which helps bump up the impact factor. The behavioral task is really boring and consists mostly of some beeps and flashing images of cars or vases or plusses or something. They appear at some specific times to maximize the probability that we get an effect. But honestly, does anyone really believe that locking a person's head in a vise in a tube that blasts sounds like terrible '90s happy hardcore while they sit in the dark for two hours staring at a screen is going to help us unlock the mysteries of the human mind?

Anyway, resting state data were also collected because one of the authors (Author Three) went to a graph theory workshop that the senior author paid a lot of money for her to attend and wants to get something out of it. Also, it's pretty cheap data to collect, so why not? It's only, like, another 20 minutes in the scanner for the subjects and, hey, if something turns out, that would be pretty cool.

We're not really sure how resting state data were analyzed because first of all the entire concept barely makes any sense. But we used the standard analysis suite and copied and pasted the methods section from the papers of the people who ran the workshop. We tried to change enough words to make it not look like outright plagiarism even though no one really cares about plagiarizing methods sections anyway.
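
For the curious, what the standard suite is doing under the hood boils down to something like this sketch; every number in it (90 ROIs, 200 time points, the 0.1 threshold) is invented for illustration, and networkx here stands in for whatever the workshop people actually use.

```python
# Roughly what the "standard analysis suite" reduces to: correlate
# ROI time series, threshold, call it a graph. The 90 ROIs, 200 time
# points, and 0.1 threshold are invented for illustration.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 90))     # time points x ROIs (fake data)

corr = np.corrcoef(ts.T)                # 90 x 90 correlation matrix
np.fill_diagonal(corr, 0)               # no self-connections
adj = (np.abs(corr) > 0.1).astype(int)  # arbitrary threshold (that's the problem)

G = nx.from_numpy_array(adj)
print(f"{G.number_of_edges()} edges; mean degree "
      f"{2 * G.number_of_edges() / G.number_of_nodes():.1f}")
```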

All statistical analyses were performed repeatedly, without correcting for multiple comparisons, to find the most interesting results which we then presented as a priori hypotheses.
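
To be fair to the ghost stories, the arithmetic is on their side. A quick simulation with invented numbers (20 tests, 20 subjects, pure noise) shows how often an "a priori hypothesis" falls out of nothing:

```python
# Why "repeatedly, without correcting" works so well: on pure noise,
# 20 independent tests give at least one p < 0.05 about 64% of the
# time (1 - 0.95**20). The 20 tests / 20 subjects are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_tests, n_subjects = 2000, 20, 20

hits = 0
for _ in range(n_experiments):
    data = rng.standard_normal((n_tests, n_subjects))  # null data
    p = stats.ttest_1samp(data, 0.0, axis=1).pvalue    # one p per test
    hits += (p < 0.05).any()

# vs. the nominal 5%; Bonferroni (0.05 / 20) would mostly fix this.
print(f"'a priori hypothesis' found in {hits / n_experiments:.0%} of null datasets")
```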

We never checked any of our data for normality, but we assumed the hell out of that shit anyway, Anscombe's Quartet be damned. We could have performed permutation statistics, which would have been much better, but seriously, do you know how long that takes? All statistical tests used are the ones that give us p<0.05 for the most interesting results.
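
For the record, the permutation statistics we "didn't have time for" run in seconds on a laptop. A minimal sign-flipping sketch on made-up subject effects:

```python
# A sign-flip permutation test of a one-sample effect. Takes seconds,
# not weeks. The effect size and n = 20 are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(20) + 0.5        # fake per-subject effects

observed = x.mean()
n_perm = 10_000
flips = rng.choice([-1, 1], size=(n_perm, x.size))
null = (flips * x).mean(axis=1)          # null distribution under H0: mean = 0

# Two-sided p-value (with the usual +1 so p is never exactly zero)
p = ((np.abs(null) >= abs(observed)).sum() + 1) / (n_perm + 1)
print(f"observed mean = {observed:.2f}, permutation p = {p:.4f}")
```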

For the figures, "representative data" are anything but. All error bars are probably inappropriately used but make the errors look visually smaller.
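
(In case you're wondering how the bars shrink: it's usually the standard error of the mean being passed off where a standard deviation would be honest. A toy comparison, with an invented n of 20:)

```python
# Why the bars "look visually smaller": the SEM shrinks with
# sqrt(n) while the SD doesn't. n = 20 is an invented sample size.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(20) * 2 + 10     # fake measurements

sd = x.std(ddof=1)
sem = sd / np.sqrt(x.size)
print(f"SD = {sd:.2f}, SEM = {sem:.2f}  (guess which one we plot)")
```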

Acknowledgements
This study was funded by an NIH grant for an entirely different project. These data will be presented in a new R01 application as "preliminary data" so that it can look like we're making progress when we actually publish on this topic, thus perpetuating the cycle of writing grants to get money for projects we've already finished. The authors would also like to thank the editors of the 5 journals that scuttled this manuscript for making us wait forever and giving Author One anxiety issues from checking his email in the hopes that he'll actually get a publication and be able to graduate on time.


Author Contributions
Author One wrote the first draft of this paper in its entirety and resurrected it like a science phoenix rising from the ashes of a departmental washout. This was done not out of a sense of duty to the taxpayers who funded the original research, but rather out of panic and uncertainty. Author Two collected all the data and performed all the preprocessing, but dropped out long ago and is probably now making more money than anyone else on this paper. Author Three did all the graph theory stuff, so if someone out there actually understands it, don't bug anyone but her. Author Four is on the paper because she's been in the lab a really long time and probably showed people how to do stuff. Authors Five through Seven are so far gone that they were really hard to track down to sign the copyright transfer. Author Eight wrote all the grants and rewrote the paper in its entirety.


Supplemental Online Materials
Are only included because the reviewers wanted to show the editor they kind of read the paper. The supplemental results were quickly thrown together because we're sick of this study and want to be rid of it, so we put in the bare minimum effort required to appease the reviewers.