Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.

16.9.11

Thank you Mario! But your methods are in another field!

My love for video games is no secret. I just finished Mass Effect 2 and started Dragon Age 2 (I'm a sucker for BioWare RPGs).


One of the first true peer-reviewed papers I remember reading was Green & Bavelier's 2003 Nature paper "Action video game modifies visual selective attention". In that study the authors performed a series of experiments showing that people who had a lot of experience playing "action video games" performed better than non-video game players on a variety of attention tasks. Particularly striking, for example, was just how much better gamers did on a spatial attention task. Check it out:

[Figure from Green & Bavelier (2003): spatial attention task performance, video game players vs. non-players]

I mean, look at that! Not even close (VGP = video game players; NVGP = non-VGP). Gamers blow non-gamers out of the water.

So my interest was especially piqued by some news going around today, published on Nature's site (written by Mo Costandi), stating that "Video-game studies have serious flaws: Poor design of experiments undermines idea that action games bring cognitive benefit".

Oh. Snap.

Them's fightin' words!

Mo's article references a new Perspective just published in Frontiers in Psychology by Boot, Blakely, and Simons (open access!).

In it the authors note that cognitive transfer studies are quite contentious (see, for example, last year's Owen et al. Nature paper "Putting brain training to the test").

So what are the issues with the video game studies, according to Boot, Blakely, and Simons? They enumerate five separate issues:

  • Overt recruiting (possible differential demand characteristics)
  • Unspecified recruiting method
  • Potential third-variable/directionality problems (cross-sectional design)
  • No test of perceived similarity of tasks and gaming experience
  • Possible differential placebo effects

So basically they're saying that these video game studies aren't using the proper research methodology.

They note that:
Claims that gaming causes cognitive improvements require an experimental design akin to a clinical trial; in this case, a training experiment.
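To make the cross-sectional worry concrete, here's a toy simulation (entirely mine, with made-up numbers; nothing from the papers themselves). A hidden "baseline attention" trait makes people both more likely to become gamers and better at the task, so a simple gamers-vs-non-gamers comparison shows a whopping difference even though gaming does nothing causally, while randomly assigning people to a (useless, in this toy world) training regimen shows no effect at all:

```python
# Toy simulation of a third-variable confound (all numbers made up for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000

# Hidden third variable: people who start out with better attention are both
# more likely to end up as gamers AND score better on the task.
baseline = rng.normal(0.0, 1.0, n)
p_gamer = 1.0 / (1.0 + np.exp(-2.0 * baseline))
is_gamer = rng.random(n) < p_gamer

# The task score depends only on baseline attention; gaming has NO causal effect here.
score = baseline + rng.normal(0.0, 1.0, n)

# 1) Cross-sectional comparison (the design Boot et al. criticize):
print(stats.ttest_ind(score[is_gamer], score[~is_gamer]))   # huge, highly "significant" difference

# 2) Training experiment (the design they ask for): randomly assign half to "train".
trained = rng.random(n) < 0.5
post = score + rng.normal(0.0, 0.5, n)   # training does nothing in this toy world
print(stats.ttest_ind(post[trained], post[~trained]))        # no difference, as it should be
```

The point isn't that this is what actually happened in the gaming studies, just that a cross-sectional design can't distinguish the two stories; only the training experiment can.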

It amazes me how many meta-level, peer-reviewed papers get written that simply reiterate basic research and/or statistical methodology.

Another recent example of this was a paper in Nature Neuroscience (!) last week by Nieuwenhuis, Forstmann, and Wagenmakers ("Erroneous analyses of interactions in neuroscience: a problem of significance") whose entire point basically boils down to: if you want to claim an effect differs between two groups or conditions, test the interaction directly (e.g., with an ANOVA), rather than showing the effect is significant in one group and not in the other.

This is stats 101! Hell, it's stats 001! (Ben Goldacre did a great piece on this in the Guardian).
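For the curious, here's the error in miniature (the numbers are mine, made up for illustration, and obviously aren't from anyone's actual data): an effect that clears p < .05 in one group and misses it in another says nothing about whether the groups differ; you have to test the difference, i.e. the interaction, directly.

```python
# Toy illustration of the "erroneous interaction" problem (made-up numbers).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20   # small samples, as is typical in this literature

# Pre-to-post change scores for two groups; the true effects (0.6 vs 0.4) barely differ.
change_a = rng.normal(0.6, 1.0, n)
change_b = rng.normal(0.4, 1.0, n)

# The common (erroneous) approach: two separate tests against zero.
print(stats.ttest_1samp(change_a, 0.0))   # likely p < .05  -> "group A improved!"
print(stats.ttest_1samp(change_b, 0.0))   # likely p > .05  -> "group B didn't!"

# The claim "A improved more than B" is an interaction, and needs a direct test
# of the difference between the change scores:
print(stats.ttest_ind(change_a, change_b))   # typically nowhere near significant
```

Run the first two tests and you'll be tempted to write "group A improved but group B did not"; run the third and there's no evidence the groups differ at all.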

I don't get why these things have to be re-explained every few years. It just reinforces how domain-specific many academics are in their knowledge (and why I think branching out is so important).

In 2006 there was a paper in The American Statistician by Gelman & Stern titled "The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant".

Same basic stats issue that Nieuwenhuis, Forstmann, and Wagenmakers got into Nature Neuroscience with just five years later.

I mean, I write papers wherein I get locked into detailed, intricate statistics battles with reviewers. Apparently the trick is to just fail at the lowest level possible?

But seriously, I'm not calling any of the video game studies bad; nor, I think, were Boot, Blakely, and Simons.

Science is iterative and it's important to publish, even if the methods and data aren't perfect.

Embrace the imperfections and failures!

That's how this science stuff works.

But it would be nice if the failures weren't so, uh... basic?

Green, C.S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534-537. PMID: 12774121
Boot, W., Blakely, D., & Simons, D. (2011). Do Action Video Games Improve Perception and Cognition? Frontiers in Psychology, 2. DOI: 10.3389/fpsyg.2011.00226
Owen, A., Hampshire, A., Grahn, J., Stenton, R., Dajani, S., Burns, A., Howard, R., & Ballard, C. (2010). Putting brain training to the test. Nature, 465(7299), 775-778. DOI: 10.1038/nature09042
Nieuwenhuis, S., Forstmann, B., & Wagenmakers, E. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience, 14(9), 1105-1107. DOI: 10.1038/nn.2886
Gelman, A., & Stern, H. (2006). The Difference Between "Significant" and "Not Significant" is not Itself Statistically Significant. The American Statistician, 60(4), 328-331. DOI: 10.1198/000313006X152649