darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.


27.9.11

Zombies (and me!) at the California Academy of Sciences NightLife!

You like zombies? Of course you do.
You like science? Who doesn't?!!

Well, this October 27, the California Academy of Sciences will have yours truly giving a talk at their ever-so-popular NightLife event!

I'll be bringing real human brains for a hands-on neuroanatomy demonstration!


I'll also be talking about the zombie brain and how neuroscience can help you and your loved ones survive the zombie apocalypse! :D


Seriously this is going to be a lot of fun: there will be a zombie drag show and costume contest, undead makeup artists, demos of the new zombie video game hotness Dead Island, and a special planetarium event about "zombie" stars.

Buy tickets now!

26.9.11

Thinking with Portals

My love for Portal, Portal 2, and video games in general is certainly no secret. Nor is my love for comics and other geekery.

Recently I played through the Portal 2 single- and multi-player campaigns (with my Activision and Call of Duty buddy, Bryan).

I love the Portals.

(Screenshot from Portal 2)

Of course, me being... well... me, I couldn't help but think about why I loved the game and then a bunch of weird brainy neuro stuff.

Basically the end result of my over-intellectualization was that sometimes games are just fun.

That's it. End of post! Hope you enjoyed it.


.
.
.


Oh. You're still here? ::sigh::

Okay, I guess I can continue on. For science. Or rather, for neuroscience.

If you're unfamiliar with the Portal games, a wonderfully homicidal AI is trying to murder you in the name of advancing science. Your goal is to survive her tests armed with only a gun. But this is a special gun: it creates portals that allow you to teleport between portal A and portal B. Momentum, a function of mass and velocity, is conserved between portals. In layman's terms: speedy thing goes in, speedy thing comes out.
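(A quick equation-y aside, in my own notation rather than anything from the game: momentum is p = mv, and since your mass doesn't change when you go through a portal, |p_in| = |p_out| implies |v_in| = |v_out|. You leave at the speed you came in; the exit portal only changes which way that speed is pointing.)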

This game has some ridiculously crazy 3D spatial reasoning puzzles and I got to thinking about how the hell I was able to shoot a portal at a wall, shoot another across the room, and know that when I went into one I would come out the other. There's a lot the brain has to do to pull that off!

Seriously, some of these puzzles are mind-bending. Watch this speed run:



Get all that? There's a reason why scicurious gets motion sick when she tries to play the Portal games.

(As an aside, it also contains one of the best end game songs ever, written by Jonathan Coulton.)

In this post I'll give a brief introduction to how the human brain can even conceive of teleporting between two portals. More specifically I'll talk about visual attention and working memory. You can't really conceive of moving from portal A to portal B without first knowing where the two portals are relative to each other and to the room you're in.

I want to make it clear how hard it is to study something like spatial attention and memory. These concepts are more metaphors or placeholder terms we use in neuroscience to describe observable psychological and behavioral phenomena than actual brain processes. They're kind of ill-defined and nebulous, though there's a massive literature that attempts to unite the behavioral with the neuronal.

The first mind-blowing spatial attention thing to know about is hemispatial neglect. The most common form of hemispatial neglect results from damage to the right posterior parietal lobe. It manifests as an inability to attend to, or even conceive of, one half of visual space (the left, in this case), even though vision itself is typically intact.

What does that mean?

Well, check this out:

Literally, there is no conception of "leftness".

The above examples show two drawings from a patient with hemineglect. Notice that, when copying a drawing, the patient omits the entire left half of the object; when drawing freely, they show a similar effect. There's a great review on this topic by Masud Husain and Chris Rorden from 2003 in Nature Reviews Neuroscience.

In that paper the authors do an amazing job summarizing what we knew about neglect at the time. I'll also take this moment to emphasize yet again how much we've learned about human cognition from work done with patients with brain lesions.

Visual attention is an enormous research domain that strongly overlaps with research into visual working memory.

The thing about working memory is that there's a lot of controversy in the field about, well, how it works. There's a really cool researcher by the name of Paul Bays (who worked with Husain, of the previously mentioned paper) who summed up the debate very succinctly in the introduction of a 2009 paper they wrote in the Journal of Vision:

The mechanisms underlying visual working memory have recently become controversial. One account proposes a small number of memory "slots", each capable of storing a single visual object with fixed precision. A contrary view holds that working memory is a shared resource, with no upper limit on the number of items stored; instead, the more items that are held in memory, the less precisely each can be recalled.
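To make those two accounts concrete, here's a minimal simulation sketch. The parameterization is my own toy version, not Bays and colleagues' actual model: the "slot" account stores at most K items with fixed precision and guesses for the rest, while the "resource" account stores everything but with noise that grows as more items share the resource.

import numpy as np

rng = np.random.default_rng(0)
N_TRIALS = 10_000

def slot_model_error(set_size, k=4, sigma=10.0):
    """Slot model: up to k items are stored with fixed precision (sd = sigma,
    in degrees of report error); items beyond k are pure random guesses."""
    p_stored = min(set_size, k) / set_size
    stored = rng.random(N_TRIALS) < p_stored
    errors = np.where(stored,
                      rng.normal(0, sigma, N_TRIALS),   # noisy but real memory
                      rng.uniform(-90, 90, N_TRIALS))   # random guess
    return np.abs(errors).mean()

def resource_model_error(set_size, sigma0=10.0):
    """Resource model: every item is stored, but precision falls as the
    shared resource is divided among more items (here, sd scales with load)."""
    return np.abs(rng.normal(0, sigma0 * set_size, N_TRIALS)).mean()

for n in (1, 2, 4, 8):
    print(f"set size {n}: slot ~{slot_model_error(n):5.1f} deg, "
          f"resource ~{resource_model_error(n):5.1f} deg")

Both toy models predict worse recall as the number of items grows, but for different reasons, and they make different predictions about how the error distribution changes with load, which is exactly the kind of thing Bays, Catalao, and Husain test psychophysically.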

This all stems from George Miller's famous paper from 1956, The magical number seven, plus or minus two: some limits on our capacity for processing information (decent Wikipedia summary here).

Anyway, visual attention and working memory are extremely difficult to disentangle. You can see the strong relationship between the two concepts in brainSCANr:
(In fact, I believe a big chunk of working memory research is conflated with attention... I've got a project to try and demonstrate just that.)

The number of experimental paradigms used to study attention and/or working memory is enormous, so although there are fairly "standard" paradigms, even slight differences in stimulus presentation, timing, task, etc. can lead to fairly big differences in results.

However, visual experiments that require a person to sit in a darkened room while images flash at them on a computer screen have led a number of researchers to call into question what is actually being tested in these situations. Does remembering when an X appears on a screen really encapsulate human memory? Does noticing a green square in a sea of red squares typify attention?

Of course, more ecologically valid ("real-world" scenario) experiments sacrifice control for validity... making the whole damned thing a mess.

But this messiness is part of the allure... if it was easier we'd have figured it out decades ago and I'd be out of a job! :)

Husain, M., & Rorden, C. (2003). Non-spatially lateralized mechanisms in hemispatial neglect. Nature Reviews Neuroscience, 4(1), 26-36. DOI: 10.1038/nrn1005
Bays, P., Catalao, R., & Husain, M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9(10), 7. DOI: 10.1167/9.10.7
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97. PMID: 13310704

22.9.11

"Peer-review" does not equal "publisher-owned journal"

Yesterday was my first real day working at UCSF. Most of it was spent filling out paperwork and completing all of the regulatory training.

For those of you unfamiliar with the regulations required for human research, let me just say that they are legion, for they are many. In order to work with human subjects you have to (among other requirements) complete several online courses and questionnaires.

This is usually all well and good if not a bit annoying... (Yes, I know I shouldn't inject my subjects with radioactive spider venom without their consent. No, I didn't consider that radioactive spider venom would be an Investigational New Drug. Yes, I disclosed my consultancy income from OSCORP.)

Anyway, many universities such as UCSF, Berkeley, etc. require researchers who work with human subjects to complete their online training on a specific website: the Collaborative Institutional Training Initiative.

For example I had to complete the "CITI Good Clinical Practice", "Human Subjects Research", and the "Responsible Conduct of Research" curricula. While answering questions in the latter, I encountered the following (which I got "wrong"):
(Someone needs to "peer-review" the inconsistent capitalization in the headers and that superfluous comma in that "Comment".)

This question struck me as especially odd. "Why," I asked, "is this unscientific, unsubstantiated question here? Are they afraid that researchers will stop publishing their research in peer-reviewed journals and just BLOG everything?"

Does "those who have an interest in the work" not include peers? Have the authors not heard of arXiv.org?

Of course "BLOGS" aren't a replacement for peer-review. Blogs can be peer-reviewed, though, and "peer-review" is not equivalent to a publisher-owned journal. Where does the line between "blog" end and an online journal with commenting begin?

As Bora said over at the SciAm blogs:
Blog is software. Blog is primarily a platform. It is a piece of software that makes publishing cheap, fast and easy. What one does with that platform is up to each individual person or organization.

He points to Rosie Redfield's open science approach as an example of how peer-review can, to some extent, be moved onto a blog.

Blogs may not replace peer-review, but they are certainly part of it already.

As was pointed out in a Nature News piece:
To many researchers, such rapid response is all to the good, because it weeds out sloppy work faster. "When some of these things sit around in the scientific literature for a long time, they can do damage: they can influence what people work on, they can influence whole fields," says [David] Goldstein.

Personally, I use my blog for many things. I talk about my own research a lot.

But I also post some fun, off-the-cuff ideas and analyses that, were I more motivated and had infinite time, I could probably polish and write up for peer-review.

It doesn't make those ideas less valuable, just rougher. And I hope someone takes some of them and runs with them. But of course, a major issue here is one of idea attribution.

As Ben Goldacre said in his Correspondence to Nature:
The growth of blogs, Twitter and free online access have caused a welcome explosion in scientific content. But this is atomized and interconnected by a hotchpotch of linking and referencing conventions. If we are going to harness its true value, we shall need dedicated librarians and information scientists to find ways of automating the process of linking content together again. That in itself would be a transgressive scientific innovation.

The current academic reward structures don't give me anything for this blog. So in the meantime I'll continue doing research with people and publishing it in "real" peer review while rolling my eyes at the occasional awkwardness with which academia approaches technology.

Goldacre, B. (2011). Harnessing value of dispersed critiques. Nature, 470(7333), 175. DOI: 10.1038/470175b
Mandavilli, A. (2011). Peer review: Trial by Twitter. Nature, 469(7330), 286-287. DOI: 10.1038/469286a

16.9.11

Thank you Mario! But your methods are in another field!

My love for video games is no secret. I just finished Mass Effect 2 and started Dragon Age 2 (I'm a sucker for BioWare RPGs).


One of the first true peer-reviewed papers I remember reading was Green & Bavelier's 2003 Nature paper "Action video game modifies visual selective attention". In that study the authors performed a series of experiments showing that people with a lot of experience playing "action video games" performed better than non-video game players on a variety of attention tasks. Particularly striking, for example, was just how much better gamers did on a spatial attention task. Check it out:
I mean, look at that! Not even close (VGP = video game players; NVGP = non-VGP). Gamers blow non-gamers out of the water.

So my interest was especially piqued by some news going around today published on Nature's site (written by Mo Costandi) stating that "Video-game studies have serious flaws: Poor design of experiments undermines idea that action games bring cognitive benefit".

Oh. Snap.

Them's fightin' words!

Mo's article references a new Perspective just published in Frontiers in Psychology by Boot, Blakely, and Simons (open access!).

In it the authors note that cognitive transfer studies are quite contentious (see, for example, last year's Owen et al. Nature paper "Putting brain training to the test").

So what are the issues with the video game studies, according to Boot, Blakely, and Simons? They enumerate five separate issues:

  • Overt recruiting (possible differential demand characteristics)
  • Unspecified recruiting method
  • Potential third-variable/directionality problems (cross-sectional design)
  • No test of perceived similarity of tasks and gaming experience
  • Possible differential placebo effects

So basically they're saying that these video game studies aren't using the proper research methodology.

They note that:
Claims that gaming causes cognitive improvements require an experimental design akin to a clinical trial; in this case, a training experiment.

It amazes me how many meta peer-review papers are written that simply reiterate basic research and/or statistical methodologies.

Another recent example of this was a paper in Nature Neuroscience (!) last week by Nieuwenhuis, Forstmann, and Wagenmakers ("Erroneous analyses of interactions in neuroscience: a problem of significance") whose entire point basically boils down to: if you want to claim that two effects differ, test the interaction directly (e.g., with an ANOVA) instead of comparing their separate p-values.

This is stats 101! Hell, it's stats 001! (Ben Goldacre did a great piece on this in the Guardian).

I don't get why these things have to be re-explained every few years. These things just reinforce how domain-specific many academics are in their knowledge (and why I think branching out is so important).

In 2006 there was a paper in The American Statistician by Gelman & Stern titled "The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant".

It's the same basic stats issue that Nieuwenhuis, Forstmann, and Wagenmakers got into Nature Neuroscience just five years later.
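Here's a toy illustration of the fallacy, with made-up numbers rather than data from any of these papers: two groups' improvement scores are tested separately against zero, and the temptation is to conclude the groups differ whenever one test is "significant" and the other isn't. The only test that actually speaks to that claim is the direct comparison (the interaction).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre-to-post improvement scores for two groups (made-up data).
group_a = rng.normal(0.7, 1.0, 20)   # moderately large true improvement
group_b = rng.normal(0.3, 1.0, 20)   # small true improvement

# Within-group tests against zero. One of these will often come out
# "significant" and the other "not significant"...
_, p_a = stats.ttest_1samp(group_a, 0.0)
_, p_b = stats.ttest_1samp(group_b, 0.0)

# ...but that pattern by itself says nothing about whether the groups DIFFER.
# The claim "A improves more than B" needs the direct comparison of the
# improvement scores (equivalently, the group-by-time interaction).
_, p_diff = stats.ttest_ind(group_a, group_b)

print(f"Group A improvement vs. zero: p = {p_a:.3f}")
print(f"Group B improvement vs. zero: p = {p_b:.3f}")
print(f"A vs. B (the interaction):    p = {p_diff:.3f}")

Only that last p-value licenses a claim that the two groups respond differently; eyeballing the first two against a 0.05 cutoff is exactly the error Gelman & Stern and Nieuwenhuis et al. are writing about.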

I mean, I write papers wherein I get locked into detailed, intricate statistics battles with reviewers. Apparently the trick is just to fail at the lowest level possible?

But seriously, I'm not calling any of the video game studies bad; nor, I think, were Boot, Blakely, and Simons.

Science is iterative and it's important to publish, even if the methods and data aren't perfect.

Embrace the imperfections and failures!

That's how this science stuff works.

But it would be nice if the failures weren't so, uh... basic?

Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534-537. PMID: 12774121
Boot, W., Blakely, D., & Simons, D. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2. DOI: 10.3389/fpsyg.2011.00226
Owen, A., Hampshire, A., Grahn, J., Stenton, R., Dajani, S., Burns, A., Howard, R., & Ballard, C. (2010). Putting brain training to the test. Nature, 465(7299), 775-778. DOI: 10.1038/nature09042
Nieuwenhuis, S., Forstmann, B., & Wagenmakers, E. (2011). Erroneous analyses of interactions in neuroscience: A problem of significance. Nature Neuroscience, 14(9), 1105-1107. DOI: 10.1038/nn.2886
Gelman, A., & Stern, H. (2006). The difference between "significant" and "not significant" is not itself statistically significant. The American Statistician, 60(4), 328-331. DOI: 10.1198/000313006X152649

5.9.11

Can brain trauma cause cognitive enhancement?

Another post inspired by Quora. Someone asked the question: "Can brain trauma cause cognitive enhancement?".

Obviously this topic is dear to me, so I felt compelled to answer.

(Read previously on my TEDx talk, my Neuron paper on functional recovery after stroke, my PNAS paper on working memory network deficits after stroke, why we don't need a brain, and my discussion of Rep. Gabrielle Giffords' brain surgery).

The full response to the Quora question is below.

*****

Maybe! But most likely only in very specific cases of brain damage, and only for very specific types of cognitive task.

In 2005, Carlo Reverberi and colleagues published a really cool peer-reviewed paper in the journal Brain:
Better without (lateral) frontal cortex? Insight problems solved by frontal patients. Reverberi C, Toraldo A, D'Agostini S, Skrap M. Brain. 2005 Dec; 128(Pt 12):2882-90.

They studied patients with damage (lesions) to the prefrontal cortex:

They had these patients perform an "insight"-based task. Very simply, subjects were given math problems arranged in toothpicks. The goal was to make the arithmetic work by moving only one toothpick.

Visually:

So for the very first problem, you can see it starts by saying "4 = 3 - 1", which is clearly wrong. But by moving one of the toothpicks in the equal sign over to the minus sign, you swap the two, making an arithmetically sound equation: "4 - 3 = 1".
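Just for fun, here's a tiny sketch of that move as a search problem. This is my own drastic simplification, not anything from the paper: it only considers moving a stick between the operator symbols (turning an "=" into a "-" and vice versa) and then checks whether the resulting equation is true. The real task's search space is much bigger, since sticks can also move within the digits themselves.

def is_valid(tokens):
    """Check that a token list like ['4', '-', '3', '=', '1'], containing
    exactly one '=', expresses a true equation."""
    if tokens.count('=') != 1:
        return False
    eq = tokens.index('=')
    lhs, rhs = ' '.join(tokens[:eq]), ' '.join(tokens[eq + 1:])
    try:
        return eval(lhs) == eval(rhs)
    except (SyntaxError, ZeroDivisionError):
        return False

def one_stick_swaps(tokens):
    """Generate every equation reachable by taking one stick from an '='
    (leaving a '-') and adding it to a '-' (making an '=')."""
    for i, a in enumerate(tokens):
        for j, b in enumerate(tokens):
            if a == '=' and b == '-':
                new = list(tokens)
                new[i], new[j] = '-', '='
                yield new

start = ['4', '=', '3', '-', '1']               # "4 = 3 - 1", which is false
print([t for t in one_stick_swaps(start) if is_valid(t)])
# -> [['4', '-', '3', '=', '1']], i.e. "4 - 3 = 1"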

Without getting into a ton of detail, it turns out that on some specific problem types, patients with lateral prefrontal damage actually performed better than people without brain damage.

The theory behind this sort of fits with what we know about decision-making and expectation. Basically, because of their lesions, these patients have deficits in using contextual information and internal cues to inform their decision-making. But in a difficult task like this one, with a large "search space", that means they don't get stuck in specific patterns: they're a bit more "freed up" from internal expectancies and thus can hit on the correct solution more quickly.

Reverberi, C., Toraldo, A., D'Agostini, S., & Skrap, M. (2005). Better without (lateral) frontal cortex? Insight problems solved by frontal patients. Brain, 128(12), 2882-2890. DOI: 10.1093/brain/awh577