darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.


27.9.11

Zombies (and me!) at the California Academy of Sciences NightLife!

You like zombies? Of course you do.
You like science? Who doesn't?!!

Well this October 27 the California Academy of Sciences will have yours truly giving a talk at their ever-so-popular NightLife event!

I'll be bringing real human brains for a hands-on neuroanatomy demonstration!


I'll also be talking about the zombie brain and how neuroscience can help you and your loved ones survive the zombie apocalypse! :D


Seriously this is going to be a lot of fun: there will be a zombie drag show and costume contest, undead makeup artists, demos of the new zombie video game hotness Dead Island, and a special planetarium event about "zombie" stars.

Buy tickets now!

26.9.11

Thinking with Portals

My love for Portal, Portal 2, and video games in general is certainly no secret. Nor is my love for comics and other geekery.

Recently I played through the Portal 2 single- and multi-player campaigns (with my Activision and Call of Duty buddy, Bryan).

I love the Portals.

(Screenshot from Portal 2)

Of course, me being... well... me, I couldn't help but think about why I loved the game and then a bunch of weird brainy neuro stuff.

Basically the end result of my over-intellectualization was that sometimes games are just fun.

That's it. End of post! Hope you enjoyed it.


.
.
.


Oh. You're still here? ::sigh::

Okay, I guess I can continue on. For science. Or rather, for neuroscience.

If you're unfamiliar with the Portal games: a wonderfully homicidal AI is trying to murder you in the name of advancing science. Your goal is to survive her tests using only a gun. But this is a special gun. It creates portals that allow you to teleport between portal A and portal B. Momentum, a function of mass and velocity, is conserved between portals. In layman's terms: speedy thing goes in, speedy thing comes out.

This game has some ridiculously crazy 3D spatial-reasoning puzzles, and I got to thinking about how the hell I was able to shoot a portal at a wall, shoot another across the room, and know that when I went into one I would come out the other. The brain has to do a lot of work to make that possible!

Seriously, some of these puzzles are mind-bending. Watch this speed run:



Get all that? There's a reason why scicurious gets motion sick when she tries to play the Portal games.

(As an aside, it also contains one of the best end game songs ever, written by Jonathan Coulton.)

In this post I'll give a brief introduction about how the human brain can even conceive of teleporting between two portals. More specifically I'll talk about visual attention and working memory. You can't really conceive of moving from portal A to portal B without first knowing where the two portals are relative to each other and to the room you're in.

I want to make it clear how hard it is to study something like spatial attention and memory. These concepts are more metaphors or placeholder terms we use in neuroscience to describe observable psychological and behavioral phenomena than actual brain processes. They're kind of ill-defined and nebulous, though there's a massive literature that attempts to unite the behavioral with the neuronal.

The first mind-blowing spatial attention thing to know about is hemispatial neglect. The most common form of hemispatial neglect results from damage to the right posterior parietal lobe. It manifests as an inability to attend to, or even conceive of, one half of visual space.

What does that mean?

Well, check this out:

Literally, there is no conception of "leftness".

The above examples are two drawings from a patient with hemineglect. Notice that, when copying a drawing, the patient misses the whole left half of the object. And when free drawing, they show a similar effect. There's a great review on this topic by Masud Husain and Chris Rorden from 2003 in Nature Reviews Neuroscience.

In that paper the authors do an amazing job summarizing what we knew about neglect at the time. I'll also take this moment to emphasize yet again how much we've learned about human cognition from work done with patients with brain lesions.

Visual attention is an enormous research domain that strongly overlaps with research into visual working memory.

The thing about working memory is that there's a lot of controversy in the field about, well, how it works. There's a really cool researcher by the name of Paul Bays (who worked with Husain of the previously mentioned paper) who summed up the debate very succinctly in the introduction of a 2009 paper in the Journal of Vision:

The mechanisms underlying visual working memory have recently become controversial. One account proposes a small number of memory "slots", each capable of storing a single visual object with fixed precision. A contrary view holds that working memory is a shared resource, with no upper limit on the number of items stored; instead, the more items that are held in memory, the less precisely each can be recalled.

This all stems from George Miller's famous paper from 1956, The magical number seven, plus or minus two: some limits on our capacity for processing information (decent Wikipedia summary here).
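The two accounts in Bays' quote make opposite predictions about what happens as you try to hold more items in mind. Here's a toy sketch contrasting them (the function names, slot count, and numbers are mine, purely illustrative; they aren't from either paper):

```python
def slot_model(n_items, n_slots=4, slot_precision=1.0):
    """Slot model: up to n_slots items are stored, each with a fixed
    precision; any items beyond that are not stored at all."""
    stored = min(n_items, n_slots)
    return [slot_precision] * stored + [0.0] * (n_items - stored)

def resource_model(n_items, total_resource=4.0):
    """Shared-resource model: every item is stored, but each gets a
    smaller share of a fixed pool as the number of items grows."""
    return [total_resource / n_items] * n_items

# With few items the models agree; with many, they diverge sharply:
print(slot_model(8))      # 4 items at full precision, 4 lost entirely
print(resource_model(8))  # all 8 items stored, each imprecisely
```

The empirical question is which pattern of recall errors people actually show as set size grows, which is exactly what the Bays paper measures.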

Anyway, visual attention and working memory are extremely difficult to disentangle. You can see the strong relationship between the two concepts in brainSCANr:
(In fact, I believe a big chunk of working memory research is conflated with attention... I've got a project to try to demonstrate just that.)

The number of experimental paradigms used to study attention and/or working memory is enormous, so although there are fairly "standard" paradigms, even slight differences in stimulus presentation, timing, task, etc. can lead to fairly big differences in results.

However, visual experiments that require a person to sit in a darkened room while images flash at them on a computer screen have led a number of researchers to question what is actually being tested in these situations. Does remembering when an X appears on a screen really encapsulate human memory? Does noticing a green square in a sea of red squares typify attention?

Of course, more ecologically valid ("real-world" scenario) experiments sacrifice experimental control for that validity... making the whole damned thing a mess.

But this messiness is part of the allure... if it was easier we'd have figured it out decades ago and I'd be out of a job! :)

Husain, M., & Rorden, C. (2003). Non-spatially lateralized mechanisms in hemispatial neglect. Nature Reviews Neuroscience, 4(1), 26-36. DOI: 10.1038/nrn1005
Bays, P., Catalao, R., & Husain, M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9(10), 7. DOI: 10.1167/9.10.7
Miller, G.A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review, 63(2), 81-97. PMID: 13310704

22.9.11

"Peer-review" does not equal "publisher-owned journal"

Yesterday was my first real day working at UCSF. Most of it was spent filling out paperwork and completing all of the regulatory training.

For those of you unfamiliar with the regulations required for human research, let me just say that they are legion, for they are many. In order to work with human subjects you have to (among other requirements) complete several online courses and questionnaires.

This is usually all well and good if not a bit annoying... (Yes, I know I shouldn't inject my subjects with radioactive spider venom without their consent. No, I didn't consider that radioactive spider venom would be an Investigational New Drug. Yes, I disclosed my consultancy income from OSCORP.)

Anyway, many universities such as UCSF, Berkeley, etc. require researchers who work with human subjects to complete their online training on a specific website: the Collaborative Institutional Training Initiative.

For example I had to complete the "CITI Good Clinical Practice", "Human Subjects Research", and the "Responsible Conduct of Research" curricula. While answering questions in the latter, I encountered the following (which I got "wrong"):
(Someone needs to "peer-review" the inconsistent capitalization in the headers and the superfluous comma in that "Comment".)

This question struck me as especially odd. "Why," I asked, "is this unscientific, unsubstantiated question here? Are they afraid that researchers will stop publishing their research in peer-reviewed journals and just BLOG everything?"

Does "those who have an interest in the work" not include peers? Have the authors not heard of arXiv.org?

Of course "BLOGS" aren't a replacement for peer review. Blogs can be peer-reviewed, though, and "peer-review" is not equivalent to a publisher-owned journal. Where does the "blog" end and the online journal with commenting begin?

As Bora said over at the SciAm blogs:
Blog is software. Blog is primarily a platform. It is a piece of software that makes publishing cheap, fast and easy. What one does with that platform is up to each individual person or organization.

He points out the open science approach by Rosie Redfield as an example of peer-review that can be moved onto a blog to some extent.

Blogs may not replace peer-review, but they are certainly part of it already.

As was pointed out in a Nature News piece:
To many researchers, such rapid response is all to the good, because it weeds out sloppy work faster. "When some of these things sit around in the scientific literature for a long time, they can do damage: they can influence what people work on, they can influence whole fields," says [David] Goldstein.

Personally, I use my blog for many things. I talk about my own research a lot.

But I also post some fun, off-the-cuff ideas and analyses that, were I more motivated and had infinite time, I could probably polish and write up for peer-review.

It doesn't make those ideas less valuable, just more rough. And I hope someone takes some of them and runs with them. But of course, a major issue here is one of idea attribution.

As Ben Goldacre said in his Correspondence to Nature:
The growth of blogs, Twitter and free online access have caused a welcome explosion in scientific content. But this is atomized and interconnected by a hotchpotch of linking and referencing conventions. If we are going to harness its true value, we shall need dedicated librarians and information scientists to find ways of automating the process of linking content together again. That in itself would be a transgressive scientific innovation.

The current academic reward structures don't give me anything for this blog. So in the meantime I'll continue doing research with people and publishing it in "real" peer review while rolling my eyes at the occasional awkwardness with which academia approaches technology.

Goldacre, B. (2011). Harnessing value of dispersed critiques. Nature, 470(7333), 175. DOI: 10.1038/470175b
Mandavilli, A. (2011). Peer review: Trial by Twitter. Nature, 469(7330), 286-287. DOI: 10.1038/469286a

16.9.11

Thank you Mario! But your methods are in another field!

My love for video games is no secret. I just finished Mass Effect 2 and started Dragon Age 2 (I'm a sucker for BioWare RPGs).


One of the first true peer-reviewed papers I remember reading was Green & Bavelier's 2003 Nature paper "Action video game modifies visual selective attention". In that study the authors performed a series of experiments showing that people with a lot of experience playing "action video games" performed better than non-players on a variety of attention tasks. Particularly striking, for example, was just how much better gamers did on a spatial attention task. Check it out:
I mean, look at that! Not even close (VGP = video game players; NVGP = non-VGP). Gamers blow non-gamers out of the water.

So my interest was especially piqued by some news going around today published on Nature's site (written by Mo Costandi) stating that "Video-game studies have serious flaws: Poor design of experiments undermines idea that action games bring cognitive benefit".

Oh. Snap.

Them's fightin' words!

Mo's article references a new Perspective just published in Frontiers in Psychology by Boot, Blakely, and Simons (open access!).

In it the authors note that cognitive transfer studies are quite contentious (see, for example, last year's Owen et al. Nature paper "Putting brain training to the test").

So what are the issues with the video game studies, according to Boot, Blakely, and Simons? They enumerate five separate issues:

  • Overt recruiting (possible differential demand characteristics)
  • Unspecified recruiting method
  • Potential third-variable/directionality problems (cross-sectional design)
  • No test of perceived similarity of tasks and gaming experience
  • Possible differential placebo effects

So basically they're saying that these video game studies aren't using the proper research methodology.

They note that:
Claims that gaming causes cognitive improvements require an experimental design akin to a clinical trial; in this case, a training experiment.

It amazes me how many meta peer-reviewed papers are written simply to reiterate basic research and/or statistical methodologies.

Another recent example of this was a paper in Nature Neuroscience (!) last week by Nieuwenhuis, Forstmann, and Wagenmakers ("Erroneous analyses of interactions in neuroscience: a problem of significance") whose entire point basically boils down to: test the interaction directly (with an ANOVA, say) rather than comparing two separate significance tests.

This is stats 101! Hell, it's stats 001! (Ben Goldacre did a great piece on this in the Guardian).

I don't get why these things have to be re-explained every few years. It just reinforces how domain-specific many academics' knowledge is (and why I think branching out is so important).

In 2006 there was a paper in The American Statistician by Gelman & Stern titled "The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant".

Same basic stats issue as the one that Nieuwenhuis, Forstmann, and Wagenmakers got into Nature Neuroscience just 5 years later.
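The Gelman & Stern fallacy is easy to demonstrate with a toy example (the effect sizes below are invented for illustration): one effect clears p < 0.05, a second just misses it, yet the two effects are statistically indistinguishable from each other.

```python
from math import erf, sqrt

def z_to_p(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Two effects measured with the same standard error:
effect_a, effect_b, se = 2.1, 1.7, 1.0
print(round(z_to_p(effect_a / se), 3))  # 0.036 -> "significant!"
print(round(z_to_p(effect_b / se), 3))  # 0.089 -> "not significant"

# The correct comparison tests the *difference* between the effects:
z_diff = (effect_a - effect_b) / sqrt(se ** 2 + se ** 2)
print(round(z_to_p(z_diff), 3))         # 0.777 -> no evidence they differ
```

Concluding that effect A is "real" and effect B isn't, just because one crossed the 0.05 line, is exactly the error both papers are complaining about.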

I mean, I write papers wherein I get locked in detailed, intricate statistics battles with reviewers. Apparently the trick is to just fail at the lowest level possible?

But seriously I'm not calling any of the video game studies bad; nor do I think were Boot, Blakely, and Simons.

Science is iterative and it's important to publish, even if the methods and data aren't perfect.

Embrace the imperfections and failures!

That's how this science stuff works.

But it would be nice if the failures weren't so, uh... basic?

Green, C.S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534-537. PMID: 12774121
Boot, W., Blakely, D., & Simons, D. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2. DOI: 10.3389/fpsyg.2011.00226
Owen, A., Hampshire, A., Grahn, J., Stenton, R., Dajani, S., Burns, A., Howard, R., & Ballard, C. (2010). Putting brain training to the test. Nature, 465(7299), 775-778. DOI: 10.1038/nature09042
Nieuwenhuis, S., Forstmann, B., & Wagenmakers, E. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience, 14(9), 1105-1107. DOI: 10.1038/nn.2886
Gelman, A., & Stern, H. (2006). The difference between "significant" and "not significant" is not itself statistically significant. The American Statistician, 60(4), 328-331. DOI: 10.1198/000313006X152649

13.9.11

Uberdata: How prostitution and alcohol make Uber better

As some of you know, I've been doing some work for a transportation startup in San Francisco called Uber (prior to starting my post-doc at UCSF).

Weird job for a neuroscientist, I know. But I'll explain why in a bit.

I don't want this post to sound like an advertisement for them, but I think a lot of the readers of this blog (who are here for the neuroscience) might find my most recent blog post for Uber interesting. And you might also find why I'm working with them interesting.

Academia is great. I love it. But there's a certain lethargy and insularity that I wanted to break out of for a bit. Working at an awesome, tech-driven startup with kick-ass engineers for three months was an amazing experience. My startup sabbatical. And I learned data storage, retrieval, and analysis techniques (and gained some new programming skills) that I just wouldn't have picked up in academia.

Part of what I've been doing for Uber is writing data-driven blog posts. I think the most recent one is pretty wild. So here's the post in its entirety, but you can read it over on the Uber blog here.

*****

What up humans?! Bradley Voytek here again. Man do we have some crazy #uberdata for you today.

Today is Uber: Freakonomics edition.

In this post I'll show how where crimes occur—specifically prostitution, alcohol, theft, and burglary—improves Uber's demand prediction models.

As you know, the three of us in Uber team Science (below) are pretty busy nerding it up around the Uber offices all day adding numbers together, pouring colored liquids into beakers, that sort of thing.

Revenge of the Uber Nerds!


One of the most important jobs we do (second only to keeping our Uber Science mutants securely locked up) is to accurately predict demand to make sure you get a car when you want one. We've managed to do this pretty well so far, but we're continually making tweaks to improve things.

One way of predicting demand is by knowing when people want to ride with us.

Another factor is knowing where people will want rides. Our drivers have a pretty intuitive understanding of this, but we believe that math makes everything better, so we wanted to have some quantification.

This is a harder problem.

But a few weeks ago I attended Sci Foo at Google and got a ton of crazy, dirty #dataporn ideas.

As you know, location is important to us because proper supply positioning lets us reduce pickup times. For example, we're obviously going to see an increase in trips in SoMa near AT&T Park before and after Giants games.

The first issue we encountered is determining an easy, intuitive way to break a city into discrete "places". While mathematically this isn't necessary, in terms of communicating the data internally it's very important.

Thanks to Zillow we were able to extract the complex boundaries for neighborhoods in each Uber city. Check out the 34 neighborhoods in San Francisco:

San Francisco neighborhoods


First: shut up. I don't care if your neighborhood isn't part of that map. I know that the Tender Nob is sick. But if I had to figure out all of the boundaries of all of the sub-sub-sub-neighborhoods of San Francisco, I'd be able to figure out the length of the British coastline (nerd joke).

Unfortunately, figuring out whether or not a geographic point is inside one of those complicated shapes is... complicated.

Haha, just kidding. We got this.
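The post doesn't say how we implemented the point-in-neighborhood check, but the classic approach is the ray-casting (even-odd) test: cast a ray from the point and count edge crossings. A minimal sketch with a made-up toy polygon (real neighborhood boundaries, like Zillow's, are just much longer coordinate lists):

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: cast a ray east from the point and count how
    many polygon edges it crosses. An odd count means we're inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the point's latitude?
        if (y1 > lat) != (y2 > lat):
            # Longitude at which the edge crosses that latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Toy "neighborhood": a unit square
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(1.5, 0.5, square))  # False
```

In practice a geometry library (e.g. Shapely's polygon-contains query) handles the edge cases, but the idea is this simple.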

The first thing we did was to look at how many trips we've done per neighborhood. Check it out:

Uber Mondrian - SF


(Map colors based on www.ColorBrewer.org, by Cynthia A. Brewer, Penn State.)


Uber Mondrian!

Here's Manhattan (and Williamsburg):

Uber Mondrian - NYC


You'll notice that we do the most San Francisco trips in the downtown and SoMa areas. These also happen to be large, densely populated regions, so that's to be expected. So in our spatial demand predictions we clearly need to take into account population density.

But there's a catch. While neighborhood population density might account for some of the variance in our demand, we also need to take into account where people are hanging out, going to work, etc. This is different from census data. Where people live, where people work, and where people play are (usually) in very different neighborhoods in a densely populated city.

So we needed a simple surrogate metric for where people are. We could do that by counting the number of businesses or bars or whatever in a neighborhood... but we had a better idea.

Crime.

We hypothesized that crime would be a proxy for non-residential population density.

According to the data from San Francisco Crimespotting (HUGE shout-out to Stamen Design for the data; you guys are awesome!), there were 75,488 crimes in San Francisco since Uber's launch on 2010 June 01. These crime data are broken down into 12 categories: murder, robbery, aggravated assault, simple assault, arson, theft, vehicle theft, burglary, vandalism, narcotics, alcohol, and prostitution.

Let's map that:

San Francisco crime


If it looks kind of like the trips map to you, that's because the two are decently correlated (r = 0.56, p < 0.001). (For you math sticklers: crime and trip data are log-distributed by neighborhood, so all correlations are Spearman rank correlations, but log-log Pearson correlations give approximately the same results.)
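For the curious: Spearman's rho is just a Pearson correlation computed on ranks, which is why it doesn't care that the raw counts are log-distributed. A self-contained sketch (the per-neighborhood counts below are made up; the real data aren't published here):

```python
def rankdata(xs):
    """Rank values from 1..n, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average of tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

crimes = [1200, 340, 95, 2210, 780, 150, 60, 430]  # crimes per neighborhood
trips = [900, 210, 40, 1800, 500, 120, 25, 300]    # rides per neighborhood
print(round(spearman(crimes, trips), 2))  # 1.0: perfectly monotone toy data
```

(In real analyses you'd just call a library routine like scipy.stats.spearmanr, which does exactly this plus the p-value.)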

Neighborhoods with more crime (more people hanging out) have more Uber rides.

But we also wanted to know if any specific crimes might be better predictors of rides than others.

To examine this we looked at the correlation between the number of each type of crime and the number of trips we've done in each neighborhood. All types of crime except murder, vehicle theft, and arson were positively correlated with number of trips. After correcting for multiple comparisons, four crimes remained significantly correlated (p < 0.05, Bonferroni corrected):
  • Prostitution
  • Alcohol
  • Theft
  • Burglary

In other words:
The parts of San Francisco that have the most prostitution, alcohol, theft, and burglary also have the most Uber rides! Party hard but be safe, Uberites!
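The Bonferroni step above is the bluntest way to handle multiple comparisons: with m tests, each individual p-value has to beat alpha / m. A sketch with hypothetical p-values for the 12 crime categories (these numbers are invented, not the real ones from our analysis):

```python
def bonferroni(p_values, alpha=0.05):
    """A p-value survives Bonferroni correction if p < alpha / m,
    where m is the total number of tests performed."""
    m = len(p_values)
    return {name: p < alpha / m for name, p in p_values.items()}

# Hypothetical per-category p-values (illustrative only)
p_values = {
    "prostitution": 0.0001, "alcohol": 0.0005, "theft": 0.001,
    "burglary": 0.002, "vandalism": 0.01, "narcotics": 0.02,
    "robbery": 0.03, "aggravated_assault": 0.04, "simple_assault": 0.05,
    "murder": 0.6, "vehicle_theft": 0.7, "arson": 0.8,
}
survivors = [name for name, ok in bonferroni(p_values).items() if ok]
print(survivors)  # ['prostitution', 'alcohol', 'theft', 'burglary']
```

Note how vandalism at p = 0.01 would look "significant" on its own but dies under correction: with 12 tests the bar is 0.05 / 12, about 0.004.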

Of course this isn't in any way causal. I don't think our Uber riders are causing more prostitution. Right guys?

Like I said above, this effect probably reflects population density in terms of where people socialize: the more people that are hanging out in an area, the more prostitution, alcohol, and theft there is. Makes sense.

Now, let's go back to the timing thing. We know that Uber rides change by hour and day of week. What about crime?

Across all crimes there's not much variation in the total number of crimes between days of the week. Within a day, however, there are a lot of ups and downs. It turns out that the number of crimes peaks between 6 and 8pm.
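Finding that peak is just a matter of binning incidents by hour of day. A tiny sketch (the timestamps below are made up; the real ones come from the Crimespotting data):

```python
from collections import Counter

# Hypothetical hour-of-day values extracted from incident timestamps
hours = [18, 19, 7, 20, 18, 12, 19, 2, 18, 19, 20, 18]

by_hour = Counter(hours)
peak_hour, peak_count = by_hour.most_common(1)[0]
print(peak_hour)  # 18, i.e. the 6pm bin in this toy sample
```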

But there was one surprise. One crime, beyond all the rest, had a specifically BIG peak on a specific day.

Prostitution.

On Wednesday nights.

This was so surprising to me that I double-checked the effect by looking at crimes in Oakland, too. Oakland Crimespotting also had much more data: 152,730 crimes in the database since 2008 Jan 01.

We got the same effect. Check out Oakland's data:

Oakland hump day


Now mind you, at this point I've strayed from the Uber ride-prediction path. Crime is a good proxy for the "activity" of a city, but the timing of the crimes doesn't really correlate with our ride patterns.

From here on out, everything in this post is purely for my love of #dataporn and my inner scientist getting all giddy over a neat effect. This was just too fascinating a finding for me to let go (I'm a scientist, dammit!). I needed to figure out why.

Why Wednesday nights?!

Hell, I even stopped to talk to two cops in Berkeley to see if they knew of any reason why prostitution crimes peaked at this time (seriously). They had no idea. And they probably thought that the weird math nerd babbling to them about statistics and prostitution was off his nut.

But then someone pointed out to me that Social Security and welfare checks arrive on the second, third, and fourth Wednesdays of each month.

Oh man. Now I've gotten myself into dangerous, politically-charged territory.

Keep in mind we're only talking about 4-5 prostitution crimes each Wednesday. This is pretty low considering the cities we're talking about have populations in the hundreds of thousands to millions. So before you go running off screaming about how the welfare state is subsidizing sexy times for retirees, chill out and keep that in mind.

It turns out that there are significantly more prostitution crimes on the second Wednesday of each month compared to the first (p < 0.01):

Prostitution Wednesdays
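The post doesn't say which test produced that p < 0.01; one simple, assumption-light way to compare first- and second-Wednesday counts is a permutation test: shuffle the day labels many times and ask how often chance alone produces a gap as big as the observed one. A sketch on hypothetical counts (not the real data):

```python
import random

# Hypothetical prostitution-crime counts (a few per Wednesday, as in the post)
first_weds = [2, 3, 2, 1, 3, 2, 2, 3]    # counts on first Wednesdays
second_weds = [5, 4, 6, 5, 4, 5, 6, 4]   # counts on second Wednesdays

def perm_test(a, b, n_iter=10000, seed=0):
    """One-sided permutation test: how often does shuffling the labels
    produce a mean difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = sum(b) / len(b) - sum(a) / len(a)
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = sum(pooled[len(a):]) / len(b) - sum(pooled[:len(a)]) / len(a)
        if diff >= observed:
            hits += 1
    return hits / n_iter

p = perm_test(first_weds, second_weds)
print(p)  # effectively zero for toy data this cleanly separated
```

With only 4-5 events per Wednesday, having many weeks of data is what makes a small absolute difference statistically detectable at all.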


Why? Well one possibility is that on the second Wednesday, people get their checks after two weeks without any income. The first Wednesday: no checks. Second Wednesday: cash in hand!

It might be that any time there's an influx of cash into a city, there's also a bump in prostitution crimes. That's harder to check, but worth following up.

Mind you, I don't see this effect for any other types of crimes. Just prostitution.

This doesn't prove anything conclusively, of course. And again, we're talking about a difference of, on average, only a few extra cases of prostitution. But because we have so much data we can get a good assessment of the statistical significance of this effect.

This is one of the coolest things about working for a data-driven company like Uber: on the surface we're a transportation company, but under the hood there are so many ways to look at our data. And sometimes that freedom to play leads to interesting results.

This finding is a perfect example of the fascinating insights you can get when you combine big datasets. By trying to figure out how to predict where to position our cars, we got a peek at the ebb and flow of the life and crimes of San Francisco. Expect more of these kinds of posts in the next couple of weeks.

We've got a lot of cool stuff in store, I promise you!

5.9.11

Can brain trauma cause cognitive enhancement?

Another post inspired by Quora. Someone asked the question: "Can brain trauma cause cognitive enhancement?".

Obviously this topic is dear to me, so I felt compelled to answer.

(Read previously on my TEDx talk, my Neuron paper on functional recovery after stroke, my PNAS paper on working memory network deficits after stroke, why we don't need a brain, and my discussion of Rep. Gabrielle Giffords' brain surgery).

The full response to the Quora question is below.

*****

Maybe! But most likely only in very specific cases of brain damage, and only for very specific types of cognitive task.

In 2005, Carlo Reverberi and colleagues published a really cool peer-reviewed paper in the journal Brain:
Better without (lateral) frontal cortex? Insight problems solved by frontal patients. Reverberi C, Toraldo A, D'Agostini S, Skrap M. Brain. 2005 Dec; 128(Pt 12):2882-90.

They studied patients with damage (lesions) to the prefrontal cortex:

They had these patients perform an "insight"-based task. Very simply, subjects were given a math problem arranged in toothpicks. The goal was to make the arithmetic work by moving only one toothpick.

Visually:

So for the very first problem, you can see it starts by saying "4 = 3 - 1" which is clearly wrong. But by moving one of the toothpicks in the equal sign over to the minus sign, you swap the two, making an arithmetically sound equation: "4 - 3 = 1".

Without getting into a ton of details, it turns out that in some very specific cases, patients specifically with lateral prefrontal damage performed better than people without brain damage.

The theory behind this sort of fits with what we know about decision making and expectation. Because of their lesions, these patients have deficits in using contextual information and internal cues to inform their decision-making. But in a difficult task such as this, with a large "search space", they don't get stuck in specific patterns: they're a bit more "freed up" from internal expectancies and thus can hit on the correct solution more quickly.

Reverberi, C., Toraldo, A., D'Agostini, S., & Skrap, M. (2005). Better without (lateral) frontal cortex? Insight problems solved by frontal patients. Brain, 128(12), 2882-2890. DOI: 10.1093/brain/awh577