Thursday, February 25, 2016

Town hall on methodological issues

Our department just had its first-ever town hall event. The goal was an open discussion of issues surrounding reproducibility and other methodological challenges. Here's the announcement: 
Please join us for a special Psychology Colloquium event: Town Hall on Contemporary Methodological Issues in Psychological Science.

Professors Lee Ross, Mike Frank, and Russ Poldrack will each give a ten-minute talk, sharing their perspectives on contemporary methodological issues within their respective fields. There will be opportunities for both small and large group discussion.
I gave a talk on my evolving views on reproducibility, many of which are summarized here, focusing specifically on the issue that individual studies tend not to be definitive. I advocated for a series of changes to our default practice, including: 
  1. Larger Ns
  2. Multiple internal replications
  3. Measurement and estimation, rather than statistical significance
  4. Experimental “debugging” tools (e.g., manipulation checks, negative/positive controls)
  5. Preregistration where appropriate 
  6. Everything open – materials, data, code – by default
I then illustrated these points with a couple of recent examples of work I've been involved in. If you're interested in seeing the presentation, my slides are available here. Overall, the town hall was a real success, with lots of lively discussion and plenty of students voicing their concerns. 

Thursday, February 18, 2016

Explorations in hierarchical drift diffusion modeling

tl;dr: Adventures in using different platforms/methods to fit drift diffusion models to data. 

The drift diffusion model (DDM) is increasingly a mainstay of research on decision-making, in both neuroscience and cognitive science. The classic DDM describes the decision as a noisy random walk toward one of two response boundaries, which yields a joint distribution over both accuracies and reaction times. This kind of joint distribution is really useful for capturing tasks where there could be speed-accuracy tradeoffs, and hence where classic univariate analyses are uninformative. Here's the classic DDM picture, this version from Vandekerckhove, Tuerlinckx, & Lee (2010), who have a nice tutorial on hierarchical DDMs:

[Figure: the classic drift diffusion model, from Vandekerckhove, Tuerlinckx, & Lee (2010)]
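
To make that picture concrete, here's a minimal simulation sketch (toy code of my own, not from any particular package): an Euler-discretized diffusion between two boundaries, using the standard parameter names (v for drift rate, a for boundary separation, z for starting point, t0 for non-decision time) and the within-trial noise convention of 1.

```python
import numpy as np

def simulate_ddm(v, a, z, t0, n_trials=1000, dt=0.001, sd=1.0, seed=1):
    """Toy Euler-discretized DDM: accumulate noisy evidence until it
    hits 0 (lower boundary) or a (upper boundary).

    v: drift rate; a: boundary separation; z: starting point as a
    proportion of a; t0: non-decision time (seconds).
    Returns arrays of RTs (seconds) and choices (1 = upper boundary).
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    choices = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            # Euler-Maruyama step: drift plus Gaussian diffusion noise
            x += v * dt + sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + t0
        choices[i] = int(x >= a)
    return rts, choices

rts, choices = simulate_ddm(v=1.0, a=2.0, z=0.5, t0=0.3)
print(f"p(upper) = {choices.mean():.2f}, mean RT = {rts.mean():.2f}s")
```

With a positive drift rate, most trials terminate at the upper boundary, and the resulting RT distribution shows the characteristic right skew in the figure above.
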
We recently started using the DDM to try to understand decision-making behavior in the kinds of complex inference tasks that my lab and I have been studying for the past couple of years. For example, in one recently submitted paper, we use the DDM to look at decision processes for inhibition, negation, and implicature, trying to understand the similarities and differences among these three tasks:

[Figure: the inhibition, negation, and implicature tasks]
We had initially hypothesized that performance in the negation and implicature tasks (our target tasks) would correlate with inhibition performance. It didn't, and what's more, the data seemed to show very different patterns across the three tasks. So we turned to the DDM to understand a bit more about the decision process in each of these tasks.* In a second submitted paper, we also looked at decision-making during "scalar implicatures": the inference that "I ate some of the cookies" implies that I didn't eat all of them. In both cases, we wanted to know what was going on in these complex, failure-prone inferences.
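
One hierarchical ecosystem I'll come back to is the HDDM Python package (Wiecki, Sofer, & Frank, 2013). As a hedged sketch of what a task comparison like the one above might look like there, here's a model that lets drift rate vary by task while sharing the other parameters; the file and column names ('rt', 'response', 'task') are hypothetical stand-ins, not our actual data.

```python
import hddm

# HDDM expects RTs in seconds, responses coded 0/1, and a 'subj_idx'
# column identifying participants for the hierarchical structure.
data = hddm.load_csv('inference_tasks.csv')

# Drift rate v varies by task (inhibition / negation / implicature);
# boundary separation and non-decision time are shared across tasks.
model = hddm.HDDM(data, depends_on={'v': 'task'})
model.sample(5000, burn=1000)   # MCMC sampling
model.print_stats()             # posterior summaries for each parameter
```
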

An additional complexity was that we are interested in the development of these inferences in children. The DDM has not been used much with children, largely because of the large number of trials it seems to require. But we were inspired by a recent paper by Ratcliff (one of the key figures in the DDM literature) that used DDMs on data from elementary-school-aged children. And since we have been using iPad experiments to get RTs and accuracies from preschoolers, we thought we'd try to do these analyses with data from both kids and adults.

But... it turns out that it's not trivial to fit DDMs (especially the more interesting variants) to data, so I wanted to use this blog post to document my process of exploring different ecosystems for DDM and hierarchical DDM.
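
To give a flavor of the simplest end of that spectrum (a baseline I'm including for illustration, not necessarily one of the ecosystems I'll cover), the EZ-diffusion method of Wagenmakers, van der Maaten, & Grasman (2007) inverts the model in closed form from just accuracy, RT mean, and RT variance, under the s = 0.1 scaling convention:

```python
import numpy as np

def ez_diffusion(p_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    p_correct: proportion correct (must not be exactly 0, 0.5, or 1);
    rt_var, rt_mean: variance and mean of correct RTs, in seconds.
    Returns (drift rate v, boundary separation a, non-decision time Ter).
    """
    L = np.log(p_correct / (1 - p_correct))  # logit of accuracy
    x = L * (L * p_correct**2 - L * p_correct + p_correct - 0.5) / rt_var
    v = np.sign(p_correct - 0.5) * s * x**0.25
    a = s**2 * L / v
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    return v, a, rt_mean - mdt

# Example: 80% correct, RT variance 0.112, mean RT 0.723s
print(ez_diffusion(p_correct=0.8, rt_var=0.112, rt_mean=0.723))
```

EZ-diffusion is appealingly transparent, but it estimates only the three core parameters and assumes no bias or across-trial variability, which is exactly why the hierarchical ecosystems become interesting for data like ours.
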