Friday, January 20, 2017

How do you argue for diversity?

During the last couple of months I have been serving as a member of my department's diversity committee, charged with examining policies relating to diversity in graduate and faculty recruitment. I have always placed value on the personal diversity of the people I work with. But until this experience, I hadn't realized how unexamined my thinking on this topic was, and I hadn't explicitly tried to make the case for diversity in our student population. So I was unprepared for the complexity of this issue.* As it turns out, different people have tremendously different intuitions about how to – and whether you should – argue for diversity in an educational setting.

In this post, I want to enumerate some of the arguments for diversity I've collected. I also want to lay out some of the conflicting intuitions about these arguments that I have encountered. But since diversity is an incredibly polarizing issue, I also want to be sure to give a number of caveats. First, this blogpost is about other people’s responses to arguments for diversity; I’m not myself making any of these arguments here. I do personally care about diversity and find some of these arguments more compelling than others, but that’s not what I’m writing about. Second, all of this discussion is grounded in the particular case of diversity in the student body of educational institutions (especially in graduate education). I don’t know enough about workplace issues to comment on them. Third, and somewhat obviously, I don’t speak for anyone but myself. This post doesn’t represent the views of Stanford, the Stanford psych department, or even the Stanford Psych diversity committee.

Tuesday, January 3, 2017

Onboarding

Reading twitter this morning, I saw a nice tweet by Page Piccinini on the topic of organizing project folders:
This is exactly what I do and ask my students to do, and I said so. I got the following thoughtful reply from my old friend Adam Abeles:
He's exactly right. I need some kind of onboarding guide. Since I'm going to have some new folks joining my lab soon, no time like the present. Here's a brief checklist for what to expect from a new project.

Friday, November 4, 2016

Don't bar barplots, but use them cautiously

Should we outlaw the most common visualization in psychology? The hashtag #barbarplots has been introduced as part of a systematic campaign to promote a ban on bar graphs. The argument is simple: barplots mask the distributional form of the data, and all sorts of other visualization forms exist that are more flexible and precise, including boxplots, violin plots, and scatter plots. All of these show the distributional characteristics of a dataset more effectively than a bar plot.
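To make the masking concrete, here's a minimal sketch of my own (simulated data, not from the #barbarplots campaign): two samples with nearly identical means, one unimodal and one bimodal, are indistinguishable in a barplot but obviously different in a violin plot.

```python
# Illustration with hypothetical data: same means, very different shapes.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
unimodal = rng.normal(5, 1, 200)                      # one tight cluster around 5
bimodal = np.concatenate([rng.normal(2, 0.5, 100),    # two clusters whose
                          rng.normal(8, 0.5, 100)])   # overall mean is also ~5

fig, (ax_bar, ax_violin) = plt.subplots(1, 2, figsize=(8, 3))

# Barplot of the means: the two samples look the same.
ax_bar.bar(["unimodal", "bimodal"], [unimodal.mean(), bimodal.mean()])
ax_bar.set_title("barplot of means")

# Violin plot: the distributional difference is immediately visible.
ax_violin.violinplot([unimodal, bimodal], showmeans=True)
ax_violin.set_xticks([1, 2])
ax_violin.set_xticklabels(["unimodal", "bimodal"])
ax_violin.set_title("violin plot")

plt.tight_layout()
plt.show()
```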

Every time the issue gets discussed on twitter, I get a little bit rant-y; this post is my attempt to explain why. It's not because I fundamentally disagree with the argument. Barplots do mask important distributional facts about datasets. But there's more we have to take into account.

Friday, July 22, 2016

Preregister everything

Which methodological reforms will be most useful for increasing reproducibility and replicability? I've gone back and forth on this blog about a number of possible reforms to our methodological practices, and I've been particularly ambivalent in the past about preregistration, the process of registering methodological and analytic decisions prior to data collection. In a post from about three years ago, I worried that preregistration was too time-consuming for small-scale studies, even if it was appropriate for large-scale studies. And last year, I worried that preregistration validates the practice of running (and publishing) one-offs, rather than running cumulative study sets. I think these worries were overblown, and resulted from my lack of understanding of the process.

Instead, I want to argue here that we should be preregistering every experiment we do. The cost is extremely low and the benefits – both to the research process and to the credibility of our results – are substantial. Over the past few months, my lab has begun to preregister every study we run. You should too.

The key insights for me were:
  1. Different preregistrations can have different levels of detail. For some studies, you write down "we're going to run 24 participants in each condition, and exclude them if they don't finish." For others, you specify the full analytic model and the plots you want to make (see the sketch after this list). But there is no study for which you know nothing ahead of time. 
  2. You can save a ton of time by having default analytic practices that don't need to be registered every time. For us these live on our lab wiki (which is private but I've put a copy here).  
  3. It helps me confirm what's ready to run. If a study is registered, then I know we're ready to collect data. I especially like the interface on AsPredicted, which asks coauthors to sign off prior to the registration going through. (This also incidentally makes some authorship assumptions explicit.) 
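As an illustration of the "full analytic model" end of that spectrum, here is a hypothetical sketch of what a pre-specified analysis script might look like. The file name, column names, and model are invented for this example; they are not our lab's actual defaults.

```python
# Hypothetical pre-specified analysis script (all names are illustrative).
import pandas as pd
import statsmodels.formula.api as smf

PLANNED_N_PER_CONDITION = 24   # registered sample size per condition

df = pd.read_csv("experiment_data.csv")   # hypothetical data file

# Registered exclusion rule: drop participants who didn't finish.
df = df[df["finished"]]

# Registered analytic model: rating predicted by condition.
model = smf.ols("rating ~ condition", data=df).fit()
print(model.summary())
```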

Tuesday, July 12, 2016

Minimal nativism

(After blogging a little less in the last few months, I'm trying out a new idea: I'm going to write a series of short posts about theoretical ideas I've been thinking about.)

Is human knowledge built using a set of perceptual primitives combined by the statistical structure of the environment, or does it instead rest on a foundation of pre-existing, universal concepts? The question of innateness is likely the oldest and most controversial in developmental psychology (think Plato vs. Aristotle, Locke vs. Descartes). In modern developmental work, this question so bifurcates the research literature that it can often feel like scientists are playing for different "teams," with incommensurable assumptions, goals, and even methods. But these divisions have a profoundly negative effect on our science. Throughout my research career, I've bounced back and forth between research groups and even institutions that are often seen as playing on different teams from one another (even if the principals involved personally hold much more nuanced positions). Yet it seems obvious that neither has sole claim to the truth. What does a middle position look like?

One possibility is a minimal nativist position. This term is developed in Noah Goodman and Tomer Ullman's work, showing up first in a very nice paper called Learning a Theory of Causality.* In that paper, they write:
... this [work] suggests a novel take on nativism—a minimal nativism—in which strong but domain-general inference and representational resources are aided by weaker, domain-specific perceptual input analyzers.
This statement comes in the context of the authors' proposal that infants' theory of causal reasoning – often considered a primary innate building block of cognition – could in principle be constructed by a probabilistic learner. But that learner would still need some starting point; in particular, the authors' learner had access to 1) a logical language of thought and 2) some basic information about causal interventions, perhaps from the infant's innate knowledge about contact causality or the actions of social agents (these are the "input analyzers" in the quote above).

Tuesday, June 21, 2016

Reproducibility and experimental methods posts

In celebration of the third anniversary of this blog, I'm collecting some of my posts on reproducibility. I didn't initially anticipate that methods and the "reproducibility crisis" in psychology would be my primary blogging topic, but it's become a huge part of what I write about on a day-to-day basis.

Here are my top four posts in this sequence:


Then I've also written substantially about a number of other topics, including publication incentives and the file-drawer problem:


The blog has been very helpful for me in organizing and communicating my thoughts, as well as in collecting materials for teaching reproducible research. I hope to continue thinking about these issues in the future, even as I move back to discussing more developmental and cognitive science topics. 

Sunday, June 5, 2016

An adversarial test for replication success

(tl;dr: I argue that the only way to tell if a replication study was successful is by considering the theory that motivated the original.)

Psychology is in the middle of a sea change in its attitudes toward direct replication. Despite the value of direct replications in providing evidence for the reliability of a particular experimental finding, incentives to conduct them have typically been limited. Increasingly, however, journals and funding agencies value these sorts of efforts. One major challenge has been evaluating the success of direct replication studies. In short, how do we know if the finding is the same?

There has been limited consensus on this issue, so projects have used a diversity of methods. The RP:P 100-study replication project reports several indicators of replication success, including 1) the statistical significance of the replication, 2) whether the original effect size lies within the confidence interval of the replication, 3) the relationship between the original and replication effect sizes, 4) the meta-analytic estimate of effect size combining both, and 5) a subjective assessment of replication success by the team. Mostly these indicators hung together, though there were numerical differences.

Several of these criteria are flawed from a technical perspective. As Uri Simonsohn points out in his "Small Telescopes" paper, as the power of the replication study goes to infinity, the replication will always be statistically significant, even if it's finding a very small effect that's quite different from the original. And similarly, as N in the original study goes to zero (if it's very underpowered), it gets harder and harder to differentiate its effect size from any other, because of its wide confidence interval. So both statistical significance of the replication and comparison of effect sizes have notable flaws.*
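A quick simulation (my own sketch, with invented numbers, not taken from the Small Telescopes paper) makes both flaws concrete: a replication with an enormous sample comes out "significant" even when it detects a trivial effect, and a tiny original study has a confidence interval so wide that it is consistent with almost any replication result.

```python
# Sketch of both flaws (hypothetical numbers, not from any real study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 1) Huge replication, tiny true effect (d = 0.05): still "significant".
n_rep = 10_000
treatment = rng.normal(0.05, 1, n_rep)
control = rng.normal(0.00, 1, n_rep)
t, p = stats.ttest_ind(treatment, control)
print(f"replication with n = {n_rep}/group: p = {p:.4f}")   # typically p < .05

# 2) Tiny original study (n = 10/group, observed d = 0.8): huge CI on the effect.
n_orig, d_orig = 10, 0.8
se_d = np.sqrt(2 / n_orig + d_orig**2 / (4 * n_orig))   # approximate SE of Cohen's d
lo, hi = d_orig - 1.96 * se_d, d_orig + 1.96 * se_d
print(f"original 95% CI on d: [{lo:.2f}, {hi:.2f}]")    # spans nearly everything
```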