Saturday, April 26, 2014

Data analysis, one step deeper

tl;dr: Using git+R is good for reproducible research. If you already knew that, then you won't learn a lot here.

I just read Dorothy Bishop's new post, Data analysis: Ten tips I wish I'd known sooner. I really like her clear, commonsense recommendations and agree with just about all of them. But in the last couple of years, I've become convinced that even for the less technically minded student (let alone for the sophisticated researcher), the spirit of many of her tips can be implemented using open tools like git and R. As a side benefit, many of the recommendations are a natural part of the workflow in these tools, rather than requiring extra effort.

My goal in this post is to explain how this ecosystem works, and why it (more or less) does the right thing. I also want to show you why I think it's a pretty good tradeoff between learning time and value added. Of course, you can get even more reproducible, managing your project on the Open Science Framework (and using their git support), and use Sweave and LaTeX to typeset exactly the same code you've written. These are all great things. But for many people, starting out with such a complex, interlocking set of tools can be quite daunting. I'd like to think the git+R setup that we use strikes a good balance.

Bishop's recommendations implicitly address several major failure modes for data analysis:
  1. Not being able to figure out what set of steps you did, 
  2. Not being able to figure out what those steps actually accomplished, and
  3. Not being able to reconstruct the dataset you did them to.
These are problems in the reproducibility of your analysis, and as such, pose major risks to the basic science you're trying to do. The recommendations that address these issues are very sensible: keep track of what you did (recs 8 and 9), label and catalogue your variables in a semantically transparent way (recs 2 and 4), archive and back up your data (recs 5 and 6). Here's how I accomplish this in git+R.

Writing analyses in R as a keystone of reproducible analysis

Bishop's recommendations focus on the importance of keeping a log of analyses. This is the classic best-practices approach in bench science: keep a lab notebook! Although I don't think you can go wrong with this approach, it has a couple of negatives. First, it requires a lot of discipline. If you get excited and start doing analyses, you have to stop yourself and remember to document them fully. Second, keeping a paper lab notebook means going back and forth between computer and notebook all the time (and being able to work on analyses only when you have the notebook with you). On the other hand, an electronic notebook can pose major formatting difficulties when you try to include code, data, images, and so forth.

These problems have been solved very nicely by IPython, an interactive notebook that allows the inclusion of data, code, images, and text in a single flexible format. I suspect that once this approach is truly mature and can be used across languages, interactive notebooks are what we all should be using. But I am not yet convinced that we should be writing Python code to analyze our data – and I definitely don't think we should start students out this way. Python is a general-purpose language (and a much better one than R), but the idioms of data analysis are not yet as codified or as accessible in it, even though they are improving rapidly.

In the meantime, I think the easiest way for students to learn to do reproducible data analysis is to write well-commented R scripts. These scripts can simply be executed to produce the desired analyses. (There is of course scripting functionality in SPSS as well, but the combination of clicking and scripting can be devastating to reproducibility: the script gives the impression of reproducibility while potentially depending on some extra ad-hoc clicks that are not documented).

The reasons why I think R is a better data analysis language for students to learn than python are largely due to Hadley Wickham, who has done more than anyone else to rationalize R analysis. In particular, a good, easy-to-read analysis will typically only have a few steps: read in the data, aggregate the data across some units (often taking means across conditions and subjects), plot this aggregated data, and apply a statistical model to characterize patterns seen in the plots. In the R ecosystem, each of these can be executed in only one or at most a few lines of code.

Here's an example from a repository I've been working on with Ali Horowitz, a graduate student in my lab. This is an experiment on children's use of discourse information to learn the meanings of words. Children across ages choose which toy (side) they think a word refers to, in conditions with and without discourse continuity information. The key analysis script does most of its work in four chunks:

library(ggplot2)  # for qplot, below
library(lme4)     # for glmer, below

#### 1. read in data
d <- read.csv("data/all_data.csv") 

#### 2. aggregate for each subject and then across subjects
mss <- aggregate(side ~ subid + agegroup + corr.side + condition, 
                 data = d, mean)
ms <- aggregate(side ~ agegroup + corr.side + condition, 
                data = mss, mean)

#### 3. plot
qplot(agegroup, side, colour = corr.side, 
      facets = .~condition,  
      group = corr.side, 
      geom = "line", 
      data = ms)

#### 4. linear mixed-effects model
lm.all <- glmer(side ~ condition * corr.side * age + 
                (corr.side | subid), 
                data = d, family = "binomial")

This is simplified somewhat – I've left out the confidence intervals and a few pieces of data cleaning – but the overall schema is one that reappears over and over again in my analyses. Because this idiom for expressing data analysis is so terse (but still so flexible), I find it extremely easy to debug. In addition, if the columns of your original datasheet are semantically transparent (e.g. agegroup, condition, etc.), your expressions are very easy to read and interpret. (R's factor data structure helps with this too, by keeping track of different categorical variables in your data). Overall, there is very little going on that is not stated in the key formula expressions in the calls to aggregate, qplot, and glmer; this in turn means that good naming practices make it easy to interpret the code in terms of the underlying design of the experiment you ran. It's much easier to debug this kind of code than your typical MATLAB data analysis script, where rows and columns are often referred to numerically (e.g. plot(d(:,2), d(:,3)) rather than qplot(condition, correct, data=d)). 
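For completeness, here is one way the elided confidence intervals could be computed in the same aggregate idiom. To be clear, this is my own sketch rather than the code from the actual repository: the sem and ci95 helpers are invented for illustration, and I'm assuming the by-subject data frame mss from step 2 above.

```r
## Hypothetical sketch (not from the actual repository): t-based 95%
## confidence intervals on the across-subject means, computed from the
## by-subject aggregate mss from step 2.
sem <- function(x) sd(x) / sqrt(length(x))
ci95 <- function(x) qt(.975, length(x) - 1) * sem(x)

ms <- aggregate(side ~ agegroup + corr.side + condition, 
                data = mss, mean)
ms$ci <- aggregate(side ~ agegroup + corr.side + condition, 
                   data = mss, ci95)$side

## these bounds can then feed an error-bar plot, e.g.
## qplot(..., ymin = side - ci, ymax = side + ci, geom = "pointrange")
```

Because both aggregate calls use the same grouping formula on the same data, the rows line up, so attaching the CI column directly is safe.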

Often the data you collect are not in the proper form to facilitate this kind of analysis workflow. In that case, my choice is to create another script, called something like "preprocessing.R" that uses tools like reshape2 to move from e.g. a Mechanical Turk output file to a tidy data format (a long-form tabular dataset). That way I have a two-step workflow, but I am saving both the original data and the intermediate datafile, and can easily check each by eye in a text editor or Excel for conversion/reformatting errors. 
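As a sketch of what such a preprocessing.R might look like (the file path and column names here are invented for illustration, not taken from a real Turk output file):

```r
## Hypothetical sketch of preprocessing.R: reshape a wide Mechanical
## Turk output file (one row per worker, one column per trial) into
## tidy long format. File names and columns are made up for
## illustration.
library(reshape2)

raw <- read.csv("data/turk_output.csv")
tidy <- melt(raw,
             id.vars = "workerid",     # identifies each row
             variable.name = "trial",
             value.name = "response")
write.csv(tidy, "data/tidy_data.csv", row.names = FALSE)
```

The payoff is that both raw and tidy files live in the repository, so any reshaping error can be caught by comparing the two by eye.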

Overall, the key thing about using R for the full analysis is that – especially when the analysis is version controlled, as described below – you have a full record of the steps that you took to get to a particular conclusion. In addition, with the general workflow described above, the steps in the analysis are described in a semantically transparent way (modulo understanding the particular conventions of, say, ggplot2, which can take some time). Both of these dramatically improve reproducibility by making debugging, rerunning, and updating this codebase easier. 

Archiving analyses on git

When I am ready to analyze the data from an experiment (or sometimes even before), I have started setting up a git repository on GitHub. It took me a little while to get the hang of this practice, but now I am convinced that it is overall a huge time-saver. (A good tutorial is available here). The repository for an experimental project is initialized with the original datafile(s) that I collect, e.g. the eye-tracking records, behavioral testing software outputs, or logfiles, suitably anonymized. These source datafiles should remain unchanged throughout the lifetime of the analysis – confirmed by their git history.

I work on a local copy of that repository and push updates back to it so that I always have the analysis backed up. (I've begun doing all my analysis in the clear on GitHub, but for academic users you can get free private repositories if that makes you uncomfortable). This strategy helps me keep track of the original data files, intermediate processed and aggregated data, and the analysis code, all in one place. So at its most basic it's a very good backup.

But managing data analysis through git has a couple of other virtues, too:
  • The primary benefits of version control. This is the obvious stuff for anyone who has worked with git or subversion before, but for me as a new user, this was amazing! Good committing practices – checking in versions of your code regularly – mean that you never have to have more than one version of a file. For example, if you're working on a file called "analysis.R," you don't have to have "analysis 4-21-14 doesn't quite work.R" and "analysis 4-22-14 final.R." Instead, "analysis.R" can reflect in its git history many different iterations that you can browse through whenever you want. You can even use branches or tags to keep track of multiple different conflicting approaches in the same file. 
  • Transparency within collaborations. Your collaborators can look at what you are doing while the analysis is ongoing, and they can even make changes and poke around without upsetting the applecart or creating totally incommensurable analysis drafts. This transparency can dramatically reduce sharing overhead and crosstalk between collaborators in a large project. It also means that it is way easier for authors to audit the analysis on a paper prior to submission – something that I think should probably be mandatory for complex analyses. 
  • Ease of sharing analyses during the publication and review process. When you're done – or even while analysis is still ongoing – you can share the repository with outsiders or link to it in your publications. Then, you can post updates to it if you have corrections or extensions, and new viewers will automatically see these rather than having to track you down. This means sharing your data and analysis is always as simple as sharing a link – no need to hunt down a lot of extra dependencies and clean things up after the fact (something that I suspect is a real reason why many data sharing requests go unanswered).
The open git analysis approach is not perfect for all datasets – the examples that come to mind are confidential data that cannot easily be anonymized (e.g. transcripts with lots of identifying information) and neuroimaging, where the data are too large to push back and forth to external repositories all the time. But for a wide range of projects, this can be a major win.


It takes an initial, upfront investment to master both git and R. Neither is as easy as using pivot tables in Excel. But the payoff is dramatic, both in terms of productivity and in terms of reproducibility. There are further steps you can take if you are really committed to documenting every step of your work, but I think this is a reasonable starting point, even for honors students or beginning graduate students. For any project longer than a quick one-off, I am convinced that the investment is well worthwhile.

Of course, I don't mean to imply that you can't do bad or irreproducible research using this ecosystem – it's very easy to do both. But I do believe that it nudges you towards substantially better practices than tools like Excel and SPSS. And sometimes a nudge in the right direction can go a long way towards promoting the desired behavior.

Tuesday, April 1, 2014

Assessing cognitive models: visualization strategies

(This post is written in part as a reference for a seminar I'm currently teaching with Jamil Zaki and Noah Goodman, Models of Social Behavior).

The goal of research in cognitive science is to produce explicit computational theories that describe the workings of the human mind. To this end, a major part of the research enterprise consists of making formal artifacts – computational models. These are artifacts that take as their inputs some stimuli, usually in coded form, and produce as their outputs some measures that are interpretable with respect to human behavior. 

In this post I'll discuss a visualization-based strategy for assessing the fit of models to data, based on moving between plots at different levels of abstraction. I often refer to this informally as "putting a model through its paces." 

I take as my starting point the idea that a model is successful if it is both parsimonious itself and if it provides a parsimonious description of a body of data. To unpack this a bit more, the basic idea is that any model is created as a formal theory of some set of empirical facts. If you know the theory, you can predict the facts – and so you don't need to remember them because they can be re-derived. A model can fail because it's too complicated – it predicts the facts, but at the cost of having so many moving parts that it is itself hard to remember. Or it can fail because it is consistent with all patterns of data – and hence doesn't compress the particular pattern of data much at all. (I've discussed this "minimum description length" view of cognitive modeling in more depth both in this post and in my academic writing here and here. Note that it's consistent with a wide range of modeling formalisms from neural networks to probabilistic models).

So how do we assess models within this framework, and how do we compare them to data? Although the minimum description length framework provides a guiding intuition, it's not that easy to say exactly how parsimonious a particular model is; there are actually fundamental mathematical difficulties with computing parsimony. But there are nevertheless many methods for statistical comparison between models, and these can be very useful – especially when you are using models that are posed in coherent and equivalent vocabularies. Here are great slides from a tutorial that Mark Pitt and Jay Myung gave at CogSci a couple of years ago on model comparison.

My focus in this post is a bit more informal, however. What I want to do is to discuss a set of plots for model assessment that can be used together to gain understanding about the relationship of a model or set of models to data:

  1. A characteristic model plot, one that lets you see details of the model's internal state or scoring so that you can understand what it has learned or why it has produced a particular result. 
  2. A plot of model results across conditions or experiments, in precisely the same format as the experimental data are typically plotted. 
  3. A scatter plot of model vs. data for comparing across experiments and across models.
  4. A plot of model fit statistics as parameters are varied. 

In each of these, I've used examples from the probabilistic models literature, taken mostly from my work and the work of my collaborators. This choice is purely because I know these examples well, not because of anything special about these examples. The broader approach is due to Andrew Gelman's philosophy of exploratory model checking (on display, e.g. in this chapter).

1. Characteristic Model Plot

The first plot I typically make when I am working on a model has the goal of understanding why the model produces a particular result when it is given a certain pattern of input data. This plot is often highly idiosyncratic to the particular representation or model framework that I am using – but it gives insight into the guts of the model. It typically doesn't include the empirical data.

Here is one example, from Frank, Goodman, & Tenenbaum (2009). We were trying to understand how our model of word learning performed on a task that's sometimes called "mutual exclusivity" that had been used in the language acquisition literature. The task is simple: you show a child a known object (e.g. a BIRD) and a novel object (a DAX), and you say "can you show me the dax?" Children in this task reliably pick the novel object. 

Our model made this same inference, but we wanted to understand why. So we chose four different hypotheses that the model could have about what the word "dax" meant (represented at the top of each of the four panels) and computed the model's scores for each of these. In our model, the posterior score of a hypothesis about word meanings was the product of the prior probability of the lexicon, the probability of a corpus of input data, and the probability assigned in this specific experimental situation. So we plotted each of those numbers on a relative probability scale such that we could easily compare them. The result was an interesting insight: The major reason why the model preferred lexicon B (the one consistent with the data) was that it placed higher probability on previous utterances in which the word "dax" hadn't been heard before even though the BIRD object had been seen. It was this unseen data that made it odd to think that "dax" actually did mean BIRD after all (lexicons C and D).

Another example of this kind of plot comes from a follow-up to this paper (Lewis & Frank, 2013), again looking at the "mutual exclusivity" phenomenon. In this case, we plotted the relative probabilities assigned to different lexicons in a simple bar graph (with shading indicating different priors), but we used a graphical representation of the lexicon as the axis label. The generalization that emerged from this plot is that the lexicon where each word is correctly mapped one-to-one (the middle lexicon, with one gray bar and one green bar) receives the highest probability, almost regardless of the prior that is used.

In general, visualizations of this class will be very project- and model-specific, because they will depend on the relevant aspects of the model that you find most informative in your explanations. Nevertheless, they form a crucial tool for diagnosing why the model produced a particular result for a given input configuration.

2. Data-space plot

This style of plot is very important for evaluating the correspondence of a model to human data. It's the first thing I typically do after getting a model working. I try to produce predictions about what the experimental data should look like, on the same axes (or at least in the same format) as the original plots of the experimental data. 

Here is one example, from a paper on word segmentation (Frank et al., 2010). We tracked human performance in statistical word segmentation tasks while we varied parameters of the language the participants were segmenting in three different experiments. We then examined the fit of a range of models to these data. The human data here are abstracted away from any variability and are just solid curves repeated across plots; the model predictions make clear that while there is some variability in performance on the first two experiments, it is Experiment 3 where all models fail:

Here's a second example, this one from Baker, Saxe, & Tenenbaum (2009). They had participants judge the goals of a cartoon agent at various different "checkpoints" along a path. For example, in panel (a), condition 1, the agent wended its way around a wall and then headed for the corner. At each numbered point, participants made a judgment about whether the agent was headed towards goal A, B, or C. In panel (b), experimental data show graded ratings for each goal, and in panel (c) you can compare model-derived ratings:

The general point here is that visualization is about comparison (a point I identify with William Cleveland but don't have a good citation for). This plot makes it easy to compare the gestalt pattern of model vs. experimental data in a format where you can readily identify the particular conditions under which deviations occur – sometimes even across models.

3. Cross-dataset, cross-model plot

These plots are more obvious and more conventional. You plot model predictions on the horizontal and human behavior on the vertical, resulting in a scatter plot where greater coherence indicates greater correlation between model and data. Of course, the result can be quite deceptive because it obscures the model's degrees of freedom. But this sort of plot is a simple and powerful tool for quickly assessing fit. Here's one nice one:

This example comes from Orban et al. (2008), a paper on visual statistical learning. They plot log probability ratio of test items by proportion correct in human experiments. Each datapoint is a separate condition, and the key relates each datapoint back to the experimental data, so that you can look up which points don't fit as well. When possible it's nice to plot the actual labels on the axes so that you don't have to use a key, but sometimes this strategy can get overly messy. A minor note: I really like to plot confidence intervals (rather than standard error) here so that we can better assess by eye whether variability is due to measurement noise or a truly incorrect prediction by the model.
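In the ggplot2 idiom described in the previous post, a plot of this general type takes only a few lines. This is a generic sketch of my own, assuming a hypothetical fits data frame with one row per condition, not code from any of the papers above:

```r
## Hypothetical sketch of a model-vs-data scatter plot. fits is
## assumed to have columns model, human, ci.low, ci.high (95% CI on
## the human means), and cond (condition labels).
library(ggplot2)

ggplot(fits, aes(x = model, y = human, label = cond)) +
  geom_pointrange(aes(ymin = ci.low, ymax = ci.high)) +  # CIs, not SEs
  geom_text(hjust = -.2, size = 3) +     # label points directly, no key
  geom_smooth(method = "lm", se = FALSE) # overall linear trend
```

Labeling the points directly (rather than with a key) makes it easy to spot exactly which conditions fall off the regression line.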

This sort of plot can also be very good at comparing across models, because you can see at a glance both which model fits better and which points are negatively affecting fit. Here's another version, from Sanjana & Tenenbaum (2002). The data aren't labeled here, but I like both the matrix of small plots (the grid is different models on the columns, different experimental conditions on the rows) and the prominent reporting of the correlation coefficients in each plot:

One interesting thing that you can see by presenting the data this way is that there are sets of conditions that don't vary in their model predictions (e.g. the bunches of dots on the right side of the center panel). As an analyst I would be interested to know whether these conditions are truly distinct experimentally and getting lumped inappropriately by this model. My next move would probably be to make plots #1 and #2 with just these conditions included. More generally, I find it extremely important to have the ability to move flexibly back and forth between more abstract plots like this one and plots that are more grounded in the data.

4. Parameter sensitivity plots

Even the simplest cognitive models typically have some parameters that can be set with respect to the data (free parameters). And all of the plots described above produce an output given some settings of the model's free parameters, allowing you to assess fit to data. But, as a number of influential critiques point out, a good fit is not necessarily what we should be looking for in a cognitive model. I take these critiques to be largely targeted at excessive flexibility in models – which can lead to overfitting. 

So an important diagnostic for any model is how its fit to data varies in different parameter regimes. Showing this kind of variability can be tricky, however. Sometimes parameters are cognitively interesting, but in other circumstances they are not – yet it is important to explore them fully. 

These plots typically show either model performance (in the case of a small number of conditions) or a summary statistic capturing goodness of fit or performance (in the case of more data) as a function of one or more free parameters. The goodness-of-fit statistic is usually a measure like mean squared error (deviation from human data) or a simple Pearson correlation with the data, but any number of other summary statistics work as well. In models with only one or two free parameters, visualizing the model space is not too difficult, but as the number of parameters balloons, visualization can become difficult or impossible. 
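A one-parameter sweep of this kind is straightforward to script. Here is a generic sketch, where run.model and human are stand-ins for your own model function and empirical data (they don't refer to any real codebase):

```r
## Hypothetical sketch of a one-parameter sensitivity sweep.
## run.model() and human are stand-ins for your own model function
## (returning one prediction per condition) and the matching vector
## of human data.
library(ggplot2)

alphas <- seq(0, 1, by = .05)
fit <- sapply(alphas, function(a) cor(run.model(alpha = a), human))
qplot(alphas, fit, geom = "line", 
      xlab = "alpha", ylab = "correlation with human data")
```

For two parameters, the same sapply pattern over a grid (e.g. via expand.grid) feeds naturally into a heatmap like the one below.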

Here is one example, from a paper I wrote on modeling infants' ability to learn simple rules:

It's a combination of a #4 plot (left) and a #2 plot, middle/right. On the left, I show model predictions across five conditions, as alpha (a noise parameter) was varied. The salient generalization from this plot is that the ordering of conditions stays the same, even though the absolute values change. The filled markers call out the parameter value that is then plotted in the middle as a bar graph for easy comparison with the right-hand panel. As you can see, the fit is not perfect, but the relative ordering of conditions is similar. 

Here is another example, one that I'm not proud of from a visualization perspective (especially as it uses the somewhat unintuitive matlab jet colormap):

This comes from our technical report on the word learning model described above. The model had three free parameters, alpha, gamma, and kappa. This plot shows a heatmap of f-score (a measure of the model's word learning performance) as a function of all three of those continuous parameters, with the maxima for each plot labeled. Although this isn't a great publication-ready graphic, it was very useful for me as a diagnostic of the parameter-sensitivity of the model – and led us to use a parameter-reduction technique to try to avoid this problem. 

Parameter plots may not always make for the best viewing, but they can be an extremely important tool for understanding how your model's performance varies in even a high-dimensional parameter space. 


I've tried to argue here for an exploratory visualization approach to model-checking for cognitive models especially. The approach is predicated on having plots at multiple levels of abstraction, from diagnostic plots that let you understand why a particular datapoint was predicted in a certain way, all the way up to plots that let you consider the stability of summary statistics throughout parameter space. It is not always trivial to code up all of these visualizations, let alone to create an ecosystem in which you can move flexibly between them. Nevertheless, it can be extremely useful in both debugging and in gaining scientific understanding.