Last year, Berkowitz et al. published a randomized controlled trial of a learning app. Children were randomly assigned to math and reading app groups, and their learning outcomes on standardized math and reading tests were assessed after a period of app usage. A math anxiety measure was also collected from children's parents. The authors wrote:
The intervention, short numerical story problems delivered through an iPad app, significantly increased children's math achievement across the school year compared to a reading (control) group, especially for children whose parents are habitually anxious about math.

I got excited about this finding because I have recently been trying to understand the potential of mobile and tablet apps for intervention at home. But when I dug into the data, I found that not all views of the dataset supported the success of the intervention. That matters because this was a well-designed, well-conducted trial. Yet the basic randomization to condition did not produce differences in outcome, as you can see in the main figure of my reanalysis.
My extensive audit of the dataset is posted here, with code and their data here. (I really appreciate that the authors shared their raw data so that I could do this analysis – this is a huge step forward for the field!). Quoting from my report:
In my view, the Berkowitz et al. study does not show that the intervention as a whole was successful, because there was no main effect of the intervention on performance. Instead, it shows that – in some analyses – more use of the math app was related to greater growth in math performance, a dose-response relationship that is subject to significant endogeneity issues (because parents who use math apps more are potentially different from those who don't). In addition, there is very limited evidence for a relationship of this growth to math anxiety. In sum, this is a well-designed study that nevertheless shows only tentative support for an app-based intervention.

Here's a link to my published comment (which came out today), and here's Berkowitz et al.'s very classy response. Their final line is:
We welcome debate about data analysis and hope that this discussion benefits the scientific community.
I agree. Perhaps the most important upshot of this discussion for me is that there is often no one right way to analyze a dataset. When multiple defensible analyses exist, especially with high-value datasets like the one Berkowitz et al. collected, preregistration rules out post-hoc analytic selection (the "garden of forking paths"), decreasing or eliminating the kinds of doubts I raised about their choice of analytic strategy.
Another way of putting it is to compare my interpretation of one effect (top) – the interaction of app usage and parent math anxiety – with theirs (bottom):
There are some differences (e.g. outlier exclusion on their right side), and a color flip. But a basic question is whether there is truly a quadratic trend – whether the reversal in the middle of their graph is reliable – or whether this is essentially a null finding. I don't know the answer to that question. But this case has convinced me that preregistration is a critical tool in decreasing this kind of ambiguity in the future.
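To make the question concrete: one simple way to probe whether a dataset favors a quadratic (reversal) trend over a linear one is to compare the fit of the two models. The sketch below is purely illustrative; the variable names and toy data are hypothetical, not from the study, and a real comparison would use the authors' actual models plus a formal test (e.g., an F-test for the added quadratic term).

```python
def fit_poly(x, y, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting.
    Returns coefficients [c0, c1, ..., c_degree]."""
    n = degree + 1
    # Normal-equation system A c = b for a polynomial design matrix.
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution.
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def rss(x, y, coef):
    """Residual sum of squares for a fitted polynomial."""
    pred = [sum(c * xi ** k for k, c in enumerate(coef)) for xi in x]
    return sum((yi - pi) ** 2 for yi, pi in zip(y, pred))

# Hypothetical parent-anxiety scores (x) and math-gain values (y)
# showing a mid-range dip, like the disputed reversal in the figure.
x = [1, 2, 3, 4, 5, 6, 7]
y = [0.8, 0.5, 0.2, 0.1, 0.2, 0.5, 0.9]

lin = rss(x, y, fit_poly(x, y, 1))
quad = rss(x, y, fit_poly(x, y, 2))
# A much smaller quadratic RSS is consistent with real curvature, but
# by itself it is not inference: the extra parameter always reduces RSS
# somewhat, so a formal model comparison is still needed.
```

The point of the sketch is only that "is the reversal reliable?" is an answerable model-comparison question, which is exactly the kind of analytic choice preregistration would have pinned down in advance.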