Monday, September 23, 2013

M discovers objects

(M's bouncy chair, complete with newly fascinating objects.)

We've just returned from a trip to see family, in honor of my daughter M's two-month birthday, and something amazing has happened. M has discovered objects.

M has been completely fascinated by faces almost since the beginning of her life (see a previous post on this). She is incredibly social, tracking everyone around her, making eye contact* and smiling. And along with that social fascination came a complete indifference to objects. She couldn't - and still can't - hold onto anything that isn't tightly pressed into her palm (activating the palmar reflex). But it was more than that. She just didn't seem to care if we moved a toy near her. We could maybe get a little rise out of her with a rattle, but any face was enough to provoke her to track attentively.

We noticed a huge change this morning. We put M in her bouncy chair (see photo above), and - just for kicks - snapped on the toy bar that comes with the chair. We had tried this earlier and gotten absolute disinterest from her. But this morning, she was in love! She sat and cooed and bounced for probably 45 minutes, fixating on the toys the whole time. There's a face on one of the toys, though (for extra perceptual oomph, perhaps?). So we repeated the experiment, this time with some non-face toys in her crib. Again, she was mesmerized.

Clearly this general phenomenon is something toy manufacturers know about - or else why would every baby crib or seat come with floating, bouncing toys above it (think the ubiquitous pack-'n-play)? But a shift to an interest in objects around 2 months is something that I haven't read about in the developmental literature. A very interesting sequence of papers documents a shift in attention to faces relative to objects a little later, e.g. around 4 months. In a new eye-tracking study, we reviewed this literature, concluding that
"The evidence on sustained attention to faces is thus consistent across studies: 3-month-olds do not prefer faces in either dynamic displays or static stimulus arrays, while older children [5 - 6 month-olds] show a clear face preference."
But these findings are all about whether faces trump objects when they are put in direct competition. I bet M would show the attention capture that babies in all of these studies show - they look at a salient object and then don't seem to be able to tear themselves away, even when the competitor is a face. What these studies don't capture is a growth of interest in objects per se, when there are no other competitors.

There are also a bunch of wonderful studies from Jessica Sommerville, Amy Needham, and collaborators examining growth in babies' perception of faces, objects, and intentions based on their abilities to reach and grab (e.g. this instant classic and this newer one). But M can't consistently reach for objects yet, and probably won't be able to for a while yet. She also doesn't seem to be swiping at the toys on the bouncy chair. So I don't think the development of reaching is what's driving this change in visual interest.

I'd love to know if anyone has any ideas about whether this phenomenon - a dramatic increase in interest in objects - is something that has been studied before (or even if they've seen it in their own children). Regardless of mechanism, though, it's been a pleasure to watch M as she discovers a whole world of new things to see.

---
* Actually, as Haith, Bergman, & Moore (1977) noted, she did a lot of "forehead contact" early; now that behavior has morphed into something that looks much more like true eye contact.

Tuesday, September 10, 2013

Post-publication peer review and social shaming

Is the peer review system broken? Arguably. Peer review can be frustrating, both in what gets through and what doesn't. So recently there has been a lot of talk about the virtues of post-publication peer review (e.g. links here, here, and here), where folks on the internet comment on scientific publications after they are public. One suggestion is that post-publication peer review might even one day replace the standard process.

Commenting on papers after they are published has to be a good idea in some form: more discussion of science and more opportunities to correct the record! But I want to argue against using post-publication peer review, at least in its current form, as the primary method of promoting stronger cultural norms for reliable research.

1. Pre-publication peer review is a filter, while post-publication peer review is an audit.  

With few exceptions, peer review is applied to all submitted papers. And despite variations in the process from journal to journal, the overall approach is quite standard. This uniformity doesn't necessarily lead to perfect decisions, where all the good papers are accepted and bad papers are rejected. Nevertheless, peer review is a filter that is designed to be applied across the board so that the scientific record contains only those findings that "pass through" (consider the implicit filter metaphor!).

When I review a journal article I typically spend at least a couple of hours reading, thinking, and writing. These hours have to be "good hours" when I am concentrating carefully and don't have a lot of disruptions. I would usually rather be working on my own research or going for a walk in the woods. But for a paper to be published, a group of reviewers needs to commit those valuable hours. So I accept review requests out of a sense of obligation or responsibility, whether it's to the editor, the journal, or the field more generally. 

This same mechanism doesn't operate for post-publication review. There is no apparatus for soliciting commenters post-publication. So only a few articles, particularly those that receive lots of media coverage, will get the bulk of the thoughtful, influential commentary (see e.g. Andrew Gelman's post on this). The vast majority will go unblogged.

Post-publication peer review is thus less like a filter and more like an audit. It happens after the fact and only in select cases. Audits are also somewhat random in what attracts scrutiny and when. There is always something bad you can say about someone's tax return - and about their research practices.

2. Post-publication peer review works via a negative incentive: social shaming.

People are generally driven to write thoughtful critiques only when they think that something is really wrong with the research (see e.g. links here, here, here, and here). This means that nearly all post-publication peer review is negative.

The tone of the posts linked above is very professional, even if the overall message about the science is sometimes scathing. But one negative review typically spurs a host of snarky follow-ons on Twitter, singling out a single research group or paper for an error that may need to be corrected much more generally. Often critiques are merited. But they can leave the recipients of the critique feeling as though the entire world is ganging up against them.

For example, consider the situation surrounding a failure to replicate John Bargh's famous elderly priming study. Independent of what you think of the science, the discussion was heated. A sympathetic Chronicle piece used the phrase "scientific bullying" to describe the criticisms of Bargh, noting that this experience was the "nadir" of his career. That sounds right to me: I've only been involved in one, generally civil, public controversy (my paper, reply, my reply back, final answer) and I found that experience extremely stressful. Perceived social shaming or exclusion can be a very painful process. I'm sure reviewers don't intend their moderately-phrased statistical criticisms to result in this kind of feeling, but - thanks to the internet - they sometimes do.

3. Negative incentives don't raise compliance as much as positive cultural influences do.

Tax audits (which carry civil and criminal penalties, rather than social ones) do increase compliance somewhat. But a review of the economics literature suggests that cultural factors - think US vs. Greece - matter more than the sheer expected risk due to audit enforcement (discussion here and here).* For example, US audit rates have fluctuated dramatically in the last fifty years, with only limited effects on compliance (see e.g. this analysis).**

Similarly, if we want to create a scientific culture where people follow good research practices because of a sense of pride and responsibility - rather than trying to enforce norms through fear of humiliation - then increasing the post-publication audit rate is not the right way to get there. Instead we need to think about ways to change the values in our scientific culture. Rebecca Saxe and I made one pedagogical suggestion here, focused on teaching replication in the classroom.

Some auditing is necessary for both tax returns and science. The overall increase in post-publication discussion is a good thing, leading to new ideas and a stronger articulation and awareness of core standards for reliable research. So the answer isn't to stop writing post-pub commentaries. It's just to think of them as a complement to - rather than a replacement for - standard peer review.

Finally, we need to write positive post-publication reviews. We need to highlight good science and most especially strong methods (e.g. consider The Neurocomplimenter). The options can't be either breathless media hype that goes unquestioned or breathless media hype that is soberly shot down by responsible academic critics. We need to write careful reviews, questions, and syntheses for papers that should remain in the scientific literature. If we only write about bad papers, we don't do enough to promote changes in our scientific standards.

---
* It's very entertaining: The tone of this discussion is overall one of surprise that taxpayers' behavior doesn't hew to the rational norms implied by audit risk.
** Surprisingly, I couldn't find the exact graph I wanted: audit rates vs. estimated compliance rates. If anyone can find this, please let me know!

Monday, September 2, 2013

iPad experiments for toddlers

(Update 3/24/15 – this post is subsumed by a paper we wrote on this topic, available here).
---

The iPad should be a fabulous experimental tool for collecting data with young children: It's easy to use, kids love it, and it's even cheaper and easier to transport than a small Mac laptop. Despite these advantages, there has been one big drawback: creating iPad apps requires mastering a framework and programming language that are specific to iOS, and putting the apps on more than a few iPads requires dealing with the App Store. Neither of these is an impossible hurdle, but both have kept us (and AFAIK most other labs in developmental psychology) from getting very far in using iPads for routine day-to-day data collection.

This post documents a new method that we have been using to collect experimental data with toddlers, developed this summer by Elise Sugarman, Dan Yurovsky, and Molly Lewis. It is easy to use, makes use of free development tools, doesn't require dealing with the App Store or the Apple Developer Tools, and hooks in nicely with the infrastructure we use to create Amazon Mechanical Turk web experiments for adults.

Our method has four ingredients:
  1. JavaScript/HTML web experiment
  2. Server-side PHP script to save data
  3. iPad with internet connection, in kiosk mode
  4. Kid management on the iPad

1. JavaScript/HTML web experiment.

This is the big one for someone who has never made an online experiment. It's beyond the scope of this post to introduce how to create JavaScript web experiments, but there is a lot of good material out there, especially from the Gureckis Lab at NYU (e.g. this blog post). There are also many resources for learning how to make websites using the now-standard combo of JavaScript, HTML, CSS, and jQuery. Note that putting up such an experiment will require some server space. We use the space provided by Stanford for our standard university web pages, but all that's required is somewhere to put your HTML, JS, and CSS files.
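
To make this concrete, here is a minimal sketch of what a tap-to-advance trial loop might look like. It assumes jQuery is loaded and that the page contains an <img> element with id "stimulus"; the trial list, filenames, and the CSV format of result_string are placeholders for illustration, not our actual experiment code:

// A hypothetical list of image stimuli (placeholder filenames, not our actual items).
var trials = ["ball.jpg", "cup.jpg", "dog.jpg"];
var numTrials = trials.length;
var counter = 0;
var result_string = "";  // accumulates one CSV row per trial

$(function () {
    // Show the current stimulus; assumes an <img id="stimulus"> element in the HTML.
    function showTrial() {
        $("#stimulus").attr("src", "images/" + trials[counter]);
    }

    // Record each tap as "item,timestamp", then advance to the next trial.
    $("#stimulus").on("click", function () {
        result_string += trials[counter] + "," + Date.now() + "\n";
        counter++;
        if (counter < numTrials) {
            showTrial();
        }
        // When counter === numTrials, result_string is what gets posted
        // to the server, as in the snippet under step 2 below.
    });

    showTrial();
});

The result_string built up here is the same variable that gets sent to the server by the posting code in step 2.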

2. PHP script 

This is a simple script that runs on a server (probably the same one that you are using to serve the experiment). All it does is save data from the JavaScript experiment to a location on the server. 

In the JavaScript code, we need to add a bit of code to send the data to this script (making sure jQuery is loaded so we can use ".post"):
  
// when the final trial is complete, send the accumulated results to the server
if (counter === numTrials) {
    $.post("http://lab.example.edu/cgi-bin/expt_dir/post_results.php",
        {postresult_string: result_string});
}

And then post_results.php is a simple script that looks like this:

<?php
    // grab the posted results string and append it to a CSV file on the server
    $result_string = $_POST['postresult_string'];
    file_put_contents('results.csv',
        $result_string, FILE_APPEND);
?>

3. iPad config 

Our method requires that you have an internet connection handy for your iPad. This means that you either need wifi access (e.g. from the testing location or a MiFi) or else you need an iPad with cell connectivity. But aside from that, you're just navigating the iPad to a website you've set up.

We use two tools to ensure that iPad-savvy kids don't accidentally escape from our experiment or zoom the screen into something totally un-navigable. The first is Guided Access, a mode on the iPad that disables the hardware buttons. The second is Mobile Kiosk, an app that further locks the iPad into a particular view of a webpage. The combination means that kids can't get themselves tangled in the functionality of the iPad.

4. Kid management

The last ingredient is training kids to use the iPad effectively. Although many of them will have interacted with tablet devices and smartphones before, that's no guarantee that they can tap effectively and navigate through an experiment. We created a simple training page with a set of dots for the child to tap - they can't advance until they successfully tap all the dots (kind of like Whac-A-Mole).
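
For the curious, here is a minimal sketch of how a tap-training page like this might work, again assuming jQuery; the dot positions, styling, and the startExperiment stub are placeholders for illustration, not our real training code:

// Hypothetical tap-training page: show a few dots and wait until every one is tapped.
var dotsRemaining = 4;

function startExperiment() {
    // placeholder: whatever launches the real trials would go here
    alert("Training complete!");
}

function addDot(x, y) {
    // Each dot is an absolutely positioned <div> that disappears when tapped.
    var dot = $("<div></div>").css({
        position: "absolute", left: x, top: y,
        width: 60, height: 60, borderRadius: 30, background: "red"
    });
    dot.on("click", function () {
        $(this).remove();
        dotsRemaining--;
        if (dotsRemaining === 0) {
            startExperiment();
        }
    });
    $("body").append(dot);
}

// Scatter the dots at arbitrary screen positions once the page has loaded.
$(function () {
    addDot(50, 100);
    addDot(250, 300);
    addDot(450, 150);
    addDot(650, 350);
});

In practice the training doubles as a quick check that the child understands tapping at all before the real trials begin.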

(Update #1: Elise notes that using a tilting iPad case helps kids click more successfully as well.)

(Update #2: Brock Ferguson gives a handy guide to making the PHP above more secure if you ever want to use it for anything other than preschoolers.)

Sunday, September 1, 2013

Unconscious and non-linguistic numbers?

(Flock of birds, from Flickr)

An image of a flock of birds fades almost instantaneously from our visual memory - as Borges described memorably in his Argumentum Ornithologicum. But if you get a chance to count the birds one by one, their exact number (78 in this case?) is represented by a symbol that is easy to remember and easy to manipulate. Numbers help us overcome the limitation that without some intermediate representation, none of our memory systems can represent exactly 78 objects as distinct from exactly 79.

For most people who use language to manipulate numbers, mentally representing exact quantities like 78 requires 1) knowing the number words and 2) producing the right number words - speaking them, at least in your head - at the moment you want to represent a corresponding quantity. The evidence:
One major exception to this "number requires language in the moment" hypothesis is visual representations of number like the mental abacus. Mental abacus provides a way to use visual memory (rather than auditory/phonological memory) for remembering and - more importantly - manipulating exact quantities. So it's an exception that proves the rule: Like numerical language, mental abacus gives its users a representation scheme for holding exact quantities in mind.

Over the past few years, though, I've been collecting a few examples that push at the boundaries of this theoretical framework:

1. It may be possible to prime arithmetic expressions unconsciously.

A recent paper by Sklar et al. uses a clever method called continuous flash suppression to introduce stimuli to participants while keeping them out of conscious awareness. When shown expressions like "9 - 3 - 4 = " using this method, participants were 10 - 20 ms faster to speak the correct answer (e.g., 2) when it was presented, compared to an incorrect answer. (Incidentally, an odd fact about the result is that the authors had much more trouble finding unconscious priming effects for addition than for subtraction.)

I find this result very surprising! My initial thought was that participants might have been faster because they were using their analog magnitude system (indicating approximate rather than exact numerical processes).  I wrote to Asael Sklar and he and his collaborators generously agreed to share their original data with me. I was able to replicate their analyses* and verify that there was no estimation effect, ruling out that alternative explanation.

So this result is still a mystery to me. I guess the suggestion is that there is some "priming" - e.g. trace activation of the computations. But I find it somewhat implausible (though not out of the question) that this sort of subtraction problem is the kind of computation that our minds cache. Have I ever done 9 - 3 - 4 before? It certainly isn't as clear an "arithmetic fact" as 2+2 or 7+3.

2. Richard Feynman could count and talk at the same time. 

In a chapter from "The Pleasure of Finding Things Out" (available as an article here), Feynman recounts how he learned that he could keep counting while doing other things. I was very excited to read this because I have also noticed that I can count "unconsciously" - that is, I can set a counter going in my brain, e.g. while I hike up a hill. I can let my mind wander and check back in to find that the counter has advanced some sensible distance. But I never systematically tested whether my count was accurate.

This kind of test is exactly what Feynman set out to do. He would start counting, then begin another operation (e.g. doing laundry, walking around, etc.) and check back in with his internal "counter." He established a baseline by finding that he could count up to about 48 in a minute, with very little variability, when there was no interference. So he would do many different things while counting and check how close his count was to 48 - if there had been interference, he would be off in how far he had counted after a minute had elapsed.

The only thing he found that caused any active interference was talking, especially producing number words:
... I started counting while I did things I had to do anyway. For instance, when I put out the laundry, I had to fill out a form saying how many shirts I had, how many pants, and so on.
I found I could write down "3" in front of "pants" or "4" in front of "shirts" while I was counting to myself but I couldn't count my socks. There were too many of them: I'm already using my "counting machine" ...
What's even more interesting is that Feynman reports that the statistician John Tukey could count and talk at the same time - by imagining the visual images of the numbers turning over. But apparently this prevented Tukey from reading while he counted (which Feynman could do!).

So these observations seem like they are consistent with the hypothesis that exact number requires using a particular set of mental resources, whether it's the resources of speech production (for counting out loud) or of visual working memory (for imagining digits or a mental abacus). But they - along with the Sklar et al. finding - also support the idea that the representation need not necessarily percolate up to the highest levels of conscious experience.

3. Ildefonso, a home-signer without language, learned to do arithmetic before learning to sign.

In A Man Without Words, Susan Schaller describes the growth of her friendship with Ildefonso, a deaf, completely language-less man in his 30s. Ildefonso grew up as a home-signer in Mexico and came to the US as an agricultural laborer. Over the course of a short period working with him at a school for the deaf, she introduces him to ASL for the first time. The story is beautiful, touching, and both simply and clearly written.

Here's the crazy part: Before Schaller has succeeded in introducing Ildefonso to language more generally, she diverts him with single-digit arithmetic, which he is apparently able to do handily:
To rest from the fatigue of our eye-to-eye search for an entrance into each other's head, we sat shoulder to shoulder, lining up numerals in progressively neater rows. I drew an addition sign between two 1s and placed a 2 underneath. I wrote 1 + 1 + 1 with a 3 under it, then four 1s, and so on. I explained addition by placing the corresponding number of crayons next to each numeral. He became very animated, and I introduced him to an equal sign to complete the equations. Three minutes later the crayons were unnecessary. He had gotten it. I presented him with a page of addition problems, and he was as happy as my nephew with a new dinosaur book. (p. 37)
It would be very interesting to know how accurate his computations were! This observation suggests that language for number may not critically rely on understanding any other aspects of language. Perhaps Ildefonso didn't even treat numerals as words at all (but instead like Gricean "natural" meanings, e.g. "smoke means fire").

Conclusions

All of these examples are consistent with the general hypothesis described above about the way language works to represent exact numbers. But all three suggest that our use of numbers need not be nearly as conscious or as language-like as I had thought in order to carry exact numerical content.

---
* With one minor difference: Sklar et al.'s error bars reflect the mean +/- .5 * the standard error of the mean (SEM), rather than the more conventional +/- 1 SEM. This is partly a semantic issue: the full length of their error bar is one SEM, rather than each half being one SEM. Nevertheless, it is not standard practice.