Monday, September 23, 2013

M discovers objects

(M's bouncy chair, complete with newly fascinating objects.)

We've just returned from a trip to see family, in honor of my daughter M's two-month birthday, and something amazing has happened. M has discovered objects.

M has been completely fascinated by faces almost since the beginning of her life (see a previous post on this). She is incredibly social, tracking everyone around her, making eye contact* and smiling. And along with that social fascination came a complete indifference to objects. She couldn't - and still can't - hold onto anything that isn't tightly pressed into her palm (activating the palmar reflex). But it was more than that. She just didn't seem to care if we moved a toy near her. We could maybe get a little rise out of her with a rattle, but any face was enough to get her tracking attentively.

We noticed a huge change this morning. We put M in her bouncy chair (see photo above), and - just for kicks - snapped on the toy bar that comes with the chair. We had tried this earlier and gotten absolute disinterest from her. But this morning, she was in love! She sat and cooed and bounced for probably around 45 minutes, fixating the toys the whole time. There's a face on one of the toys, though (for extra perceptual oomph, perhaps?), so to check that the face wasn't doing all the work, we repeated the experiment with some non-face toys in her crib. Again, she was mesmerized.

Clearly this general phenomenon is something toy manufacturers know about - or else why would every baby crib or seat come with floating, bouncing toys above it (think the ubiquitous pack-'n-play)? But a shift to an interest in objects around 2 months is something that I haven't read about in the developmental literature. A very interesting sequence of papers documents a shift in attention to faces relative to objects a little later, e.g. around 4 months. In a new eye-tracking study, we reviewed this literature, concluding that
"The evidence on sustained attention to faces is thus consistent across studies: 3-month-olds do not prefer faces in either dynamic displays or static stimulus arrays, while older children [5 - 6 month-olds] show a clear face preference."
But these findings are all about whether faces trump objects when they are put in direct competition. I bet M would show the attention capture that babies in all of these studies show - they look at a salient object and then don't seem to be able to tear themselves away, even when the competitor is a face. What these studies don't capture is a growth of interest in objects per se, when there are no other competitors.

There are also a bunch of wonderful studies from Jessica Sommerville, Amy Needham, and collaborators examining growth in babies' perception of faces, objects, and intentions based on their abilities to reach and grab (e.g. this instant classic and this newer one). But M can't consistently reach for objects yet, and probably won't be able to for a while yet. She also doesn't seem to be swiping at the toys on the bouncy chair. So I don't think the development of reaching is what's driving this change in visual interest.

I'd love to know if anyone has any ideas about whether this phenomenon - a dramatic increase in interest to objects - is something that has been studied before (or even if they've seen it in their own children). Regardless of mechanism, though, it's been a pleasure to watch M as she discovers a whole world of new things to see.

---
* Actually, as Haith, Bergman, & Moore (1977) noted, she did a lot of "forehead contact" early; now that behavior has morphed into something that looks much more like true eye contact.

Tuesday, September 10, 2013

Post-publication peer review and social shaming

Is the peer review system broken? Arguably. Peer review can be frustrating, both in what gets through and what doesn't. So recently there has been a lot of talk about the virtues of post-publication peer review (e.g. links here, here, and here), where folks on the internet comment on scientific publications after they are public. One suggestion is that post-publication peer review might even one day replace the standard process.

Commenting on papers after they are published has to be a good idea in some form: more discussion of science and more opportunities to correct the record! But I want to argue against using post-publication peer review, at least in its current form, as the primary method of promoting stronger cultural norms for reliable research.

1. Pre-publication peer review is a filter, while post-publication peer review is an audit.  

With few exceptions, peer review is applied to all submitted papers. And despite variations in the process from journal to journal, the overall approach is quite standard. This uniformity doesn't necessarily lead to perfect decisions, where all the good papers are accepted and all the bad papers are rejected. Nevertheless, peer review is a filter that is designed to be applied across the board so that the scientific record contains only those findings that "pass through" (consider the implicit filter metaphor!).

When I review a journal article I typically spend at least a couple of hours reading, thinking, and writing. These hours have to be "good hours" when I am concentrating carefully and don't have a lot of disruptions. I would usually rather be working on my own research or going for a walk in the woods. But for a paper to be published, a group of reviewers needs to commit those valuable hours. So I accept review requests out of a sense of obligation or responsibility, whether it's to the editor, the journal, or the field more generally. 

This same mechanism doesn't operate for post-publication review. There is no apparatus for soliciting commenters post-publication. So only a few articles, particularly those that receive lots of media coverage, will get the bulk of the thoughtful, influential commentary (see e.g. Andrew Gelman's post on this). The vast majority will go unblogged.

Post-publication peer review is thus less like a filter and more like an audit. It happens after the fact and only in select cases. Audits are also somewhat random in what attracts scrutiny and when. There is always something bad you can say about someone's tax return - and about their research practices.

2. Post-publication peer review works via a negative incentive: social shaming.

People are generally driven to write thoughtful critiques only when they think that something is really wrong with the research (see e.g. links here, here, here, and here). This means that nearly all post-publication peer review is negative.

The tone of the posts linked above is very professional, even if the overall message about the science is sometimes scathing. But one negative review typically spurs a host of snarky follow-ons on Twitter, singling out a single research group or paper for an error that may need to be corrected much more generally. Often critiques are merited. But they can leave their recipients feeling as though the entire world is ganging up on them.

For example, consider the situation surrounding a failure to replicate John Bargh's famous elderly priming study. Independent of what you think of the science, the discussion was heated. A sympathetic Chronicle piece used the phrase "scientific bullying" to describe the criticisms of Bargh, noting that this experience was the "nadir" of his career. That sounds right to me: I've only been involved in one, generally civil, public controversy (my paper, reply, my reply back, final answer) and I found that experience extremely stressful. Perceived social shaming or exclusion can be a very painful process. I'm sure reviewers don't intend their moderately-phrased statistical criticisms to result in this kind of feeling, but - thanks to the internet - they sometimes do.

3. Negative incentives don't raise compliance as much as positive cultural influences do.

Tax audits (which carry civil and criminal penalties, rather than social ones) do increase compliance somewhat. But a review of the economics literature suggests that cultural factors - think US vs. Greece - matter more than the sheer expected risk due to audit enforcement (discussion here and here).* For example, US audit rates have fluctuated dramatically in the last fifty years, with much more limited effects on compliance (see e.g. this analysis).**

Similarly, if we want to create a scientific culture where people follow good research practices because of a sense of pride and responsibility - rather than trying to enforce norms through fear of humiliation - then increasing the post-publication audit rate is not the right way to get there. Instead we need to think about ways to change the values in our scientific culture. Rebecca Saxe and I made one pedagogical suggestion here, focused on teaching replication in the classroom.

Some auditing is necessary for both tax returns and science. The overall increase in post-publication discussion is a good thing, leading to new ideas and a stronger articulation and awareness of core standards for reliable research. So the answer isn't to stop writing post-pub commentaries. It's just to think of them as a complement to - rather than a replacement for - standard peer review.

Finally, we need to write positive post-publication reviews. We need to highlight good science and most especially strong methods (e.g. consider The Neurocomplimenter). The options can't be either breathless media hype that goes unquestioned or breathless media hype that is soberly shot down by responsible academic critics. We need to write careful reviews, questions, and syntheses for papers that should remain in the scientific literature. If we only write about bad papers, we don't do enough to promote changes in our scientific standards.

---
* It's very entertaining: The tone of this discussion is overall one of surprise that taxpayers' behavior doesn't hew to the rational norms implied by audit risk.
** Surprisingly, I couldn't find the exact graph I wanted: audit rates vs. estimated compliance rates. If anyone can find this, please let me know!

Monday, September 2, 2013

iPad experiments for toddlers

(Update 3/24/15 – this post is subsumed by a paper we wrote on this topic, available here).
---

The iPad should be a fabulous experimental tool for collecting data with young children: It's easy to use, kids love it, and it's even cheaper and easier to transport than a small Mac laptop. Despite these advantages, there has been one big drawback: creating iPad apps requires mastering a framework and programming language that are specific to iOS, and putting the apps on more than a few iPads requires dealing with the App Store. Neither of these is an insurmountable hurdle, but both have kept us (and AFAIK most other labs in developmental psychology) from getting very far in using iPads for routine day-to-day data collection.

This post documents a new method that we have been using to collect experimental data with toddlers, developed this summer by Elise Sugarman, Dan Yurovsky, and Molly Lewis. It is easy to use, makes use of free development tools, doesn't require dealing with the App Store or the Apple Developer Tools, and hooks in nicely with the infrastructure we use to create Amazon Mechanical Turk web experiments for adults.

Our method has four ingredients:
  1. JavaScript/HTML web experiment
  2. Server-side PHP script to save data
  3. iPad with internet connection, in kiosk mode
  4. Kid management on the iPad

1. JavaScript/HTML web experiment.

This is the big one for someone who has never made an online experiment. It's beyond the scope of this post to introduce how to create JavaScript web experiments, but there is a lot of good material out there, especially from the Gureckis Lab at NYU (e.g. this blog post). There are also many tools for learning how to make websites using the now standard combo of JavaScript, HTML, CSS, and jQuery. Note that putting up such an experiment will require some server space. We use the space provided by Stanford for our standard university web pages, but all that's required is somewhere to put your HTML, JS, and CSS files.
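
To give a flavor of what this involves, here's a minimal sketch of an experiment skeleton. To be clear, this is not our actual code - the stimulus filenames, element IDs, and CSV format below are all hypothetical placeholders:

<script src="https://code.jquery.com/jquery-1.10.2.min.js"></script>
<img id="stimulus" src="" style="cursor: pointer" />

<script>
// Hypothetical stimulus list - substitute your own image files.
var stimuli = ["ball.jpg", "cup.jpg", "dog.jpg"];
var counter = 0;
var result_string = "";

function showTrial() {
    $("#stimulus").attr("src", stimuli[counter]);
}

// On each tap, log one CSV row (trial number, stimulus, tap coordinates),
// then move on to the next trial.
$("#stimulus").on("click", function(e) {
    result_string += counter + "," + stimuli[counter] + "," +
        e.pageX + "," + e.pageY + "\n";
    counter++;
    if (counter < stimuli.length) {
        showTrial();
    }
});

showTrial();
</script>

The result_string accumulated here is what gets shipped off to the server in the next ingredient.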

2. PHP script 

This is a simple script that runs on a server (probably the same one that you are using to serve the experiment). All it does is save data from the JavaScript experiment to a location on the server. 

In the JavaScript code, we need to add a bit of code to send the data to this script (making sure jQuery is loaded so we can use ".post"):
  
// Once the last trial is done, send the accumulated data to the server.
if (counter === numTrials) {
    $.post("http://lab.example.edu/cgi-bin/expt_dir/post_results.php",
        {postresult_string : result_string});
}
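
One wrinkle worth knowing about: $.post is asynchronous, so if the session ends immediately the data may not have arrived yet. Here's a sketch of one way to guard against that (the #thanks element and the alert text are hypothetical, not part of our actual code):

if (counter === numTrials) {
    $.post("http://lab.example.edu/cgi-bin/expt_dir/post_results.php",
            {postresult_string : result_string})
        .done(function() {
            // Only show the end screen once the data are safely on the server.
            $("#thanks").show();
        })
        .fail(function() {
            // Connections drop easily on a MiFi - better to warn than to lose data.
            alert("Data upload failed - please check the connection and try again.");
        });
}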

And then post_results.php is a simple script that looks like this:

<?php
    // Grab the string sent by $.post and append it to a CSV file on the server.
    // (See Update #2 at the end of this post before reusing this as-is!)
    $result_string = $_POST['postresult_string'];
    file_put_contents('results.csv',
        $result_string, FILE_APPEND);
?>

3. iPad config 

Our method requires that you have an internet connection handy for your iPad. This means that you either need wifi access (e.g. from the testing location or a MiFi) or else you need an iPad with cell connectivity. But aside from that, you're just navigating the iPad to a website you've set up.

We use two tools to ensure that iPad-savvy kids don't accidentally escape from our experiment or zoom the screen into something totally un-navigable. The first is Guided Access, a mode on the iPad that disables the hardware buttons. The second is Mobile Kiosk, an app that further locks the iPad into a particular view of a webpage. The combination means that kids can't get themselves tangled in the functionality of the iPad.
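
On the web page itself, you can also ask Mobile Safari not to scale the content. This is belt-and-suspenders on top of Guided Access - and worth hedging, since iOS's respect for these hints has varied across versions - but a sketch looks like this:

<!-- Ask Mobile Safari not to let the page be pinch-zoomed. -->
<meta name="viewport"
      content="width=device-width, initial-scale=1.0, user-scalable=no" />

<script>
// Swallow iOS pinch gestures before the browser acts on them.
document.addEventListener("gesturestart", function(e) {
    e.preventDefault();
}, false);
</script>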

4. Kid management

The last ingredient is training kids to use the iPad effectively. Although many of them will have interacted with tablet devices and smartphones before, that's no guarantee that they can tap effectively and navigate through an experiment. We created a simple training page with a set of dots for the child to tap - they can't advance until they successfully tap all the dots (kind of like Whac-A-Mole).
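
Here's a rough sketch of how such a training page can be built (jQuery assumed loaded; the dot positions, sizes, and the startExperiment function are all placeholders rather than our actual code):

<div id="training"></div>

<script>
// Scatter a few dots on the screen; the child must tap every one to advance.
var positions = [[50, 80], [200, 150], [120, 300], [260, 60]];
var remaining = positions.length;

positions.forEach(function(pos) {
    var dot = $("<div></div>").css({
        position: "absolute", left: pos[0], top: pos[1],
        width: 60, height: 60, borderRadius: 30, background: "red",
        cursor: "pointer"  // makes iOS Safari treat the div as tappable
    });
    dot.on("click", function() {
        $(this).hide();            // a successful tap makes the dot disappear
        remaining--;
        if (remaining === 0) {
            startExperiment();     // hypothetical: kick off the real task
        }
    });
    $("#training").append(dot);
});
</script>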

(Update #1: Elise notes that using a tilting iPad case helps kids click more successfully as well.)

(Update #2: Brock Ferguson gives a handy guide to making the PHP above more secure if you ever want to use it for anything other than preschoolers.)

Sunday, September 1, 2013

Unconscious and non-linguistic numbers?

(Flock of birds, from Flickr)

An image of a flock of birds fades almost instantaneously from our visual memory - as Borges described memorably in his Argumentum Ornithologicum. But if you get a chance to count the birds one by one, their exact number (78 in this case?) is represented by a symbol that is easy to remember and easy to manipulate. Numbers help us overcome the limitation that, without some intermediate representation, none of our memory systems can represent exactly 78 objects as distinct from exactly 79.

For most people who use language to manipulate numbers, mentally representing exact quantities like 78 requires 1) knowing the number words and 2) producing the right number words - speaking them, at least in your head - at the moment you want to represent a corresponding quantity. The evidence:
One major exception to this "number requires language in the moment" hypothesis is visual representations of number like the mental abacus. Mental abacus provides a way to use visual memory (rather than auditory/phonological memory) for remembering and - more importantly - manipulating exact quantities. So it's an exception that proves the rule: Like numerical language, mental abacus gives its users a representation scheme for holding exact quantities in mind.

Over the past few years, though, I've been collecting a few examples that push at the boundaries of this theoretical framework:

1. It may be possible to prime arithmetic expressions unconsciously.

A recent paper by Sklar et al. uses a clever method called continuous flash suppression to introduce stimuli to participants while keeping them out of conscious awareness. When shown expressions like "9 - 3 - 4 = " using this method, participants were 10 - 20 ms faster to speak the correct answer (e.g., 2) when it was presented, compared to an incorrect answer. (Incidentally, an odd fact about the result is that the authors had much more trouble finding unconscious priming effects for addition than subtraction.)

I find this result very surprising! My initial thought was that participants might have been faster because they were using their analog magnitude system (indicating approximate rather than exact numerical processes). I wrote to Asael Sklar and he and his collaborators generously agreed to share their original data with me. I was able to replicate their analyses* and verify that there was no estimation effect, ruling out that alternative explanation.

So this result is still a mystery to me. I guess the suggestion is that there is some "priming" - e.g. trace activation of the computations. But I find it somewhat implausible (though not out of the question) that this sort of subtraction problem is the kind of computation that our minds cache. Have I ever done 9 - 3 - 4 before? It certainly isn't as clear an "arithmetic fact" as 2+2 or 7+3.

2. Richard Feynman could count and talk at the same time. 

In a chapter from "The Pleasure of Finding Things Out" (available as an article here), Feynman recounts how he learned that he could keep counting while doing other things. I was very excited to read this because I have also noticed that I can count "unconsciously" - that is, I can set a counter going in my brain, e.g. while I hike up a hill. I can let my mind wander and check back in to find that the counter has advanced some sensible distance. But I never systematically tested whether my count was accurate.

This kind of test is exactly what Feynman set out to do. He would start counting, then begin another activity (doing laundry, walking around, etc.) and check back in with his internal "counter." He calibrated the count by establishing that, with no interference, he could count up to about 48 in a minute with very little variability. So he would do many different things while counting and check how close his count was to 48 after a minute had elapsed - if there had been interference, his count would be off.

The only thing he found that caused any active interference was talking, especially producing number words:
... I started counting while I did things I had to do anyway. For instance, when I put out the laundry, I had to fill out a form saying how many shirts I had, how many pants, and so on.
I found I could write down "3" in front of "pants" or "4" in front of "shirts" while I was counting to myself, but I couldn't count my socks. There were too many of them: I'm already using my "counting machine" ...
What's even more interesting is that Feynman reports that the statistician John Tukey could count and talk at the same time - by imagining the visual images of the numbers turning over. But apparently this prevented Tukey from reading while he counted (which Feynman could do!).

So these observations seem like they are consistent with the hypothesis that exact number requires using a particular set of mental resources, whether it's the resources of speech production (for counting out loud) or of visual working memory (for imagining digits or a mental abacus). But they - along with the Sklar et al. finding - also support the idea that the representation need not necessarily percolate up to the highest levels of conscious experience.

3. Ildefonso, a home-signer without language, learned to do arithmetic before learning to sign.

In A Man Without Words, Susan Schaller describes the growth of her friendship with Ildefonso, a deaf, completely language-less man in his 30s. Ildefonso grew up as a home-signer in Mexico and came to the US as an agricultural laborer. Over the course of a short period working with him at a school for the deaf, she introduces him to ASL for the first time. The story is beautiful, touching, and both simply and clearly written.

Here's the crazy part: Before Schaller has succeeded in introducing Ildefonso to language more generally, she diverts him with single digit arithmetic, which he is apparently able to do handily:
To rest from the fatigue of our eye-to-eye search for an entrance into each other's head, we sat shoulder to shoulder, lining up numerals in progressively neater rows. I drew an addition sign between two 1s and placed a 2 underneath. I wrote 1 + 1 + 1 with a 3 under it, then four 1s, and so on. I explained addition by placing the corresponding number of crayons next to each numeral. He became very animated, and I introduced him to an equal sign to complete the equations. Three minutes later the crayons were unnecessary. He had gotten it. I presented him with a page of addition problems, and he was as happy as my nephew with a new dinosaur book. (p. 37)
It would be very interesting to know how accurate his computations were! This observation suggests that language for number may not critically rely on understanding any other aspects of language. Perhaps Ildefonso didn't even treat numerals as words at all (but instead like Gricean "natural" meanings, e.g. "smoke means fire").

Conclusions

All of these examples are consistent with the general hypothesis described above about the way language works to represent exact numbers. But all three suggest that, for our numerical representations to carry exact content, their use need not be nearly as conscious or as language-like as I had thought.

---
* With one minor difference: Sklar et al.'s error bars reflect the mean +/- .5 * the standard error of the mean (SEM), rather than the more conventional +/- 1 SEM. This is a semantic issue: the full length of their error bar is the SEM, rather than the SEM being the distance from the mean. Nevertheless, it is not standard practice.

Tuesday, August 27, 2013

Valence/arousal in babies



The valence/arousal model, from Russell (1980). Higher up is more aroused, further right is more positive valence.

Watching my daughter M, who is now six weeks old, makes me think that valence and arousal are much more tightly coupled in young infants than in older children and adults. Let me explain what I mean.

The valence/arousal model is a simple, powerful way of thinking about the spectrum of human emotions. Valence describes whether the emotion is positive or negative, while arousal describes the level of alertness or energy involved in the emotion. The original scaling of emotion words is shown in the graphic above. For example, you can see that fear, anger, and distress are high arousal emotions with generally negative valence (upper left corner) while excitement and delight are generally high arousal emotions with positive valence (upper right corner).
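
(For the concretely minded: the model is easy to express as a data structure. Here's a toy sketch in JavaScript; the coordinates are rough values eyeballed from Russell's figure, purely for illustration:)

// Each emotion is a point in (valence, arousal) space, both roughly on [-1, 1].
// These numbers are guesses read off Russell (1980), not measured values.
var emotions = {
    fear:       { valence: -0.6, arousal:  0.8 },
    anger:      { valence: -0.8, arousal:  0.7 },
    excitement: { valence:  0.7, arousal:  0.9 },
    delight:    { valence:  0.8, arousal:  0.5 },
    sleepiness: { valence:  0.0, arousal: -0.9 }
};

// Which corner of the circumplex does an emotion live in?
function quadrant(e) {
    return (e.arousal >= 0 ? "high arousal" : "low arousal") + ", " +
           (e.valence >= 0 ? "positive" : "negative") + " valence";
}

quadrant(emotions.fear);   // -> "high arousal, negative valence"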

In the last few weeks, M has been starting to smile and coo. This very cute behavior is happening mostly right after she wakes up. She will be awake and alert and very smiley, and I'll play with her, tickling her stomach or holding her hand. After a few minutes of this, though, she can very quickly cross the line into overstimulated and fussy. Her smile will turn into a frown almost instantaneously. This flip happens in the other direction as well: she can be starting to wail from hunger, and if I bounce her and talk to her for a minute she will suddenly smile at me (before remembering that she is hungry and scrunching up her face again). In other words, when she's aroused, she switches between positive and negative valence very quickly.

So perhaps M's dimensions of valence and arousal just aren't as well-differentiated as they will be later. I tried to make my own version of Russell's diagram (below):




There's essentially just a single dimension of variation, mostly based on arousal: asleep is lowest and wailing is highest. In between, there is some variation in whether the middle states are generally positive (e.g. a happy, satiated look after eating) or negative (squirming and slightly uncomfortable), but they can flip between one another very quickly.

I've been trying to think about how to test this model, but I don't have any good ideas yet. Nevertheless, it certainly seems like it captures my intuitions about M's emotional states so far...

Thursday, August 22, 2013

Seeing in the first month, part 1

(Home-made face perception stimuli.)


M just had her one-month birthday. As a reader of papers on early cognitive development, I'm used to thinking that babies like M are all "newborns" - an unexamined category that includes at least the first month and perhaps beyond. But it turns out a tremendous amount has changed in that short month... in these two posts, I'll talk about a few things about M's visual perception that have been changing.

Visual acuity

Parents of newborns are routinely told that their baby can only focus on objects 8 - 15 inches away. But this turns out not to be true. Research by Tony Norcia and colleagues (nicely summarized here) suggests that infants' visual acuity is considerably sharper than initially thought. Early research by Fantz and others used infants' visual preference for contrast to test their acuity: if they would attend to stripes with spatial frequency X then they must be able to perceive spatial frequency X.

This work was incredibly clever, but it required infants not only to be able to see the stimulus but also to have a preference for it and to direct their eyes to it consistently. Neither consistent preference nor sustained attention is a newborn's strong suit, however. As a consequence, this work significantly underestimated their acuity. In fact, Norcia and colleagues estimate newborns' acuity to be closer to 20/120 in the first month, using passive brain measures that only require perception, not preference or attention. So newborns' acuity isn't great, but at least they aren't legally blind.

Eye-movements

Adults' eyes are constantly in motion, making saccades directly from one location to another around two to four times per second. Infants' ability to make fast saccades matures rapidly, although early researchers observed that very young infants sometimes make what look like a series of tiny "microsaccades" along the way to a target, rather than jumping there directly.

My little experiment with M has been to hold her up at arm's length facing me. Once she's there I get her attention and move her about 20 or 30 degrees to the right, and then to the left, observing whether her eyes follow my face. In the first week or so I saw a lot of these little tracking microsaccades. But by around 14 days she would make a single tracking saccade to fixate my face again after I moved her. These saccades took quite a while to plan and execute: I didn't time them but it seemed like it took a solid 500 - 1000 ms to look at me. This was really striking: Move. Pause. Look.

Now at a month old, I can do the same trick when M is in her chair and I'm three or four feet away from her. And she already seems quite a bit faster - though still far from the instantaneous adjustment I'd expect even a few months down the road.

A preference for contrast

Even in the first week we noticed that M seemed to love the venetian blinds above her changing table. Her fascination is likely driven by a preference for high contrast: the blinds are spaced pretty far apart, so the spatial frequency is low, and the light behind them makes the contrast high. I had talked about this phenomenon in a paper on babies' visual preferences, but it's amazing to see in person. By 4 weeks, when she's fussy, we can calm her down by holding her a couple of feet from the blinds in our dining room.

Although my saccade tracking exercise above happens to involve a saccade to a face, it doesn't necessarily say anything about face preferences per se. In that situation, my face is also a high-contrast target on a boring background (usually the ceiling). At one point I tried to show M's new trick to my wife and inadvertently set M up in a situation where there was a lightbulb behind me. I moved her to the left, and she stared at the lightbulb. I moved her to the right and she made a saccade to keep looking at the lightbulb (completely missing me). Maybe I should paint stripes on my face.

A preference for faces

If M would rather look at a lightbulb than her dad, does she really have a preference for faces? It turns out she does. Although their face preference may not trump their love of blinds, infants still prefer to look at things that look more like faces than contrast-matched things that don't. To test this finding with M, I used stimuli adapted from the classic Johnson & Morton (1991) work on newborn face preferences.

I constructed a pair of ping pong paddles, as in the picture above, with a set of three dots comprising either a pyramid (a schematic face) or an upside-down pyramid (which doesn't look like a face). For each trial, I got M's attention, crouched behind her bassinet, and held a paddle about a foot away from her face. I timed how long she looked at the paddle before looking away for more than 2 seconds. (All timing was inexact because I was using an iPhone timer. Trials shorter than 5 s I called false starts and restarted.)
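(If you want something less haphazard than an iPhone timer, a looking-time coder is only a few lines of JavaScript. Here's a minimal sketch - the hold-the-spacebar convention and the 2-second criterion are just one way to set it up:)

// Hold the spacebar while the baby looks at the paddle; release it when she
// looks away. The trial ends after 2 seconds of continuous looking away.
var lookingTime = 0, lookStart = null, awayTimer = null;

document.addEventListener("keydown", function(e) {
    if (e.keyCode === 32 && lookStart === null) {  // spacebar down: a look begins
        lookStart = Date.now();
        clearTimeout(awayTimer);                   // she looked back in time
    }
});

document.addEventListener("keyup", function(e) {
    if (e.keyCode === 32 && lookStart !== null) {  // spacebar up: the look ends
        lookingTime += Date.now() - lookStart;
        lookStart = null;
        awayTimer = setTimeout(function() {        // apply the 2-second rule
            console.log("Total looking time: " + (lookingTime / 1000) + " s");
        }, 2000);
    }
});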

I ran this procedure twice, when M was 13 and 15 days old. The first time I showed a non-face and then a face; the second time I showed a face and then a non-face. I tried to remain blind to condition by randomly numbering the backs of the paddles before each session, but I didn't completely succeed - so I take the results with a grain of salt. Nevertheless, M's data were striking. In the first session she looked at the non-face for 28 seconds and the face for 78 seconds; in the second she looked at the face for 46 seconds and then the non-face for 25 seconds. These results look almost identical to the visual preference reported by Farroni et al. (2005) using the same stimulus. Cool!

Interim Summary

A lot has changed over the past month. It's been remarkable to see M becoming a more awake, aware, perceptive baby. In the second post in this series, I'll talk a bit about her growing ability to use her arms, as well as the way she scans within faces.

Thursday, August 15, 2013

On publication lag


One potential negative of being an academic - especially for someone as impatient as I am - is the time lag of publication. It can easily take two years from the first submission of a manuscript to when that manuscript appears in print (in some high-profile journals and faster-paced fields the whole process can be shorter, but I'll focus on psychology here). What I want to argue is that, although publication lag has a whole host of negative consequences, it can nevertheless be a pathway to better work.

As a way to shortcut the long journal submission process, I've made a habit of submitting much of my research to the Cognitive Science Society. This is a great way to get findings out quickly: short papers are due in February, reviewed by April, and presented in July. The downside is that the publications are not archival. Unlike in computer science, where conference publications are the standard, psychology papers must - for purposes of jobs, promotion, etc. - be published in a peer-reviewed journal. So I often write up some more complete version of my CogSci papers as substantially longer journal articles.

I recently tried to estimate the lag in this process by tracing the papers I submitted to CogSci in 2007, 2008, and 2009 through to their journal versions. The lag was typically 3 - 5 years between the conference and the journal publication date. I feel exhausted even thinking about a lag of this magnitude: Ideas I was excited about last February will most likely see print in 2018.

By slowing down the broader dissemination of ideas, this lag clearly has negative consequences. Although conference proceedings are citable, journal papers stand a much better chance of having a long-term impact. A nicely typeset article in a good journal suggests solid research; when the article is published, there are often press releases and content alerts that go out; and journal papers are indexed in PubMed and other catalogs as part of the archival scholarly literature. Yet that journal article is often several years out of date before we read it.

Looking back at those papers, none of them was published, or even submitted, with only minor changes from its CogSci version. Instead, I made substantial revisions and additions prior to submitting them for the first time. Sometimes I replaced entire experiments - in some cases all of the experiments - because I had learned how to design them better or had created a better stimulus set. I was and still am happy with the initial CogSci papers. But taking the time to write them up, get reviews, prepare a talk, and present gave me space to become dissatisfied. It gave me time to raise my standards and to think that I could do better.

Peer review plays a part in this process, but the feedback I receive is not always what drives my revisions. In some cases I make changes just to please reviewers. But my most successful revisions are the ones in which I find a shared concern with the reviewers: a flaw that I recognize and am unsatisfied with myself. Then, when I fix this flaw to my own satisfaction, reviewers are also satisfied. It takes time for me to get this kind of perspective.

That's why I think the lag itself is valuable, even independent of feedback from the review process. The slow speed of scientific publication may actually be a form of being tied to the mast. As frustrating as it is to wait 2 - 3 months for reviews, the process actually enforces the dictum of setting a draft aside, a practice that is endorsed by writing coaches from Stephen King to the Harvard Writing Center.* And without those enforced breaks, I doubt that I would have the discipline to keep from pressing "publish" and sending my (perhaps interesting but often half-baked) work out into the world.

A lot has been written about changing publication standards for psychology and for science more generally. I especially like the Scientific Utopia pieces by Nosek and colleagues that describe ways that digital communication can help with disseminating scientific knowledge. But as much as I hate to say it, I wonder whether the molasses-slow timeline of scientific publication doesn't sometimes lead to better thought-out, higher-quality papers...


---
* Of course, there are some parts of the publication process that don't provide a benefit, e.g. the lag from proofs until the journal actually decides to print the darn thing. This lag could be eliminated if we gave up on paper journals. While many journals now use e-pub before print, my experience is that this practice leads to huge messes in Google Scholar and elsewhere when the same paper gets cited under two different years.