Thursday, December 12, 2013

A belated git migration

It's coming up on conference paper season, specifically for the Cognitive Science Conference. I love how deadlines like CogSci push research forward, giving us an intermediate goal to shoot for. But when lots of folks in the lab are writing papers separately, keeping track of all the drafts can get unwieldy very fast. My resolution this year is that no one will send me any more zip files of a directory called "CogSciPaperFinal." File naming practices like this one have been caricatured before,* but they get even worse when I'm constantly trying to track something like 6 - 8 different papers going forward at the same time.

Towards that end, our last lab meeting of the quarter was on version control software. In a nutshell, version control packages allow individuals and collaborative groups to work together on a project (usually a software codebase) and provide tools for keeping track of and merging changes to the project. It's painfully clear that we're late to the party: virtually no one in industry works on a large project without version control, but, as is frequently noted, scientists are not good software engineers.

We are starting a lab-wide push to keep track of all of our writing and code using git and github. This transition will mean a bit of discomfort – hopefully not pain – but it's a far better method for storing our work and sharing it with collaborators. If you haven't played with git, I recommend looking at this nice tutorial by NYU's John McDonnell. I also found it very useful to do the TryGit tutorial. The lab's (currently very empty) github page is here. Hopefully in a couple of months it'll be substantially fuller...

---
* HT: Michael Waskom.

Friday, December 6, 2013

Computation under interference

(ENIAC, the first electronic general purpose computer; courtesy Wikipedia).

What if you have a very powerful computer, but it only works some of the time? Maybe it's made from vacuum tubes, and when they overheat or when some dust or a fly ends up in the works, one of those tubes burns out. Then the computer is down for weeks on end. But even if it's nonfunctional most of the time, when it's working, it's Turing complete.

I'm coming to think that babies are this sort of computer. Perhaps the biggest puzzle in cognitive development is the amazing things that babies can sometimes do in very controlled settings (think moral reasoning) and yet the tremendous amount they can't do the rest of the time (think anything except eat, sleep, poop, suck, swipe and occasionally give you a charming smile...). I wonder if one way to reconcile these two different conceptions of infants is by thinking about the challenges of regulating their arousal – in much the same way you need to regulate the temperature of the vacuum tubes to get optimal computing performance.

Sometimes M gives me an incredibly intelligent look and does something unexpected. In the past weeks, she's been trying to pick up her chair to see the bottom, cooing systematically in response to me, or pulling out and reinserting her pacifier in her mouth. But other times she is glassy-eyed because she's concentrating on eating, or wiggling because she has indigestion. Tiredness is the biggest cause of cognitive failures. When I'm tired, I get grouchy, and my reactions are slower. When M is tired, everything goes to pieces. For a while she would even forget how to swallow: Milk would come pouring out of her mouth because she was sucking it in but forgetting to put it away down her throat...

When I was starting to think about M's cognitive development, right after she was born, I described her crying as a feedback loop, where arousal leads to more and more arousal unless there is some internal regulation or external noise in the system. Having observed her for a few more months, I'm increasingly convinced that crying is only one small part of this process.

In fact, most of M's psychological world – perhaps ours as well, though it's well-hidden – seems like it's about regulation of attention (think temperature in the vacuum tube room). Part of this is learning to attend to what is interesting in the world (say, her father's face rather than the blinds). Another part is learning to suppress attention to all kinds of stimuli, including both visual stimuli and internal sensations (like gurgling in the stomach or wetness in the diaper). When she gets tired this all stops happening. Internal sensations get amplified, external ones don't get attended to. The vacuum tubes start burning out, and only a long, relaxing nap will help.

Tuesday, November 26, 2013

Confounds in developmental time

(Looking for developmental dissociations between processes can be a profitable research strategy, but such dissociations may be affected by external events like the transition to formal schooling.) 


As a developmental psychologist, I'm primarily interested in answering "how" questions: How do children figure out how objects work, learn the meanings of words, or recognize the beliefs or goals of others? Yet along the way, I can't help interacting with the (less interesting) more descriptive set of "when" questions: When do children show evidence of object permanence, learn their first word, or pass false belief tasks? And in studying any individual phenomenon, answers to "how" questions can be informed by estimates of when a particular behavior is first observed.

But here's an issue that has been bothering me for a while. Our "when" estimates – derived as they are from the behavior of middle-class kids in the US and Europe – are not independent from one another. They are instead highly correlated, because of external milestones in the lives of the children we are studying. Transitions to preschool or to kindergarten are major drivers of new behaviors. Worse, because teachers are active readers of developmental psychology, new school experiences likely involve explicit practice of exactly the kinds of skills we're interested in studying.

One possible example of this issue comes from a lovely talk I heard by Yuko Munakata at the Cognitive Development Society meeting. Munakata has a deep body of recent work on the development of children's executive function (roughly, the ability to shift flexibly between different sets of behaviors according to context or task; review here). She documents transitions in children's executive function, including the transition from reacting to a stimulus to proactive preparation – choosing the proper behavior for a particular situation ahead of time. To be clear, nothing in Munakata's work depends on the precise timing of these transitions. Yet suspiciously, many of the transitions she studies happen in the same age range (4 - 6 years) when children are transitioning to school, an environment where their executive functions are being challenged and perhaps even trained.

A second example (very far from my area of expertise) comes from a comment made by Kate McLean in a recent brownbag talk she gave at Stanford. McLean studies identity development in adolescents, and she noted a big uptick in the quality of narratives in later high school. When she probed more deeply, however, she uncovered an external driver: late high schoolers were all engaged in the same social ritual: college application essays.

The research in these examples is not necessarily compromised by the presence of external events. But nevertheless, these kinds of events are big factors that might affect study outcomes in ways we wouldn't otherwise predict. From my perspective, I wonder how much the cognitive constructs I am interested in – pragmatics, language learning, theory of mind reasoning – are affected by individual children's transition to preschool, since the period around age 3 - 4 is a time of tremendous development for all of these abilities.

Studies that dissociate age and school shouldn't be too complicated to do, for either executive control or for other constructs. And these sorts of studies might give us some insights into the ways that (pre-) school experiences support the development and refinement of cognition. I recently heard the term "academic redshirting": holding children back from starting school so that they are older and do better than their peers when they finally start. This is a fairly intense (and controversial) strategy for getting kids ahead, but it might create an interesting natural opportunity for studying cognitive development...

Tuesday, November 12, 2013

What can a four-month-old do?

If you read papers on babies, you get the sense that mostly they just stare at stuff. The vast majority of research on babies uses visual attention – usually time looking at a screen or puppet show – as its dependent measure. Some experiments use more exotic dependent variables, like operant conditioning of kicks, pacifier sucking rate, or even smiles. But since Fantz's work in the 1960s, habituation and related looking-time paradigms have dominated the field. Although we're reminded occasionally that babies cry, fuss, poop, and sleep, developmentalists appear far and away most interested in looking (very nice review and critique of this idea by Dick Aslin here).

As a reader of that literature, it's been consistently amazing to me to see what M can do, even as a little baby. She is about to turn four months old next week, and the range of her behaviors is astonishing. Even more interesting is that some of this behavioral repertoire gives clear signals to underlying cognitive processes. Here's a quick list of some things I've noticed:

Eating. M takes a lot of her meals from a bottle. Early on, she showed no recognition of the bottle itself until it touched her cheek or lip, activating the rooting reflex. But around a month or six weeks ago she started showing signs of recognizing the bottle as an object, and responding to it before it reached her mouth. At first, the evidence seemed inconclusive to me – she was reaching (at that point mostly unsuccessfully) for many objects, so a reach for the bottle didn't seem diagnostic. But now she shows clear signs of recognition: When she is hungry and sees the bottle, she vocalizes and opens her mouth. Although I haven't tested this systematically, her recognition seems fairly viewpoint-invariant: she can recognize the bottle in many different orientations. This provides converging evidence for object categorization in 3 - 4 month-olds. It also seems like it could be a neat method for studying vision – think specially engineered bottles with different shapes and colors...

Smiling. Ever since around six weeks, M has been a very smiley baby. She greets people with a big smile, sometimes even smiling when she is otherwise quite fussy. It's kind of fun (in a slightly sad way) to watch smiles war with crying. If she is starting to fuss you can smile at her and see a reciprocal smile fight its way through her pouty face. But so far I have seen no evidence that her smiles reflect recognition of me or her mom: she gives them quite indiscriminately right now. (I know there is other evidence for recognizing and preferring mom, via her face or even her smell, very early on; I just find it surprising that she smiles roughly as much for others as she does for us).

Also, even if I hadn't tested M's face preference to schematic stimuli early on, her smiles would be a good indicator of her recognition of pictures. M will give a big smile at a picture of a baby's face. (Before I saw this, I never understood why people gave us board books filled with baby faces.) It doesn't seem surprising now that babies recognize pictures, but people used to argue that there were "primitive peoples" (presumably tribes somewhere) who didn't recognize photos. Hence picture perception – the ability to recognize the content of pictures – would be a learned cultural skill, and so babies wouldn't recognize pictures. A beautiful study by Hochberg and Brooks (1962), in which they denied their own child access to pictures and then tested his recognition, nicely disproved this idea.

Rolling over. M has rolled over a few times, from prone (tummy) position to her back. Each time, she was interested in an object on one side of her, and she turned her head and body that way (rotating herself onto her side), then began to kick her legs. When she kicked especially hard, her center of gravity tipped over her midpoint, and she flopped onto her back. This was clearly not something she was expecting, witness her look of complete and total surprise. The interesting thing is that she hasn't been able to reproduce this behavior in a week. This kind of motor exploration really looks like reinforcement learning, where the issue is assigning credit for the result: which of many different behaviors produced the rewarding outcome?

Vocalizations. M started cooing right around when she started smiling – a very adorable behavior. Now her vocalizations have differentiated a bit more: coos when she is in a good mood, squeals when she is starting to fuss. But the most interesting noise she makes is something we call her "cognition noise." There are several physiological measures of attention and cognition in infants, for example heart rate and pupil dilation. M presumably shows these, though we haven't measured. What we didn't bargain for is that she actually shows changes in respiration and vocalization when she is concentrating. When we give her a new toy, she stares at it, grunts, and breathes heavily. It's almost like the fan coming on for a MacBook Pro when the CPU is working hard. Adorable – and a nice external measure of attention.

Tuesday, October 22, 2013

How to make a babycam





(This is a guest post, written by Ally Kraus.)

After the recent Cognitive Development Society meeting, several people asked how we construct our head-mounted cameras (headcams or babycams for short), seen e.g. in this paper. Here are the ingredients - the camera, the mount, and the band - and how we put them together.

The Camera

Our recent headcams have used three types of camera. Each has pros and cons. We started out with the MD-80 camera, and then moved on to Veho cameras because they have a larger field of view and better image quality.

MD-80: You can find these cameras (and their knockoffs) on Amazon and eBay. The MD-80s are cheap and very lightweight, and also come with an accessory pack with a variety of mounts.

Veho Pro: The Pro is a more heavy-duty version of the MD-80. It has much clearer indicator lights, nearly double the battery life, and the camera also has a larger field of view. We have had some problems with the audio in the video files (either with it being quite noisy, or not syncing with the video) and also file corruption; different instances of the camera have had different issues. Also, the Pro does not come with the mount we need to attach it to the headband, so we have cannibalized the MD-80 mounts we had used previously. Amazon link for the camera here.

Veho Atom: Very similar to the Pro (same pros/cons), the Atom is smaller, and has about half the battery life. It does come with a headband mount. On Amazon here.

Fisheye Lens: The only modification we’ve made to the cameras themselves is to attach a fisheye lens to widen the field of view. We’ve used a simple magnetic smartphone version, like this one. The lens comes with a ring you can attach to a surface for the lens to adhere to. We attached ours with a ton of hot glue. (We’ve also substituted regular washers from the hardware store for the metal ring that’s included.) The lens can be knocked off by kids, so you can also glue the fisheye lens itself to the ring so it’s permanently on the camera.

Here is a comparison of the MD-80 and the Veho Pro, with and without the fisheye:



You can see that the field of view is dramatically different. The MD-80 without fisheye has a vertical field of view of about 22 degrees, while with fisheye it has a bit more than 40 degrees. The Veho is almost that good - around 40 even without the fisheye. It goes up to about 70 with the fisheye. The lenses on these cameras are not completely consistent, though, so we have found variance in our view measurements from camera to camera.

The Mount

Ideally, the camera lens would be situated in the center of the child's forehead just at the brow line, to give a semi-accurate idea of what the child can see. We wanted some ability to make adjustments – in particular, to angle the camera down if it was positioned too high – since some children find the camera distracting if it sits too low on the forehead.

Both the MD-80 and the Atom come with an angle-adjustable mount that pivots at one end. It's not ideal for our purposes because the lens is on the opposite side from the pivot point (indicated by a circle). All my diagrams use the MD-80 mount, although the Atom’s is similar, just smaller:

We really want the lens end to be right above the pivot so it's low on the child's forehead. We remedied this by unscrewing the two screws, flipping the camera holder upside-down, and re-assembling it. It doesn't fit quite as well this way (note the slight gap) but it's fine and not going to budge:

You can buy a similar mount for the Pro in a separate accessory package – unfortunately it is not included with that camera. We ended up not buying the accessory kit, but simply modded some of our existing MD-80 mounts to fit the Pro.

The Band

We modified some Coast LED Lenser 7041 6 Chip LED headlamp bands for our headcams. The best thing about this headlamp is that it comes with some plastic hooks that fit the mount perfectly. We disassembled the headlamp, keeping only the band, the top strap, and two of the hooks. The band is designed for adults and ended up being too large for some children; we fixed this by pulling apart the seam, trimming the elastic a few inches, and re-sewing it. The top strap was also too small with the battery pack removed, so we kept the buckle and replaced the adjustable part of the strap with a longer piece of 1" Nylon Woven Elastic purchased from http://www.rockywoods.com/.

The hooks connect the mount to the band. Slip the hooks into the bottom row of rectangular holes on the headcam mount and snap them into place:



It helps to hot glue the mount to the hook pieces, in order to stabilize the connection. You can then slip it on to the headband:



Our headcams have a headstrap to keep the camera snug on the child's head and also to prevent it from sliding/being pulled down. We wanted to ensure that the back one especially would be comfortable against the child's head.

For the front, we used a pipe cleaner. (Easy to bend, and relatively soft/safe around children.) We threaded the pipe-cleaner through the loop on the top strap (1). Then we threaded the ends of the pipe-cleaner from the back to the front through the top rectangular holes, then down along the sides of the camera (2). We twisted them together at the bottom of the camera mount (3), and then threaded the ends back into the hinge so there is no danger of them poking the child:



For the back, we picked the seam on the back loop of the top strap, wrapped the end around the band, and sewed it in place:


Finally, we added a little padding to the inside-front of the strap so that the plastic hook pieces wouldn't rest against the child's forehead. You can use the extra elastic from when you shortened the band, and hot glue it to the plastic hook pieces:


Voila! The final headcam is as pictured at the top of the post. Please let us know if you find this useful or if you discover other good variants on our setup.




Thursday, October 10, 2013

Randomization on mechanical turk

Amazon Mechanical Turk is a fabulous way to do online psychology experiments. There are a bunch of good tutorial papers showing why (e.g. here, here, and here). One issue that comes up frequently, though, is how to do random assignment to condition. Turk is all about allowing workers to do many HIT (human intelligence task, Turk's name for a work assignment) types, one after another. In contrast, most experimental psychologists want to make each condition of their experiment a single HIT and to get participants to do only one condition.

If you are using the web interface to Turk, you are creating a single HTML template, populated with different values for each distinct HIT type. That means that each different condition is a different HIT. In this case, if you want random assignment to (a single) condition, all you can do is write prominently "please do only one of these HITs." The problem is that Amazon displays HITs from the same job one after another, so you have to make sure that every worker stops after doing just one. This strategy generally works until some worker does 7 or 30 conditions of your experiment - messing up your randomization and putting you in the awkward position of paying for data you (typically) can't use. Nevertheless, I and many other people used the "do this HIT once" method for years - it's easy and doesn't go wrong too much if the instructions are clear enough.

In the last couple of years, though, folks in my lab have moved to using "external HITs" where we use Turk's Command Line Tools to direct workers to a single HTML/JavaScript-based HIT that can do all kinds of more interesting stuff, including having multiple screens, lots of embedded media, and a more complex control flow. The HTML/js workflow is generally great for this, and there is quite a bit of code floating around the web that can be reused for this purpose. Now there is only one underlying HIT, so workers can only complete it once.

The easiest way to do random assignment to condition from within a JavaScript HIT is to have the js assign condition completely at random for each participant. This just involves writing some randomization in the code for the experiment and makes things very simple. With 2 conditions and many participants, this works pretty well (maybe you get 48 in one condition and 52 in another), but with many conditions and fewer participants, it fails quite badly. (Imagine trying to get 5 conditions with 10 participants each. You might get 6, 14, 8, 4, and 18 subjects, respectively, which would not be optimal from the perspective of having equally precise measures about each condition.)
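As a rough sketch (in JavaScript, with illustrative names – this is not our actual experiment code), pure random assignment amounts to a single uniform draw per participant, and simulating it makes the imbalance easy to see:

```javascript
// Pure random assignment: each participant independently gets a
// uniform draw over condition numbers 1..numConditions.
function assignCondition(numConditions) {
  return Math.floor(Math.random() * numConditions) + 1;
}

// Simulate 50 participants across 5 conditions and tally the counts.
var counts = {};
for (var i = 0; i < 50; i++) {
  var cond = assignCondition(5);
  counts[cond] = (counts[cond] || 0) + 1;
}

// With only 10 participants expected per cell, the realized counts
// routinely stray far from 10/10/10/10/10.
console.log(counts);
```

Running this a few times shows cell sizes bouncing around substantially, which is exactly the problem with small samples and many conditions.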

Our solution to this problem is as follows: We use a simple PHP script, the "maker getter," that is called with an experiment filename and a set of initial condition numbers (in the example below, it's "myexpt_v1" and conditions 1 and 2, each with 50 participants). The first time it's called, it sets up a filename for that experiment and populates the conditions. Every subsequent time it's called, it returns a condition. Then, if this is a true Turk worker (and not a test run), a separate script decrements the counts for that condition. This gives us true random assignment to condition.
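The real implementation lives in the PHP scripts linked at the bottom of this post, but the bookkeeping they perform is simple. Purely as an illustration, here is a sketch of that logic in JavaScript, with hypothetical function names; one reasonable policy (assumed here, not necessarily what the PHP does exactly) is to sample uniformly among conditions that still have open slots:

```javascript
// Parse an initial condition string like "1,50;2,50" into a table of
// remaining slots per condition: { "1": 50, "2": 50 }.
function parseConds(condString) {
  var slots = {};
  condString.split(";").forEach(function (pair) {
    var parts = pair.split(",");
    slots[parts[0]] = parseInt(parts[1], 10);
  });
  return slots;
}

// "Maker getter": return a condition chosen at random among those
// that still have participants left to run.
function getCondition(slots) {
  var open = Object.keys(slots).filter(function (c) {
    return slots[c] > 0;
  });
  return open[Math.floor(Math.random() * open.length)];
}

// "Decrementer": called only for real Turk workers, so test runs
// don't consume slots.
function decrement(slots, cond) {
  slots[cond] -= 1;
}
```

Because slots are only decremented for genuine workers, test runs and previews don't eat into the quota, and every condition ends up with exactly its target count.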

(Note: Todd Gureckis's PsiTurk is a more substantial, more general way to solve this same problem and several others, but requires a bit more in the way of setup and infrastructure.)

---- DETAILS AND CODE ----

The JavaScript block for setting up and getting conditions:

// Condition - call the maker getter to get the cond variable
var filename = "myexpt_v1";
var condCounts = "1,50;2,50"; // conditions 1 and 2, 50 slots each
var cond;
try {
    var xmlHttp = new XMLHttpRequest();
    // Synchronous request, so cond is set before the experiment begins
    xmlHttp.open("GET", "http://website.com/cgi-bin/maker_getter.php?conds=" +
                 condCounts + "&filename=" + filename, false);
    xmlHttp.send(null);
    cond = xmlHttp.responseText;
} catch (e) {
    cond = 1; // fall back to condition 1 if the request fails
}

The JavaScript block for decrementing conditions:

// Decrement only if this is an actual turk worker!
if (turk.workerId.length > 0) {
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.open("GET",
                 "http://website.com/cgi-bin/decrementer.php?filename=" +
                 filename + "&to_decrement=" + cond, false);
    xmlHttp.send(null);
}

maker_getter PHP script (courtesy of Stephan Meylan, now a grad student at Berkeley), which is running in the executable portion of your hosting space: maker_getter.php.

decrementer PHP script (also courtesy Stephan): decrementer.php.

Friday, October 4, 2013

Effect sizes and baby rearing (review of Baby Meets World)

In these first months of M's life, I've been reading a fair number of parenting advice or interest books focused on babies. My motivation is partially personal and partially professional. Regardless, it has been entertaining to sample the vast array of different theories and interpretations of what is going on in M's cute little head (and body).

I recently finished Baby Meets World: Suck, Smile, Touch, Toddle, by Nicholas Day, and it is my favorite of the scattered group I've read. Day is a clear, funny writer who also blogs entertainingly for Slate. Baby Meets World is a tour of the history and science of parenting, broken down by the four activities in its subtitle.

But unlike many books about developmental science it is also a cry of rage and despair by a new parent who has completely had it with parenting advice. This feels exactly right to me. Rather than urbanely walking through the latest research on sucking along with a Gladwell-esque profile of a scientist, Day shows us the absolute weirdness of its past - from deciding whether to use goats or donkeys as wet nurses to the purported link between thumb sucking and chronic masturbation.

The implication, drawn out very clearly in a recent New York Times blog post, is that our current developmental studies may not have much more to offer parents than Freud's hypotheses about thumb sucking:
... [E]xperiments have the most meaning within [their] discipline, not outside of it: they are mostly relevant to small academic disputes, not large parenting decisions. But when we extract practical advice from these studies, we shear off all their disclaimers and complexities. These are often experiments that show real but very small effects, but in the child-rearing advice genre, a study that showed something is possible comes out showing that something is certain. Meager data, maximum conclusions. (p299)
People often ask me how relevant my own work on language development is to my relationship with M. My answer is, essentially not at all. I am a completely fascinated observer; I continually interpret her behavior in terms of my interest in development. Nevertheless, I see very few - if any - easy generalizations from my work (and that of most of my colleagues) to normative recommendations for child rearing beyond "talk to your child."

While this kind of recommendation is without a doubt critical for some families, it's not necessarily the kind of thing that you need to hear if you're already in the market for baby advice books. For example, rather than telling me that M needs to hear 30 million words, you should probably counsel me to talk to her less (let the baby sleep, already!). One size doesn't fit all. There are some interesting applied studies that have near-term upshot for baby-advice consumers (e.g. work on learning from media). But overall this is the exception rather than the rule in much of what I do, which is primarily basic research on children's social language learning.

Parents who have read parenting books often say "you must do X with your child" or "you can't do Y," whether it's serving refined sugar, giving tummy time, or using the word "no" (don't, do, and don't, respectively - according to some authorities).  But the effect size of any child-rearing advice, whether reasonable or not, is likely to be small: the people who had parents that followed it aren't immediately distinguishable from those whose parents didn't. Consider the contrast between the range of variation in parenting practices across cultures and the consistency of having reasonable outcomes - nice, well-adjusted people. People grow up lots of different ways and yet they turn out just fine. This is the message of Day's book.

Of course there are real exceptions to this rule. But these are not the small variations in child rearing for your standard-issue helicopter parents - BPA-free tupperware or not? - or even the culturally-variable practices like whether you swaddle. They are huge factors like poverty, stress, and neglect, which have systematic and devastating effects on children's brain, mind, and life outcomes. Remediating them is a major policy objective. We shouldn't confuse the myriad bewildering details of baby rearing with the necessities of providing safety, nutrition, and affection.