Tuesday, October 22, 2013

How to make a babycam

[Photo: the finished headcam.]

(This is a guest post, written by Ally Kraus.)

After the recent Cognitive Development Society meeting, several people asked how we construct our head-mounted cameras (headcams or babycams for short), seen e.g. in this paper. Here are the ingredients - the camera, the mount, and the band - and how we put them together.

The Camera

Our recent headcams have used three types of camera. Each has pros and cons. We started out with the MD-80 camera, and then moved on to Veho cameras because they have a larger field of view and better image quality.

MD-80: You can find these cameras (and their knockoffs) on Amazon and eBay. The MD-80s are cheap and very lightweight, and also come with an accessory pack with a variety of mounts.

Veho Pro: The Pro is a more heavy-duty version of the MD-80. It has much clearer indicator lights, nearly double the battery life, and a larger field of view. We have had some problems with the audio in the video files (either being quite noisy or not syncing with the video) and also file corruption; different units have had different issues. Also, the Pro does not come with the mount we need to attach it to the headband, so we have cannibalized the MD-80 mounts we had used previously. Amazon link for the camera here.

Veho Atom: Very similar to the Pro (same pros/cons), the Atom is smaller, and has about half the battery life. It does come with a headband mount. On Amazon here.

Fisheye Lens: The only modification we've made to the cameras themselves is to attach a fisheye lens to widen the field of view. We've used a simple magnetic smartphone version, like this one. The lens comes with a metal ring you can attach to a surface for the lens to adhere to. We attached ours with a ton of hot glue. (We've also substituted regular washers from the hardware store for the metal ring that's included.) Kids can knock the lens off, so you may also want to glue the fisheye lens itself to the ring so that it's permanently attached to the camera.

Here is a comparison of the MD-80 and the Veho Pro, with and without the fisheye:

[Images: sample frames from the MD-80 and the Veho Pro, with and without the fisheye lens.]

You can see that the field of view is dramatically different. The MD-80 without the fisheye has a vertical field of view of about 22 degrees; with the fisheye it has a bit more than 40 degrees. The Veho is almost that good - around 40 degrees even without the fisheye - and goes up to about 70 with it. The lenses on these cameras are not completely consistent, though, so we have found some variation in our field-of-view measurements from camera to camera.

The Mount

Ideally, the camera lens would sit in the center of the child's forehead just at the brow line, to give a semi-accurate idea of what the child can see. We also wanted some ability to make adjustments - in particular, to angle the camera down if it was positioned too high, since some children find the camera distracting if it sits too low on the forehead.

Both the MD-80 and the Atom come with an angle-adjustable mount that pivots at one end. It's not ideal for our purposes because the lens is on the opposite side from the pivot point (indicated by a circle). All my diagrams use the MD-80 mount, although the Atom’s is similar, just smaller:

[Diagram: the stock mount, with the pivot point circled and the lens at the opposite end.]

We really want the lens end to be right above the pivot so that it sits low on the child's forehead. We remedied this by unscrewing the two screws, flipping the camera holder upside-down, and re-assembling it. It doesn't fit quite as well this way (note the slight gap), but it's fine and not going to budge:

[Photo: the mount re-assembled with the camera holder flipped; note the slight gap.]

You can buy a similar mount for the Pro in a separate accessory package - unfortunately, it is not included with that camera. We ended up not buying the accessory kit, and instead modded some of our existing MD-80 mounts to fit the Pro.

The Band

We modified some Coast LED Lenser 7041 6 Chip LED headlamp bands for our headcams. The best thing about this headlamp is that it comes with plastic hooks that fit the mount perfectly. We disassembled the headlamp, keeping only the band, the top strap, and two of the hooks. The band is designed for adults and ended up being too large for some children; we fixed this by pulling apart the seam, trimming a few inches off the elastic, and re-sewing it. The top strap was also too short with the battery pack removed, so we kept the buckle and replaced the adjustable part of the strap with a longer piece of 1" Nylon Woven Elastic purchased from http://www.rockywoods.com/.

The hooks connect the mount to the band. Slip the hooks into the bottom row of rectangular holes on the headcam mount and snap them into place:

[Images: hooks snapped into the bottom row of holes on the mount.]

It helps to hot glue the mount to the hook pieces in order to stabilize the connection. You can then slip it onto the headband:

[Photo: the mount slipped onto the headband.]

Our headcams use the top strap to keep the camera snug on the child's head and to prevent it from sliding or being pulled down. We wanted to make sure that the back attachment especially would be comfortable against the child's head.

For the front, we used a pipe cleaner. (Easy to bend, and relatively soft/safe around children.) We threaded the pipe cleaner through the loop on the top strap (1). Then we threaded the ends of the pipe cleaner from the back to the front through the top rectangular holes, then down along the sides of the camera (2). We twisted them together at the bottom of the camera mount (3), and then threaded the ends back into the hinge so there is no danger of them poking the child:

[Diagram: pipe cleaner threading, steps 1-3.]

For the back, we picked apart the seam on the back loop of the top strap, wrapped the end around the band, and sewed it in place:

[Photo: the top strap sewn around the back of the band.]

Finally, we added a little padding to the inside front of the band so that the plastic hook pieces wouldn't rest against the child's forehead. You can use the extra elastic from when you shortened the band, and hot glue it to the plastic hook pieces:

[Photo: padding hot-glued to the hook pieces.]

Voila! The final headcam is as pictured at the top of the post. Please let us know if you find this useful or if you discover other good variants on our setup.




Thursday, October 10, 2013

Randomization on mechanical turk

Amazon Mechanical Turk is a fabulous way to do online psychology experiments. There are a bunch of good tutorial papers showing why (e.g. here, here, and here). One issue that comes up frequently, though, is how to do random assignment to condition. Turk is all about letting workers do many HITs (human intelligence tasks, Turk's name for work assignments), one after another. In contrast, most experimental psychologists want to make each condition of their experiment a single HIT and to have participants complete only one condition.

If you are using the web interface to Turk, you create a single HTML template that is populated with different values for each distinct HIT type. That means that each condition is a different HIT. In this case, if you want random assignment to (a single) condition, all you can do is write prominently "please do only one of these HITs." The problem is that Amazon displays HITs from the same job one after another, so you have to rely on every worker stopping after just one. This strategy generally works until some worker does 7 or 30 conditions of your experiment - messing up your randomization and putting you in the awkward position of paying for data you (typically) can't use. Nevertheless, I and many other people used the "do this HIT once" method for years - it's easy and doesn't go wrong too often if the instructions are clear enough.

In the last couple of years, though, folks in my lab have moved to using "external HITs," where we use Turk's Command Line Tools to direct workers to a single HTML/JavaScript-based HIT that can do all kinds of more interesting things, including multiple screens, lots of embedded media, and more complex control flow. The HTML/JavaScript workflow is generally great for this, and there is quite a bit of code floating around the web that can be reused for this purpose. And since there is only one underlying HIT, workers can complete it only once.

The easiest way to do random assignment to condition from within a JavaScript HIT is to have the script assign condition completely at random for each participant. This just involves writing some randomization into the code for the experiment and keeps things very simple. With 2 conditions and many participants, it works pretty well (maybe you get 48 in one condition and 52 in the other), but with many conditions and fewer participants, it fails quite badly. (Imagine trying to get 5 conditions with 10 participants each. You might get 6, 14, 8, 4, and 18 subjects, respectively, which would not be optimal from the perspective of having equally precise measures of each condition.)
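
To make the problem concrete, here is a minimal sketch of the naive approach (the function and the little simulation are just for illustration, not our actual experiment code):

// Naive assignment: each participant gets an independent uniform draw,
// so condition counts can drift far from equal.
function naiveAssign(numConditions) {
    return Math.floor(Math.random() * numConditions) + 1;
}

// Simulate 50 participants across 5 conditions to see the drift
var counts = [0, 0, 0, 0, 0];
for (var i = 0; i < 50; i++) {
    counts[naiveAssign(5) - 1] += 1;
}
console.log(counts); // e.g. [6, 14, 8, 4, 18] rather than [10, 10, 10, 10, 10]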

Our solution to this problem is as follows. We use a simple PHP script, the "maker getter," that is called with an experiment filename and a set of initial condition counts (in the example below, the filename is "myexpt_v1" and there are two conditions, 1 and 2, each with 50 slots). The first time it's called, it sets up a file for that experiment and populates the condition counts. Every subsequent time it's called, it returns a condition. Then, if this is a true Turk worker (and not a test run), a separate script decrements the count for that condition. This gives us random assignment to condition while keeping the counts balanced.

(Note: Todd Gureckis's PsiTurk is a more substantial, more general way to solve this same problem and several others, but requires a bit more in the way of setup and infrastructure.)

---- DETAILS AND CODE ----

The JavaScript block for setting up and getting conditions:

// Condition - call the maker getter to get the cond variable
var cond;
try {
    var filename = "myexpt_v1";
    var condCounts = "1,50;2,50"; // conditions 1 and 2, 50 slots each
    // Synchronous GET, so cond is set before the experiment begins
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.open("GET", "http://website.com/cgi-bin/maker_getter.php?conds=" +
        condCounts + "&filename=" + filename, false);
    xmlHttp.send(null);
    cond = xmlHttp.responseText;
} catch (e) {
    // If the server can't be reached, fall back to condition 1
    cond = 1;
}

The JavaScript block for decrementing conditions:

// Decrement only if this is an actual Turk worker (not a test run)!
if (turk.workerId.length > 0) {
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.open("GET",
        "http://website.com/cgi-bin/decrementer.php?filename=" +
        filename + "&to_decrement=" + cond, false);
    xmlHttp.send(null);
}

maker_getter PHP script (courtesy of Stephan Meylan, now a grad student at Berkeley), which runs in the executable portion of your hosting space: maker_getter.php.

decrementer PHP script (also courtesy Stephan): decrementer.php.
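
For readers who want to adapt the approach, here is a rough sketch of the logic the two scripts implement, written in Node-style JavaScript (to match the blocks above) rather than PHP. The JSON storage format and the sample-proportional-to-remaining-slots policy are illustrative assumptions, not necessarily what the linked scripts actually do:

var fs = require('fs');

// First call: create a counts file from a spec like "1,50;2,50".
// Subsequent calls: return a condition that still has open slots.
function makerGetter(filename, conds) {
    var path = filename + '.json';
    var counts;
    if (!fs.existsSync(path)) {
        counts = {};
        conds.split(';').forEach(function (pair) {
            var bits = pair.split(',');
            counts[bits[0]] = parseInt(bits[1], 10);
        });
        fs.writeFileSync(path, JSON.stringify(counts));
    } else {
        counts = JSON.parse(fs.readFileSync(path, 'utf8'));
    }
    // Sample a condition with probability proportional to remaining
    // slots (an assumed policy; the real script may differ)
    var total = 0, cond;
    for (cond in counts) { total += counts[cond]; }
    if (total === 0) { return null; } // every condition is full
    var draw = Math.random() * total;
    for (cond in counts) {
        draw -= counts[cond];
        if (draw <= 0) { return cond; }
    }
}

// Decrement the count for a condition once a real worker completes it
function decrementer(filename, toDecrement) {
    var path = filename + '.json';
    var counts = JSON.parse(fs.readFileSync(path, 'utf8'));
    counts[toDecrement] = Math.max(0, counts[toDecrement] - 1);
    fs.writeFileSync(path, JSON.stringify(counts));
}

Note that the maker getter never decrements on its own; only the decrementer does, so test runs and previews don't use up condition slots.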

Friday, October 4, 2013

Effect sizes and baby rearing (review of Baby Meets World)

In these first months of M's life, I've been reading a fair number of parenting advice or interest books focused on babies. My motivation is partially personal and partially professional. Regardless, it has been entertaining to sample the vast array of different theories and interpretations of what is going on in M's cute little head (and body).

I recently finished Baby Meets World: Suck, Smile, Touch, Toddle, by Nicholas Day, and it is my favorite of the scattered group I've read. Day is a clear, funny writer who also blogs entertainingly for Slate. Baby Meets World is a tour of the history and science of parenting, broken down by the four activities in its subtitle.

But unlike many books about developmental science, it is also a cry of rage and despair by a new parent who has completely had it with parenting advice. This feels exactly right to me. Rather than urbanely walking through the latest research on sucking along with a Gladwell-esque profile of a scientist, Day shows us the absolute weirdness of its past - from deciding whether to use goats or donkeys as wet nurses to the purported link between thumb sucking and chronic masturbation.

The implication, drawn out very clearly in a recent New York Times blog post, is that our current developmental studies may not have much more to offer parents than Freud's hypotheses about thumb sucking:
... [E]xperiments have the most meaning within [their] discipline, not outside of it: they are mostly relevant to small academic disputes, not large parenting decisions. But when we extract practical advice from these studies, we shear off all their disclaimers and complexities. These are often experiments that show real but very small effects, but in the child-rearing advice genre, a study that showed something is possible comes out showing that something is certain. Meager data, maximum conclusions. (p299)
People often ask me how relevant my own work on language development is to my relationship with M. My answer is: essentially, not at all. I am a completely fascinated observer; I continually interpret her behavior in terms of my interest in development. Nevertheless, I see very few - if any - easy generalizations from my work (and that of most of my colleagues) to normative recommendations for child rearing beyond "talk to your child."

While this kind of recommendation is without a doubt critical for some families, it's not necessarily the kind of thing that you need to hear if you're already in the market for baby advice books. For example, rather than telling me that M needs to hear 30 million words, you should probably counsel me to talk to her less (let the baby sleep, already!). One size doesn't fit all. There are some interesting applied studies that have near-term upshot for baby-advice consumers (e.g. work on learning from media). But overall this is the exception rather than the rule in much of what I do, which is primarily basic research on children's social language learning.

Parents who have read parenting books often say "you must do X with your child" or "you can't do Y," whether it's serving refined sugar, giving tummy time, or using the word "no" (don't, do, and don't, respectively - according to some authorities). But the effect size of any child-rearing advice, whether reasonable or not, is likely to be small: people whose parents followed it aren't immediately distinguishable from those whose parents didn't. Consider the contrast between the range of variation in parenting practices across cultures and the consistency of the outcomes - nice, well-adjusted people. People grow up lots of different ways, and yet they turn out just fine. This is the message of Day's book.

Of course there are real exceptions to this rule. But these are not the small variations in child rearing that preoccupy your standard-issue helicopter parent - BPA-free tupperware or not? - or even culturally variable practices like whether you swaddle. They are huge factors like poverty, stress, and neglect, which have systematic and devastating effects on children's brains, minds, and life outcomes. Remediating them is a major policy objective. We shouldn't confuse the myriad bewildering details of baby rearing with the necessities of providing safety, nutrition, and affection.