Wednesday, July 30, 2014

I would have shocked myself

(One of my favorite xkcd cartoons. According to recent research, maybe we're all scientists? At least, we keep pulling the lever over and over again...)


Do people hate being alone with their thoughts so much that they will shock themselves to avoid thinking? In a series of studies, Wilson et al. (2014) asked people to spend time quietly thinking without distractions and concluded that people generally found the experience aversive. Others have reinterpreted this conclusion, but the general upshot is that people at least didn't find it particularly pleasurable. Several manipulations (e.g. planning the thing you were going to think about) also didn't make the experience much better. Overall this is a very interesting paper, and I really appreciated the emphasis on a behavior – mind wandering – that has received much more attention in the neuroscience literature than in psychology.

The part of the paper that got the most attention, however, was a study that measured whether people would give themselves an electric shock while they were supposed to be quietly thinking. The authors shocked the participants once and then checked to make sure that the participants found it sufficiently aversive to say they would pay money to avoid having it done to them again. But then when the participants were left to think by themselves, around two thirds of men and a quarter of women went and shocked themselves again, often several times. The authors interpret this finding as follows:
What is striking is that simply being alone with their own thoughts... was apparently so aversive that it drove many participants to self-administer an electric shock that they had earlier said they would pay to avoid.
Something feels wrong about this interpretation as it stands. Here are my two conflicting intuitions. First, I actually like being alone with my own thoughts. I sometimes consciously create time to space out and think about a particular topic, or about nothing at all. Second, I am absolutely certain I would have shocked myself.

I would have shocked myself at least once, but possibly five or more times. I might even have been the guy who shocked himself 190 times and had to be excluded from the study. Even though I said I would pay money to avoid having someone else shock me, I definitely would have done it to myself. Why? I don't really know.

There are many sensations that I would pay money not to have someone do to me: stick a paperclip under my fingernail, pluck out hairs from my beard, bite my nail to the quick. Yet I will sometimes do these things to myself, even though they are painful and I will regret it afterwards. The exploration of small, moderately painful stimuli is something that I do on a regular basis. (Other people do these things too). I am not sure why I do them, but I don't think it's because I hate being bored so much that I would rather be in pain.

Boredom and pain are not mutually exclusive, in other words. Pain can drive boredom away, but the two can coexist as well. I don't do these things when I'm engaged in something else, like reading the internet on my phone. But I do them on a regular basis when I'm listening to a talk or thinking about a complicated paper.

I don't know why I cause myself minor pain sometimes. But it feels like there are at least two component reasons. One is some kind of automatic exploration (they happen when my mind is otherwise occupied, as the examples above show). But I also do these sorts of things in part because I want to see how they feel. Kind of like ripping off a hangnail or playing with a sharp knife. There's some novelty seeking involved, but doing them again and again isn't quite about novelty seeking; we've all had a hangnail or pulled out a hair. Perhaps it's about the exact sensation and the predictions we make – will it feel better or worse? Can I predict exactly what it will be like?

What I'm arguing is that these things are mysterious on any view of humans as rational agents. The Wilson paper doesn't sufficiently acknowledge this mystery, instead choosing to treat people as purely rational: they paid to avoid the shock, yet they administered it to themselves anyway, so the shock must have been preferable to sitting with their thoughts. But there isn't a direct, utility-theoretic tradeoff between mind-wandering and electric shock. Suppose Wilson et al. had played Enya to participants and found that they shocked themselves (which I bet I would have). Would they then conclude that Enya is so bad that people shock themselves to get away from her?


Wilson TD, Reinhard DA, Westgate EC, Gilbert DT, Ellerbeck N, Hahn C, Brown CL, & Shaked A (2014). Just think: The challenges of the disengaged mind. Science, 345(6192), 75-77. PMID: 24994650

Tuesday, July 29, 2014

Are first words bound to specific contexts?

("Internet high five." From maniacworld.com).

It's so fun to watch the emergence of language. M just had her first birthday, and – though we still haven't seen much in the way of production beyond "brown bear" (see previous post) and maybe "yum" – she's starting to show some exciting signs of knowing some words. It's endlessly fascinating to gather evidence about her comprehension, but I'm continuously amazed at how tenuous my evidence is for any given word.*

In particular, I've been wondering for the past week or so whether M knows the meaning of the word/phrase "high five." She loves the swings at the playground, and really enjoys playing games while swinging. One day we started doing hand slaps (accompanied by me saying "high five"). After a couple of times playing this game, when I said "high five," she would raise her hands, even without the extra cue of me raising my hands. Word knowledge, right?

It turns out that one persistent question about first words is how contextually bound they are: whether their meanings are general across contexts, or whether they apply only in specific cases. Some of this is a remnant of older, behaviorist analyses of early language – word A is a conditioned response to situation B – which don't seem to account for the data. Most people who study child language agree that early nouns like "dog" can be generalized across situations quite handily – in fact, overgeneralization is relatively common. But you still see references to "context-specific" language in textbooks and materials for parents (example, example). My goal here is to propose an alternative – rational – account of why much early language looks context specific, even though it's not.

I can see why ideas about context-specific language stick around. When I investigated M's "high five" knowledge further, I was disappointed. Although I could get her to give me a high five on the swings, I simply couldn't elicit the gesture in response to my words when we came home to the house. This looked to me a lot like "high five" was bound to the context of the swing set.

But here's another possibility, in two parts. Part one: Language comprehension for a one-year-old is hard. A well-known set of experiments by Stager & Werker (1998) suggests that even relatively small attentional demands can disrupt the encoding of speech. In their experiments, 14-month-olds (and even 8-month-olds) could distinguish the sounds "bih" and "dih." But children of the same age had trouble learning to pair these sounds consistently with different pictures, even though they could do it just fine with more dissimilar words (e.g. "lif" and "neem").

Part two: When you have a hard comprehension task, context can make it easier. Contextual predictability effects have been very well studied in word recognition (example), with the caveat that context is typically defined as the sentence in which a sound occurs. The basic idea is very Bayesian: a context creates a higher prior probability of a particular sound, which helps in identifying that sound from noisy perceptual input.
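To make the Bayesian idea concrete, here's a minimal sketch. The candidate words, the probabilities, and the "hive fly" competitor are all invented for illustration; only the Bayes-rule structure (posterior ∝ prior × likelihood) comes from the argument above.

```python
def posterior(prior, likelihood):
    """Bayes' rule over a discrete set of candidate words."""
    unnorm = {w: prior[w] * likelihood[w] for w in prior}
    total = sum(unnorm.values())
    return {w: p / total for w, p in unnorm.items()}

# The acoustic signal is noisy: "high five" and a made-up competitor
# "hive fly" are about equally consistent with what was heard.
likelihood = {"high five": 0.5, "hive fly": 0.5}

# Out of context, both candidates are a priori equally plausible,
# so the posterior stays ambiguous (0.5 / 0.5).
flat_prior = {"high five": 0.5, "hive fly": 0.5}
print(posterior(flat_prior, likelihood))

# On the swing set, the familiar game makes "high five" far more
# expected, and the context-driven prior resolves the ambiguity.
swing_prior = {"high five": 0.9, "hive fly": 0.1}
print(posterior(swing_prior, likelihood))
```

The ambiguous signal never changes between the two cases; only the prior does. That's the sense in which recognition can look "context-bound" even when the word's meaning is fully general.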

So perhaps contextual-boundedness effects in early child language have exactly the same source. When M recognizes "high five," it could be that she is getting a boost from its use in a familiar context, even if she could – in principle – recognize it in another context, given a sufficiently clear and unambiguous signal. Inspired by this idea, I tried asking her again at the house the other day. I said, "M! M! Can you give me a HIGH... FIVE?" in my best child-directed speech. She grinned and reached her hand up for the win. Of course, while I was figuring out my theory, perhaps she was generalizing...

---
* There are so many reasons why any uncontrolled individual test of comprehension doesn't provide good evidence for her knowledge.** For example, if I'm trying to figure out whether she knows the word "cat," I can't use a book where we have previously pointed to a cat photo, since she tends to come back to parts of the book we've attended to. On the other hand, if I find two new objects (a cat and a ball), typically one will be more exciting than the other. In some of our recent eye-tracking work, we've been finding that salience of this kind has an outsize effect on word recognition (echoing much earlier findings), and the best work on very early word knowledge explicitly measures and subtracts this salience bias.

** That's why experiments, I guess...