(tl;dr: Some questions I'm thinking about, inspired by the idea of studying the broad structure of child development through larger-scale datasets.)
My daughter, M, started kindergarten this month. I began this blog when I was on paternity leave after she was born; watching her grow over the past five years has been an adventure and a revolution for my understanding of development.* Perhaps the most astonishing feature of the experience is how continuous, incremental changes lead to what seem like qualitative revolutions. There is of course no moment at which she became the sort of person she is now: the kind of person who can tell a story about an adventure in which two imaginary characters encounter one another for the first time.** Yet some set of processes led us to this point. How do you uncover the psychological factors that contribute to this kind of growth and change?
My lab does two kinds of research; in both, my hope is to contribute to this kind of understanding by studying the development of cognition and language in early childhood. The first kind of work is to conduct series of experiments with adults and children, usually aimed at answering questions about representation and mechanism in early language learning in social contexts. The second kind is larger-scale resource-building, where we create datasets and accompanying tools like Wordbank, MetaLab, and childes-db. The goal of this work is to make larger datasets accessible for analysis – as testbeds for reproducibility and theory-building.
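To make the resource-building side concrete, here is a minimal sketch of the kind of analysis such datasets enable. It assumes a hypothetical CSV export of Wordbank-style administration data; the file name and column names are illustrative, not the actual Wordbank API.

```python
# Minimal sketch, not the real Wordbank API: assumes a hypothetical CSV export
# with columns "language", "age" (in months), and "vocab" (words produced).
import pandas as pd

admins = pd.read_csv("wordbank_administrations.csv")

# Median productive vocabulary at each age, separately by language: the kind
# of global, cross-linguistic picture described above.
growth = (
    admins.groupby(["language", "age"])["vocab"]
    .median()
    .reset_index()
)

print(growth.head())
```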
Each of these activities connects to the project of understanding development at the scale of an entire person's growth and change. In the case of small-scale language learning experiments, the inference strategy is pretty standard. We hypothesize the operation of some mechanism or the utility of some information source in a particular learning problem (say, the utility of pragmatic inference in word learning). Then we carry out a series of experiments showing a proof of concept that children can use the hypothesized mechanism to learn something in a lab situation, along with control studies that rule out other possibilities. When done well, these studies can give you pretty good traction on individual learning mechanisms. But they can't tell you whether children use these mechanisms consistently (or even at all) in their actual language learning.
In contrast, when we work with large-scale datasets, we get a whole-child picture that isn't available in the small studies. In our Wordbank work, for example, we get a global picture of children's vocabulary and linguistic abilities, for many children across many languages. The trouble is, it's very hard or even impossible to find answers to smaller-scale questions (say, about information seeking from social partners) in datasets that represent global snapshots of children's experience or outcomes. Both methods – the large-scale and the small-scale – are great; they just don't necessarily answer the same questions. Instead, larger datasets tend to direct you towards different questions. Here are three.
Friday, August 10, 2018
Where does logical language come from? The social bootstrapping hypothesis
(Musings on the origins of logical language, inspired by work done in my lab by Ann Nordmeyer, Masoud Jasbi, and others).
For the last couple of years I've been part of a group of researchers who are interested in where logic comes from. While formal Boolean logic is a human discovery*, all human languages appear to have methods for making logical statements. We can negate a statement ("No, I didn't eat your dessert while you were away"), quantify ("I ate all of the cookies"), and express conditionals ("if you finish early, you can join me outside").** While Boolean logic doesn't offer a good description of these connectives, natural language still has some logical properties. How does this come about? Because I study word learning, I like to think about logic and logical language as a word learning problem. What is the initial meaning that "no" gets mapped to? What about "and", "or", or "if"?
Perhaps logical connectives are learned just like other words. When we're talking about object words like "ball" or "dog," a common hypothesis is that children have object categories as the possible meanings of nouns. These object categories are given to the child by perception*** in some form or other. Then, kids hear their parents refer to individual objects ("look! a dog! [POINTS TO DOG]"). The point allows the determination of reference; the referent is identified as an instance of a category, and – modulo some generalization and statistical inference – the word is learned, more or less.****
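To illustrate the "statistical inference" part of that story, here is a toy cross-situational learner, purely a sketch with invented data: it counts how often each word co-occurs with each candidate referent category and guesses the most frequent pairing.

```python
from collections import Counter, defaultdict

# Toy data, invented for illustration: each "situation" pairs the words a
# child hears with the object categories currently in view.
situations = [
    (["look", "a", "dog"], ["dog", "ball"]),
    (["the", "dog", "runs"], ["dog", "tree"]),
    (["a", "red", "ball"], ["ball", "cup"]),
    (["throw", "the", "ball"], ["ball", "dog"]),
]

# Count word-referent co-occurrences across situations.
cooccurrence = defaultdict(Counter)
for words, referents in situations:
    for word in words:
        for referent in referents:
            cooccurrence[word][referent] += 1

# The learner's best guess for each word: its most frequent co-occurring referent.
for word in ["dog", "ball"]:
    referent, count = cooccurrence[word].most_common(1)[0]
    print(f"{word!r} -> {referent!r} ({count} co-occurrences)")
```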
So how does this process work for logical language? There are plenty of linguistic complexities for the learner to deal with: most logical words simply don't make sense on their own. You can't just turn to your friend and say "or" (at least not without a lot of extra context). So any inference a child makes about the meaning of the word will have to involve disentangling that word's contribution from the meaning of the sentence as a whole. But beyond that, what are the potential targets for the meaning of these words? There's nothing you can point to out in the world that is an "if," an "and," or even a "no."