Everyone seems to be consumed with AI anxiety. Graduate students are wondering if they will be replaced by assistants, or if they themselves are using AI enough or using it "right". Researchers are wondering what it means to produce research if agents can write whole papers. Everyone is wondering how we will keep up with a literature that is moving ever faster.
Everyone is feeling the pressure to do *more*: do more projects, produce more papers, review more papers. This has already had negative impacts on the research space, for example the difficulty conferences now have securing quality, non-automated reviews for the huge volume of submissions they receive.
We should think about what we can do that is *different.* We should try to use automation to be more efficient at the annoying parts of our jobs while leaving more time for discovering new knowledge. The key (fast-evolving, unresolved) issue is how AI models will change the frontier of what is scientifically possible. This varies from field to field and changes day by day, but my sense is that the rise of semi-autonomous agents will be very interesting for scaling up social and behavioral science.
Don’t give away the good part
The first use-case for AI has been generating text. While this function can be useful, especially for useless administrative prose that no one needs to read, it is at odds with a fundamental feature of scholarship: writing is critical to thinking.
A small handful of scientists I've encountered seem able to move directly from clearly posed questions to elegant experimental designs and theoretical interpretations. I envy them because I can't do that. Instead, I have to slowly and laboriously externalize my argument into a paper or a talk and stare at it to realize why it doesn't make sense. It's inconvenient that this process sometimes happens only after years of research effort!
If you value thinking, then jumping directly to text generation using AI doesn't make sense. That's one reason why so many academics are so negative on AI, and deeply distrust its use as a tool: generating some generic text gives away the opportunity to figure something out.
But it's shortsighted to give up on AI altogether because of this argument. Instead, we need to focus on ways that AI can help us have more time to spend on the good, hard work of writing and thinking. I'm still working on this, but here are some cases where it's been useful for me: reformatting research documents for IRB; planning conference travel itineraries; drafting documentation for a software package; organizing a code repository; rearranging and reformatting authorship and credit info for a manuscript; adding DOIs to a reference section. Not all of these worked perfectly – I'm looking at you, DOI falsification – but overall they saved me time that I could use for more meaningful work.
Don’t focus on making the same junk
Here's the other thing: we're not knocking it out of the park right now. The standard paper-shaped package of social science research is not something that we just want to make more of! We have a long way to go to ensure that the work we publish is reproducible, replicable, and robust. Using AI to make more standard papers faster will not be a win.*
The bigger problem is that there's no direct route from more papers to precise, generalizable theories of how the human mind works, or how social systems function, or how to improve school achievement. Making more papers might be a *side effect* of making progress on those topics, but it's not the right causal lever to pull.
What we need are more precise measurements and more precise theories. (As a side note, this is what I've written about again and again, e.g. in my experimental methods textbook or a recent review on cognitive modeling). AI is not the royal road to these, but the work to come is figuring out how it can help.**
What does "better" look like?
Unless you're actually studying AI, standard text generation à la GPT-4 is not that helpful for making more precise measurements or better theories. How do you go from a chat window to better science? I tried a bunch of ideas early on in this era and they all failed. With increases in the capacity of base models and the rise of agent-based AI tools like Cursor, Claude Code, and Cowork, however, I am seeing some fascinating ways that these tools can help increase the scale and robustness of our work. Here are a few.
Finally, a sometimes-overlooked but very cool function of AI tools is providing critique. Agents need not write code; they can also check that your code does what you think it does. One of my students used Claude Code to find a pretty significant bug in their (entirely hand-coded) research pipeline. This kind of robustness checking is far too rare in research. I also find AI critiques of research materials (designs, stimuli, even writing) very helpful in pointing out flaws I've overlooked. Maybe models aren't as insightful as a really smart friend or mentor, but those folks tend to be really busy with their own work!
Conclusion
I hear a lot of people saying they oppose AI-generated prose and so they don't think AI should be used for research. My response is that this is just the wrong way to use AI as an academic. Don't use it to decrease the quality of the hard thing you do. Instead, try to find ways to use it for making the boring parts of your job easier and increasing the quality and scope of the best parts!