Jon Green: Bridging Politics and Methods

Jon Green sits in his dining room with his dog Izzy
Jon Green, Assistant Professor of Political Science, is not fully unpacked from the recent move to Durham. He and his pup, Izzy, eagerly await the delivery of a dining room table. (Shaun King/Trinity Communications)

The acrimonious and polarizing politics of the 2016 U.S. election cycle were a significant prompt for Jon Green’s first publication.

It called for a “translational science of democracy” in which political scientists foster democratic health and citizens move from the passive mindset of asking “what do we want?” to the deliberative mindset of “what should we do?” A healthy American democracy needs more deliberative experiments that engage citizens, Green and his coauthors wrote.

Seven years later and after finishing his Ph.D. at Ohio State University, Green published his first book, “Machine Learning for Experiments in the Social Sciences.” In it, Green and his coauthor see two research camps in political science that need to be bridged: causal inference and machine learning.

The two camps are typically separated by methodological tradition, but Green asks what is gained by using machine learning to advance our understanding of causal inference. How would that change the way political scientists do the translational science of democracy?

We asked Green, who joined Duke’s Department of Political Science as an assistant professor this fall, about this question, as well as about his career and his next steps. This interview has been lightly edited for length and clarity.

How have politics informed and motivated your research through your graduate program?

Political science is important because, at the end of the day, it’s the systematic study of how we govern ourselves. We’re obviously interested in figuring out how things work, but ideally in doing that we learn how to make things work better.

One of my favorite projects, which I started working on in graduate school, looks at candidate emergence. We’re obviously concerned with the extent to which elected representatives reflect the backgrounds and perspectives of the public. But if some people are less likely to want to run for office than others — and if, among those who want to run, some are less likely to actually do so than others — then voters don’t get the chance to elect more representative representatives.

We partnered with Run for Something, a nonprofit that tries to recruit first-time candidates from under-represented backgrounds for state and local office, to look at who, among those who had expressed an interest in running, was more likely to go on and actually launch a campaign — paying particular attention to the reasons they had given for being interested in running when they signed up.

We found that people who said they were upset about the election of Donald Trump or other aspects of national politics were actually less likely to run than people who saw more specific issues in their local communities that they wanted to fix.

On some level this is fairly intuitive — you can’t impeach Trump from city council — but to me it also says something about the importance of local news and engagement with local politics, so that broader swathes of our communities feel empowered to step up and run.

Your book, “Machine Learning for Experiments in the Social Sciences,” argues machine learning should be used to inform causal inference. Historically, why have machine learning methods and causal inference analysis developed apart from each other?

In general, the two approaches have been used for different things, with causal inference emphasizing explanation and machine learning emphasizing prediction.

One of the more familiar applications of machine learning that affects us every day (whether or not it’s at the top of our minds) is spam detection. When an email hits your account, Gmail or Outlook or whatever client you use runs it through a model that predicts whether that email belongs in your spam folder. We don’t necessarily need a theory of spam or to precisely explain why a particular message belongs in your spam folder in order to do a good job of sorting spam from not-spam.
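To make that prediction-without-explanation point concrete, here is a toy sketch (invented messages, not any real email client's model) of a naive Bayes spam filter. It simply learns which words are frequent in spam versus not-spam ("ham") and classifies new messages accordingly, with no theory of what makes something spam:

```python
# Toy naive Bayes spam filter in pure Python. The training messages
# are made up for illustration; a real filter would use far more data
# and features, but the logic is the same: predict from word counts.
import math
from collections import Counter

train = [
    ("win a free prize now", "spam"),
    ("claim your free money now", "spam"),
    ("limited offer act now", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("draft of the paper attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def predict(text):
    """Return the label with the higher log posterior probability."""
    scores = {}
    for label in word_counts:
        # Log prior: how common each class is overall.
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace-smoothed likelihood of each word given the class.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("free prize offer"))        # spam-like vocabulary
print(predict("see you at the meeting"))  # ham-like vocabulary
```

The model gets the sorting right here without ever "understanding" either message, which is exactly why this kind of tool suffices for spam but not for questions about why a get-out-the-vote strategy works.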

In other contexts, such as mobilizing people to vote, we care a lot more about why people vote and why particular strategies work for getting people who otherwise wouldn’t vote to vote. For that, simply being able to predict voting behavior won’t do — being able to explain the mechanism is more important.

Was there a political moment that led you to connect these two methods?

I’d been interested in these methods before, but (you may sense a pattern here) the work I was doing concerning the COVID-19 pandemic helped clarify my thinking on them.

Our team had been asked by someone on the policy side whether, for a problem like encouraging people to take a new vaccine, it’s better to identify specific messages that are especially persuasive to different subgroups or to find the overall best message and deliver that to everyone.

We wound up running a big experiment where we tried some messages that we thought might be particularly effective for some groups (conservatives might be more responsive to a message invoking patriotism, for example) and some that we thought would be more compelling for everyone (invoking descriptive norms — “most people plan to take the vaccine”, for example).

We then used machine learning to test for heterogeneity — whether there were any groups or individuals for whom some messages were especially effective or not. We actually found in this case that the effects were relatively constant across the sample, which for that application was encouraging — we could go back and say, “We threw the kitchen sink at this message and didn’t find anyone who was put off by it.”
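The logic of that heterogeneity check can be sketched in miniature. This is not the team's actual pipeline (which used machine learning methods on real survey data); it is a simulation with an invented subgroup flag and a deliberately constant message effect, estimated in the simplest possible way, by comparing treated and control means within each subgroup:

```python
# Minimal sketch of testing for treatment-effect heterogeneity.
# All data here are simulated; the "conservative" flag and effect
# sizes are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
conservative = rng.integers(0, 2, n)  # hypothetical subgroup flag
treated = rng.integers(0, 2, n)       # random message assignment

# Simulated intent-to-vaccinate with a CONSTANT +0.10 message effect,
# mimicking the "effects were relatively constant" finding.
intent = 0.5 + 0.10 * treated + 0.05 * conservative + rng.normal(0, 0.1, n)

def group_mean(mask):
    return intent[mask].mean()

# Estimate the message effect separately within each subgroup.
for g in (0, 1):
    cate = group_mean((conservative == g) & (treated == 1)) - \
           group_mean((conservative == g) & (treated == 0))
    print(f"estimated effect for group {g}: {cate:.3f}")
```

Because the simulated effect is constant, both subgroup estimates come out near 0.10; in a real analysis, large gaps between subgroup estimates would be the signal that some messages work differently for different people.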

Are there cases where machine learning can help researchers make better causal connections?

I'll give an example. During the 2020 Democratic primary, there was a big debate over strategy: Should the Democratic nominee focus more on mobilizing the Democratic base or should they focus on winning over moderates who may not have voted for Hillary Clinton in 2016?

In a conjoint experiment that pitted hypothetical primary candidates against each other, varying their planned strategy (among other factors like race, gender, age, and so on), it didn’t look like strategy made much of a difference to likely Democratic primary voters. But when other researchers re-analyzed the same data using machine learning methods designed to predict individual-level effects, they found that this was because respondents were polarized on strategy.

Some strongly preferred a Democratic nominee who would mobilize the base; others strongly preferred a Democratic nominee who would appeal to moderates, and their preferences had canceled out in the aggregate. When we looked at the aggregate level, we were missing something important about how that primary electorate was thinking about the race.
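A stylized simulation (invented numbers, not the actual conjoint data) shows how that cancellation works: when half the sample strongly favors one strategy and half strongly favors the other, the average effect looks like zero even though almost no individual is indifferent:

```python
# Stylized sketch of polarized effects canceling in the aggregate.
# The group split and effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
prefers_base = rng.integers(0, 2, n)  # hypothetical latent preference

# Individual-level effect of a "mobilize the base" strategy relative
# to "appeal to moderates": +0.2 for one half, -0.2 for the other.
effect = np.where(prefers_base == 1, 0.2, -0.2) + rng.normal(0, 0.05, n)

print(f"average effect: {effect.mean():+.3f}")
print(f"among base-first respondents: {effect[prefers_base == 1].mean():+.3f}")
print(f"among moderate-first respondents: {effect[prefers_base == 0].mean():+.3f}")
```

The aggregate estimate hovers near zero while each subgroup shows a substantial effect, which is the pattern the individual-level re-analysis uncovered.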

What do you plan to focus on next?

I’m working on a book that argues for redrawing what used to be a sharper distinction between “mass” opinion and “public” opinion, and that too heavy an emphasis on the former at the expense of the latter undermines democracy.

The short version is that the aggregation of privately held beliefs and opinions — which is very common today but used to be more carefully delineated as “mass” opinion — is very useful for some things, but lacks key features of what used to be considered truly “public” opinion.

We don’t typically ask survey respondents to engage with the views of their peers, think about connections between their ideas, or consider their preferences in the face of tradeoffs, for example. But this is exactly what elected officials have to do when they consider how we as a society should act collectively. So you wind up in a situation where there are polls out there for pretty much anything you could think of, but elected officials are (often with good reason) not sure whether or how they should inform their behavior.

We’re hoping to use some recently developed tools to show that it’s possible to conduct surveys in a manner that incorporates the things we think are important to public opinion, and how this may strengthen the lines of communication between citizens and their elected representatives.

So, why are we in the kitchen?

Doing research and publishing papers involves long timelines and high uncertainty. Maybe data collection doesn’t go as planned, maybe someone else publishes new research that leads you to revise your theory, maybe one of the journal reviewers decides your paper isn’t good enough. This is a great recipe (no pun intended) for anxiety. Cooking gives me sets of discrete tasks that I can accomplish right away — in an hour on a weeknight, or maybe in an afternoon on a Saturday — where I have full control over the outcome and there’s an immediate payoff.

Jon Green cooking while working at Duke