Online Media Exposure

“Media Choice and Moderation: Evidence from Online Tracking Data”

Does the internet enable selective exposure to congenial content? This is the first study of online media consumption to combine large-N passive tracking data with individual-level political variables on a representative cross-section of Americans. I find that most people across the political spectrum have centrist media diets composed largely of mainstream portals: The average slant of Democrats’ and Republicans’ media consumption differs by less than 8% of the available ideological spectrum of online sources. An exception to this pattern is a small group of partisans who drive a disproportionate amount of traffic to relatively extreme websites, particularly on the right. I extend this empirical approach by testing how exogenous changes in the offline political environment affect the types of sources people use to learn about the news. I do this in two ways: first, by deploying an online field experiment about a novel political issue, and second, by exploiting revelations about a major scandal that emerged during data collection. In doing so, I explore the mechanisms of information search underlying the observational portrait this approach enables. Theoretically, I outline a distinction between “active” and “passive” search and show that engaging in more purposeful political media consumption can result in either polarization or moderation, depending on the context. Overall, the findings support the view that if online “echo chambers” exist, they are a reality for only a small number of people, who nonetheless drive the traffic and priorities of the most partisan outlets.
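
A minimal sketch of the slant calculation described above, assuming a hypothetical tracking dataset with one row per page visit, a site-level slant score, and each respondent’s party identification (file and column names are illustrative, not the study’s actual data):

    import pandas as pd

    # Hypothetical tracking data: one row per page visit, with an ideological
    # slant score for the visited site and the visitor's party identification.
    visits = pd.read_csv("tracking_visits.csv")  # columns: resp_id, party, site_slant

    # Consumption-weighted average slant of each respondent's media diet.
    diet = visits.groupby(["resp_id", "party"], as_index=False)["site_slant"].mean()

    # Average diet slant by party, and the partisan gap expressed as a share of
    # the available ideological range of online sources.
    party_means = diet.groupby("party")["site_slant"].mean()
    slant_range = visits["site_slant"].max() - visits["site_slant"].min()
    gap = abs(party_means["Republican"] - party_means["Democrat"]) / slant_range
    print(f"Partisan gap in average diet slant: {gap:.1%} of the available spectrum")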

“Measure for Measure: An Experimental Test of Online Political Media Exposure”, Political Analysis (2015)  [ preprint | data ]

Self-reported measures of media exposure are plagued by error and questions about validity. Since they are essential to studying media effects, a substantial literature has explored the shortcomings of these measures, tested proxies, and proposed refinements. But lacking an objective baseline, such investigations can only make relative comparisons. By focusing specifically on recent Internet activity stored by Web browsers, this article’s methodology captures individuals’ actual consumption of political media. Using experiments embedded within an online survey, I test three different measures of media exposure and compare them to actual exposure. I find that open-ended survey prompts reduce overreporting and generate an accurate picture of the overall audience for online news. I also show that they predict news recall at least as well as general knowledge. Together, these results demonstrate that some ways of asking questions about media use are better than others. I conclude with a discussion of survey-based exposure measures for online political information and the applicability of this article’s direct method of exposure measurement for future studies.
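
A sketch of the core comparison, assuming hypothetical respondent-by-outlet data containing both a self-reported exposure item and an indicator built from the browser history (all names are illustrative):

    import pandas as pd

    # Hypothetical long-format data: one row per respondent-outlet pair, with a
    # self-reported exposure indicator and an indicator constructed from the
    # respondent's recent browser history over the same window.
    df = pd.read_csv("exposure_measures.csv")  # columns: resp_id, outlet, self_report, browser_visit

    # Overreporting: claimed exposure with no corresponding visit in the history.
    df["overreport"] = (df["self_report"] == 1) & (df["browser_visit"] == 0)

    # Compare the implied audience for each outlet under the two measures.
    audience = df.groupby("outlet").agg(
        self_reported_reach=("self_report", "mean"),
        actual_reach=("browser_visit", "mean"),
        overreport_rate=("overreport", "mean"),
    )
    print(audience.sort_values("actual_reach", ascending=False))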

Twitter Mobilization Experiments

“When Treatments Are Tweets: A Network Mobilization Experiment Over Twitter”, Political Behavior (2015, with Alexander Coppock and John Ternovski)  [ preprint | data | online appendix ]

This study rigorously compares the effectiveness of online mobilization appeals via two randomized field experiments conducted over the social microblogging service Twitter. In the process, we demonstrate a methodological innovation designed to capture social effects by exogenously inducing network behavior. In both experiments, we find that direct, private messages to followers of a nonprofit advocacy organization’s Twitter account are highly effective at increasing support for an online petition. Surprisingly, public tweets have no effect at all. We additionally randomize the private messages to prime subjects with either a “follower” or an “organizer” identity but find no evidence that this affects the likelihood of signing the petition. Finally, in the second experiment, followers of subjects induced to tweet a link to the petition are more likely to sign it — evidence of a campaign gone “viral.” In presenting these results, we contribute to a nascent body of experimental literature exploring political behavior in online social media.

“Petitioning the Court: Testing Promoted Tweets and DMs in a Networked Field Experiment” (with Kevin Collins and Alexander Coppock)  [ in progress ]

This study evaluates the effectiveness of online mobilization appeals via a large-scale randomized field experiment conducted in partnership with a major political advocacy organization on Twitter. We demonstrate two innovations in this study. First, we use Promoted Tweet campaigns to randomize subjects’ exposure to tweets. Second, we incorporate a peer encouragement design to induce and measure the effect of appeals to subjects’ own followers. We find that direct messages (DMs) to followers of the organization’s Twitter account lead to small but measurable (~1.5pp) increases in the likelihood of tweeting a message in support of the campaign — a real-world initiative to generate pressure on Senate Republicans to hold hearings on President Obama’s Supreme Court pick, Merrick Garland. Promoted Tweets are more effective (~2pp) at generating tweets, but not at encouraging supporters to sign an online petition. DMs are much more effective in this regard, boosting signatures by approximately 7 percentage points.
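
A minimal sketch of the intent-to-treat comparison behind estimates like those above (a simple difference in proportions with a conventional standard error; the assignment and outcome columns are hypothetical, and the paper’s own specification may differ):

    import numpy as np
    import pandas as pd

    # Hypothetical subject-level data from a Twitter mobilization experiment.
    df = pd.read_csv("mobilization.csv")  # columns: assigned_dm (0/1), signed_petition (0/1)

    treated = df.loc[df["assigned_dm"] == 1, "signed_petition"]
    control = df.loc[df["assigned_dm"] == 0, "signed_petition"]

    # Intent-to-treat effect: difference in signing rates across assignment groups.
    itt = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
    print(f"ITT = {itt:.3f} (SE = {se:.3f})")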

Opinion Change and Persuasion

“The Exception, Not the Rule? The Rarely Polarizing Effect of Challenging Information” (with Alexander Coppock)

Several prominent theoretical perspectives suggest that when individuals are exposed to counter-attitudinal evidence or arguments, their preexisting opinions and beliefs are reinforced, resulting in a phenomenon known as “backlash,” “backfire,” or “boomerang.” We investigate the prevalence of this effect. Should we expect that all attempts to persuade those who disagree will backfire? First, we formalize the concept of backlash and specify how it can be measured. We then present results from three survey experiments — two on Mechanical Turk and one on a nationally representative sample — in which we find no evidence of backlash, even under theoretically favorable conditions. While a casual reading of the literature on partisan information processing would lead one to conclude that backlash is rampant, we suspect that it is much rarer than commonly supposed. Researchers should continue to design well-powered randomized studies in order to better understand the specific conditions under which backlash is most likely to occur.
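
One way to write down the quantity at stake (a sketch in potential-outcomes notation, not necessarily the paper’s exact formalization): code the attitude outcome Y_i so that higher values indicate agreement with the persuasive message, and restrict attention to subjects whose pretreatment attitudes oppose that message. Backlash then corresponds to a negative average treatment effect in this group,

    \mathbb{E}\left[ Y_i(1) - Y_i(0) \mid \text{counter-attitudinal} \right] < 0,

where Y_i(1) and Y_i(0) are potential attitudes with and without exposure to the message. Mere resistance (an effect of zero) or weak persuasion (a small positive effect) does not count; the group’s attitudes must move away from the position being argued.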

“Back to Bayes: Confronting the Evidence on Attitude Polarization” (with Alexander Coppock)  [ online appendix ]

A large body of theory suggests that when individuals are exposed to counter-attitudinal evidence or arguments, their preexisting opinions and beliefs are reinforced, resulting in a phenomenon known as attitude polarization. Drawing on evidence from three well-powered randomized experiments designed to induce polarization, we find that no such effect occurs. To explain our results, we develop and extend a theory of Bayesian learning that, under reasonable assumptions, rules out polarization. People, instead, update their prior beliefs in accordance with the evidence presented. As we illustrate using both standard linear models and Bayesian Additive Regression Trees (BART), this updating occurs in a more or less uniform fashion: Those inclined to agree with the evidence appear to update to a similar extent as those inclined to disagree. In short, our subjects all appear to be persuadable. We further show, through an exact replication of the original Lord, Ross and Lepper (1979) study, that the seeming discrepancy between our results and those in the literature is driven by differences in experimental design. Using this insight, we reinterpret previous findings and argue that Bayesian reasoning remains the best available model of opinion change in light of evidence. (Previous title: “Bayesian Evaluation, Not Biased Assimilation: Revisiting the Attitude Polarization Hypothesis”)
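
The intuition for why Bayesian learning of this kind rules out polarization can be seen in a textbook normal-normal example (an illustration consistent with the argument above, not the paper’s full model, and assuming everyone interprets the evidence with the same likelihood). If person i holds a prior about some quantity with mean mu_i and precision tau, and observes shared evidence x with precision tau_x, the posterior mean is

    \mu_i' = \frac{\tau \mu_i + \tau_x x}{\tau + \tau_x},

so everyone moves toward x, and for any two people with prior means mu_1 < mu_2 the gap between them shrinks:

    \mu_2' - \mu_1' = \frac{\tau}{\tau + \tau_x} (\mu_2 - \mu_1) \le \mu_2 - \mu_1.

Under these assumptions, shared evidence can shift everyone’s beliefs but cannot push them further apart.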

Experimental Research and Methods

“Can the Government Deter Discrimination? Evidence from a Randomized Intervention in New York City” (Conditionally Accepted at Journal of Politics, with Albert Fang and Macartan Humphreys)  [ report to city ]

Racial discrimination persists despite established anti-discrimination laws. A common government strategy to deter discrimination is to publicize the law and communicate potential penalties for violations. We study this strategy by coupling an audit experiment with a randomized intervention involving nearly 700 landlords in New York City and report the first causal estimates of the effect of a targeted government messaging campaign on rental discrimination against Blacks and Hispanics. We uncover discrimination levels higher than prior estimates indicate, especially against Hispanics, who are approximately six percentage points less likely to receive callbacks and offers than whites. We find suggestive evidence that government messaging can reduce discrimination against Hispanics, but not against Blacks. The findings confirm discrimination’s persistence and suggest that government messaging can address it in some settings, but more work is needed to understand the contexts in which such appeals are most effective.
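
A sketch of the two comparisons described above, assuming hypothetical audit data with one row per inquiry sent to a landlord (column names are illustrative; the paper’s estimates come from its own specification):

    import pandas as pd

    # Hypothetical audit data: each row is an inquiry, recording the tester's
    # race/ethnicity, whether the landlord was assigned the government message,
    # and whether the inquiry received a callback.
    audit = pd.read_csv("audit.csv")  # columns: landlord_id, treated (0/1), tester_race, callback (0/1)

    # Baseline discrimination: gap in callback rates relative to white testers.
    rates = audit.groupby("tester_race")["callback"].mean()
    print("Hispanic-white callback gap:", rates["hispanic"] - rates["white"])

    # Does the messaging treatment shrink the gap? Compare the Hispanic-white gap
    # among treated and control landlords (a difference in differences of rates).
    gaps = audit.groupby(["treated", "tester_race"])["callback"].mean().unstack("tester_race")
    effect_on_gap = (gaps.loc[1, "hispanic"] - gaps.loc[1, "white"]) - (
        gaps.loc[0, "hispanic"] - gaps.loc[0, "white"]
    )
    print("Change in the Hispanic-white gap under treatment:", effect_on_gap)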

“Gender Discrimination in Housing: Evidence from a Field Experiment” (with Albert Fang and Macartan Humphreys)  [ in progress ]

“Bayesian Estimation of Principal Causal Effects under Partial Compliance” (PolMeth poster, with Albert Fang)  [ in progress ]

We assess the properties and performance of Bayesian estimators of principal causal effects, which have been widely recommended by scholars in the causal inference and biostatistics literatures. In a preliminary set of simulations, we find that Bayesian estimators of key parameters of interest appear to be unreliable. Future work will extend these simulations and assess the extent to which posteriors are driven by the prior, with a specific focus on assessing measures of prior informativeness and prior-likelihood conflict.
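
A toy version of the kind of prior-sensitivity check described above, using a simple binary-compliance principal stratification model rather than the paper’s partial-compliance setting (everything here, from the data-generating process to the priors, is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data with one-sided noncompliance: Z is random assignment, D is take-up
    # (possible only when Z = 1), Y is a binary outcome. "Compliers" and
    # "never-takers" are the two principal strata.
    n = 500
    pi_c, p_c0, p_c1, p_n = 0.6, 0.30, 0.50, 0.20  # true values
    Z = rng.integers(0, 2, n)
    complier = rng.random(n) < pi_c
    D = Z * complier
    Y = (rng.random(n) < np.where(D == 1, p_c1, np.where(complier, p_c0, p_n))).astype(int)

    # Sufficient statistics.
    s11, n11 = Y[(Z == 1) & (D == 1)].sum(), ((Z == 1) & (D == 1)).sum()
    s10, n10 = Y[(Z == 1) & (D == 0)].sum(), ((Z == 1) & (D == 0)).sum()
    s0, n0 = Y[Z == 0].sum(), (Z == 0).sum()

    # Grid approximation to the posterior over (pi_c, p_c0, p_c1, p_n). The Z = 0
    # arm is a mixture of compliers and never-takers, so the complier control
    # parameter p_c0 is identified only indirectly and its posterior can remain
    # sensitive to the prior.
    g = np.linspace(0.01, 0.99, 30)
    PI, PC0, PC1, PN = np.meshgrid(g, g, g, g, indexing="ij")
    loglik = (
        n11 * np.log(PI) + s11 * np.log(PC1) + (n11 - s11) * np.log(1 - PC1)
        + n10 * np.log(1 - PI) + s10 * np.log(PN) + (n10 - s10) * np.log(1 - PN)
        + s0 * np.log(PI * PC0 + (1 - PI) * PN)
        + (n0 - s0) * np.log(PI * (1 - PC0) + (1 - PI) * (1 - PN))
    )

    def posterior_cace(logprior):
        """Posterior mean of the complier effect p_c1 - p_c0 under a given prior."""
        w = np.exp(loglik + logprior - (loglik + logprior).max())
        return float((w * (PC1 - PC0)).sum() / w.sum())

    print("CACE, flat prior:       ", posterior_cace(np.zeros_like(PI)))
    print("CACE, informative prior:", posterior_cace(-50 * (PC0 - 0.7) ** 2))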

“By the Numbers: Toward More Precise Numerical Summaries of Results”, The Political Methodologist (2017, with Gaurav Sood)  [ data ]

Unlike the natural sciences, the social sciences have few true zeroes. All sorts of variables are often weakly related to each other, and however lightly social scientists intervene, the effects of those interventions are rarely precisely zero. When there are few true zeroes, categorical statements such as “two variables are significantly related” or “the intervention had a significant effect” convey very limited information, largely about sample size and luck (and manufactured luck). Yet these kinds of statements are the norm in abstracts of the top political science journal, the American Political Science Review (APSR). As we show, only 10% of empirical articles in recent volumes of the APSR have abstracts with precise quantitative statements. The comparable figure for the American Economic Review (AER) is 35%.

Social Media and Misinformation

“Rumors in Retweet: Social Media and the Spread of Political Misinformation” (with Briony Swire, Adam Berinsky, John Jost, and Joshua Tucker)  [ in progress ]

How do rumors spread? Research on why misinformation continues to influence reasoning has primarily focused on narrow timeframes in controlled laboratory settings. We advance the study of why rumors persist — even after valid corrections have been presented — by leveraging a unique collection of Twitter data covering four specific political events, including the Boston Marathon bombing and the death of Antonin Scalia. Combining these large datasets with machine learning techniques, we show how “corrective” information (such as the capture of suspects) affects the transmission of rumors up to 10 days after the incidents occurred. In several cases, we show that Twitter users estimated to be conservative are more likely to continue spreading rumors even after such causal explanations have been provided by public authorities. Furthermore, many of the rumors originate with a small number of conspiracy-oriented websites. We discuss cognitive mechanisms underpinning these findings such as motivated reasoning and the impact of the correction’s source credibility. The structural features that make social media an effective tool for facilitating collective action and democratizing access to information can also serve to magnify falsehoods and innuendo.
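
A sketch of the classification step, assuming a hypothetical hand-coded sample of tweets about one of the events (file names, labels, and model choices are illustrative; the project’s actual pipeline may differ):

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Hypothetical hand-coded training data: tweet text labeled as spreading the
    # rumor, correcting it, or unrelated.
    labeled = pd.read_csv("labeled_tweets.csv")  # columns: text, label

    X_train, X_test, y_train, y_test = train_test_split(
        labeled["text"], labeled["label"], test_size=0.2, random_state=0
    )

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(X_train), y_train)
    print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))

The fitted classifier can then be applied to the full stream of event-related tweets to track how rumor-spreading and corrective messages move over the days following each incident.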

“Fact-checking on Twitter: An examination of campaign 2014” (public report commissioned by the American Press Institute and The Democracy Fund, not peer reviewed)

I combine machine learning methods with a complete set of historical Twitter data to examine how fact-checking interventions play out on social media. Overall, tweets (or retweets) containing misleading or factually wrong statements outnumber tweets with corrective information. However, as I show in two case studies, the amount of misinformation decreases over time as corrections make up a larger share of tweets relating to a particular claim. Despite controversies surrounding factual interpretations, especially as they relate to political debates, I find that sentiment toward journalistic fact-checking on Twitter is more likely to be positive than negative or neutral.
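
A sketch of the over-time comparison, assuming tweets about a single disputed claim have already been classified (labels and column names are illustrative):

    import pandas as pd

    # Hypothetical classified stream for one claim: timestamp plus a label such
    # as "misleading" or "correction" assigned by a classifier.
    tweets = pd.read_csv("claim_tweets.csv", parse_dates=["created_at"])  # columns: created_at, label

    # Daily counts by label, and corrections as a share of all claim-related tweets.
    daily = (
        tweets.groupby([pd.Grouper(key="created_at", freq="D"), "label"])
        .size()
        .unstack(fill_value=0)
    )
    daily["correction_share"] = daily["correction"] / (daily["correction"] + daily["misleading"])
    print(daily)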