Online Media Exposure

“(Almost) Everything in Moderation: New Evidence on Americans’ Online Media Diets”

Does the internet enable selective exposure to politically congenial content? To answer this question, I introduce and validate large-N behavioral data on Americans’ online media consumption in both 2015 and 2016. I then construct a simple measure of media slant and apply machine learning to identify individual articles about political news. I find that most people across the political spectrum have relatively moderate media diets, driven to a large degree by mainstream portals. Quantifying the similarity of Democrats’ and Republicans’ media diets, I find nearly 70% overlap in the two groups’ distributions in 2015 and roughly 50% in 2016. An exception to this picture is a small group of partisans who drive a disproportionate amount of traffic to ideologically slanted websites. Overall, the findings support the view that if online “echo chambers” exist, they are a reality for relatively few people. (Previous title: “Media Choice and Moderation: Evidence from Online Tracking Data”)
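As a concrete illustration of the overlap statistic, the sketch below computes the shared area under two groups’ normalized media-slant histograms. This is a minimal Python sketch under assumed inputs (an array of diet slant scores per party); the function name, bin count, and variable names are illustrative, not the paper’s exact estimator.

    import numpy as np

    # Minimal sketch of a distributional overlap coefficient. Inputs are
    # assumed to be numpy arrays of media-diet slant scores, one score
    # per person. Illustrative only, not the paper's exact procedure.
    def diet_overlap(dem_slant, rep_slant, bins=50):
        lo = min(dem_slant.min(), rep_slant.min())
        hi = max(dem_slant.max(), rep_slant.max())
        d, edges = np.histogram(dem_slant, bins=bins, range=(lo, hi), density=True)
        r, _ = np.histogram(rep_slant, bins=bins, range=(lo, hi), density=True)
        width = edges[1] - edges[0]
        # Shared area under the two densities: 1.0 = identical, 0.0 = disjoint.
        return np.minimum(d, r).sum() * width

On a measure of this kind, a value of 0.7 would correspond to the roughly 70% overlap reported for 2015.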

“How Accurate Are Survey Responses on Social Media and Politics?”, Political Communication (Forthcoming Symposium on “New Approaches to Methods and Measurement in the Study of Political Communication Effects,” with Kevin Munger, Jonathan Nagler, and Joshua Tucker) [ online appendix ]

How accurate are survey-based measures of social media use, in particular about political topics? We answer this question by linking original survey data collected during the 2016 U.S. election campaign with respondents’ observed social media activity. We use supervised machine learning to classify whether the content of respondents’ Twitter and Facebook accounts is related to politics. We then benchmark our survey measures of the frequency of posting about politics and of the number of political figures followed against this observed activity. We find that, on average, our self-reported survey measures tend to correlate with observed social media activity. At the same time, we also find a worrying amount of individual-level discrepancy and problems related to extreme outliers. Our recommendations are twofold: first, survey questions about social media use should provide respondents with response options covering a wider range of activity, especially in the long tail; second, survey questions should include specific content and anchors defining what it means for a post to be “about politics.”
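A minimal sketch of this kind of benchmarking, assuming paired per-respondent measures (self-reported and observed posting counts); the variable names and the three-standard-deviation outlier rule are illustrative assumptions, not the authors’ code.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Compare self-reports against observed counts for the same respondents.
    # Illustrative sketch; names and the 3-SD outlier rule are assumptions.
    def benchmark(self_reported, observed):
        self_reported = np.asarray(self_reported, dtype=float)
        observed = np.asarray(observed, dtype=float)
        r, _ = pearsonr(self_reported, observed)
        rho, _ = spearmanr(self_reported, observed)  # rank-based, robust to heavy tails
        gap = self_reported - observed               # individual-level discrepancy
        z = (gap - gap.mean()) / gap.std()
        return {"pearson": r, "spearman": rho, "n_outliers": int((np.abs(z) > 3).sum())}

A rank correlation alongside the Pearson correlation is one simple way to see how much extreme outliers, rather than typical respondents, drive the aggregate agreement.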

“Measuring How Many People are in Media Bubbles on Twitter” (with Gregory Eady, Jan Zilinsky, Jonathan Nagler, and Joshua Tucker)

A major point of debate in the study of social media is whether social media platforms facilitate or encourage citizens to inhabit online ideological “bubbles” or “echo chambers,” exposed primarily to ideologically congruent political information. The fear is that ideologically exclusive environments will leave large segments of the public consuming no information that would challenge their existing beliefs. To investigate this, we use survey data from a large representative sample of U.S. Twitter users to map the ideological distributions of users’ online media and political environments. We measure the proportion of liberals and conservatives whose online media diets include almost no content from ideologically incongruent sources, and show that large proportions of conservatives, and even larger proportions of liberals, choose not to receive such information. We note that these choices are somewhat moderated by offline news-watching behavior.
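The proportion in question reduces to a direct computation once each user’s share of cross-cutting content is known. A minimal sketch, assuming a per-user fraction of content from incongruent sources and an illustrative 5% threshold for “almost no” such content:

    import numpy as np

    # Share of users whose diet contains almost no cross-cutting content.
    # The 5% threshold and input format are illustrative assumptions.
    def share_in_bubble(incongruent_share, threshold=0.05):
        s = np.asarray(incongruent_share, dtype=float)  # one fraction per user
        return float((s < threshold).mean())

Computing this separately for self-identified liberals and conservatives yields the group comparisons described above.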

“Measure for Measure: An Experimental Test of Online Political Media Exposure”, Political Analysis (2015)  [ preprint | data ]

Self-reported measures of media exposure are plagued by error and questions about validity. Because they are essential to studying media effects, a substantial literature has explored the shortcomings of these measures, tested proxies, and proposed refinements. But lacking an objective baseline, such investigations can only make relative comparisons. By focusing specifically on recent Internet activity stored by Web browsers, this article’s methodology captures individuals’ actual consumption of political media. Using experiments embedded within an online survey, I test three different measures of media exposure and compare them to actual exposure. I find that open-ended survey prompts reduce overreporting and generate an accurate picture of the overall audience for online news. I also show that they predict news recall at least as well as general knowledge does. Together, these results demonstrate that some ways of asking questions about media use are better than others. I conclude with a discussion of survey-based exposure measures for online political information and the applicability of this article’s direct method of exposure measurement to future studies.

Twitter Mobilization Experiments

“When Treatments Are Tweets: A Network Mobilization Experiment Over Twitter”, Political Behavior (2015, with Alexander Coppock and John Ternovski)  [ preprint | data | online appendix ]

This study rigorously compares the effectiveness of online mobilization appeals via two randomized field experiments conducted over the social microblogging service Twitter. In the process, we demonstrate a methodological innovation designed to capture social effects by exogenously inducing network behavior. In both experiments, we find that direct, private messages to followers of a nonprofit advocacy organization’s Twitter account are highly effective at increasing support for an online petition. Surprisingly, public tweets have no effect at all. We additionally randomize the private messages to prime subjects with either a “follower” or an “organizer” identity but find no evidence that this affects the likelihood of signing the petition. Finally, in the second experiment, followers of subjects induced to tweet a link to the petition are more likely to sign it — evidence of a campaign gone “viral.” In presenting these results, we contribute to a nascent body of experimental literature exploring political behavior in online social media.

“Petitioning the Court: Testing Promoted Tweets and DMs in a Networked Field Experiment” (with Kevin Collins and Alexander Coppock)  [ in progress ]

This study evaluates the effectiveness of online mobilization appeals via a large-scale randomized field experiment conducted in partnership with a major political advocacy organization on Twitter. We demonstrate two innovations in this study. First, we use Promoted Tweet campaigns to randomize subjects’ exposure to tweets. Second, we incorporate a peer encouragement design to induce and measure the effect of appeals to subjects’ own followers. We find that direct messages (DMs) to followers of the organization’s Twitter account lead to small but measurable (approximately 1.5 percentage point) increases in the likelihood of tweeting a message in support of the campaign, a real-world initiative to generate pressure on Senate Republicans to hold hearings on President Obama’s Supreme Court nominee, Merrick Garland. Promoted Tweets are more effective (approximately 2 percentage points) at generating tweets, but not at encouraging supporters to sign an online petition. DMs are much more effective in this regard, boosting signatures by approximately 7 percentage points.

Opinion Change and Persuasion

“Does Counter-Attitudinal Information Cause Backlash? Results from Three Large Survey Experiments”, British Journal of Political Science (Forthcoming, with Alexander Coppock) [ data | online appendix ]

Several theoretical perspectives suggest that when individuals are exposed to counter-attitudinal evidence or arguments, their preexisting opinions and beliefs are reinforced, resulting in a phenomenon sometimes known as “backlash.” We formalize the concept of backlash and specify how it can be measured. We then present results from three survey experiments — two on Mechanical Turk and one on a nationally representative sample — in which we find no evidence of backlash, even under theoretically favorable conditions. While a casual reading of the literature on information processing suggests that backlash is rampant, we conclude that it is much rarer than commonly supposed. (Previous title: “The Exception, Not the Rule? The Rarely Polarizing Effect of Challenging Information”)
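One way to make the concept concrete (an illustrative formalization under assumed notation, not necessarily the paper’s exact definition): code post-treatment opinion $Y$ so that higher values indicate agreement with the counter-attitudinal message. Persuasion, null effects, and backlash then correspond to the sign of the average treatment effect:

    \tau = \mathbb{E}[Y \mid \text{treated}] - \mathbb{E}[Y \mid \text{control}],
    \qquad \text{backlash} \iff \tau < 0

On this reading, backlash requires opinion to move away from the message rather than merely remain unmoved, and it is this pattern that the three experiments fail to detect.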

Experimental Research and Methods

“Can the Government Deter Discrimination? Evidence from a Randomized Intervention in New York City”, Journal of Politics (Forthcoming, with Albert Fang and Macartan Humphreys)  [ local copy | online appendix | replication archive | report to city ]

Racial discrimination persists despite established anti-discrimination laws. A common government strategy to deter discrimination is to publicize the law and communicate potential penalties for violations. We study this strategy by coupling an audit experiment with a randomized intervention involving nearly 700 landlords in New York City, and we report the first causal estimates of the effect of a targeted government messaging campaign on rental discrimination against Blacks and Hispanics. We uncover discrimination levels higher than prior estimates indicate, especially against Hispanics, who are approximately six percentage points less likely than whites to receive callbacks and offers. We find suggestive evidence that government messaging can reduce discrimination against Hispanics, but not against Blacks. The findings confirm discrimination’s persistence and suggest that government messaging can address it in some settings, but more work is needed to understand the conditions under which such appeals are most effective.

“Responsiveness Without Representation: Evidence from minimum wage laws in U.S. states”, American Journal of Political Science (Forthcoming, with Gabor Simonovits and Jonathan Nagler)  [ data ]

How well does public policy represent mass preferences in U.S. states? Current approaches provide an incomplete account of statehouse democracy because they fail to compare preferences and policies on meaningful scales. Here we overcome this problem by generating estimates of Americans’ preferences on the minimum wage and comparing them to observed policies both within and across states. Because we measure both preferences and policies on the same scale (U.S. dollars), we can quantify both the association of policy outcomes with preferences across states (responsiveness) and their deviation within states (bias). We demonstrate that while minimum wages respond to corresponding preferences across states, policy outcomes are more conservative than preferences in each state, with the average policy bias amounting to about two dollars. We also show that policy bias is substantially smaller in states with access to direct democratic institutions.
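Because preferences and policies share a dollar scale, both quantities reduce to simple computations. A minimal Python sketch under assumed inputs (one preferred and one actual minimum wage per state); the names are illustrative, not the authors’ code:

    import numpy as np

    # Responsiveness: cross-state slope of policy on preferences.
    # Bias: average within-state gap between policy and preference.
    # Illustrative sketch with assumed inputs, one value per state.
    def responsiveness_and_bias(preferred_wage, actual_wage):
        pref = np.asarray(preferred_wage, dtype=float)
        pol = np.asarray(actual_wage, dtype=float)
        slope, intercept = np.polyfit(pref, pol, 1)  # responsiveness
        bias = (pol - pref).mean()                   # negative = more conservative than preferred
        return slope, bias

A bias of about −2 on this scale would correspond to the two-dollar average gap reported above.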

“By the Numbers: Toward More Precise Numerical Summaries of Results”, The Political Methodologist (2017, with Gaurav Sood)  [ data ]

Unlike the natural sciences, the social sciences have few true zeroes. All sorts of variables are often weakly related to each other, and however lightly social scientists intervene, the effects of those interventions are rarely precisely zero. When there are few true zeroes, categorical statements such as “two variables are significantly related” or “the intervention had a significant effect” convey very limited information, mostly about sample size and luck (and manufactured luck). Yet these kinds of statements are the norm in abstracts of the top political science journal, the American Political Science Review (APSR). As we show, only 10% of empirical articles in recent volumes of the APSR have abstracts with precise quantitative statements. The comparable figure for the American Economic Review (AER) is 35%.

Social Media and Misinformation

“Selective Exposure to Misinformation: Evidence from the consumption of fake news during the 2016 U.S. presidential campaign” (with Brendan Nyhan and Jason Reifler)

Though some warnings about online “echo chambers” have been hyperbolic, tendencies toward selective exposure to politically congenial content are likely to extend to misinformation and to be exacerbated by social media platforms. We test this prediction using data on the factually dubious articles known as “fake news.” Using unique data combining survey responses with individual-level web traffic histories, we estimate that approximately 1 in 4 Americans visited a fake news website from October 7 to November 14, 2016. Trump supporters visited the most fake news websites, which were overwhelmingly pro-Trump. However, fake news consumption was heavily concentrated among a small group: almost 6 in 10 visits to fake news websites came from the 10% of people with the most conservative online information diets. We also find that Facebook was a key vector of exposure to fake news and that fact-checks of fake news almost never reached its consumers.