Social Media and Digital Communication

 

Twitter “healthy conversations”

I am currently leading an interdisciplinary team of six researchers from four universities that was selected by Twitter to help assess the “health of conversations” on the platform.

Our project will use computational methods to develop four key health metrics: (1) mutual recognition, (2) diversity of perspectives, (3) incivility, and (4) intolerance in political conversations between Twitter users. The metric for “mutual recognition” will help us understand the extent to which Twitter users engage with claims from different ideological perspectives or, conversely, are siloed in ideological echo chambers. The second metric, “diversity of perspectives”, builds on the first but examines the range of perspectives that receive engagement on the platform. We ask: What perspectives, if any, dominate? Are certain perspectives marginalized?
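To make the first two metrics concrete, the following is a minimal illustrative sketch (in Python) of how a cross-ideological engagement share and a simple diversity score might be computed from reply data. The input format, ideology labels, and the use of Shannon entropy are assumptions for illustration, not the project's actual implementation.

```python
# Illustrative sketch only: crude proxies for "mutual recognition" (share of a
# user's replies that cross ideological lines) and "diversity of perspectives"
# (Shannon entropy over the perspectives receiving engagement). The input
# format and labels are assumptions, not the project's data model.
import math
from collections import Counter, defaultdict

def mutual_recognition(replies):
    """replies: iterable of (user_id, user_ideology, target_ideology)."""
    totals, crossing = defaultdict(int), defaultdict(int)
    for user, own, target in replies:
        totals[user] += 1
        if own != target:
            crossing[user] += 1
    # Per-user share of out-group engagement; values near 0 suggest an echo chamber.
    return {u: crossing[u] / totals[u] for u in totals}

def perspective_diversity(engaged_perspectives):
    """Shannon entropy (in bits) of the perspectives users engage with."""
    counts = Counter(engaged_perspectives)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

replies = [("alice", "left", "right"), ("alice", "left", "left"), ("bob", "right", "right")]
print(mutual_recognition(replies))                       # {'alice': 0.5, 'bob': 0.0}
print(perspective_diversity(["left", "right", "left"]))  # ~0.92 bits
```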

The third and fourth metrics differ from most recent work in this area by recognizing “incivility” and “intolerance” as distinct concepts. While incivility (e.g., the use of profanity or crude language) breaks norms of politeness, it may still serve important democratic purposes—by, for example, helping marginalized voices break through and gain attention. On the other hand, intolerance—which targets categories of people for discrimination, hate, abuse, etc.—is inherently threatening to democratic norms.

In each case, our primary goal is to develop tools to measure the extent of these phenomena on the platform. (Our team will not be designing tools for, or guiding decisions about, intervention.) However, because we cannot merely assume that certain levels of each phenomenon are either healthy or harmful to Twitter users, or to the larger discourse occurring on the platform, we will also conduct a series of panel surveys and experiments (all based on explicit participant consent) to understand what does indeed constitute a healthy conversation.

The research team comprises myself and Michael Meffert (Leiden University), Patricia Rossini and Jennifer Stromer-Galley (Syracuse University), Dirk Hovy (Bocconi University), and Nava Tintarev (Technical University Delft).

 

citizen-politician interaction

This project examines the causes and effects of interaction between politicians and members of the public on social media platforms. The overarching question is whether and how social networking sites such as Twitter and Facebook might serve democracy by bringing politicians and citizens into direct, constructive dialogue with one another.

The first stage of the project employed an original dataset of tweets originating from, as well as those directed to, Members of Parliament and Congress in the Netherlands, United Kingdom, and United States, and explored the interactions between these different actors in several ways.

  • "Thanks for (Actually) Responding! How Citizen Demand Shapes Politicians’ Interactive Practices on Twitter" was just published (online before print) in New Media & Society. This article examines the factors that impact whether and to what extent politicians choose to engage with members of the public in genuinely reciprocal dialogue. I find that, contrary to previous expectations, structural factors such as the electoral system and culture of politicians' digital media use are not correlated with legislators' level of reciprocal engagement. Instead, the data suggest that the amount and the tone of demands received from the public condition politicians' willingness to converse via Twitter.

  • "The Great Leveler? Comparing Citizen-Politician Twitter Engagement across Three Western Democracies," which was recently published (online before print) in European Political Science, asks whether politicians are using social media to engage average citizens - and not just other political and media elite - in dialog. I find that a large share of politicians' genuinely reciprocal exchanges do in fact include average citizens. Though there is much room for improvement, this study suggests that Twitter is indeed opening spaces for citizens and policymakers to engage one another on matters of political import.

  • “She Belongs in the Kitchen, Not in Congress? Political Engagement and Sexism on Twitter,” co-authored with Karin Koole (Leiden University), is forthcoming in the Journal of Applied Journalism & Media Studies. The article recognizes that social media provide mechanisms and spaces for members of the public to share their opinions as never before. They offer direct lines of communication to democratic representatives and, on a societal scale, provide policymakers, journalists, and other elite gatekeepers with a better sense of the will of (at least some portion of) the people. But, the paper asks, are the views expressed on social media worth listening to? After all, instantaneous, impersonal, and anonymous communication tends to bring out the worst in us - inviting crassness, negativity, and abuse of others, including racism and sexism. Indeed, extensive evidence suggests that women face particularly high levels of abuse online. And yet we know relatively little about the role of sexism in citizens' digitally mediated interactions with their political representatives. Do people engage more with male politicians? Do they direct more criticism and hostility toward women? While the Twitter data employed in this study do provide some evidence that members of the public engage with men slightly more than with women, citizens actually direct more positive messages toward their female representatives. The codebook for this article will shortly be available on my Supplementary Materials page.

In the next stage of this project, which I have just begun to develop with Leiden University colleague Michael Meffert, we are using experimental research designs to explore (1) whether the mechanisms that my own and previous studies suggest promote interactivity between politicians and citizens actually bear out, and (2) whether interaction between politicians and citizens has positive effects on citizens' political perceptions and sense of efficacy. We ran our first pilot survey experiment in the United States in summer 2017 and are currently working on several experiments we plan to run in the US and various Western European countries.

 

Comparative online campaign communication

In the lead-up to the most recent US presidential election, Jennifer Stromer-Galley and her team at Syracuse University began the Illuminating 2016 project, which seeks "to understand political discourses engaged by the ... presidential candidates on social media." The project resulted in "a platform for collecting, analyzing and visualizing data, using advanced computational approaches" (see the project overview here). In particular, the team developed machine learning classification models that automatically code candidates' Facebook and Twitter posts to distinguish between common forms of political messaging (e.g., mobilizing, ceremonial, persuasive, informative, attack). Using these tools, Stromer-Galley and her team are analyzing the impact of polling on candidates' social media communication, civility in online discourse, and numerous other digital communication dynamics.
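As a rough illustration of this kind of supervised message-type classification (not the Illuminating team's actual features or models), the sketch below pairs a TF-IDF representation with logistic regression in scikit-learn; the example posts and labels are invented.

```python
# Minimal sketch of a supervised message-type classifier for candidate posts.
# The training examples and labels are invented for illustration; the actual
# project uses much larger hand-coded corpora and its own model choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Join us tonight at the rally in Des Moines!",                  # mobilizing
    "Honored to lay a wreath at the memorial this morning.",        # ceremonial
    "My opponent's plan would raise taxes on working families.",    # attack
    "Our new jobs plan invests in clean energy and job training.",  # persuasive
]
train_labels = ["mobilizing", "ceremonial", "attack", "persuasive"]

# TF-IDF features over unigrams and bigrams, fed to a multiclass logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_posts, train_labels)

print(model.predict(["Come volunteer with us this Saturday!"]))
```

In practice, models of this sort are trained on large sets of hand-coded posts and validated against human coders before being applied at scale.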

Beginning in January 2017, I was a visiting scholar in Stromer-Galley's Center for Computational and Data Science, and during my time at Syracuse I took the lead in expanding this project to other countries. We have now collected Facebook and Twitter data from the 2015 Finnish and 2017 Dutch, French, UK, and German elections. We are currently developing machine learning classifiers to code the data from each country and plan to publish a number of comparative studies on this basis.

 

Online News Quality and Misinformation

I am also currently working on a project with Dong Nguyen, funded in part by the Alan Turing Institute, that explores automated techniques for evaluating the quality of news reports found online. Our approach takes journalistic standards rooted in media theory as its starting point, classifying different elements of a story (e.g., (a) the number and types of sources used to support claims, or (b) the use of hyperbole in an article) as more or less problematic, and arriving at a holistic assessment of the quality of reporting in each news item. In the second phase of the project, we will assess whether and how strongly these elements correlate with the presence of false or misleading claims—i.e., misinformation—in a story. (For example, does hyperbole predict misinformation?) The goal in this second stage of research is to provide human fact-checkers with probabilistic assessments of how likely a story is to contain misinformation, pointing them to the news items most in need of assessment.
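As a purely illustrative sketch of the second-stage idea, the code below combines two simple story-level features (a source count and a crude hyperbole rate) into a predicted probability of misinformation using logistic regression. The feature definitions, hyperbole word list, and toy labels are assumptions, not the project's actual pipeline.

```python
# Hypothetical sketch: turn simple quality features into P(misinformation).
# The hyperbole word list, feature definitions, and toy training labels are
# invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

HYPERBOLE = {"shocking", "unbelievable", "destroys", "explosive"}

def story_features(article_text, source_count):
    words = article_text.lower().split()
    hyperbole_rate = sum(w.strip(".,!?") in HYPERBOLE for w in words) / max(len(words), 1)
    return [source_count, hyperbole_rate]

# Toy training data: [n_sources, hyperbole_rate], with 1 = contained misinformation.
X = np.array([[5, 0.00], [4, 0.01], [1, 0.08], [0, 0.12], [3, 0.02], [0, 0.10]])
y = np.array([0, 0, 1, 1, 0, 1])

clf = LogisticRegression().fit(X, y)

new_story = story_features("Shocking report destroys the official account...", source_count=1)
print(f"P(misinformation) = {clf.predict_proba([new_story])[0, 1]:.2f}")
```

A ranked list of such probabilities is the kind of output that could point fact-checkers toward the stories most in need of review.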


Media Framing

 

frame duration

In "The Life and Death of Frames: Dynamics of Media Frame Duration," recently published in the International Journal of Communication, Michael Meffert (Leiden University) and I note that though media framing scholars have long examined why journalists select certain frames over others at a given point in time, we know much less about why certain frames persist over time in the media while others fade away and still others disappear very quickly. In this study, we bring attention to the analysis of frame duration and offer an approach based in event-history methodologies that can assess the causes of repeated frame deployment over both long and short periods of time.

The study of frame duration holds a particular advantage over empirical analyses of frame selection in that it avoids the systematic selection bias inherent in the latter. Unable to determine "what might have been," most empirical studies of frame selection cannot fully ascertain which frames journalists could have selected at a given point but ultimately did not. As such, the data resulting from these studies systematically omit frames that were not selected - truncating the dependent variable. Because it compares only frames that have already been selected, the analysis of frame duration avoids this problem.

By way of illustration, we examine British coverage of the 2006 Danish Muhammad cartoon controversy, demonstrating a rigorous and analytically sound approach to the longitudinal analysis of media frame dynamics.