This website provides an overview of a 5-year NSF-funded research project that examines the flow of rumors and misperceptions online. It includes information about the project, the research team, publications, and (eventually) datasets. There is also a Twitter feed associated with the project, which aims to track relevant news and research (#FalseBeliefNews).
I gave two talks at CSCW 2013 in San Antonio this week. The first was part of a panel organized by Paul Resnick, which also included Travis Kriplean (creator of the Living Voter Guide), Sean Munson (creator of Balancer), and Talia Stroud (author of Niche News). The second was a presentation of the paper coauthored with Brian Weeks that has been in the news recently.
Abstract and links to slides follow.
Abstract: Bursting your (filter) bubble
Broadcast media are declining in their power to decide which issues and viewpoints will reach large audiences. But new information filters are appearing in the guise of recommender systems, aggregators, search engines, feed ranking algorithms, the sites we bookmark, and the people and organizations we choose to follow on Twitter. Some of these filters we choose explicitly; others we hardly even notice. Critics worry that, collectively, these filters will isolate people in information bubbles only partly of their own choosing, and that the inaccurate beliefs they form as a result may be difficult to correct. But should we really be worried, and, if so, what can we do about it? Our panelists will review what scholars know about selective exposure, both preferences and actual exposure, and what we in the CSCW field can do to develop and test ways of promoting diverse exposure, openness to the diversity we actually encounter, and deliberative discussion.
Abstract: The promise and peril of real-time corrections to political misperceptions
Computer scientists have responded to the high prevalence of inaccurate political information online by creating systems that identify and flag false claims. Warning users about inaccurate information as it is displayed has obvious appeal, but it also poses risks. Compared to post-exposure corrections, real-time corrections may cause users to be more resistant to factual information. This paper presents an experiment comparing the effects of real-time corrections to corrections presented after a short distractor task. Although real-time corrections are modestly more effective than delayed corrections overall, closer inspection reveals that this is true only among individuals predisposed to reject the false claim. In contrast, individuals whose attitudes are supported by the inaccurate information distrust the source more when corrections are presented in real time, yielding beliefs comparable to those of individuals never exposed to a correction. We find no evidence that real-time corrections encourage counterargument. Strategies for reducing these biases are discussed.
And you can still download the full paper here.
TechCrunch and TechNewsDaily have each published short profiles of our research examining the effectiveness of real-time corrections. Read them here:
UPDATED 1/30: There’s a new article over at FoxNews.com, too.
Follow the Crowd, the blog at CrowdResearch.org, features a short write-up of our CSCW study today. You can check it out here: http://crowdresearch.org/blog/?p=4590
We’ve had a paper accepted at the Journal of Communication. The abstract is provided below. If you’d like a prepress copy of the article, please send me an email.
Garrett, R. K., Nisbet, E. C., & Lynch, E. K. (In Press). Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naïve theory. Journal of Communication.
Abstract: Media-based fact checking contributes to more accurate political knowledge, but its corrective effects are limited. We argue that biographical information included in a corrective message, which is often unrelated to the inaccurate claim itself, can activate misperception-congruent naïve theories, increasing confidence in a misperception’s plausibility and inducing skepticism toward denials. Resistance to corrections occurs regardless of initial belief accuracy, but the effect is strongest among those who find the contextual information objectionable or threatening. We test these claims using an online survey-embedded experiment (N=750) conducted in the wake of the controversy over the proposed Islamic cultural center in NYC near the site of the 9/11 attacks, and find support for our predictions. Theoretical and practical implications are discussed.
We’ve had a paper accepted at CSCW. In it we argue that systems that correct inaccurate online information in real time, such as Dispute Finder or Hypothes.is, are less effective than systems that provide corrections at a later time. We explore why this occurs and what we might do to fix it. You can download the paper here.