Electoral consequences of political rumors

Brian Weeks and I have had a paper accepted at the International Journal of Public Opinion Research that examines how political misperceptions can influence vote choice.  We argue that rumor isn’t idle talk, but has real-world consequences.  Email one of us if you’d like to see a prepress copy of the manuscript.

Weeks to present new paper based on 2012 election data at APSA

Brian Weeks will present his paper, “Feeling is believing? The influence of emotions on citizens’ false political beliefs,” at APSA in Chicago later this week.  Here’s the abstract:

Many Americans hold inaccurate beliefs about political candidates and issues, which are troublesome because evidence suggests people often behave in accordance with those misperceptions. Scholars have recently begun to explore why citizens are misinformed, and most extant research uses partisan-based explanations to examine this phenomenon. In the current study, I argue that using party affiliations as the primary explanatory factor is of limited help in understanding why or how citizens are misinformed, and instead make the case for discrete emotions as the mechanism driving false beliefs about politics. Using panel survey data (N = 1,004) collected over three waves during the 2012 presidential election, I show that angry citizens are more likely to hold false beliefs about Barack Obama and Mitt Romney. The data also reveal that partisanship ceases to explain misperceptions about the candidates once the emotions anger and anxiety are accounted for. The findings indicate that false beliefs are driven in part by anger, suggesting that future research on misperceptions must account for citizens’ feelings toward political targets.

ICA paper on affective polarization

A new conference paper on affective polarization is now available.  In the U.S., affective polarization refers to the increasing hostility felt by both Democrats and Republicans toward those who belong to the opposing party.  Although not directly related to misperceptions, this phenomenon has important implications for Americans’ willingness to listen to and engage with the other side, activities that can promote better understanding and more accurate beliefs.  This paper examines the polarizing effects of partisan online news media, considering the impact of both pro-party and counter-party exposure.

You can see the slides here:
http://www.slideshare.net/rkellygarrett/garrett-icaaffective-polarization

Corrective effects paper now available

Our paper exploring the persistence of misperceptions in the face of carefully documented corrections is now accessible via the Journal of Communication’s website.

Garrett, R. K., Nisbet, E. C., & Lynch, E. K. (2013). Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naïve theory. Journal of Communication. doi: 10.1111/jcom.12038

Abstract: Media-based fact checking contributes to more accurate political knowledge, but its corrective effects are limited. We argue that biographical information included in a corrective message, which is often unrelated to the inaccurate claim itself, can activate misperception-congruent naïve theories, increasing confidence in a misperception’s plausibility and inducing skepticism toward denials. Resistance to corrections occurs regardless of initial belief accuracy, but the effect is strongest among those who find the contextual information objectionable or threatening. We test these claims using an online survey-embedded experiment (N=750) conducted in the wake of the controversy over the proposed Islamic cultural center in NYC near the site of the 9/11 attacks, and find support for our predictions. Theoretical and practical implications are discussed.

CSCW slides now available

I gave two talks at CSCW 2013 in San Antonio this week.  The first was part of a panel organized by Paul Resnick, which also included Travis Kriplean (creator of the Living Voter Guide), Sean Munson (creator of Balancer), and Talia Stroud (author of Niche News).  The second was the paper coauthored with Brian Weeks that has been in the news recently.

Abstracts and links to slides follow.

Abstract: Bursting your (filter) bubble

Broadcast media are declining in their power to decide which issues and viewpoints will reach large audiences. But new information filters are appearing, in the guise of recommender systems, aggregators, search engines, feed ranking algorithms, and the sites we bookmark and the people and organizations we choose to follow on Twitter. Some of these filters we choose explicitly; others we hardly even notice. Critics worry that, collectively, these filters will isolate people in information bubbles only partly of their own choosing, and that the inaccurate beliefs they form as a result may be difficult to correct. But should we really be worried, and, if so, what can we do about it? Our panelists will review what scholars know about selectivity in exposure preferences and actual exposure, and what we in the CSCW field can do to develop and test ways of promoting diverse exposure, openness to the diversity we actually encounter, and deliberative discussion.

http://www.slideshare.net/rkellygarrett/bursting-your-filter-bubble-final

Abstract: The promise and peril of real-time corrections to political misperceptions

Computer scientists have responded to the high prevalence of inaccurate political information online by creating systems that identify and flag false claims.  Warning users about inaccurate information as it is displayed has obvious appeal, but it also poses risks.  Compared to post-exposure corrections, real-time corrections may cause users to be more resistant to factual information.  This paper presents an experiment comparing the effects of real-time corrections to corrections that are presented after a short distractor task.  Although real-time corrections are modestly more effective than delayed corrections overall, closer inspection reveals that this is only true among individuals predisposed to reject the false claim.  In contrast, individuals whose attitudes are supported by the inaccurate information distrust the source more when corrections are presented in real time, yielding beliefs comparable to those of individuals never exposed to a correction.  We find no evidence that real-time corrections encourage counterargument.  Strategies for reducing these biases are discussed.

http://www.slideshare.net/rkellygarrett/instant-correctionscscw

And you can still download the full paper here.

TechCrunch, TechNewsDaily cover our study on real-time corrections

TechCrunch and TechNewsDaily have each published short profiles of our research examining the effectiveness of real-time corrections.  Read them here:

http://techcrunch.com/2013/01/24/study-finds-that-we-still-believe-untruths-even-after-instant-online-corrections/

http://www.technewsdaily.com/16579-online-fact-checking-corrections.html
Reprinted at MSNBC: http://www.msnbc.msn.com/id/50587619/ns/technology_and_science-innovation/#.UQQHGr_EZ_4

And Mashable:

http://mashable.com/2013/01/26/issue-corrections-online/

UPDATED 1/30: There’s a new article over at FoxNews.com, too.

http://www.foxnews.com/tech/2013/01/29/can-correct-misinformation-on-web-maybe-not/

Paper accepted at Journal of Communication

We’ve had a paper accepted at the Journal of Communication.  An abstract is provided below.  If you’d like to review a prepress copy of the article, please send me an email.

Garrett, R. K., Nisbet, E. C., & Lynch, E. K. (In Press). Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naïve theory. Journal of Communication.

Abstract:  Media-based fact checking contributes to more accurate political knowledge, but its corrective effects are limited. We argue that biographical information included in a corrective message, which is often unrelated to the inaccurate claim itself, can activate misperception-congruent naïve theories, increasing confidence in a misperception’s plausibility and inducing skepticism toward denials. Resistance to corrections occurs regardless of initial belief accuracy, but the effect is strongest among those who find the contextual information objectionable or threatening. We test these claims using an online survey-embedded experiment (N=750) conducted in the wake of the controversy over the proposed Islamic cultural center in NYC near the site of the 9/11 attacks, and find support for our predictions. Theoretical and practical implications are discussed.

New paper

We’ve had a paper accepted at CSCW.  In it we argue that systems that correct inaccurate online information in real time, such as Dispute Finder or Hypothes.is, do less well than systems that provide corrections at a later time.  We explore why this occurs and what we might do to fix it.  You can download the paper here.