
In the lead-up to and during election campaigns, newspapers run ad watches that comment on the accuracy of political candidates’ advertisements. In new research, Patrick C. Meirick looks at just how effective these ad watches are at keeping political ads honest. He finds that the more ad watches a race had – and the more explicitly critical they were – the more likely it was that the race’s ads would be accurate and would not exaggerate a candidate’s centrism. 

False claims in political campaigns have long been a cause for concern, but that concern has escalated in recent years alongside the rise of populist candidates like Donald Trump. At the same time, fact checks have assumed an increasingly robust presence in national-level political discourse. News mentions of fact-checking in 2012 were 10 to 20 times as frequent as they had been in 2001 and 2002. But in state-level political campaigns, the job of examining the assertions made in political ads has fallen largely to newspaper ad watches. Previous research has found that ad watches – in which the media comments on the accuracy of a political advertisement – can effectively refute deceptive claims in ads, but the question of whether fact-checking can actually keep campaigns honest is only beginning to be addressed.

Our study analyzed ads and newspaper ad watches from eight competitive US Senate races that received varying levels of ad watch coverage. The first hurdle in any study like this is how to measure honesty. A pair of experts from each of the eight states rated 10 ads from each candidate on the accuracy of their claims and their portrayals of the candidates’ ideologies. In general elections in a two-party system, it is common for ads to depict their favored candidates as more moderate than they really are. At the same time, ads tend to portray opponents as extreme representatives of their parties: Democratic candidates as far-left liberals, Republicans as far-right conservatives. We used voting records as well as the experts’ assessments to gauge candidates’ actual ideology, and then calculated the difference between portrayed and actual ideology. We controlled for incumbency, margin of victory or loss, per-capita ad spending, and ad tone.

We found that the more ad watches a race had, the more accurate ads in the race tended to be and the less they exaggerated their favored candidates’ centrism. But surprisingly, ads in races with more ad watches also tended to portray opposing candidates as more extreme than their positions warranted. In retrospect, this may have been driven largely by two campaigns with little ad watch coverage that tried to portray an opponent – for instance, a Democrat running in a liberal state – as more conservative than she really was rather than as an extreme liberal.


We counted the total number of ad watches in each race, but ad watches vary in approach. Some explicitly criticize misleading ads. Some speculate about an ad’s strategy or effectiveness. Some report campaign representatives’ competing claims in “he-said, she-said” coverage. Some might do more than one of these things. So we also coded each ad watch and totaled how many in each race contained explicit criticism, discussion of strategy, and he-said, she-said coverage.

Explicit criticism in ad watches seemed to work best at keeping campaigns honest. It was associated with greater ad accuracy and less exaggerated centrism of favored candidates, just as the overall number of ad watches was, but without the unfortunate relationship with exaggerated extremism of opponents. He-said/she-said coverage was a mixed bag: it was associated with greater ad accuracy but also with somewhat more exaggerated candidate centrism. Strategy coverage, if anything, backfired: it was associated with somewhat more exaggerated opponent extremism, and it was not associated with the other two indicators of ad veracity.

Ad watches were inspired in part by the desire to bring negative advertising under control, and other studies have found that they tend to scrutinize negative ads disproportionately. In our study, negative ads were about half again as likely as positive ads to be examined by ad watches. But ad watches don’t appear to make a campaign any less negative. The number of ad watches had no relationship with overall ad tone or the number of negative ads in a race. It may be that campaigns’ fear of the scrutiny that negative ads might invite was balanced by their desire for free “ad amplification”.

But do negative ads deserve their bad rap? Are positive ads necessarily more virtuous? In our study, positive ads were rated as more accurate than other ads, but they were also more prone to exaggerate the supported candidates’ centrism.

A disclaimer here: We can’t be sure that ad watch coverage caused ad honesty to increase. Indeed, it is likely that they influence each other: Heavy ad watch coverage should cause campaigns to be more honest, but dishonest campaigns should cause heavier ad watch coverage. Studying those dynamics in real time would be fascinating. But given our results, it makes more sense to conclude that ad watches kept campaigns honest than to conclude that newspapers responded to honest campaigns by launching or stepping up ad watches and criticism. Besides, the ad watches in the races we selected all began in the summer before the general election, which suggests that the races were “on notice.”

We hope that our study will give more news editors a rationale to begin using ad watches or to broaden their application, because although there are ad watches for most US Senate races, they are rarer for US House contests. We hope these ad watches don’t pull their punches, because explicit criticism seems to work best, while other types of content had ambivalent or counterproductive relationships with ad honesty. But we also think that ad watches should pay attention not just to specific claims in ads but also to their overall narrative and candidate characterizations, especially with positive ads.


Note:  This article gives the views of the author, and not the position of USAPP – American Politics and Policy, nor the London School of Economics.

Shortened URL for this post: http://bit.ly/2YxdtX9


About the author

Patrick C. Meirick – University of Oklahoma
Patrick C. Meirick is an associate professor of communication and director of the Political Communication Center at the University of Oklahoma. His research interests include public opinion, political misperceptions, political advertising, and media effects. He is the author, with Jill A. Edy, of A Nation Fragmented: The Public Agenda in the Information Age (2019, Temple University Press).
