Big Tech Still Doesn’t Have a Handle on its Growing Role in Campaigns

There’s No Simple Way to Balance Free-Speech Values and Free and Fair Elections.

Over the last year and a half, politicians and activists have spent more than $900 million pumping 5.7 million ads through Facebook’s network. Is it any wonder the company is reluctant to crimp that pipeline?
 
In late September, Facebook lifted one of the few restrictions it had placed on political advertisers, exempting candidates and political parties from the fact-checking process it had instituted to slow the virus-like spread of fake news on the platform. If President Trump wants to run an ad claiming he has proof that former Vice President Joe Biden is a Ukrainian spy, he’s free to do so. And if Biden wants to respond with an ad showing a faked Kenyan birth certificate for Trump, he can do that too.
 
We’re all for candidates speaking freely to the public. But we’re not comfortable with the way Facebook and other tech companies enable candidates and campaigns to turn their speech into something more manipulative and powerful than it would otherwise be.
 
Facebook’s move was such an alarming renunciation of responsibility that politicians and good-government advocates were aghast. They’ve been pressing for change ever since the company’s see-no-evil approach to political ads became official in September. Facebook appears to be responding; according to the Wall Street Journal, its executives are exploring ways to reduce the power political advertisers have to manipulate Facebook users. In particular, they’re discussing ways to limit how precisely political ads can be targeted to specific audiences.
 
These discussions come as Twitter and Google are also adjusting the tools they offer political advertisers. But these Big Tech companies are finding that there’s no simple way to balance two important but competing priorities: our society’s free-speech values and our interest in free and fair elections.
 
Together with YouTube, which Google owns, these three companies dominate online advertising and play a central role in the flow of information on the internet. It’s more than just their near-ubiquitous reach; it’s also the tools they offer to deliver messages tailored to individual leanings and susceptibilities, and the algorithms that some of them use to decide which posts to favor and which ones to bury.
 
Combined, these factors have the potential not just to amplify deceit, but to deliver it to the audiences most likely to believe it. As Twitter chief Jack Dorsey put it, “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.”
 
We don’t want Big Tech companies to be gatekeepers, picking and choosing which political speech is allowable. But neither do we want them to hand their amplifiers and their vulnerability-seeking targeting tools to candidates seeking to deceive, especially not when these companies have a financial incentive to turn a blind eye to abuses of their platforms. Otherwise, there will be no boundaries, and we will truly be a post-truth society.
 
So now Facebook is reportedly pursuing what you might call a light-touch approach, even as it refuses to hold political ads to the same standards it imposes on all other advertisers. Yet even its restrictions on targeting seem weak; according to the Wall Street Journal, candidates would be able to use any of the profile information Facebook collects on its users to target ads, as long as the resulting audience includes at least a few thousand people. Call it not-so-micro micro-targeting.
 
Google appears to be heading in a more promising direction. Last week, the company announced that it will bar election-related ads from including “doctored or manipulated media” or “making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process.” It also will bar political advertisers from accessing the personality profiles Google builds from users’ web browsing. Instead, those advertisers will be allowed to tailor their messages only according to general demographic information, such as a viewer’s gender, age and ZIP Code.
 
Nevertheless, Google is inserting itself into an uncomfortable place. If the National Rifle Assn. wants to run an ad a month before an election criticizing a member of Congress for voting for a gun-control bill, is that “election-related”? How much does a video have to be edited to be considered “doctored”? Does taking something that’s demonstrably true and presenting it in a different context make it demonstrably false?
 
These are all judgment calls, and not always easy ones. Facebook eventually assembled an independent team to handle appeals of the judgments it was making about advertisements — and even then, it wasn’t willing to apply that team’s work to candidates’ ads.
 
Twitter’s Dorsey believed he had a simpler and less controversial solution to the problem: He announced last month that Twitter would not carry any political ads. That eliminated the thorny issue of fact-checking, but it created a new problem: What, exactly, constituted a political ad? The company has been backpedaling since then, allowing ads related to causes — but only if they don’t push for a specific bill, candidate or regulation, and with limits on targeting. That’s going to make Twitter a lot less useful to people trying to challenge the governmental status quo.
 
The lesson here isn’t that these companies shouldn’t be trying to make their platforms both open and trustworthy. It’s that the two qualities aren’t well matched, and there’s no easy way to overcome that problem.
 
This article was originally published in the Los Angeles Times.
 