Friday, May 4, 2018

Facebook, Twitter, and Google Would Like to Be the Gatekeepers of Democracy Without the Responsibility

Top brass from Facebook, Twitter, and Google stopped by Stanford Law School Thursday to participate in a conversation about the challenges social media companies face in regulating free speech and protecting democracies. The panel showed the companies are much better at identifying problems than actually solving them.
The discussion, titled “Free Speech Online: Social Media Platforms and the Future of Democracy,” gave the biggest players space to talk about their views on the fake news epidemic and the role of social media in creating an informed public for the good of democracy. They mostly chose to acknowledge their position as proprietors of information while demurring at the idea of accepting more responsibility for that role.
It’s clear that companies like Facebook and Twitter and Google have gotten good at moderating some very specific problem areas, primarily the ones that can be automated. Nick Pickles, Twitter’s Senior Public Policy Manager in the United Kingdom, told the panel that his company challenges as many as 6.4 million accounts every week for suspicious activity. Juniper Downs, the Head of Policy at YouTube, noted Google has built tools that have proven effective in stomping out spam and explicitly malicious actors trying to spread disinformation. That’s all great. The worst of the worst content gets caught and dismissed by algorithms.
Where the companies struggle is in figuring out just how far their moderation should stretch and how it should be applied. That has resulted in pretty poorly formed community guidelines that are often vague and designed to give the company wiggle room when exercising its moderation tools.
This is perhaps best captured in how the companies approach dealing with disinformation. Most members of the panel were quick to point out that “fake news” existed long before platforms like Facebook and Twitter and YouTube were around. Downs went so far as to suggest that the benefits of these platforms—such as enabling members of the public to expose the actions of oppressive governments—greatly outweigh any of the challenges the spread of false information presents. 
That may well be true! But it frames the conversation in a way that attempts to abdicate responsibility. It isn’t a matter of weighing the positives against the negatives; it’s recognizing there is a problem and addressing it. When it comes to actually dealing with the challenges facing these platforms, specifically that they often enable the spread of misinformation, there wasn’t much offered in terms of answers.
Elliot Schrage, the Vice President of Communications and Public Policy at Facebook, said the question about the dissemination of disinformation isn’t “does it happen, but how do you manage it?” But he later suggested there isn’t even a consensus on just how bad the problem is. “There is no agreement whatsoever on the prevalence of false news, propaganda, misinformation on our platform,” Schrage said. For a company that collects an ungodly amount of data about everything a person does online and off, it seems odd that it can’t figure out how prevalent fake information is or how far it spreads.
YouTube’s Downs said Google looks at information as a spectrum and considers the biggest challenge to be the murky area between news and opinion. According to her, Google’s system recognizes when people are “coming to our services clearly looking for news content” and won’t present hyper-partisan content. “We have a sense of what sites are legitimate news sites,” she said. “We try to promote authoritative content and demote lower quality, less authoritative content.”
The company has been much better at that in theory than in practice, as it has regularly been called out for surfacing false information and questionable sources in its search results.
Downs also suggested the mainstream media is complicit in the problem of misinformation, citing President Obama’s statement that “If you watch Fox News, you are living on a different planet than you are if you, you know, listen to NPR.”
“There is not a common source of facts that even mainstream media are drawing on,” Downs said, apparently unaware or uninterested in addressing Google’s role in determining the set of facts that people are presented with when searching for information on its platform.
Twitter’s Pickles suggested that the type of moderation that would go toward stopping the spread of fake news shouldn’t be the responsibility of social networks, as it gets too sticky:
As technology platforms, it’s very dangerous for us to get into that space [of moderating views expressed by users], because either you’re asking us to change the information that we provide to people based on an ideological view, which I don’t think is what people want tech companies to be.
That’s a valid position to take, except these companies are already making tons of judgment calls on what kind of information can be shared every day. As Nathaniel Persily, James B. McClatchy Professor of Law at Stanford Law School, pointed out during the discussion, “Each one of these platforms makes decisions about what goes at the top and what goes at the bottom. ...Those are value judgments, sometimes motivated by engagement.”
The problem here is primarily this: Facebook, Twitter, and Google want to have it both ways. They want to be the gatekeepers of the internet, because it benefits them to have as many users visit and use their services as possible. But they don’t want the responsibility that comes with that role.
At one point during the conversation, Schrage of Facebook said “the first line of defense for a democracy” is determining what the “appropriate actions of a responsible company” should be. If a company is the primary defender of a democracy, that democracy is in trouble.
