Stephen R. Barnard
Just last week, a federal appeals court ruled that President Donald Trump can no longer prevent Americans from reading his tweets, or from engaging in the public conversations Twitter hosts in response to them. While Twitter’s “block” function has been a key tool enabling users to protect themselves from hate and harassment, the court ruled this option is now off-limits for Trump, who has used it to silence accounts that criticize or mock him. The ruling could have broader implications for how public officials across the country engage with their constituents.
Twitter has also been making headlines recently for how it is handling the continued onslaught of harmful content on its platform. After signaling it would be tightening restrictions on speech and ramping up enforcement in order to promote “conversational health,” Twitter announced that it has decided to restrict insulting or dehumanizing speech only when it is aimed at protected groups.
While the decision is largely strategic, focusing first on the issues it thinks it can adequately address, it signals a larger problem: Twitter, like Facebook and YouTube, is in over its head when it comes to content moderation.
On some level, the company is simply searching for problems (and solutions) where the light is best. And who can blame it? It’s a thorny, complicated problem with few obvious solutions and a guarantee that whatever action it takes will offend some portion of its user base.
But that is no excuse. Because Twitter is writing the policy, it is the one holding the light. While it has little control over the shadowy landscape of free speech — one that is rarely drawn in black and white, but rather in varying shades of gray — it is free to determine which problems to prioritize and how best to fix them.
One decision Twitter has long stuck with has been to make exceptions for world leaders like Trump who use the service to spread harmful or offensive content. However, just last month the company announced its plan to label and down-rank tweets by officials that violate its policies, while also admitting that such enforcement will be rare.
Although it is understandable why the company would be wary of censoring influential political figures, there is also reason to question its conclusion that hateful or dehumanizing content is ever “in the public’s interest,” especially when its primary focus is “conversational health.”
For his part, Trump remains committed to his divisive rhetoric. Over the weekend, he posted a series of controversial tweets that appear to be aimed at four Democratic congresswomen of color — tweets that some critics considered racist. He has also continued to place public pressure on social media companies. Although he has yet to publicly respond directly to the ruling about blocking Americans from his Twitter account, he did hold a “social media summit” at the White House last Thursday. In an effort to rally his base, Trump invited some of his most vocal online supporters, many of whom are far-right online influencers.
The move also signals a breakdown in communication between the administration and the social media platforms, whose names were conspicuously left off the guest list. Trump’s intended message seemed pretty straightforward: tread lightly, or else.
Despite its efforts, Twitter remains a leading host of “problematic information” — a broad term used to describe the propaganda, disinformation, fake news, and other forms of media manipulation that abound in today’s media ecosystem. And Twitter, like other social media platforms, plays a key role in contemporary propagandists’ playbook.
One study of automated Twitter bots during the final 2016 presidential debate found that pro-Trump bots produced seven times more content than their Clinton-supporting counterparts.
While Twitter’s user base is substantially smaller than Facebook’s or YouTube’s, it has been remarkably slow in addressing many of the key challenges. Canada, for example, recently passed legislation requiring websites to create a public registry of political ads in an attempt to thwart election interference this fall, leaving much of Silicon Valley scrambling.
While Facebook has already launched its database and is fully prepared to profit from the spike in political advertisements during an election year, many others have opted out. Twitter is currently working on its database, and is banning ads for parties and candidates in the country until it is in compliance.
It will be important to watch how Twitter’s Canadian efforts go, and whether it will choose to apply the same strategy in the U.S. Profit is likely the driving motivation behind the push, and that benefit is only loosely tethered to the broader slate of moderation challenges the company faces.
Profits aside, the company cannot afford to keep allowing hate, lies, and other problematic information to flow through its channels unabated, whether from the president, political influencers, or those on the margins. Regardless of what lasting effects Trump’s social media summit may have, if Twitter continues to waffle on the challenges it faces, it can all but guarantee more scrutiny from the public, to say nothing of regulation from the federal government.
Stephen R. Barnard is an assistant professor of sociology at St. Lawrence University, NY. He is the author of "Citizens at the Gates: Twitter, Networked Publics, and the Transformation of American Journalism."
Note: originally published at thehill.com; re-published with permission.