The content police: Cyberabuse, bullying, fake news, and reductive solutions
Social media’s problems are easy to identify, but its solutions are hard to implement.
Yours is little more than consensus opinion. The problems are well known; the utopian end state is well defined; but the concrete solutions are elusive. Everybody has spilled ink about Twitter’s problems, yet so few people offer solutions. The commentariat is quick to critique, which is valuable in drawing attention to existential problems that entities would otherwise ignore, as Twitter has here. But then the commentariat washes its hands of it all, at its own expense.
Now that the problem has entered mainstream consciousness, it’s incumbent upon the community to either stop adding noise about the problems or start contributing signal about the solutions. Sure, the onus is on Twitter to fix itself, but does any intellectually honest person believe they’re not trying? Regardless, everyone describes these social platforms as fundamental utilities for democracy, social equity, etc. You yourself say as much:
“…why not leave networks such as Twitter and Facebook behind?”
“…ceding these platforms to their most aggressive and censorious elements is no long-term solution.”
Given that importance to you and others, these are problems that you and others should help solve. But this is all we get: buzzwords that are mere constructs, describing an end result without a transmission mechanism…
- “reporting abuse and banning accounts”
How should Twitter implement or improve this? If a Tweet gets 100 complaints, block it? If an account gets 10 blocked Tweets, ban it? Barack Obama’s account would’ve been blocked long before Election 2008.
Maybe block it if 10% of impressions complain? Then the tweets of an oppressed everyman in a third-world country would never see the light of day, because a coordinated complaint campaign would make it too easy for an authoritarian regime to suppress his voice.
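To make the brittleness concrete, here is a minimal sketch of those hypothetical threshold rules. The function, thresholds, and numbers are all illustrative assumptions of mine, not anything Twitter actually implements:

```python
# Hypothetical sketch of the naive threshold rules above, and why they fail.
# All names and numbers are illustrative assumptions, not Twitter's real logic.

def should_block(complaints: int, impressions: int,
                 abs_threshold: int = 100, ratio_threshold: float = 0.10) -> bool:
    """Block a tweet if complaints exceed an absolute count OR a share of impressions."""
    if complaints >= abs_threshold:
        return True
    return impressions > 0 and complaints / impressions >= ratio_threshold

# A viral, controversial tweet: 1M impressions and 150 scattered complaints.
# The absolute rule blocks it, despite complaints being a rounding error.
print(should_block(complaints=150, impressions=1_000_000))  # True

# A dissident's tweet: 200 impressions and 25 coordinated complaints from a
# brigade. The ratio rule blocks it, exactly the suppression risk above.
print(should_block(complaints=25, impressions=200))  # True
```

Both rules are trivially gameable in opposite directions, which is the point: any fixed, published threshold becomes an attack surface.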
- “transparent and applied in a consistent manner”
Radical transparency and predictable consistency enable gaming the system. If you show bad actors where the line is, they will always toe it (or tip-toe around it). As opaque as its algorithms are, even Google has to play whack-a-mole all day. Enough mixed metaphors for you? 😝 Let’s just cite the famous adage known as Goodhart’s Law, which observes that when a measure becomes a target, it ceases to be a good measure. Or Karl Popper’s Paradox of Tolerance, in which a society that’s tolerant without limit is eventually destroyed by the intolerant.
You realize how complex these problems really are when you start thinking about actual implementation and the second-order effects of any logical solution.
Whose principles should Twitter appeal to? Its own? Even were Twitter worthy of being anointed some kind of righteous moral arbiter, how does it quickly and objectively prove guilt in cases of abuse, bullying, and misinformation?
Leading up to Election 2016, how would Twitter have identified and blocked all the bots that kept popping up; all the nefarious Tweets; all the fake news?
How would it have impacted real-world democracy, social stability, and political polarization had Twitter — being liberal-leaning — blocked candidate Trump on the campaign trail?
What’s the cost of a fully-automatic algorithm flagging false positives along with truly malicious content? On the other hand, what’s the cost of waiting for humans manually working through a backlog of censorship-worthy content?
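The base-rate problem behind that tradeoff can be made concrete with back-of-envelope arithmetic. Every number below is an assumption chosen for illustration, not a real figure for Twitter or any classifier:

```python
# Back-of-envelope cost of automated flagging at platform scale.
# All volumes and rates are assumed for illustration only.

daily_tweets = 500_000_000    # assumed daily tweet volume
abusive_rate = 0.001          # assume 0.1% of tweets are truly abusive
false_positive_rate = 0.01    # classifier wrongly flags 1% of benign tweets
true_positive_rate = 0.95     # classifier catches 95% of abusive tweets

abusive = daily_tweets * abusive_rate
benign = daily_tweets - abusive

caught = abusive * true_positive_rate            # abusive tweets correctly flagged
wrongly_censored = benign * false_positive_rate  # benign tweets wrongly flagged

# Because benign content dwarfs abusive content, even this "99% accurate"
# filter censors roughly 10x more innocent speech than it catches abuse.
print(int(caught), int(wrongly_censored))  # 475000 4995000
```

That asymmetry is why neither pure automation nor a human-reviewed backlog escapes the cost question; it only changes which cost you pay.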
There’s an important theme here that can define the tech revolution. Web 2.0 eliminated so much friction for consumers, user-generated content, information access, etc. Much like the printing press before it, frictionless consumer tech has huge upside that’s partially offset by inherent downside, but it’s still better than the era from which we’re emerging — wherein our information diets were rationed by a few white men in a few coastal cities. Democracy dies in darkness, right?
“We don’t regulate the curation of publishers’ human editors, who manually dictate what composes our media diet… Citizen journalism has unleashed a host of evils, like cyberbullying, misinformation, and noise; but those evils are overrun by the appreciable benefits. Social media and blogging have provided everyone a megaphone — a momentous historical fulcrum akin to the 15th century’s printing press. Such groundswells have massive, gross positive and negative consequences, but what may appear a small net benefit in the short term compounds into large social/economic surplus over the long term.”
Yet, notice the juxtaposition: The subjective/manual/human/editorial era has been succeeded by the objective/automatic/algorithmic/curation era. These are extremes on the spectrum of information-dissemination approaches. Perhaps the social web will settle into a happier medium. Maybe it won’t. My point is that solutions need to be careful not to encourage wading further toward one extreme or the other. Regardless, as the printing press and the industrial revolution showed us, it’s still Day 1 of this modern renaissance.
Now, there’s a movement underway that’s fracturing the once stateless, borderless, open web, including…
- The US is fighting to stand by protections like Section 230 of the Communications Decency Act, which provides digital platforms with a safe harbor shielding them from liability for the user-generated content they host. While this amplifies free speech and open information, it makes censorship a prohibitively cumbersome legal process. (Given the velocity of digital content, the damage of digital distribution is already done by the time analog courts hand down verdicts.)
- The EU is moving to push the burden of censorship back onto the platforms themselves with both Article 13 (a copyright-filtering mandate) and Article 11 (a proposed “link tax”). While this amplifies censorship, it not only makes free speech and open information economically cumbersome, but also gives unelected corporations more power to govern speech, with all the harm that comes from false positives, which neither the left nor the right of the political spectrum wants.
- China is tightening its centralized control over the web, with the heavy-handed government increasingly enlisting private businesses to further the ubiquity of “The Great Firewall”. This is pure censorship, leaving no room for free speech and open information.
In sum, it’s easy to identify the problems. That’s why everyone can and has. (Not to mention that complaining about populist issues appeals to the negativity bias in all of us, which is great for pageviews. If it bleeds, it reads! Some things never change.) The solutions are the trick. These things aren’t as simple in practice as they are in theory — particularly given the added variables of dynamism and scale.
So, what say ye about implementation-ready solutions?
Be the signal, not the noise.