Mastodon and the pursuit of a utopian online community
1. How do Mastodon’s communities (i.e. “instances”) handle borderline content?
A lot of abuse, misinformation, fake news, pornography, etc. is in the eye of the beholder. You and I probably see a lot of these issues as having bright lines; but others may see many shades of grey; some are laissez-faire; and yet others will think or argue that the sky isn’t blue.
This is one of the difficulties that supermassive platforms like Twitter and Facebook are wrestling with. In theory, they have every desire to be objective; but in practice, objectivity is an enigma — partially due to subconscious bias, but predominantly due to the broad spectrum of opinion, infinite shades of grey, and truth in the eye of the beholder.
Now, by maintaining smaller, more ideologically-aligned communities, Mastodon’s instances should theoretically avert a lot of controversy. But, not everything’s black and white — even in the smallest, tightest-knit groups. So, how do these instances handle the stuff that falls in the grey? Additionally, is pushing radical ideas and radical people into their own echo chambers — to worship among their closest ideological acolytes — truly a fix for the Overton Window that social media shattered?
I suspect that each Mastodon instance has its own rules for dealing with this stuff. For example, I’d expect that most instances rely on a democratic voting process, while others defer to a benevolent dictator. Are there forums for discussing context and debating the more subjective (dis)qualifications of content? Can all members revise community standards on-the-fly — like a wiki?
2. Does Mastodon actually facilitate or improve content moderation? (Or is it just a tradeoff?)
As I understand it, Mastodon users read the stated rules and manifestos of each community in order to decide whether or not they want to “federate” with that community (i.e. subscribe to that instance so you can see its content in your stream). I’m not sure whether the decision to federate comes from the individual user or his/her “home” federation — “B2C” or “B2B” so to speak — although I assume it’s the latter…?
Regardless, that sounds like a really manual, cumbersome means of screening content. It’s trading all the benefits and detriments of automatic, algorithmic curation (at one end of the spectrum) for the opposite benefits and detriments of manual, editorial curation (on the other end). It’s the tyranny of algorithms vs the tyranny of choice. Autocracy vs bureaucracy. Centralized vs decentralized. Scale vs niche. Open vs closed.
So, sure, if one federation deems another federation’s content objectionable, then they can cut ties. (Again, I assume that if an individual user dissents from his/her own federation’s decisions, then his/her only means of recourse is leaving the node to join another federation — or starting his/her own?) But, as with most things in life, this strikes me as a tradeoff whose net benefits aren’t appreciably different from Twitter’s unilateral follow/unfollow model…
First, you still can’t “shield your child from adult content”, because the censorship is reactive, not proactive. If a user in a foreign node posts porn, then your home base will use that as grounds for subsequently blocking that foreign node, but that’s closing the barn door after the horse has already bolted. Even if the foreign node subsequently blocks that rogue user (and his/her content), it’s still ex-post moderation.
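That reactive dynamic can be sketched as a toy model. To be clear, this is purely illustrative, not Mastodon’s actual implementation; every class and method name here is hypothetical:

```python
# Toy model of reactive ("ex-post") federation moderation.
# Illustrative only — not Mastodon's real architecture.

class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.blocked_domains = set()  # domain-level blocklist
        self.timeline = []            # posts visible to this instance's users

    def receive(self, post, origin):
        # Federated posts are delivered unless the origin is already blocked.
        if origin.domain not in self.blocked_domains:
            self.timeline.append(post)

    def defederate(self, origin):
        # Blocking only affects *future* deliveries; anything already
        # delivered has been seen.
        self.blocked_domains.add(origin.domain)


home = Instance("home.example")
rogue = Instance("rogue.example")

home.receive("objectionable post", rogue)          # arrives before any block exists
home.defederate(rogue)                             # moderators react after the fact
home.receive("more objectionable content", rogue)  # now filtered out

print(home.timeline)  # ['objectionable post']
```

The point of the sketch: the blocklist is only consulted at delivery time, so the first objectionable post always gets through before anyone can react to it.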
Second, if there’s any truth to this being a better structural apparatus for content moderation, then that’s largely attributable to the fact that ideological homogeneity increases among members as the size of the group decreases. The smaller the group, the stronger the bond. But, that’s subject to Dunbar’s Number:
“Dunbar’s number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships — relationships in which an individual knows who each person is and how each person relates to every other person… [Dunbar] proposed that humans can comfortably maintain only 150 stable relationships.”
…which means that the individual bears far more responsibility for customizing his/her own experience on Mastodon — as opposed to those who wholesale outsource it to Twitter. For example, you have to sort through a lot of Mastodon communities to find one that befits you; then you have to actively participate in your chosen community’s governance and moderation; then you miss out on an entire universe of content because you’re committed to your limited confederation.
This doesn’t strike me as facilitating or improving any user experience — at least not without making counterbalancing sacrifices for the gains. I mean, the mass market doesn’t read Facebook’s Terms of Service or manually override default privacy settings. I don’t see the carrot or the stick that would lead them to bear the burden required to maintain all of these Mastodon nodes.
Perhaps the proposition is that many individuals can outsource their moderation to fewer individuals who [seem to] share their ideology. But, this just sounds like a shuffling of the deck — a return to the blog’s heyday.
I spent a lot of time herein talking about the “moderation” opportunity¹ that seems to be one of Mastodon’s core value propositions, administered via the “rules” you cite:
“The point of decentralized publishing is not censorship resistance… Instead, decentralization is important because it allows a community to run under its own rules.”
At what point does the time you spend customizing your own experience and perfecting the “rules” crowd out the time you spend actually enjoying the underlying product experience? Even if it’s not the central tenet, at what point do hawkish rules become censorship by another name?
While it’s not a panacea, Mastodon may be a more perfect platform for some users. But, again, almost everything is a tradeoff.
Annotote is just a better way to read: Highlights by you and for you — on all the blogs, news, and research you need…
¹ N.B. “moderation” is not the same as “censorship”. I’m using the former to describe a user or community’s act of self-selecting which content it consumes — and which it doesn’t. In contrast, censorship is an act undertaken by a third party on behalf of a user or community.