To De-Platform or Not to De-Platform
The quote in my newsletter subtitle today is from a fellow OG social media type, Anil Dash, and I quote it often. His point (and one with which I agree) is that if you run a site, service, or community online, you’re in charge, and you’re responsible for what you allow the culture of the site, service, or community to become.
This became personal to me at a larger scale when we launched the web community part of BlogHer. We launched in January of 2006 with a set of community guidelines that were linked from every page of the site. We also had human moderators who worked to enforce those guidelines and (this part is critical) worked hard to enforce them fairly and equitably, whether they agreed with the content they were moderating or not.
A year or two later, a prominent woman who blogged about tech, user interface, and customer satisfaction (well-known hot button issues, I guess </sarcasm>) decided to reveal the extent of the hate mail and hate comments she got, including violent threats of sexual assault and more. Since she moderated her comments on her site, none of her readers had to see this content, but she did. She quit the blogosphere, and heated debate about moderation, community guidelines, and codes of conduct for the Internet ensued. At the time a couple of prominent men who were Internet elders pointed to BlogHer’s community guidelines as a model for an Internet code of conduct. And that led to lots of questions about our opinion on whether we should indeed become the schoolmarms of the entire Internet with our guidelines and our moderation and our commitment to civil discourse, even disagreement. (Much of this entire episode in Internet life was captured in this NY Times article.)
We said, and I stand by it, that there wasn’t a single code of conduct that would be appropriate or necessary for the entire worldwide web. And that every single site proprietor was going to have to decide on what kind of environment they wanted to host online, articulate that clearly, and then enforce it fairly. Just as we were.
I also stand by the fact that our biggest problems on the internet and social media stem from the fact that the social media platforms that arose subsequent to blogging (Facebook, Twitter, YouTube, Reddit) wanted to make money like media companies but be treated like utilities. To somehow be dumb pipes and a public square. Even while their business viability relied on targeted advertising, data mining, and eventually, we would learn, psychological manipulation. They took (or to be fair, were eagerly given) huge sums of funding and were asked only to focus all their energies on unfettered growth, without having to worry about little things like how people (and facts) were being treated in the spaces they built.
It was and is a problem.
By the time the mainstream had caught on to how much of a problem it was, the platforms could say they were relying on algorithms and automation to address it, and that it was untenable to try to scale moderation any other way. And of course it was untenable: their model was built and scaled without ever integrating that concept.
Make no mistake, these issues of disinformation, fake accounts, harassment, and hate speech are. not. new. Ask Shireen Mitchell from Stop Online Violence Against Women how long it’s been going on. Ask BlogHer’s community managers from 2008 about the sockpuppet accounts that tried to proliferate on our site to bash candidate-Obama. Ask one of my employees who was being harassed and stalked on Twitter a decade ago and couldn’t get anyone there to even respond to law enforcement about it (let alone me when I tried to intervene).
I see new platforms being launched and funded and lauded and fluffed today that seem to have evolved not at all when it comes to these fundamental problems.
The chickens came home to roost in the past five years, but unfortunately, I’m not sure that really matters to those who have already earned their millions and billions from what they wrought.
But let’s get to de-platforming, specifically. I tweeted a question last week: Had anyone seen someone who wasn’t a Trump supporter and also wasn’t a (most likely white) man who was objecting to the suspension of Trump and other problematic accounts, or of Parler, the social platform many of the Capitol would-be insurrectionists used to plan their attack? Because all I was seeing was the usual suspects of techno-libertarian-ish guys who were “concerned” and “troubled.” They were sort of being the Susan Collins of tech, really. I heard crickets in return.
My position will always be that each of these platforms has terms of service in which they pay lip service to their right not to be in the business of supporting unacceptable content, and they give themselves incredibly wide latitude to define what that means. As BlogHer did, honestly. Of course, we gave the most obvious examples, but we also gave ourselves latitude. And, sort of like impeachment, what’s the point of having these terms and guidelines if you’re never going to enforce them when the person is powerful or newsworthy enough?
And the reason I point out the gender and race of those people lamenting the potential slippery slope is that I’m guessing those folks are not aware of exactly how many women have run afoul of the platforms for breastfeeding pictures or for making some clearly joking lament that “men are pigs.” (I mean, hey, that’s not my jam to say, but it’s hardly hate speech.) Or the Black folks who have been put in Twitter or Facebook prison for calling out literal racists.
If leaping to defend Parler or Trump is the first time you’ve pulled out that slippery slope argument and concerned yourself with “censorship” (knowing full well it’s surely not a first amendment issue) then please please spare me. And please know I will look askance at your claims to not agree with the speech/usage you’re defending.
But you tell me: Who are the voices you’re hearing that aren’t coming from the place of privilege of never regularly being the target of hate speech, or who aren’t just now discovering, because of Trump, that the platforms de-platform people at all?
Last week-ish and next week-ish
OK, that was a bit of a lengthy rant, so let me just quickly say that last week’s Op-Ed Page podcast, episode 44, did indeed talk about my story of entering tech, riding the wave of the boom times in the late 90s and early 00s, and how that journey would likely be nipped in the bud by today’s hiring practices. It’s a bit of a word about how your hiring criteria can be needlessly exclusionary.
As always I appreciate a share of this newsletter or my podcast. And I appreciate feedback and hearing from you too.
And if you think I can help you break through the things that are keeping you stuck, you can always set up your first introductory 30-minute consult for free by booking it in my Calendly.
Have a great week-ish!
This is very much my rant. I want to add that it is important that the moderation isn't faceless. Community guidelines and dedicated humans on the back end aren't the only key to keeping a well-lit place on the internet. The community moderation and management must have a face that pops in and participates to some degree with the community. Just as people naturally behave better when their teacher/parents/children/doctor is in the room, they are less likely to send hate mail or flame a community where the moderation staff is visible, fair, and appreciated by that community. Well-moderated communities in that vein tend to become almost self-moderating to a degree.
This is my favorite Elisa rant!