Over in the AmPolMedia thread, a debate popped up over free speech on Twitter, and how abuse and moderation factor in. At the heart of the debate are the meaning and goals of free speech - do we further free speech when we allow others to say what they want, even if in doing so they push other people out of the conversation, or when we make sure that all speakers are allowed to feel safe, even if we have to restrict the speech of others?
So, where does Twitter stand on this?
On the knife's edge, honestly. Many of Twitter's founders can be termed "free speech absolutists" - that is, they believe that free speech means that people should be allowed to speak their mind, regardless of how disquieting or offensive it might be. This, in turn, was baked into Twitter's DNA - the system offers little ability to moderate, mainly limited to blocking. The problem, as we've seen with Twitter's growth, is the ease with which someone can find themselves on the receiving end of abuse on the service. This has become something of a PR nightmare for Twitter in the past few months, with high profile cases like Leslie Jones showing the depth of the problem.
But it's just talk online, right?
More and more, we live digitally - for many people, their digital presence is just as "real" as their physical one. Abusive attacks online are just as impactful to people today as abuse in the physical world. And some attacks, like doxxing and swatting, blend the two, exposing the target to physical danger as well as online danger. Beyond that, our peer groups are moving more and more online. Choosing to leave online spaces to avoid abuse is becoming a form of exile, forcing the person targeted to choose between speech and safety. Part of the issue is that this transition has been so rapid that many people who grew up when the two worlds were more separate have not had the time to truly acclimate to the new world, and still see online presence as something separate and distanced from physical space.
Posts
I was addressing PantsB's comment about Twitter being free speech but PA not being so as "weird". It's not weird, it's part of the basis for each of their models.
I in no way endorsed the behavior of Twitter or PA.
I agree, there is a disconnect for some, but I'm not convinced it's solely based on age. I was alive before the internet was widespread and still remember going to my grandmother's house every other weekend in the mid 90's to use Netscape Navigator on a dial up modem. But I can see the connection between one's "digital identity" and their meat space identity, and it's why I'm more discerning with my use of social media (especially the people I friend on social media).
Unfortunately I think that the widespread availability of relatively cheap hardware and internet access has only enhanced the capabilities of people who want to be assholes on the internet. When I was in middle school the worst my best friend and I could really do was troll people in AOL/Yahoo and webcam chat rooms. Now, with people being pushed to connect all of their social media accounts together, it's easier to follow someone around the internet if you interact with one of their profiles.
First, if Twitter is truly too big to moderate, then perhaps that's a sign that it's too big to exist in its current form.
Second, I don't buy that it's too big to moderate. Part of why this forum took so long to get the moderation that it has now is because it was one of the first big forums out there - in many ways Tube was a pioneer, and thus had to learn lessons the hard way. Moderation is better understood these days, and I would expect that people building a moderation system would build from existing systems, and use experts.
Third, a lot of Twitter's problems stem from their unwillingness to "fire" abusive users. The moderators here are very willing to remove abusive posters, and make it clear that they are unwelcome. In comparison, Twitter has shown that they are uncomfortable with even the mildest of sanctions.
The trouble is who gets to define these things, and that's where it gets murky. I've generally considered myself a progressive and fairly open-minded, but I've gotten into very heated debates on these forums regarding topics of sexism, racism, etc., and found that "the line" here, so to speak, is weighted quite heavily in one direction. And that's fine for private forums, but I'd hate it if that were the standard for all public platforms. There are certainly conversations I take to other forums and platforms because you can push and prod and explore territory you simply cannot here.
TL;DR, the PA method seems ideal in its context. A wider world of free speech and discourse exists with more heavily moderated havens for those who want them. If all things had PA's system, that would IMO be awful.
I'm not talking about "the PA method". I'm talking about the fact that if you're a racist/misogynist/homophobic/whatever shitlord to people in an offline context, there's a very good chance you'll end up in legal trouble if you aren't very careful to toe the line, or at least become a social pariah. I see no reason why we shouldn't be extending this same principle to online social spaces.
Who chooses the standards? On the street you're in one country, subject to its laws. Online, we are from wherever.
People get to have unpopular opinions. That's kind of the basis of any free society. It's also healthy to have an airing place for all ideas: the poor ones are exposed and mocked while the good ones are sifted out.
Also, I'm dubious about who gets to set the line. I've been called pretty much all those things you mentioned: a racist, a homophobe, a sexist, and all in the context of disagreeing with someone's deeply held beliefs on said topic in some way. In many respects there is an orthodoxy about these topics that must be adhered to or you're an "ist", and I find that an extremely unhealthy state of affairs, and a worrying one in the context of censuring "the bad speech".
I get being a free speech absolutist - I lean that way myself - but the current tools enable abuse as much as they moderate it.
Nothing is really stopping anyone from creating a Twitter with different rules; Twitter itself is pretty simple.
Well, nothing other than network effects and the fact that Twitter is still surviving on "investor storytime".
You can be racist on a streetcorner just fine. In most places you can even hand out racist fliers or hang posters about your racism. What you can't do is single out other people and berate them about your racism against their will. The crime isn't the racism, because making beliefs illegal is a fool's errand and inherently problematic. The crime is verbal assault, intimidation and harassment, or the escalation from there.
I could give a rat's ass if some dude is racist on Twitter. But if that dude decides that he and his racist buddies should follow me around and impede my ability to simply exist outside their influence, that's a problem.
There's an entire fantastic comedy subculture in black twitter for example. You shouldn't be banning those people for saying the n word. But if a particular user is getting targeted by users calling them that word for example, then they should be removed.
Now you might think if somebody tweets to their own followers "I really hate n words" then they should be banned. But that's not twitter's stance. And IMO it shouldn't be. The system is designed so that the only people who will see that person's tweet are the people who have chosen to follow them.
The only time you're exposed to other users on twitter is when you follow them or when they @ you. And if they @ you with abuse, then they're breaking twitter's rules and should be reported as such, and twitter should ban those people. I figure they're probably not doing a good enough job at banning people who are breaking that targeted abuse rule, and should be held to enforcing their rules.
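To make that visibility model concrete, here's a minimal sketch in Python. The Tweet and User structures and the visible_to function are made up for illustration - this is not Twitter's actual implementation - but it captures the rule described above: a tweet only reaches you if you follow the author or they @ you, which is why the targeted-abuse rule hinges on mentions.

```python
# Minimal sketch of the visibility model described above. The data structures
# here are hypothetical, not Twitter's real internals.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    author: str
    text: str
    mentions: set = field(default_factory=set)

@dataclass
class User:
    handle: str
    follows: set = field(default_factory=set)
    blocked: set = field(default_factory=set)

def visible_to(user: User, tweet: Tweet) -> bool:
    """A tweet reaches a user only if they follow the author or are @-mentioned,
    and the author isn't blocked."""
    if tweet.author in user.blocked:
        return False
    return tweet.author in user.follows or user.handle in tweet.mentions

# A hateful tweet to a poster's own followers never reaches the target;
# an @-mention does, and that's the reportable, bannable case.
alice = User("alice", follows={"bob"})
rant = Tweet("carol", "ranting to my own followers")
abuse = Tweet("carol", "@alice ...", mentions={"alice"})
print(visible_to(alice, rant), visible_to(alice, abuse))  # False True
```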
Well sure, but if creating one with better moderation is a good business plan... in theory people will switch right?
Twitter, even as it has grown to a position of dominance in social media, has yet to demonstrate that it can actually turn a profit. Nobody else is jumping in on that until there's actual proof of concept, leaving us with the one option that is still essentially an exploratory experiment.
I am pretty passionate about this topic and I am curious where this thread will go. The above quote is the part that I think is basically impossible, and where I have issues with social media deciding what is and isn't allowed. There is literally no way to make "all speakers feel safe". Someone will get offended about something and then the platform has to decide if it's worth censoring.
In which of these situations do we censor the poster so that the targeted user feels safe?
- A white person uses the n-word hatefully directed at another user.
- A black person uses the n-word hatefully directed at another user.
- Someone posts a crude image of Muhammed at a devout Muslim user.
- Someone posts a crude image of Christ at a devout Christian.
- A documented harasser tweets hateful things at a well-liked celebrity.
- A documented harasser tweets hateful things at a hated politician.
To me this is the crux of the problem. Someone, somewhere has to decide if it needs to be censored. You could probably grab 100 people off the street and get many different answers to the above questions. This forum would probably agree that American liberal values should dictate the answers to those questions, but should they? Especially when American liberal values definitely do not make up the majority of the users of social media like Twitter, Facebook, Tumblr, Reddit, etc.
Some people might say "Eliminate all hateful or hurtful things". But literally nearly everything can hurt someone in some way. If all it takes is for someone to report a social media post because it offends them, then social media might as well just shut down, because it's all gonna get reported.
If we say "Eliminate these specific hateful/hurtful things" then we are deciding that certain speakers who are hurt by stuff not on that list are not allowed to feel safe.
I'd rather not decide whose feelings/beliefs/opinions are important and whose are not. So to me it's gotta be wild west style, or (at the very least) there have to be super clear rules implemented, and they need to be applied evenly across the board. The issue is that I have yet to see a major social media platform apply the rules evenly across the board. Just as an example, Tumblr has chosen to restrict hate speech among other things. So, white supremacist tumblrs are taken down basically immediately, yet black supremacist tumblrs (that I've seen advocate killing/enslaving/raping all white people) are never touched and allowed to go on for years. I'm not saying whether or not Tumblr should allow hate like that on their platform, but I do know that if they are going to restrict malicious speech based on race, then it should be applied evenly.
But it's rarely if ever applied evenly, and that is what is causing issues. That's the problem social media platforms are going to have as long as the rules and their enforcement are driven only by the whims of those who cry offense the loudest, or policed by a small group of people who are left to act on their own feelings/beliefs/opinions.
If a social media platform doesn't give a user the ability to fully block other users, then I'm almost inclined to put the blame for any abuse/harassment on the platform.
It amazes me that Twitter doesn't have tools in place for a user to block another user so they literally never see anything from that user ever again.
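For what it's worth, the kind of "never see anything from that user again" block being asked for is easy to sketch. This is a hypothetical filter, not Twitter's actual behavior or API: anything authored by, replying to, retweeting, or mentioning a blocked account gets scrubbed from what the blocking user sees.

```python
# Hypothetical "hard block" filter -- a sketch of the feature being asked for,
# not Twitter's real implementation.

def hard_block_filter(items, blocked):
    """Drop every timeline item that involves a blocked account in any role."""
    visible = []
    for item in items:
        involved = {item["author"], item.get("retweet_of"), item.get("reply_to")}
        involved |= set(item.get("mentions", []))
        if involved & blocked:
            continue  # never shown at all, not just hidden behind a click
        visible.append(item)
    return visible

blocked = {"harasser123"}
feed = [
    {"author": "friend", "mentions": []},
    {"author": "harasser123", "mentions": ["me"]},                    # dropped
    {"author": "friend", "reply_to": "harasser123", "mentions": []},  # dropped
]
print(hard_block_filter(feed, blocked))  # only the first item survives
```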
No. Again, network effects.
I'll try to post more as they come up on my feed full of female reporters
All my Twitter problems were solved with Block. It's a really handy thing. I've done it a lot this election cycle.
But I do appreciate the clarification. If I'm not misunderstanding you, you're not so much wanting speech policed as you are against harassment? There's a good middle ground to be found there. I'm as much a free speech purist as one can be, but I understand harassment needs controls built in... I ALSO understand that if a system can be abused it will be abused, and controls can and will be used by people of all leanings to shut down people whose opinions they find "bad".
So I'm completely sympathetic to better harassment controls, but as far as I'm concerned there's nothing wrong with a platform where people get to put forth opinions both good and bad without facing repercussions.
I agree with this, but there's more to it than that.
If I'm being discussed by people I've blocked, they're still shaping the environment that the things I'm saying and the people I'm interacting with inhabit. I don't have to see it anymore, but that doesn't actually make them go away so much as it simply pushes them onto tangential methods of messing with me if they're dedicated to doing so.
These two comments are interesting in that if we accept that more and more we live digitally and that for many people their digital presence is just as real as their physical one, then do we want to give corporations like Twitter the ability to (for all intents and purposes) govern us, make laws, and also be the equivalent of judge and jury?
Places like PA are different in that PA is not ubiquitous to digital life. This is basically like a club or a bar. So I can understand having their own rules.
But sites like Twitter and Facebook are so ingrained into our digital lives (and therefore physical lives) that they might have outgrown the ability/right to police themselves?
I gotta be honest, I don't see how you would go about controlling someone's ability to talk about you, or why anyone would think that's a good idea. If someone commits libel or slander there are courts, if you so choose, but beyond that people get to talk about what they talk about... including you if they like. That's just life.
Is that a thing now? It's cool if it is, I just wasn't aware there had been discussion down that road!
A racist yelling into the void doesn't bother me as long as I don't go looking for them. Sure, I disagree with them and will challenge them if I come into contact with them, but odds are very good they're just going to be screaming into their own darkness somewhere over there. As long as they aren't coming at me, they're pretty easy for me to ignore.
Now, it's important to note that I'm fairly privileged as a straight white cis male in that it's pretty tough to make an off-hand, non-directional comment that makes me feel threatened. There's no group out there that ideologically opposes my existence or social position. So my feelings of personal comfort via distance from bigots aren't necessarily universal.
That said, I have a rather strongly libertarian bent on these kinds of issues. I don't want to tell other people what to do, but I take the non-aggression principle very seriously. My personal stance is to police the aggression, not the idea, but there's never going to be a perfect definition for what that entails. I'd rather err on the side of making people from groups that are more likely to feel the effects of social aggression feel safe than protect the expression of those aggressions.
It's not about controlling their ability to talk about you. It's about preventing them from simply sidestepping the protections in place and harassing the people associated with you, or filling up your feed for everyone else with shitlord comments and such.
They're always free to talk shit about me somewhere over there where it's not interfering with my friends or our conversation. Following me around and throwing shitlord comments on each of my tweets is still an aggression, even if I can't see it.
I may differ on which side I err toward, but that's a pretty solid write-up. Personally I worry about chilling effects in the other direction. God knows there are plenty of historical examples of how the "modern orthodoxy" of the day was wrong and held back many great leaps forward, and I hope that sort of environment never develops again.
Block is a handy solution for a few or a handful of harassers. It's not handy when you have an army going up against you.
EDIT: Added the quote for context.
Ah got ya. Yeah they should enhance the block feature to just fully excise them from your circle. But that seems like an easy mechanical fix as opposed to a philosophical gulf. Alternatively, I'm not against those who attempt to circumvent the rules facing censure.
It's an odd problem to be sure. A thousand people is still a thousand individuals posting individually and I'm not sure how you even approach fixing that. The human mob mentality might simply introduce a built in defect to all major open social platforms.
Oftentimes an army of people "harassing" someone online is viewed by the other side as an army of people "protesting".
It's all about the viewpoint, which is the problem, and a super difficult thing to define and set hard rules for.
This is not a big forum. This forum deals with traffic four or five orders of magnitude below what Twitter does, if I remember the last Ask Tube thread correctly. To keep it at the level that it is, it has a moderation staff of, what, 10-20 people? So, assuming that human-run moderation scales linearly with posting volume, Twitter would require at least ten thousand moderators. An abusive moderator is a lot worse than an abusive poster (you'll recall that on this very forum our banned users list boasts a number of former moderators), and one admin can't keep ten thousand moderators in line the way Tube can keep twenty, so you'd need a whole hierarchy of super mods, community managers, etc. The community management staff in that scenario would certainly dwarf the technical staff.
I can't think of a discussion service within one order of magnitude of Twitter that is significantly better policed than Twitter, can you? The problem ultimately is that staffing thing: setting up hardware and software to let people post on your service doesn't require staff to increase linearly with post volume, but moderating does.
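To spell out the linear-scaling arithmetic, here's the back-of-the-envelope version. The specific volumes are illustrative assumptions (I'm guessing at both the forum's and Twitter's daily post counts), not real figures; the point is just that if moderators scale with posting volume, the headcount gets enormous fast.

```python
# Back-of-the-envelope version of the linear-scaling argument above.
# The volumes below are assumptions for illustration, not real figures.

forum_posts_per_day = 50_000           # assumed forum volume
forum_moderators = 20                  # roughly the staff size cited above
twitter_posts_per_day = 500_000_000    # assumed, ~4 orders of magnitude more

posts_per_moderator = forum_posts_per_day / forum_moderators
moderators_needed = twitter_posts_per_day / posts_per_moderator
print(f"{moderators_needed:,.0f} moderators")  # ~200,000 under these assumptions
```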
I'm receptive to the idea that, like Lawn Darts, perhaps Twitter is a product that by its very nature is too harmful to be released to the public. How do you propose enforcing this in some kind of systematic way? Would the CPSC ban services with more than x number of users? Would Twitter, Inc. be civilly liable for posts on their platform, so a user could sue Twitter for harassment (if that user can find a billionaire patron to fund the lawsuit, of course)?
I don't see having 10,000 employees dedicated to moderation to be all that ridiculous honestly. In terms of "number of employees needed to run an international business with millions of customers" that is not a ridiculously high number. Job creation!
One often-brought-up argument in this sphere that I take exception to is the idea of "safety". Lots of people use it as you do here, as a substitute for comfort, but they are not the same thing.
Moreover, the US, and I assume most other countries, already has well-established legal lines for dealing with speech that is dangerous - that is, speech that threatens safety. If you are going to argue for an equivalency of the online and the offline, it runs counter to your own argument that one of these equivalent things warrants a far more draconian approach to free speech than the other.
Comparing a black woman to a gorilla is racist and sexist garbage, but it isn't an issue of safety if done in the real world to her face (unless you support "fighting words"), let alone done as a tweet from 3000 miles away.
And as a private company, Twitter is free to allow or ban whom they please. But historically, private companies' curtailing of speech has been applied disproportionately against minorities and minority viewpoints. Given your argument for the equivalency of the digital and the physical world, I'd think a more liberal approach to free speech is a better choice than supporting private companies country-clubbing these platforms as they see fit.
Those stats end in 2015; they laid off a bunch of people around then, too.