Zuck

Should Facebook still be the Wild West? Or should users take accountability and accept their punishments for violating Facebook’s policies and standards?
 
Social media platforms can't be the Wild West and be consistent with the limitations of the First Amendment. I can't host a site where someone advocates for murder and expect the rules to be any different than in the real world. Social media sites have the flexibility to moderate questionable content; that's how they protect themselves, and if they don't, they should be held liable. If that is too vague, well, too bad.
 
Social media platforms can't be the Wild West and be consistent with the limitations of the First Amendment. I can't host a site where someone advocates for murder and expect the rules to be any different than in the real world. Social media sites have the flexibility to moderate questionable content; that's how they protect themselves, and if they don't, they should be held liable. If that is too vague, well, too bad.
Okay. So, how about non-murder-related items? Would you be okay with Facebook banning you for something that you posted ten years ago? Something that was innocent then but is problematic now?

I understand why you’d want to ban hate speech, but then where do you draw the line? Everyone will have a grievance and can report you for something that you might find innocent but someone else doesn’t. The sky’s the limit, and it can happen to you whether or not you want to believe it. So, go off, but you’re defending something that could backfire on you, and you won’t like it.

In my reporting days, I worked with law enforcement, and they told me that “cyberspace is not real life,” in reference to an online stalking case involving a mother and son and two adults.
 
I have no idea what you think I am defending. My (unsympathetic) point was that social media are actually private entities, not public accommodations, and cannot be used to break the law or to let foreign actors undermine our democracy, and if they ultimately cannot stop people then they cannot exist. The burden is on them, so they'd better figure it out. Users must understand that their personal First Amendment protections largely don't apply on a privately owned site like Facebook. You can't compel others to give you a soapbox; if a user can't follow the rules, his only option is to start his own site and assume the liability if others post there. "Cyberspace" is real life and can be used against you. A good example is that what people have said or posted is being used to prosecute those responsible for January 6th.
 
My (unsympathetic) point was that social media are actually private entities, not public accommodations

Essentially correct. (Insert technical quibble with a not-really-relevant caveat regarding the ADA and website accessibility.)

cannot be used to break the law

I mean, they absolutely can be used to break the law. If your meaning is that "breaking the law via social media" is still "breaking the law" and isn't treated differently because it's via social media, then that's correct.

let foreign actors undermine our democracy

See above: social media can be (illegally) used to do illegal things, and can be (legally) used to do legal things. I'm not a lawyer, but "undermine our democracy" isn't likely to fly as a legal standard. That social media companies should take action to prevent/remove speech and conduct corrosive to democracy is a moral position (and, in my view, a very good one), but it's not a legal standard. ("Shouldn't" isn't the same as "can't.")

if they ultimately cannot stop people then they cannot exist.

Tell me if I'm reading this wrong, because it's hard to read this as anything but a call for shutting down social media platforms/companies if they don't meet some (possibly nonspecific and/or impossible) standard of preventing/removing certain (legal? illegal?) speech/conduct.

Users must understand that their personal First Amendment protections largely don't apply on a privately owned site like Facebook. You can't compel others to give you a soapbox

Social media and other private sites aren't state actors, so the First Amendment doesn't bind them; that's correct. There are no First Amendment implications to Facebook (or any other site, such as ArchBoston) having rules of conduct and prohibited speech, removing content that violates their terms of service, and banning those who violate these policies. The First Amendment does enter into the question of whether the government can try to coerce or force a site like Facebook into having/enforcing certain terms and rules, with the answer being, generally, no, it can't. The public doesn't have a First Amendment right to (filtered or unfiltered) access to Facebook's platform, but Facebook does have a First Amendment right to decide (in general) what types and categories of speech and expression are permitted or excluded on its platform.

The burden is on them, so they'd better figure it out.
if a user can't follow the rules, his only option is to start his own site and assume the liability if others post there.

Linking these two together because I'm a little confused, but they seem to be pointing in the same direction. It's entirely accurate that (in almost all cases) a user who gets banned from a platform for violating its rules has no legal recourse (see again said private entities' First Amendment right to set their own rules), so you're not wrong that they essentially have to find a new conduit if they want to keep trying to have an audience.

Liability is where I'm confused by your post. Social media companies (actually, pretty much all sites, including, in the second quoted example, the "start their own" site) aren't liable for almost anything that gets posted on their sites by users (that's Section 230 at work). There are some exceptions (mostly involving certain criminal material and copyrighted content), but for the most part, sites are liable only for their own speech/conduct, and they are explicitly not liable for their content moderation decisions (which are, for the most part, likely also protected under the First Amendment).

If your point is that they should be liable (a common enough refrain these days, especially among those annoyed at either perceived over-moderation or perceived under-moderation), that's a policy preference, and one with a potentially severe First Amendment problem: since sites aren't public forums (in a legal sense) and aren't state actors, the First Amendment doesn't constrain them from moderating, while it simultaneously protects their right to moderate as they see fit. That means the government imposing certain standards of content moderation is likely to be a First Amendment violation in itself.

"Cyberspace" is real life and can be used against you. A good example is that what people have said or posted is being used to prosecute those responsible for January 6th.

Actually a good example. What people said and posted about their own actions on January 6th is essentially them documenting their own crimes in real time. Facebook and the other sites have no legal liability here, because it's not their speech or conduct; it's very clearly the conduct (and speech) of the people who were doing illegal things and chose to publicize it on Facebook. Facebook could, if it chose, ban insurrectionist content and/or ban people who took part on January 6th; that's entirely its affair, and the First Amendment would prohibit the government from punishing it either for doing so or for declining to do so. It's the people who actually did the illegal things who are getting punished for what they did; that some of it involved cyberspace is beside the point (because it is indeed part of real life).
 
Are you confronting my ignorance or agreeing with me? I honestly can't tell
 
Are you confronting my ignorance or agreeing with me? I honestly can't tell

Mostly trying to work out what you were trying to say in the post I was replying to, because I was honestly confused as to whether you were making a statement about how things are (with a few mostly minor factual/legal mistakes I was trying to clear up) or an argument about how things ought to be, in which case I mostly agree with you (at least morally/ethically).
 
So, I’m against violence on social media of any kind. But Facebook’s algorithms and fact checkers need to be modified.

I don’t want the government running social media, because that’s more dangerous than Zuck doing it now, but there has to be a middle ground. I don’t think people understand how much everyday folks rely on social media sites like Facebook, Instagram and TikTok. Like it or not, it’s the new town square and will be for some time.

As we are seeing on this board, nobody will have the exact same opinion as anyone else. As long as we aren’t threatening actual violence towards one another, we shouldn’t be banned because someone was offended.

My friend was recently given a thirty-day ban on Facebook because he told his roommate that “he was gonna kill him if he cranked the heat up to 72 degrees.” Now, knowing my homie, that wouldn’t happen. He’s too scrawny to kill someone. Plus, he’s also not actually going to kill his roommate. But try telling that to the Facebook poo poo.

Misinformation is deadly and has led to a lot of needless deaths. But let’s be cautious because the accountability police could be coming for you!
 
My friend was recently given a thirty-day ban on Facebook because he told his roommate that “he was gonna kill him if he cranked the heat up to 72 degrees.” Now, knowing my homie, that wouldn’t happen. He’s too scrawny to kill someone. Plus, he’s also not actually going to kill his roommate. But try telling that to the Facebook poo poo.

Excellent example of the nightmarish difficulty of content moderation (which is impossible to do well at scale). In context, it's pretty obvious that a comment like that is an expression of annoyance rather than an actual threat. Seen in a context-free vacuum by Facebook's content moderators (whether automated or human), it's a whole lot less obvious. I don't know if a suspension for something like that was or could be appealed, but given the volumes Facebook has to deal with, it wouldn't surprise me if appeals tended to fail because they don't have either the time or the inclination to consider the relevant context. (Though, if so, that's a decent argument for them getting better at moderation. Just because you're inevitably going to fail some of the time at that scale doesn't mean you can't do better than you are now.)

The broader problem is that it's impossible to create standards of what is and isn't acceptable content and conduct that will please everybody, and it's also effectively impossible to moderate well at mega-scale (even where the rules themselves aren't at issue). Accepting those facts and working to optimize the rules and the processes to produce the best results, while acknowledging that perfection is impossible, is, at least to me, a good idea. Any of the sundry attention-hungry politicians' schemes for imposing some set of rules seems to me to be a.) doomed to failure because of the above and b.) corrosive to democracy and offensive to constitutional principles of free speech, so I tend to look askance at such ideas, while (usually) understanding the motivations of those honestly interested in dealing with some of the crappiest cesspools of social media.
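To make the context problem concrete, here's a deliberately naive sketch (purely hypothetical; I have no visibility into Facebook's actual systems) of what context-free keyword moderation looks like, using the thermostat example above:

```python
# A toy, context-free keyword flagger -- illustrative only, and nothing
# like any real platform's moderation pipeline.
THREAT_KEYWORDS = {"kill", "murder", "shoot"}

def naive_flag(post: str) -> bool:
    """Flag any post containing a threat keyword, ignoring all context."""
    words = {word.strip(".,!?\"'").lower() for word in post.split()}
    return not THREAT_KEYWORDS.isdisjoint(words)

joke = "I'm gonna kill him if he cranks the heat up to 72 degrees"
real_threat = "I am going to kill my roommate tonight"

print(naive_flag(joke))         # True -- false positive on an obvious joke
print(naive_flag(real_threat))  # True -- true positive
# Both posts trip the exact same rule. Telling them apart requires context
# (tone, history, relationship), which is precisely what doesn't scale.
```

Anything smarter than this still has to make the same judgment call, just with fancier inputs, and at billions of posts a day some of those calls will inevitably be wrong.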
 
As we are seeing on this board, nobody will have the exact same opinion as anyone else. As long as we aren’t threatening actual violence towards one another, we shouldn’t be banned because someone was offended.
What about inciting violence? What you're describing only prevents the action of one individual, the person issuing the threat. What happens if they influence others to commit violent acts? As a person of Asian descent, hearing about violence committed against people who look like me because of misinformation, and being told to accept it, is, well, not acceptable. Do we have to arm ourselves and meet violence with violence because their bigotry matters more than people's lives?
 
The broader problem is that it's impossible to create standards of what is and isn't acceptable content and conduct that will please everybody, and it's also effectively impossible to moderate well at mega-scale (even where the rules themselves aren't at issue). Accepting those facts and working to optimize the rules and the processes to produce the best results, while acknowledging that perfection is impossible, is, at least to me, a good idea. Any of the sundry attention-hungry politicians' schemes for imposing some set of rules seems to me to be a.) doomed to failure because of the above and b.) corrosive to democracy and offensive to constitutional principles of free speech, so I tend to look askance at such ideas, while (usually) understanding the motivations of those honestly interested in dealing with some of the crappiest cesspools of social media.

I mean, I don't think the problem with social media is one of acceptability or not. If neo-Nazis in Ohio want to create a Facebook group where they plan non-violent white supremacist events, then let them. The problem is false information and its spread. A lot of people don't bother to cross-reference information that they see on social media, and that can be dangerous. If the same neo-Nazi group mentioned earlier made a public post saying that the Lebanese had created AIDS and it got 1.1k likes, then we have a problem.
 
What about inciting violence? What you're describing only prevents the action of one individual, the person issuing the threat. What happens if they influence others to commit violent acts? As a person of Asian descent, hearing about violence committed against people who look like me because of misinformation, and being told to accept it, is, well, not acceptable. Do we have to arm ourselves and meet violence with violence because their bigotry matters more than people's lives?
We have to combat hatred and misinformation with better reasoning skills, something that is lacking in today’s society. That’s the paradox of free speech: it allows even the most ignorant and hateful people to get up on their soapbox.

We should be arming ourselves no matter what. I’m pretty liberal on many of the same issues you guys are, except when it comes to guns. Everyone needs to take up arms after they turn eighteen. Train and teach gun safety. You’re also teaching intelligence and critical thinking with gun use.
 
I mean, I don't think the problem with social media is one of acceptability or not. If neo-Nazis in Ohio want to create a Facebook group where they plan non-violent white supremacist events, then let them. The problem is false information and its spread. A lot of people don't bother to cross-reference information that they see on social media, and that can be dangerous. If the same neo-Nazi group mentioned earlier made a public post saying that the Lebanese had created AIDS and it got 1.1k likes, then we have a problem.
Wait, what? My biological father is 50% Syrian/Lebanese. Why am I just now getting this AIDS memo?
 
I mean, I don't think the problem with social media is one of acceptability or not. If neo-Nazis in Ohio want to create a Facebook group where they plan non-violent white supremacist events, then let them.

I probably should have been clearer. In that context, "acceptable" in "acceptable content" meant "acceptable to the platform"; I was noting that it's impossible for platforms to come up with moderation policies, regarding what conduct/content is acceptable to them on their platform, that will please everyone.

The problem is false information and its spread. A lot of people don't bother to cross-reference information that they see on social media, and that can be dangerous. If the same neo-Nazi group mentioned earlier made a public post saying that the Lebanese had created AIDS and it got 1.1k likes, then we have a problem.

You're not wrong that we have a problem with misinformation. Unfortunately sifting out misinformation is easier said than done, given the scale that we're talking about. That said, do I think that certain platforms (cough*Facebook*cough) could definitely do more about the problem than they have? Yes, and their head-in-the-sand response (historically, at any rate) is both perfectly legal and morally irresponsible.
 
We have to combat hatred and misinformation with better reasoning skills, something that is lacking in today’s society. That’s the paradox of free speech: it allows even the most ignorant and hateful people to get up on their soapbox.

We should be arming ourselves no matter what. I’m pretty liberal on many of the same issues you guys are, except when it comes to guns. Everyone needs to take up arms after they turn eighteen. Train and teach gun safety. You’re also teaching intelligence and critical thinking with gun use.
The issue is vigilantism and different interpretations of morality. Give a person who grew up with a nativist background a gun and you'll have someone who can justify their acts of violence against immigrants. Victims are then justified in responding, and victims of victims are then justified in responding. Society will devolve into essentially gangs carrying out revenge killings. I do agree that the solution is to combat hatred and misinformation, but there hasn't been any solution that can effectively make an impact at a large scale, and I totally get that, like all things, it isn't black and white.
 
The issue is vigilantism and different interpretations of morality. Give a person who grew up with a nativist background a gun and you'll have someone who can justify their acts of violence against immigrants. Victims are then justified in responding, and victims of victims are then justified in responding. Society will devolve into essentially gangs carrying out revenge killings. I do agree that the solution is to combat hatred and misinformation, but there hasn't been any solution that can effectively make an impact at a large scale, and I totally get that, like all things, it isn't black and white.
And unfortunately, society is collapsing and none of us have the easy answer. There’s no silver bullet, but think back to Judas and the Black Messiah. Or something as simple as Malcolm X: by any means necessary.

I reflect on finding out that my family was multiracial, and it didn’t occur to me that my folks had tried to have us pass as white. So when kids asked if I was a mulatto, I didn’t know what they meant until years later.

When we talk about the violence towards AAPI people, we have to use the same methods towards anyone who directs hatred at other groups. Keep reminding them that in fifty years, they will truly regret their actions. Drill it into them that their hate will catch up with them. In the meantime, we have to arm ourselves. It’s called intelligence. And it’s on us to change the narrative.
 
I probably should have been clearer. In that context, "acceptable" in "acceptable content" meant "acceptable to the platform"; I was noting that it's impossible for platforms to come up with moderation policies, regarding what conduct/content is acceptable to them on their platform, that will please everyone.



You're not wrong that we have a problem with misinformation. Unfortunately sifting out misinformation is easier said than done, given the scale that we're talking about. That said, do I think that certain platforms (cough*Facebook*cough) could definitely do more about the problem than they have? Yes, and their head-in-the-sand response (historically, at any rate) is both perfectly legal and morally irresponsible.

It seems like you like social media so much that you are willing to overlook or excuse people breaking the law.
 
It seems like you like social media so much that you are willing to overlook or excuse people breaking the law.

Hmm. That's certainly one way to characterize my post, though not a particularly accurate representation of what I actually said.

First of all, I don't particularly like social media (especially in the sense of the large platforms) all that much, though I also don't reflexively despise them as seems to be somewhat of a popular position (in some circles, anyway) these days.

Second, and more importantly, I don't recall anywhere that I indicated a willingness to overlook or excuse people breaking the law. Most of my commentary concerned the fact that a lot of what people complain about on social media (including a considerable amount of misinformation and a lot of what's deemed "hate speech") is in no way, shape, or form illegal; it doesn't involve breaking laws. Spreading misinformation and/or outright lying on social media (or most other media) is rarely, in practice, illegal. A massive amount of hate speech, while disgusting and despicable, is likewise not illegal, and in both cases it's very likely that laws against such things would themselves be thrown out as unconstitutional infringements of freedom of speech.

I don't condone, excuse, or overlook people breaking the law. But what the law forbids is a massively smaller universe of content and conduct than what most people would call unwanted or unacceptable. The point is that there's a metric ton of toxic garbage that is corrosive, offensive, and at times downright evil without being in any way illegal, and there are enormously strong reasons why laws designed to make those things illegal would be both a.) unconstitutional under existing legal precedents and b.) potentially problematic, with unintended consequences.

The other element of the point, which could, I suppose, be misunderstood as condoning overlooking or excusing lawbreaking, is similarly a factual statement: content moderation is impossible to do perfectly, particularly at scale. There are two elements to that, though. The first is that scale literally makes it impossible; there's not enough time or resources in the world for social media companies to thoroughly screen, with context, the volume of content that gets posted, meaning that things that shouldn't get through will, and things that are fine will be inadvertently taken down. That's a fact, though it should not be interpreted as saying that companies can't do better: they can, and they absolutely should, but it is factually indisputable that they'll never get everything right every time (though they should try to get as much right as they can).

The other element is that imperfection in content moderation isn't just about the difficulty of applying a platform's own rules; it's that crafting rules is inherently going to upset some people (either because content they like is prohibited or content they dislike is allowed). There are certain things that are outright illegal, and every effort needs to be made to eliminate those things, but there's plenty that isn't illegal, that either can't or shouldn't be made illegal, and that platforms can therefore host to their heart's content even if we might wish that they wouldn't.
 
