kingofsheeba
Active Member. Joined: Aug 22, 2013. Messages: 920. Reaction score: 1,149.
Should Facebook still be the Wild West? Or should users take accountability and accept their punishments for violating Facebook’s policies and standards?
> Social media platforms can't be the wild west and be consistent with the limitations of the First Amendment. I can't host a site where someone advocates for murder and expect the rules to be any different than in the real world. Social media sites have the flexibility to moderate questionable content; that's how they protect themselves, and if they don't, they should be held liable. If that is too vague, well, too bad.
Okay. So, how about non-murder-related items? Would you be okay with Facebook banning you for something that you posted ten years ago? Something that was innocent then but is problematic now?
> My (unsympathetic) point was that social media are actually private entities, not public accommodations.
Essentially correct. (Insert technical quibble with not-really-relevant caveat with regard to the ADA and web site accessibility.)
> cannot be used to break the law
I mean, they absolutely can be used to break the law. If your meaning is that "breaking the law via social media" is still "breaking the law" and not treated differently because it's via social media, then that's correct.
> let foreign actors undermine our democracy
See above: social media can be (illegally) used to do illegal things, and can be (legally) used to do legal things. I'm not a lawyer, but "undermine our democracy" isn't likely to fly as a legal standard. That social media companies should take action to prevent or remove speech and conduct corrosive to democracy is a moral position (and, in my view, a very good one), but not a legal standard. ("Shouldn't" isn't the same as "can't.")
> if they ultimately cannot stop people then they cannot exist.
Tell me if I'm reading this wrong, because it's hard to read this as anything but a call for shutting down social media platforms/companies if they don't meet some (possibly nonspecific and/or impossible) standard of preventing/removing certain (legal? illegal?) speech/conduct.
> The user must understand that your personal First Amendment protections largely don't apply on a privately owned site like Facebook. You can't compel others to give you a soapbox.
Social media and other private sites aren't state actors, so the First Amendment doesn't bind them; that's correct. There are no First Amendment implications to Facebook (or any other site, such as ArchBoston) having rules of conduct and prohibited speech, removing content against their terms of service, and banning those who violate those policies. The First Amendment does enter into the question of whether the government can try to coerce or force a site like Facebook into having or enforcing certain terms and rules, and the answer is, generally, no, it can't. The public doesn't have a First Amendment right to (filtered or unfiltered) access to Facebook's platform, but Facebook does have a First Amendment right to decide (in general) what types and categories of speech and expression are permitted or excluded on its platform.
> The burden is on them so they better figure it out.
> if a user can't follow the rules his only option is to start his own site and assume the liability if others post there.
Linking these two together because I'm a little confused, but they seem to be pointing in the same direction. It's entirely accurate that (in almost all cases) a user who gets banned from a platform for violating its rules has no legal recourse (see again said private entities' First Amendment right to set their own rules), so you're not wrong that they have to essentially find a new conduit if they want to continue to try to have an audience.
> "Cyberspace" is real life and can be used against you. A good example is that what people have said or posted is being used to prosecute those responsible for January 6th.
Actually a good example. What people have said and posted about their own actions on January 6 is essentially them documenting their own crimes in real time. Facebook and the other sites have no legal liability here, because it's not their speech or conduct; it's very clearly the conduct (and speech) of the people who were doing illegal things and chose to publicize it on Facebook. Facebook could, if it chose, ban insurrectionist content and/or ban people who took part on January 6th; that's entirely its affair. The First Amendment would simultaneously prohibit the government from punishing it for doing so, or for declining to do so. It's the people who actually did the illegal things who are getting punished for what they did; that some of it involved cyberspace is beside the point (because it is indeed part of real life).
> Are you confronting my ignorance or agreeing with me? I honestly can't tell.
Mostly trying to work out what you were trying to say in the post I was replying to, because I was honestly confused as to whether you were making a statement about how things are (with a few mostly minor factual/legal mistakes I was trying to clear up) or an argument about how things ought to be, in which case I mostly agree with you (at least morally/ethically).
> My friend was recently given a thirty-day ban on Facebook because he told his roommate that "he was gonna kill him if he cranked the heat up to 72 degrees." Now, knowing my homie, that wouldn't happen. He's too scrawny to kill someone. Plus, he's also not actually going to kill his roommate. But try telling that to the Facebook poo poo.
Excellent example of the nightmarish difficulty of content moderation (which is impossible to do well at scale). In context, it's pretty obvious that a comment like that is an expression of annoyance rather than an actual threat. Seen in a context-free vacuum by Facebook's content moderators (either automated or human), it's a whole lot less obvious. I don't know if a suspension for something like that was or could be appealed, but given the volumes Facebook has to deal with, it wouldn't surprise me if appeals tended to fail because they don't have either the time or the inclination to consider the relevant context. (Though, if so, that's a decent argument for them getting better at moderation. Just because you're inevitably going to fail some of the time at that scale doesn't mean you can't do better than you are now.)
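To make the "context-free vacuum" problem concrete, here's a minimal, entirely hypothetical sketch of a keyword-based flagger of the kind that produces false positives like the thirty-day ban above. This is not Facebook's actual moderation pipeline (which is far more sophisticated and not public); it just illustrates why matching words without context fails in both directions.

```python
# Hypothetical illustration only: a naive, context-free keyword filter.
# Real moderation systems are far more complex; this just shows why
# ignoring context produces both false positives and false negatives.

THREAT_KEYWORDS = {"kill", "shoot", "murder"}

def flags_as_threat(post: str) -> bool:
    """Flag any post containing a threat keyword, ignoring all context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & THREAT_KEYWORDS)

# An obvious joke between roommates still trips the filter:
joke = "I'm gonna kill him if he cranks the heat up to 72 degrees"
print(flags_as_threat(joke))     # True: flagged despite being banter

# While a hostile post phrased without any keyword slips through:
evasive = "he won't be around much longer if the heat stays at 72"
print(flags_as_threat(evasive))  # False: missed entirely
```

At the scale of billions of posts, even sophisticated classifiers face this same trade-off: tightening the filter catches more banter, loosening it misses more genuine threats, and human review of every borderline case doesn't scale.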
> As we are seeing on this board, nobody will have the same exact opinion as anyone else. As long as we aren't threatening actual violence towards one another, we shouldn't be banned because someone was offended.
What about inciting violence? What you're describing only prevents the action of one individual, the person issuing the threat. What happens if they influence others to commit violent acts? As a person of Asian descent, hearing of violence committed against people who look like me because of misinformation, and being told to accept it, is, well, not acceptable. Do we have to arm ourselves and meet violence with violence because their bigotry matters more than people's lives?
> The broader problem is that it's impossible to create standards of what is and isn't acceptable content and conduct that will please everybody, and it's also effectively impossible to moderate well at mega-scale (even where the rules themselves aren't at issue). Accepting those facts and working to optimize the rules and the processes to produce the best results, while acknowledging that perfection is impossible, is, at least to me, a good idea. Any of the sundry attention-hungry politicians' schemes for imposing some set of rules seems to me to be (a) doomed to failure because of the above and (b) corrosive to democracy and offensive to constitutional principles of free speech, so I tend to look askance at such ideas, while (usually) understanding the motivations of those honestly interested in dealing with some of the crappiest cesspools of social media.
I mean, I don't think the problem with social media is one of acceptability or not. If neo-Nazis in Ohio want to create a Facebook group where they plan non-violent white-supremacist events, then let them. The problem is false information and its spread. A lot of people don't bother to cross-reference information that they see on social media, and that can be dangerous. If the same neo-Nazi group mentioned earlier made a public post saying that the Lebanese had created AIDS and it got 1.1k likes, then we have a problem.
> What about inciting violence? What you're describing only prevents the action of one individual, the person issuing the threat. What happens if they influence others to commit violent acts? As a person of Asian descent, hearing of violence committed against people who look like me because of misinformation, and being told to accept it, is, well, not acceptable. Do we have to arm ourselves and meet violence with violence because their bigotry matters more than people's lives?
We have to combat hatred and misinformation with better reasoning skills, something that is lacking in today's society. That's the paradox of free speech: it allows even the most ignorant and hateful people to get up on their soapbox.
> I mean, I don't think the problem with social media is one of acceptability or not. If neo-Nazis in Ohio want to create a Facebook group where they plan non-violent white-supremacist events, then let them. The problem is false information and its spread. A lot of people don't bother to cross-reference information that they see on social media, and that can be dangerous. If the same neo-Nazi group mentioned earlier made a public post saying that the Lebanese had created AIDS and it got 1.1k likes, then we have a problem.
Wait? My biological father is 50% Syrian/Lebanese. Why am I just now getting this AIDS memo?
> I mean, I don't think the problem with social media is one of acceptability or not. If neo-Nazis in Ohio want to create a Facebook group where they plan non-violent white-supremacist events, then let them.
I probably should have been clearer. In context, "acceptable" in "acceptable content" meant "acceptable to the platform": I was noting that it's impossible for platforms to come up with moderation policies, governing what conduct and content is acceptable to them on their platform, that will please everyone.
> The problem is false information and its spread. A lot of people don't bother to cross-reference information that they see on social media, and that can be dangerous. If the same neo-Nazi group mentioned earlier made a public post saying that the Lebanese had created AIDS and it got 1.1k likes, then we have a problem.
You're not wrong that we have a problem with misinformation. Unfortunately, sifting out misinformation is easier said than done, given the scale we're talking about. That said, do I think certain platforms (cough*Facebook*cough) could do more about the problem than they have? Yes, and their head-in-the-sand response (historically, at any rate) is both perfectly legal and morally irresponsible.
> We have to combat hatred and misinformation with better reasoning skills, something that is lacking in today's society. That's the paradox of free speech: it allows even the most ignorant and hateful people to get up on their soapbox.
The issue is vigilantism and differing interpretations of morality. Give a person who grew up with a nativist background a gun and you'll have someone who can justify their acts of violence against immigrants. Victims are then justified in responding, and victims of victims are then justified in responding, until society devolves into, essentially, gangs carrying out revenge killings. I do agree that the solution is to combat hatred and misinformation, but there hasn't been any solution that can effectively make an impact at a large scale, and I totally get that, like all things, this isn't black and white.
We should be arming ourselves no matter what. I'm pretty liberal on many of the same issues you guys are, except when it comes to guns. Everyone needs to take up arms after they turn eighteen. Train them and teach gun safety. You're also teaching intelligence and critical thinking with gun use.
> The issue is vigilantism and differing interpretations of morality. Give a person who grew up with a nativist background a gun and you'll have someone who can justify their acts of violence against immigrants. Victims are then justified in responding, and victims of victims are then justified in responding, until society devolves into, essentially, gangs carrying out revenge killings. I do agree that the solution is to combat hatred and misinformation, but there hasn't been any solution that can effectively make an impact at a large scale, and I totally get that, like all things, this isn't black and white.
And unfortunately society is collapsing and none of us has the easy answer. There's no silver bullet, but think back to Judas and the Black Messiah, or something as simple as Malcolm X: by any means necessary.
> I probably should have been clearer. In context, "acceptable" in "acceptable content" meant "acceptable to the platform": I was noting that it's impossible for platforms to come up with moderation policies, governing what conduct and content is acceptable to them on their platform, that will please everyone.
> You're not wrong that we have a problem with misinformation. Unfortunately, sifting out misinformation is easier said than done, given the scale we're talking about. That said, do I think certain platforms (cough*Facebook*cough) could do more about the problem than they have? Yes, and their head-in-the-sand response (historically, at any rate) is both perfectly legal and morally irresponsible.
It seems like you like social media so much that you are willing to overlook or excuse people breaking the law.
> It seems like you like social media so much that you are willing to overlook or excuse people breaking the law.
Hmm. That's certainly one way to characterize my post, though not a particularly accurate representation of what I actually said.