They also said that their existing moderation efforts were a response to societal and political pressure. They aren't explicit about it, but it's clear that that pressure no longer exists. This is another big win for Meta, because minimizing its investment in content moderation and simplifying its product will reduce operating expenses.
> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
To me it sounds better for large actors who pay shills to influence public opinion, like Qatar. I disagree that this is better for either Facebook users or society as a whole. It does, however, certainly fit the Golden Rule: he who has the gold makes the rules.
I was cautiously optimistic when this was announced that India and Saudi Arabia (among others, incl. Qatar) might see some pushback on how they clamp down on free speech and journalism on social media. But since Zuck mentioned Europe, I fear those countries will continue as they did before.
[1] https://en.m.wikipedia.org/wiki/BJP_IT_Cell
[2] https://www.bbc.com/news/blogs-news-from-elsewhere-23695896
Only the name "Community Notes" is less misleading than "Fact checkers".
Or maybe such people have far better things to do than fact check concern trolls and paid propagandists.
I'm not sure if they do more good than harm. Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation. It just makes them look as frothing as trolls claim they are.
And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.
Your point is exactly why I can't take anyone seriously who claims that randoms "debating" will cause the best ideas to rise to the top.
I can't count how many times I've seen influencer propagandists engage in an online "debate", get walked by the hand through why their entire point is wrong, only to spew the exact same thing hours later at the top of every feed. And remember, these are often people with some of the largest platforms, claiming to millions of people that they're being censored... lol.
It's too easy to manipulate what rises to the top. For debate to be anywhere close to effective, all parties involved have to actually be interested in getting closer to the truth, and the algorithms have no interest in deranking sophists and propagandists.
Downvotes that hide posts below a certain threshold have always seemed like the best approach to me. Of course it also allows groups to silence views.
Strong disagree. This is a very naive understanding of the situation. "Fact-checking" by users is just more of the kind of shouting back and forth that these social networks are already full of. That's why third-party fact checks are important.
How can this be replicated with topics that are by definition controversial, and happening in real time? I don't know. But I don't think Meta/X have any sort of vested interest in seeing sober, fact-based conversations. In fact, their incentives work entirely in the opposite direction: angrier, more divisive content drives additional traffic and engagement [1]. With Wikipedia, I would argue the opposite is true: Wikipedia would never have gained the dominance it has if it were full of emotionally charged content with dubious or no sourcing.
So I guess my conclusion is that I doubt any community-sourced "fact checking" efforts run by the social media platforms themselves will be successful, because the incentives are misaligned for the platform. Why invest any effort into something that will drive down engagement on your platform?
[1] Just one reference I found: https://www.pnas.org/doi/abs/10.1073/pnas.2024292118. From the abstract:
> ... we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. ...
1) Shouting matches create more ad impressions, as people interact more with the platform. The shouting matches also get more attention from other viewers than any calm factual statement. 2) Less legal responsibility / costs / overhead 3) Less potential flak from being officially involved in fact-checking in a way that displeases the current political group in power
Users lose, but are people who still use FB today going to use it less now that the official fact checkers are gone? Almost certainly not in any significant numbers.
"Fact-checking" completely removes the possibility of debate and is therefore antithetical to a functional democracy. Pushing back against authority, because authorities are often dead wrong, is foundational to a free society. It's hard to imagine anything more authoritarian than "No, I don't have to debate, because I'm a fact-checker, and by that measure alone you're wrong and I'm right". Very Orwellian indeed!
Additionally, the number of times I've observed "fact-checkers" lying through their teeth for obvious political reasons is absurd.
It's clear that the pressure comes now from the other side of the spectrum. Zuck already put Trumpists at various key positions.
> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
It's a good point. They're also going to push more political content, which should increase engagement (eventually frustrating users and advertisers?).
Either way, it's pretty clear that the company works with the power in place, which is extremely concerning (whether you're left or right leaning, and even more if you're not American).
I didn't think it was any secret that Meta largely complies with US gov't instructions on what to suppress. It's called jawboning[1]
[1] https://www.thefire.org/research-learn/what-jawboning-and-do...
If you care so much about it, now you can contribute with Community Notes. The power is in your hands! Go forth and be happy.
If you assume they are immune to politics (not true but let's go with it), this is the most obvious reason.
They've seen X hasn't taken that much heat for Community Notes and they're like "wow we can cut a line item".
The real problem is, Facebook is not X. 90% of the content on Facebook is not public.
You can't Fact Check or Community Note the private groups sharing blatantly false content, until it spills out via a re-share.
So Facebook will remain a breeding ground of conspiracy, pushed there by the echo chamber and Nazi-bar effects.
The EU goes its own way with trusted flaggers, which is more or less the least sensible option. It won't take long until bounds are overstepped and legal content gets flagged. Perhaps it has already happened. This is not a solution to even an ill-defined problem.
In the meantime, maybe now I can discuss private matters of my diagnosis without catching random warnings, bans, or worse.
That could lead to a debate between the fact checkers, which would derail the debate.
Better to not have fact checkers as part of the debate, and leave the fact checking to the post-debate analysis.
Were they fact-checking too much? Not enough? Incorrectly?
Oh wait, fact checkers don't work; better to just inform yourself and make up your own mind, rather than simply believe some supposedly authoritative figures.
Go back a sentence.
https://www.reuters.com/article/world/fact-check-virginia-go...
> “where there may be severe deformities. There may be a fetus that’s non viable” he said. “If a mother is in labor, I can tell you exactly what would happen.”
Your dying grandma may go DNR, but that doesn’t mean murdering grandmas is broadly legal.
My wife does charity photography for https://www.nowilaymedowntosleep.org/. You see lots of this sort of withdrawal of care. Calling it an abortion is cruel and dumb.
I don't know if I'd call it a certain win for Meta long term, but it might well be if they play it right. Presumably they're banking on things being fairly siloed anyway, so political tirades in one bubble won't push users in another bubble off the platform. If they have good ways for people to ignore others, maybe they can have their cake and eat it too, unlike Twitter.
Like Twitter, the network effect will retain people, and unlike Twitter, Facebook is a much deeper, more integrated service such that people can't just jump across to a work-alike.
A CEO who can keep his mouth shut is also a pretty big plus for them. They skated away from being involved in a genocide without too many issues, so the ethical revulsion people have toward Musk seems to be much less focused on Zuckerberg.
The problem with CN right now, though, is that Musk appears to block it on most of his posts, and/or right-wing moderators downvote the notes so they don't appear or disappear.
Indeed the ending of the famous story is:
> "But the Emperor has nothing at all on!" said a little child.
> "Listen to the voice of innocence!" exclaimed his father; and what the child had said was whispered from one to another.
> "But he has nothing at all on!" at last cried out all the people. The Emperor was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.
If what they said about their design is to be believed, political downvoting shouldn't heavily impact them. I wish it were easier to see pending notes on a post, though.
I think the fact-checking part is pretty straightforward. What's outrageous is that the content moderators judge content subjectively, labeling perfectly legitimate discussions as misinformation, hate speech, etc. That's where the censorship starts.
Facebook moderators have an even harder job than that because the inherent scale of the platform prevents the kinds of personal insights and contextual understanding I had.
It also starts when there is no third party anymore. Where do you draw the line?