They do take action, but they're not allowed to release it.
Facebook has already demonstrated that it is not a reasonable party to deal with:
https://www.washingtonpos...-are-the-facebook-papers/

For example, Zuckerberg testified last year before Congress that the company removes 94 percent of the hate speech it finds before a human reports it. But in internal documents, researchers estimated that the company was removing less than 5 percent of all hate speech on Facebook.
Facebook contributes to extremism, and society picks up the bill, including countless riots and worse:
During the run-up to the 2020 U.S. presidential election, the social media giant dialed up efforts to police content that promoted violence, misinformation and hate speech. But after Nov. 6, Facebook rolled back many of the dozens of measures aimed at safeguarding U.S. users. A ban on the main Stop the Steal group didn’t apply to the dozens of look-alike groups that popped up in what the company later concluded was a “coordinated” campaign, documents show.
By the time Facebook tried to reimpose its “break the glass” measures, it was too late: A pro-Trump mob was storming the U.S. Capitol.
Another thing that stands out:
According to one 2020 summary, the vast majority of its efforts against misinformation — 84 percent — went toward the United States, the documents show, with just 16 percent going to the “Rest of World,” including India, France and Italy.
Though Facebook considers India a top priority, activating large teams to engage with civil society groups and protect elections, the documents show that Indian users experience Facebook without critical guardrails common in English-speaking countries.
So in other countries things get far more extreme.
And Facebook is not alone in this; the most recent example is Twitter, where the executives know with 100% certainty that there is an enormous amount of child abuse and child pornography on their network:
https://www.theverge.com/...content-problem-elon-musk

Before the final go-ahead to launch, though, Twitter convened 84 employees to form what it called a “Red Team.” The goal was “to pressure-test the decision to allow adult creators to monetize on the platform, by specifically focusing on what it would look like for Twitter to do this safely and responsibly,” according to documents obtained by The Verge and interviews with current and former Twitter employees.
What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not — and still is not — effectively policing harmful sexual content on the platform.
A relatively small team that distills heaps of internal information and reports on it, after which nothing is done with it.
Don't expect anything from governments or journalists; companies like these have become too big to fail and can drag things out endlessly, only to get away with a settlement in the end.