Changes in the USA: Instagram, Facebook and Threads allow more misinformation and hate speech again

Meta has announced in a press release that it will discontinue its fact-checking program on the Facebook, Instagram and Threads platforms. Instead of professional fact-checking organizations, the community will now decide whether statements are truthful. At the same time, the rules for hate speech on the platforms have been changed and significantly weakened in some areas. We explain what exactly is changing and what these changes mean for users in Germany and the EU.

The changes to the fact-checking program announced by Meta will initially apply only in the USA. However, Meta has already incorporated the amended rules on "Hateful Conduct" into the German version of its community guidelines. It is currently unclear whether Meta plans to extend the changes to fact-checking to other countries. The wording of the company's press release, which states that the new guidelines will apply "first in the USA", at least suggests this. At the time of publication of this article, there are no concrete announcements, let alone a date, for changes in Germany.

Why is Meta abolishing the fact-checking program?

Since 2016, content on Facebook, Instagram and Threads has been able to undergo fact-checking. Until now, Meta has cooperated with external experts and organizations who checked posts for factual accuracy. If content turned out to contain false information, it was given a correspondingly prominent warning label. This was intended to prevent users from seeing the false information without context and believing it to be true. In addition, such false information was shown less frequently to other users to keep it from spreading further.

Meta now claims that the fact-checkers are biased and have wrongly labeled certain positions as false information. No concrete evidence is provided for this claim. This is problematic because the justification draws on a well-known narrative of right-wing authoritarian forces. According to this narrative, the traditional media and fact-checkers are not trustworthy and only allow information that serves their own agenda, while suppressing freedom of expression.

Meta presents the Community Notes process as the solution. Volunteer users register and take part in the Community Notes program. Their task is then to add notes or corrections to posts they believe contain incorrect information. These notes are not published immediately, but are first evaluated by other participants in the Community Notes program. Only when enough other participants have rated a note as helpful and accurate is it displayed alongside the original content.
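To make this gatekeeping step more concrete, here is a minimal illustrative sketch in Python. It is emphatically not Meta's (or X's) actual algorithm; the class name, the rating model and the thresholds are invented purely for illustration.

```python
# Minimal sketch of a community-notes-style publication rule.
# NOT Meta's or X's actual algorithm; all names and thresholds
# here are invented purely for illustration.

from dataclasses import dataclass, field

@dataclass
class CommunityNote:
    text: str
    ratings: list[bool] = field(default_factory=list)  # True = "helpful"

    def rate(self, helpful: bool) -> None:
        """Record one participant's verdict on this note."""
        self.ratings.append(helpful)

    def is_published(self, min_ratings: int = 5,
                     min_helpful_share: float = 0.8) -> bool:
        """The note only becomes visible once enough participants
        have rated it and a large majority found it helpful."""
        if len(self.ratings) < min_ratings:
            return False
        return sum(self.ratings) / len(self.ratings) >= min_helpful_share

# Example: five of six raters find the note helpful -> it is shown.
note = CommunityNote("The photo shows an event from 2019, not the current one.")
for verdict in (True, True, True, True, False, True):
    note.rate(verdict)
print(note.is_published())  # True (6 ratings, ~83% rated it helpful)
```

For comparison, X's published Community Notes ranking additionally requires agreement among raters who usually disagree with one another, which is one way of addressing the plurality question raised below.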

The X platform (formerly Twitter), which has been working with Community Notes for some time, serves as the explicit model for this type of review. Many questions about Meta's Community Notes remain unanswered: By what criteria are users selected for the Community Notes program? How does Meta ensure plurality among participants? And are community notes written by non-experts as reliable and of the same high quality as fact checks by experts?

What is already clear is that posts that contain false information and carry a Community Note will once again be displayed without restriction, and the large notices indicating that the information is false will be removed. In future, at least in the USA, it will be easier to spread misinformation on Facebook, Instagram and Threads, and harder to recognize known misinformation.

What is Meta changing about the rules on hate speech?

In addition to abolishing the fact-checking program, Meta is also changing the rules on hate speech on Facebook, Instagram and Threads. These changes could potentially cause even greater damage. Some content has been removed from the list of what Meta considers hate speech. For example, Meta now explicitly allows homosexual or transgender people to be denigrated as mentally ill and abnormal. The reason given for this is the "political and religious discourse surrounding homosexuality and trans identity".

The justification Meta gives for softening its hate speech categories is striking. According to Meta, its own rules had been applied inadequately in the past. This had led to opinions and positions being classified as violations of the community rules when in fact they were not. Meta also cites a figure: on average, one to two out of every ten moderated posts are said to have been flagged wrongly, an error rate of 10 to 20 percent. The company therefore now wants to lower the standards for what is permitted on the platform and moderate fewer posts. In addition, automatic systems that detect violations will only be used for serious violations, such as images of sexual violence against children. Violations such as hate speech will no longer be detected automatically, but will have to be reported to the platform by users themselves.

In the past, klicksafe, among others, has criticized automatic systems as inadequate for moderating content given the current state of technology. It is surprising, however, that the solution to this problem is not to improve moderation (for example, by employing more trained specialists) but, on the contrary, to lower moderation standards.

Will the changes also be applied in Germany and the EU?

To date, Meta has announced that the changes to the fact-checking program described above will be implemented only in the USA. However, the changes to the community standards have now also been incorporated into the German version. It is reasonable to assume that Meta has tailored these steps specifically to the USA in order to adapt the platforms to Donald Trump and his supporters in time for his second term of office, which is about to begin.

If Meta were to implement the changes to fact checks in the EU, this could result in fines in the millions. The Digital Services Act stipulates that "Very Large Online Platforms" (VLOPs) such as Facebook, Instagram and Threads must comply with the Strengthened Code of Practice on Disinformation. This requires these platforms to provide their users with information on the trustworthiness of content. To this end, the platforms are to work with independent third parties; fact checkers are explicitly listed as possible partners. Whether the Community Notes system envisaged by Meta meets these requirements would then have to be demonstrated by Meta and reviewed by the EU. If it turns out that VLOPs are not fulfilling their duty to take effective action against misinformation, this would be a breach of their duty of care under the DSA, which can result in fines in the millions and even a ban. As mentioned above, a community notes system is also in use on the X platform. The EU Commission has already initiated proceedings against X, but no decision has yet been made.

The change in the standards for hate speech and the abandonment of proactive moderation of violations in this area could also cost Meta dearly in the EU. Not least because children and young people use the platforms, providers are obliged by the DSA to take appropriate action against content such as hate speech, discrimination and insults.

How can I protect myself from fake news?

The best protection against fake news is handling online information competently. These tips can help:

  • Don't panic - Don't let your fear, anger or sadness control you. Strong emotions make you more susceptible to fake news.
  • Check sources - Consult reputable sources. If two or three independent sources report the same thing, you are on safer ground. If in doubt, use reputable fact-checking sites such as Mimikama or Correctiv.
  • Be informed - Learn about the technological and economic background of social media platforms and search engines (e.g. what do algorithms do?). Learn to see through manipulation techniques, e.g. when content is taken out of context or when counter-questions are used to deflect to other topics.
  • Think for yourself - Ask yourself whether something like this could really have happened. Question information, even if it is shared by friends or influencers you like. Use your common sense.
  • Don't share - Never forward false reports. Certain false reports can even make you liable to prosecution. You can report disinformation and other violations to the platforms directly, to complaints offices and to trusted flaggers.