Algospeak: What do the codes and emojis on TikTok and other platforms mean?

Platforms such as TikTok and Instagram automatically check posts and comments in the background as soon as they are published. This is meant to ensure that no content that violates the platform guidelines becomes visible. To circumvent these filters, a special language has developed in recent years: Algospeak. We explain what Algospeak is and which coded terms and emojis you should pay particular attention to.

"Sex" becomes "Seggs", "lesbian" becomes "Le$bian" or "Le Dollar Bean", and the harmless eggplant emoji stands for male genitalia. Anyone who scrolls through social media a lot has probably come across strange terms or emojis whose meanings are not immediately obvious. Behind this is Algospeak: a way of coding sensitive words that could be flagged as problematic content if they were written out normally.

What is Algospeak?

Algospeak is a portmanteau of "algorithm" and "speak". It is used as a communication strategy on social media to avoid platform restrictions: terms are rephrased, syllables are swapped, or numbers are used instead of letters. The use of certain emojis also counts as Algospeak. A graphic with examples can be found further down in the article.
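
The simplest form of this substitution can be illustrated with a minimal Python sketch; the letter/symbol pairs used here are only examples taken from the article, not any platform's actual list.

```python
# Minimal sketch of look-alike substitution, the simplest form of Algospeak.
# The letter/symbol pairs are illustrative examples, not a real platform list.
def code_word(word: str, letter: str, symbol: str) -> str:
    """Replace one letter with a look-alike symbol."""
    return word.replace(letter, symbol)

print(code_word("dick", "i", "1"))        # d1ck
print(code_word("lesbian", "s", "$"))     # le$bian
print(code_word("depression", "o", "0"))  # depressi0n
```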

Using Algospeak against "shadowbans"

Many platforms, such as TikTok, try to prevent users from seeing hate speech, insults, or sexualized or extremist posts by automatically detecting certain words. In the past, however, content that did not violate any guidelines has also been hidden. Research by NDR, WDR and Tagesschau in 2022 showed that comments containing terms such as "gay", "homosexual", "LGBTQ", "Auschwitz" and "National Socialism" were sometimes not displayed or were blocked, even though they were unproblematic in terms of content or even educational in nature. Users were not informed about the restrictions on their posts. This is referred to as a "shadowban": posted content is still visible to the creator, but not to other users. The posts therefore have less reach, usually without the platform operator making this clear.
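
Since the platforms do not disclose how their automatic detection works, the following Python sketch is only a simplified illustration of the underlying problem: a plain word list cannot tell hateful and educational contexts apart.

```python
# Simplified illustration of naive keyword filtering (not any platform's
# actual system). A plain word list is blind to context.
BLOCKLIST = {"lgbtq", "auschwitz"}  # example terms from the research cited above

def is_hidden(comment: str) -> bool:
    """Hide any comment containing a listed word, regardless of context."""
    words = comment.lower().split()
    return any(word.strip('.,!?"') in BLOCKLIST for word in words)

# An educational comment is hidden just like a hateful one would be:
print(is_hidden("Today we commemorate the liberation of Auschwitz."))  # True
```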

To avoid shadowbans or even the deletion of posts, users have started to modify terms. A Washington Post article in 2022 made the term "Algospeak" for this behavior more widely known. Some Algospeak terms are easy to "decode": only a single character is swapped, for example in "D1CK" (dick), "Le$bian" (lesbian) or "Depressi0n" (depression). The situation is different when a well-known word stands for something completely different. During the #MascaraTrend in 2023, for example, users on TikTok used the beauty product as a code word to report on their experiences of sexual abuse.

Algospeak in problematic and anti-democratic contexts

Algospeak is not always used to explain and discuss sensitive topics or controversial content. Some code words and emojis are used to sexually harass people, to normalize self-harming behavior, or to spread hate and right-wing extremist content.

Women in particular are often sexually harassed on social media without the platforms taking action. Harassers mainly rely on supposedly harmless emojis: a fire emoji stands for "You're hot!", three water drops mean "ejaculation", and anyone who writes "Show cherry emoji!" wants breasts to be shown.

People also come together online in communities to discuss self-harming behavior or suicidal thoughts, and sometimes even to encourage them. Certain code words and emojis have become established in these contexts as well. Instead of writing "suicide", some users use the term "sewer slide"; instead of talking about death, the word "unalive" is used. Emojis such as the knife, scissors or razor stand for self-harm, and a barcode emoji for scars caused by self-harming behavior. You can find a detailed article on this from the Canadian media literacy initiative "The White Hatter".

The fact that right-wing extremist ideologies are often spread not openly but in "disguise" is nothing new. The Amadeu Antonio Foundation refers to this as "detour communication". On social media, some disseminators of anti-democratic and anti-Semitic content are primarily interested in gaining a wide reach, which allows them to reach people who have not previously come into contact with misanthropic ideologies. The strategy usually goes beyond Algospeak and is more complex. For example, there is a TikTok trend that uses humorous videos and memes to "prove" that gnomes really exist. In these circles, the gnome serves as a right-wing extremist symbol: on the one hand, gnome hats are reminiscent of medieval Jewish hats; on the other, the garden gnome stands for an "Aryan" way of life. The blue heart emoji, used to express sympathy for the AfD, has also become widespread. In the USA, by contrast, the blue heart is a sign of support for the Democratic Party.

Overview with examples of Algospeak

We have put together some examples for you in the graphic. You can download and print out the overview.

Does Algospeak really make sense?

Users hope that Algospeak will keep their reach from being restricted. However, it is not clear whether this approach is even necessary or effective. Platforms have not yet given the public comprehensive insight into how the automatic recognition of content works or into the exact criteria used for moderation. As a result, users have to speculate about which content could be affected. This can also lead to over-caution, with users coding terms even though there is no reason to do so.

It is also questionable how effective a single coded letter is at preventing automatic recognition. After all, it can be assumed that a filter searching for certain words can easily be expanded to include new terms. If a platform really wants to take restrictive action against certain content, the commonly known Algospeak terms on the topic would certainly be recognized as well. For example, a search for "sewer slide" on TikTok now triggers the same references to suicide help services as a search for "suicide".
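
How little a single swapped character protects against an extended filter can be shown with a short Python sketch; the character mapping and word list are, again, only illustrative assumptions.

```python
# Sketch: a word filter extended to catch common Algospeak variants by
# undoing look-alike substitutions before matching. Purely illustrative.
LOOKALIKES = str.maketrans({"1": "i", "0": "o", "3": "e", "$": "s"})
BLOCKLIST = {"suicide", "sewer slide"}  # known code words can simply be added

def is_flagged(post: str) -> bool:
    """Normalize look-alike characters, then match against the word list."""
    text = post.lower().translate(LOOKALIKES)
    return any(term in text for term in BLOCKLIST)

print(is_flagged("su1c1de"))      # True - the substitution is undone
print(is_flagged("sewer slide"))  # True - the code word itself is listed
```
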
Thanks to the use of artificial intelligence, semantic relationships can also be recognized automatically. Whether the term "Auschwitz" is being used in the context of Holocaust denial or in a post commemorating the liberation of the concentration camp could then also be determined in an automatic check. However, as already mentioned, little is known about how exactly automatic detection and moderation are used on the individual platforms.
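
To give a rough idea of the principle, the following sketch compares two posts against a reference phrase using text embeddings. It assumes the open-source sentence-transformers library and a freely available model; since the platforms' real systems are not public, this only illustrates the general idea, not any platform's method.

```python
# Rough sketch of context-aware checking via text embeddings. This assumes
# the open-source sentence-transformers library; it is NOT how any specific
# platform works, merely an illustration of the principle.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
reference = model.encode("Holocaust denial", convert_to_tensor=True)

posts = [
    "The Holocaust never happened.",  # denial
    "Today we commemorate the liberation of the Auschwitz concentration camp.",
]
for post in posts:
    score = util.cos_sim(reference, model.encode(post, convert_to_tensor=True))
    print(f"{score.item():.2f}  {post}")
# The denial post typically scores much closer to the reference phrase,
# even though both posts touch on the same historical topic.
```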

Further information from klicksafe