Algorithmaxxing

Originally published in Kernel Mag, March 2024

There is no organization. There is no manifesto. To optimize content for the algorithm, or “algorithmax,” a creator must adapt as quickly as TikTok does. Users must keep up with an unseen opponent; we trade in whispers, signals, and clues as we try to decode what the algorithm pushes, what it suppresses, what it favors, and how it controls both what we express and what we consume.



There are several common practices rumored to boost algorithmic advantage: publishing content at certain times of day, writing #fyp in the caption, or playing a trending song low in the background of your video. Employing tactics like these to boost content is nothing new. Instagram users discovered long ago that posting a selfie increases engagement, along with posting consistently, utilizing multiple features on the platform, and interacting with other users’ posts. Engagement optimization strategies have been plotted ever since Instagram switched from a chronological feed to an algorithmic one. In the current digital era, TikTok is the new arena for the game of algorithmaxxing.


* * *


TikTok’s black box of a content moderation strategy has led users to determine amongst themselves what is permitted and to devise their own hacks around it. To keep their content algorithmically viable and avoid content flags and shadow bans, users modify their output so it won’t trip TikTok’s moderation filters. This can be seen as spiteful compliance, as users simultaneously rebel against and conform to the platform’s policies.


One of the most common manifestations is the increasing popularity of “algospeak” across major social media sites. This phenomenon is an evolving form of communication, a creative response devised by users to bypass content moderation filters. Algospeak is a version of in-group vernacular that exists as a natural progression of language; we divide ourselves into social groupings that range in scale from couples to entire ethnic groups, each creating its own dialect, slang, euphemisms, or code words. This linguistic resilience is representative of our natural, human capacity to adapt language in order to express ourselves effectively, even when the constraints we face are occurring in the virtual sphere.




Distinct from but inspired by leetspeak, algospeak involves myriad linguistic adaptations that modify, censor, or stand in for words that might otherwise be flagged by a moderation filter. Where leetspeak simply swaps letters for numbers, algospeak employs additional layers of obfuscation, incorporating code words or emojis to evade more sophisticated moderation. While algospeak is the de facto name that first surfaced on Twitter, linguists Kendra Calhoun and Alexia Fawcett prefer to call the phenomenon “linguistic self-censorship.” This way of speaking could also be thought of as an online form of avoidance speech, a style of language used in many cultures, but perhaps most overtly in Australian Aboriginal and Austronesian languages. Avoidance speech is the practice of speaking in a restricted style in the presence of certain relatives or company, often as a sign of respect or due to cultural taboos. Aspects of this practice are informally present in most cultures, including in North America: a person might omit profanity or slang in the presence of elders, code-switch in an environment that feels oppressive, or adopt a more professional cadence in a job interview. These are all forms of modified speech we use in the presence of certain interlocutors; online, we restrict our speech under the domineering presence of the algorithm.
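

To make the distinction concrete, consider a minimal sketch of how a naive keyword filter might treat the two strategies. Everything here is a hypothetical illustration, not TikTok’s actual moderation logic: the blocklist, the leet character map, and the filter itself are toys.

```python
# Toy moderation filter, for illustration only -- not TikTok's real system.
LEET_MAP = str.maketrans("013457$", "oleasts")  # 0->o, 1->l, 3->e, 4->a, 5->s, 7->t, $->s
BLOCKLIST = {"sex", "kill"}  # hypothetical flagged terms

def is_flagged(text: str) -> bool:
    """Flag text when a blocklisted word appears after leetspeak normalization."""
    normalized = text.lower().translate(LEET_MAP)
    return any(word in BLOCKLIST for word in normalized.split())

print(is_flagged("s3x work"))    # True: a lookup table reverses leetspeak mechanically
print(is_flagged("seggs work"))  # False: a coined word matches nothing on any list
```

A character substitution can be undone by a one-line lookup table, but a freshly invented word defeats the filter until someone adds it to the list, which is one plausible reading of why algospeak keeps mutating.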


In order to keep up with the lightning-fast pace of the app, TikTok algospeak must constantly evolve. There are, however, certain phrases that have reached universal understanding among users: “SA” instead of sexual assault, “unalive” in lieu of kill or die, “seggs” for sex, “🌽” for porn, “🥷” instead of the N-word. Many instances of algospeak rely on users creatively choosing their own words to fit a specific purpose; for instance, a sex-work creator introduces herself with “I’m a mattress actress, OF girly and SW.” Algospeak is necessarily malleable and non-formulaic as it shifts to fit the needs of different niches, subgroups, or cultures.


Since TikTok is an audio-visual platform, algospeak often extends beyond words into emojis, sounds, or body language. For instance, creators of color might show the inside of their palm instead of saying “white people.” While “white people” isn’t necessarily regarded as a controversial or commonly flagged phrase, the gesture is a testament to the playful linguistic creativity of social groups. We develop slang, gestures, and in-group vernacular online just as we have IRL. Algospeak doesn’t only serve to evade filters or optimize audience reach; it is also a natural extension of the languages we create within social, racial, class, or generational groups.


* * *


Relating algospeak to particular identity groups prompts a discussion of TikTok’s history of suppressing content related to marginalized communities. While the company has never outright admitted to flagging words like “gay” or “lesbian,” it did apologize in 2020 for censoring LGBTQ content, as well as videos posted by fat or disabled creators, supposedly as part of an effort to reduce bullying on the platform. Before the admission, this practice was suspected by users who received warnings of content violations, saw videos taken down, had audio removed, or were subject to the more innocuous shadow ban. Earlier that year, documents were released exposing TikTok for instructing its moderators to suppress posts that contained “abnormal body shapes, ugly facial looks, and shabby, dilapidated environments, such as, but not limited to: slums, rural fields, dilapidated housing.” While videos that fit these criteria don’t violate TikTok’s Community Guidelines, they were suppressed to give the app an aspirational appeal that attracts new users.


Users of color, LGBTQ+ folks, sex workers, and disabled creators often face disproportionate censorship and content suppression on social media platforms, which may be a result of biased moderation algorithms. As a result, they adapt their language to continue expressing their experiences, issues, and identities without triggering these moderation systems. This digital linguistic adaptation is a form of both resistance and resilience, allowing users to maintain their visibility and voice in virtual spaces where they might otherwise be silenced.


User Seansvv posted a TikTok explaining this dynamic: “A lot of these [flagged] colloquialisms are from BIPOC, LGBTQIA, or marginalized communities. They're subjects that we are very well-versed in speaking [about] and navigating professionally. Yet, these systems flag us because they are universal flags placed on any person who talks about them.” Certain terms are flagged regardless of their context, whether the term is simply being referenced or even reappropriated by a member of the affected group.


Beyond the rare admissions we receive from TikTok directly, most suspicions of algorithmic suppression, and the strategies that counteract it, are anecdotal. While there has been limited research on this particular subject, researcher Daniel Klug published a study in 2021 that analyzed the efficacy of certain algorithmaxxing tactics, and another in 2023 that reported on algospeak and content moderation evasion. In the 2023 work, Klug and fellow researchers Ella Steen and Kathryn Yurechko found that using algospeak, along with posting at certain times of day, correlated with higher engagement, while other practices like using #fyp and leetspeak were less effective. The researchers write:


Many creators of sex education videos realized that TikTok’s algorithmic content moderation easily figured out Leetspeak whereas new words were more difficult to comprehend… We can see that participants carefully evaluated the algospeak they used and eventually used different algospeak or invented new terms to improve its effectiveness.


Users tend to rely more heavily on their own experiences, as well as the testimony of other members within their online communities, to determine the best route to algorithmaxxing.


* * *


Is the threat of content suppression real or perceived? While it may be true that certain terms were born out of necessity, the phenomenon has become so ubiquitous that it seems to overstate the threat of content moderation.


Watching a user who posts heavily political, sexual, or racial content, it’s easy to conclude that these self-censorship measures verge on the superfluous or conspiratorial. For instance, when I search for the term “sex work,” the results are filled with relevant content, some of which uses the term freely and some of which includes censored versions. This suggests either that the algorithm is extremely ineffective at moderating content that includes the word “sex,” or that the word might not be flagged at all. On the other hand, searching for a word might surface its corresponding results, but that doesn’t necessarily mean the content is being algorithmically pushed on TikTok’s For You Page.


Earlier this year, a nature documentary clip of a blue whale surfaced on my FYP. Following the impulse to go down the rabbit hole, I went to the search bar and typed in “blue whale.” Confusingly, the search offered no results and instead directed me to a suicide hotline. When I Googled “blue whale TikTok,” I was met with results about the “Blue Whale Challenge,” a social media “game” that encourages users to engage in self-harm or suicidal behaviors. TikTok had initially wiped all search results for “blue whale,” but eventually narrowed the censor to apply only to searches that also feature the keyword “challenge.” This potential for double meaning drives the fear of being misinterpreted, particularly when the moderating is done by AI, and that fear can prompt users to apply censored spellings and algospeak even to innocuous content.


A common practice is to write an adapted word in the captions only, while the spoken content of the video remains uncensored. For example, the video will feature the user saying “sex work” while the captions read “seggs work.” In their 2023 study, Klug, Steen, and Yurechko found that “all participants assessed the details of videos they had previously restricted to infer that TikTok’s content moderation mainly scrutinized written text. This encouraged them to use algospeak mostly in text rather than in spoken language.” This caption-specific adaptation gives the impression that TikTok might be moderating content more heavily through its auto captions, an accessibility feature launched in 2021 that has grown increasingly popular on the app ever since. Users can review the automatically generated captions before posting and manually edit out or respell the words they expect to be flagged. Another tactic is to sound out the algospeak spelling so that it lands in the auto captions without any manual editing; an example is the purportedly flagged word “lesbian,” which has been given the algospeak equivalent “le$bian,” or, when said phonetically, “le dollar bean.”
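

As a rough sketch of that caption-side workflow: the respelling amounts to a find-and-replace over the auto-generated transcript. The substitution table below is assembled from examples quoted in this essay, and the function is a hypothetical illustration of the manual edit a creator performs, not any real TikTok feature.

```python
import re

# Hypothetical substitution table, assembled from examples quoted in this essay.
ALGOSPEAK = {
    "sex": "seggs",
    "lesbian": "le$bian",
    "kill": "unalive",
}

def rewrite_caption(transcript: str) -> str:
    """Respell expected-to-be-flagged words in the caption text only;
    the video's spoken audio is left untouched."""
    pattern = re.compile(r"\b(" + "|".join(ALGOSPEAK) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: ALGOSPEAK[m.group(0).lower()], transcript)

print(rewrite_caption("I talk about sex work as a lesbian creator"))
# -> "I talk about seggs work as a le$bian creator"
```

The asymmetry the study participants inferred is visible in the design: written text is trivially machine-readable, so that is where the respelling happens, while the audio track stays legible to human viewers.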


Recently, a common trend has surfaced in which users post “🍉” in discourse involving the Israel-Palestine conflict. Because the Palestinian flag has been banned in many circumstances in Israel and the West Bank since 1967, Palestinians and allies have long used the watermelon as a coded referent. User Marajazzcabbage posted a TikTok discussing the algorithm’s suppression of pro-Palestinian content. The video describes her effort to like and comment on every Palestine-related TikTok, only to find, upon double-checking, that the videos had not registered her likes. In her analysis, “the censorship is real. They’re literally taking away our likes so these videos don’t get boosted. They’re scared shitless of us. It lowkey makes me smile because they’re terrified. They’re literally running around… trying to get us to shut up by censoring us.”


The comments, on the other hand, reflect inconsistent experiences: several echo her frustration, while others express skepticism about the root cause or offer alternative theories.


“yes! ok it's not just me.”


“Im not sure this is about Palestine, this happened very often for me even before everything. Liking videos takes up space, so tiktok unlikes videos”


“That’s so wild??? I am not having this issue 🥴”


Another user replies to the above comment: “tiktok is like, different for every person every month so maybe its that”


In the video, every instance of the word “pro-Palestine” is edited to read “watermelon” or “🍉” in the auto captions. More curiously, the TikTok itself is captioned: “try to only engage with watermelon content bc they're gonna try [to] shove other content to your FYP bc you liked it #freepalestine🇵🇸❤️ #freepalestine #humanity.” The inclusion of #freepalestine and the Palestinian flag shows that the user isn’t employing every possible modification to avoid content filters. This might indicate that “🍉” has become a colloquial substitution no longer intended to evade moderation filters; alternatively, it may suggest that self-censorship is as hard to implement consistently as TikTok’s own moderation system. Marajazzcabbage, a fervently pro-Palestine poster, diligently censors the captions in most of her TikToks: “izzy” in reference to Israel, “Olive Town” instead of Gaza, “jello slide” for genocide, “banana” in lieu of shadow-banned. Despite all of this vernacular hacking, her most popular videos are those that include #freepalestine in the caption, or even ones that don’t censor the words Israel or Palestine at all.


When asked how TikTok censorship has affected her posts, Marajazzcabbage commented:


Censorship is absolutely something I’ve been experiencing. Especially since I caption my videos for my audience. My views will tank if I accidentally forget to replace the word Palestine or Gaza with something else. And based off of other user’s experiences as well, I’ve learned to change my wording on certain things. My content has indeed been removed before. My previous account was banned for a little while as well because of my Pro Palestine content. I was able to get it back after appealing but not everyone is that lucky. I remember posting a video just encouraging people to yell Free Palestine in random spaces and then added clips of me doing just that. In one of the clips I screamed, ‘The United States is funding a genocide. Ceasefire now!’ and the video got taken down for ‘Misinformation.’


The suggestion that algospeak overstates the threat of censorship is not to say that creators don’t experience any form of suppression or silencing on the app. Rather, it is a testament to the mystery of TikTok’s content moderation system. When a video gets flagged, users cannot discern whether it was systematically removed by the platform’s moderation filters or reported by another user; they simply receive a notice of removal citing a reason for the violation. These reasons are often vague, citing broad categories like “minor safety,” “illegal activities,” “nudity and sex,” “graphic content,” and, most commonly, simply “Community Guidelines.” With such overly general terms and limited pathways to appeal, users are left to deduce their exact violation themselves. Often the verdict appears to be a misinterpretation by AI moderators, as the video may not feature anything that violates community guidelines. Other times, the creator might have covered a controversial topic like politics or abuse, or the video may contain words that could be mistaken for something derogatory or offensive, prompting the creator to either reshoot entirely or modify the auto captions.


This unpredictable dysfunction of the app sends us into a frenzy trying to identify causal connections. Inferring these causal relationships is the only way we can gain an understanding beyond what we gather directly through experience; the impulse to infer is how we make sense of the world, and how we defend ourselves against it. And sometimes our subjective experiences do provide reasonable justification for these connections: if TikTok isn’t saving my likes, and I’m liking politically controversial content, it’s plausible to conclude that TikTok is purposefully interfering with the content I consume. We sense a vague threat and try our best to identify its weight, impact, and purpose so that we may better strategize our retaliation.


While content hacking and algorithmaxxing might lend themselves to perpetuating Big Tech conspiracy theories and paranoia, there is no doubt that these strategies were born out of a certain necessity. A platform as large as TikTok is home to countless niches and subgroups, many of them marginalized and threatened by censorship, suppression, or bans. The practice of bypassing content filters is rooted in the desire for self-expression, but also in the need to broadcast one’s narrative. We send a piece of ourselves into the app, and it gets swept away into the algorithm. When a person or community has felt silenced or suppressed in the social sphere, the virtual offers a renewed opportunity for representation.


Our desires to be seen, to convene with community, and to defy oppressive authority all shape how we use social media. As algorithms and moderation systems change, we adapt to them to better serve our needs as users. In that sense, the company-to-user feedback loop is a perpetual cycle of ideation and creative resilience. As Big Tech companies continue to tighten the limits of what users can post or say, creators are all the more inspired to innovate within those confines. Of course, our solutions are never ideal, only practical. No matter our methods of adaptation or modification, we can never use our apps as we could if we owned them.



