
Those absurd AI cartoons aren’t harmless fun – they’re spreading misogyny

They may resemble playful children’s cartoons, but many viral AI videos are built on sexist storylines. Beneath the bizarre plots of cheating foxes, gold-digging cats and coy papayas, what looks like harmless entertainment – or AI slop – is shaping how women and girls are seen.

Viral AI videos mimic children’s cartoons but embed sexist tropes. (Photos: TikTok/@Happytoon_kids, TikTok/@ojoenlafama)

06 May 2026 07:28AM (Updated: 06 May 2026 09:42AM)

A female fox cheating on a bear with a cobra and birthing a baby cobra. A cinnamon stick slurping up a cafe latte’s brain and being poisoned by her. A gold-digging cat dumping her husband and child. A banana impregnating two sultry grapes and then abandoning his babies for a coy papaya.

These four videos alone have amassed more than 2 million likes on TikTok.

Across social media platforms such as TikTok and YouTube, AI-generated slop is subverting and sexualising children’s cartoons. And many of these clips are going viral.

If the idea of grapes, cats, foxes and teacups with breasts is not disturbing enough, these animated characters also act out misogynist plots. Common themes include cheating, paternity tests, humiliation and violence. Female characters are stereotyped as gold-digging, dishonest or vapid.

The combination of child-like cartoon characters and soap opera themes is so absurd that many netizens watch to the end – and this prompts the algorithm to feed them more such videos.

It may seem harmless – a little mindless scrolling – but this is not just brain rot. Experts warn that such videos can shape gender dynamics and social norms over time.

SHOCK CONTENT SELLS

Extreme content and polarising views on social media are not new. Algorithms push such content because they elicit stronger emotional responses and lead to longer engagement. However, the ease of AI content creation has taken this to a whole new level.

Before generative AI, an animated short video took days of scripting, storyboarding, illustration, animation, voice recording and editing, and involved multiple content creators. Now, it can be produced by a single person in minutes with a few AI tools that cost less than a couple of hundred dollars in total monthly subscription fees.

As a result, many content creators are flooding social media with AI content, which is pushed out by algorithms. According to one report, more than 20 per cent of videos shown to new YouTube users are AI slop.

“Most social media platforms reward volume and visibility, not content quality,” said Natalie Pang, head and associate professor at the Department of Communications and New Media at the National University of Singapore.

“Misogynistic tropes can provoke emotional reactions which contribute to one’s visibility (on social media). Algorithms are often trained to recognise trending content, so content that triggers many reactions, and is consequently shared and discussed, is often amplified,” Assoc Prof Pang explained.

An animated short video that may have taken a team days to produce can now be made by a single person in minutes using AI tools. (Photo: iStock/Dragos Condrea)

Your attention is being monetised by AI content farms. According to a 2025 study of 15,000 of the world’s most popular YouTube channels, 278 of them consist entirely of AI slop. Together, these channels amassed 63 billion views and generated more than US$117 million (S$149 million) in annual revenue.

MASKING MISOGYNY AS HUMOUR

Most of us would not watch a video of a cheating woman being humiliated. So why would we view a clip with the same plotline featuring fruits?

The humour and absurdity of exaggerated characters, bizarre storylines and awkward narration can act like a social camouflage, said Dr Annabelle Chow, clinical psychologist at Annabelle Psychology.

“Packaging degrading content in absurd or child-like formats can make it easier to accept. It feels playful, harmless and easy to dismiss as 'just a joke'. People laugh before they stop to think about what is being said,” she added.

Research has also found that sexist humour can make such content feel less offensive or harmful, so women are less likely to call it out, she noted. And even when people do want to confront such content, they can be dismissed as too serious, too sensitive, or as someone who simply “doesn’t get it”, she added.

“But humour does not reduce the harm. When degrading ideas are repeatedly packaged as funny, people can become more tolerant of them and might normalise extreme attitudes as legitimate or plausible,” she cautioned.

Assoc Prof Pang added: “Additionally, it is relatively easy to create pseudonymous accounts – so there is minimal accountability for creators who put out misogynistic content.”

AI-generated slop is sexualising children’s cartoons on social media platforms such as TikTok and YouTube, and many of these videos are going viral. (Photo: iStock/Kenneth Cheung)

Such videos can stoke sexism, especially in a world where gender attitudes among young men and women are becoming increasingly polarised.

“When we encounter the same ideas again and again, they start to feel familiar, and familiarity can be mistaken for truth,” explained Dr Chow.

“Over time, if messages like 'women are unfaithful' or 'women are only interested in money or status' are repeated often, they may start to feel less jarring, and from there, can begin to seem more plausible or representative, even if people did not explicitly endorse them at first,” she said.

“AI videos supercharge these effects. (Their) sheer volume can help make these ideas seem more widespread than they really are,” she added.

INFLUENCING CHILDREN’S VIEWS ON GENDER

What is more alarming is that some of these videos are tagged as #kidscartoon, #storyforkids, #kidshorts and #fruitstory. And sometimes, algorithms inadvertently push such content to children.

“Algorithms don’t typically target younger audiences to push harmful content, including misogynistic content,” clarified Assoc Prof Pang.

She said that while social media platforms have policies to address safety concerns for younger audiences, they may still end up pushing misogynistic content to these viewers because of the way such content is designed.

There is a much greater volume of synthetic content today and misogynistic content can be embedded in a video’s soundtrack or a series of images, she explained. And instead of being overtly misogynistic, it may contain subtle tropes that make it difficult for algorithms to detect, she said.

AI slop videos often carry hashtags such as #storyforkids, #kidscartoons and #aifoodstories, even though the content is unsuitable for children. (Photos: TikTok/@ojoenlafama, TikTok/@ai_devil1; Art: CNA/Jasper Loh)

Sometimes, younger audiences may also be pushed such content if accounts they follow have liked or shared it, Assoc Prof Pang added.

Children may watch these videos because the content “follows the visual and emotional grammar of children’s shows”, said Dr Chow.

At an age where they have yet to develop their gender identity and critical thinking, this can be particularly damaging.

“Exposure helps shape children’s views of gender by giving them repeated cues about what boys and girls are like, how they are supposed to behave, and what roles they are expected to occupy.

“If a child is repeatedly exposed to content that portrays girls as manipulative, selfish, dramatic, lesser or blameworthy… it reinforces stereotypes, shapes peer judgments, and influences what children come to see as normal or expected for girls and boys,” she explained.

And these can affect children earlier than many adults assume, Dr Chow warned.

Children begin to understand gender labels like “boy” and “girl” when they are 18-24 months old and start to form their own gender biases and scripts in preschool. (Photo: iStock/Userba011d64_201; Art: CNA/Jasper Loh)

Studies have found that many children begin to understand and use gender labels like “boy” and “girl” by 18 to 24 months of age and use them to organise what they notice about the world, said Dr Chow.

“However, while a preschooler may not fully understand misogyny as an abstract social concept, they already begin forming their own personal gender biases and scripts,” she added.

“Therefore, the issue is not whether a four-year-old fully understands harmful messages in AI videos or the themes such as cheating, paternity tests or betrayal – they probably do not.

“The issue is that they may still absorb the underlying script: who is framed as bad, who is blamed, who is mocked, who is punished, and who is seen as trustworthy or untrustworthy.”

Young children may not understand themes like cheating and paternity tests but will absorb the underlying script of who is blamed or punished, and who is seen as trustworthy. (Photo: iStock/miya227)

Dr Chow suggested helping children and young teens build media literacy by asking questions like “Is this being kind or mean?”, “How do you think that person feels?”, “Who is this video making fun of?” or “What assumptions is this video making?”.

Platforms are trying to regulate such content. TikTok is working to label AI-generated content more accurately and announced in November 2025 that it is testing a new control that would allow users to adjust the amount of AI content in their For You feeds.

Social media platforms also have policy guidelines that prohibit harassment, hate speech, sexual exploitation and non-consensual imagery. However, most misogynistic content does not cross these boundaries and hence slips under the radar.

“Platform moderation and labelling of AI-generated content is important and we need to continue to push platforms to higher standards,” said Assoc Prof Pang.

“What worries me most is that while these AI videos look silly, colourful, absurd and childish, the ideas underneath are not new. They can echo the same misogynistic messages found in more seriously produced content, such as pseudo-intellectual podcasts, debates or influencer content,” added Dr Chow.

“When those ideas are repeated across different formats, they can start to feel less fringe and more like part of everyday reality,” she said.

So let’s not dismiss AI slop as just for laughs or blindly consume such content. Misogyny is not cute, even if it wears a strawberry face.

CNA Women is a section on CNA Lifestyle that seeks to inform, empower and inspire the modern woman. If you have women-related news, issues and ideas to share with us, email CNAWomen [at] mediacorp.com.sg.

Source: CNA/pc