
TikTok is ‘bombarding’ children with ‘harmful’ videos


Thirteen-year-olds are being exposed to an “extraordinarily harmful” amount of mental health content on social media, including videos that experts believe can drive young teens to depression or suicide. For an investigation by The i Paper into the social media content available to children, a fictitious TikTok account was created for a typical 13-year-old boy. Within minutes, he was being sent a barrage of disturbing videos questioning his mental state, and his account was bombarded with potentially dangerous content at a rate of one video every two minutes – without the user ever searching for information on mental health issues. The “boy’s” account was shown videos about feelings of depression or loneliness, including references to suicide, with the first such video appearing less than 90 seconds after he logged into the app.

Seven videos addressing depression appeared in less than 45 minutes – one every six minutes. Aggressive “motivational” videos, popularised by the controversial influencer Andrew Tate, also appeared repeatedly. Within 45 minutes, the account had been served twelve videos about “toxic masculinity” that promoted the importance of hiding emotions and building physical strength.

The revelations are part of a wider investigation into children’s online safety, which also found that the Instagram account of a fictional 13-year-old girl was shown videos that oversimplified conditions such as ADHD and autism. Psychologists worry that some children who view this content may mistakenly come to believe they have these complex disorders, causing anxiety and distress. The findings suggest that other teenage accounts may have been similarly targeted, and have prompted calls from MPs and campaigners for social media companies to act urgently to tighten restrictions on children’s accounts.

Experts believe TikTok’s algorithm repeatedly promotes videos about depression to accounts of 13-year-olds because data suggests young boys are more likely to engage with this content, or seek it out.

Helen Hayes, chair of the House of Commons Education Committee, said: “The overwhelming evidence uncovered by this inquiry shows how the most popular social media platforms continue to push content towards children that is currently legal but harmful, highly addictive, or spreads misinformation about their well-being.” Ian Russell, the father of Molly Russell, a 14-year-old girl who died from an act of self-harm after viewing harmful content online, wrote to the Prime Minister, Keir Starmer, last weekend warning that the UK is “going backwards” on online safety. “The torrents of content that children see will soon become a flood – a digital disaster,” he said.

He is the chairman of the Molly Rose Foundation, which, along with two leading psychologists, analysed the videos shown to the fictional children’s accounts created for the investigation. They said the evidence raised serious concerns and called on social media companies to ensure that all teenage accounts are automatically set to the most restrictive level of content. Currently, apart from on Instagram, it is up to teenagers or their parents to activate these settings.

Andy Burrows, head of the Molly Rose Foundation, said: “When viewed in succession, this content can be extremely harmful, particularly for teenagers struggling with mental health issues, for whom it can reinforce negative feelings and make them feel hopeless. When setting obligations for child safety, Ofcom should consider how algorithmically suggested content can form a toxic mix.”

In response to the investigation, Ofcom, which will soon gain powers to fine social media companies if they breach new legal rules on online safety, criticised the platforms for using algorithms to push such content.

A spokesperson said: “Algorithms are a major pathway to harm… We expect companies to be ready to meet their new child safety obligations when they come into force.” Many people turn to social media for help with their mental health. However, while the videos promoted on the teenage boy’s profile amplified feelings of sadness, they did not direct viewers to sources of help.

One such video showed a school notebook on a desk with a soft female voice in the background saying: “Depression can be invisible. It’s like going to work or school and excelling at everything you do, but then falling apart when you get home. It’s like being the most fun person in the room while feeling empty inside.”

Other posts featured male voices screaming about feeling like a failure. In one, a man sobbed as he said he wanted to kill himself: “Do you know how hard it is to just tell someone… that you’re not going to be here anymore?”

Another video played upbeat music while showing a message: “I feel like I’m nothing… I’m exhausted, lost, struggling to live. I’m never happy, I’m always just pretending. I’m always letting people down, letting them down, making mistakes, feeling like there’s something wrong with me.”

Clinical psychologist Dr Nihara Krause, who has created several mental health apps and works with Ofcom, said the impact on a teenager watching these videos repeatedly was very serious. “If something comes up every six minutes and you’re in a very vulnerable state of mind, it can become appealing to a young person in a terrible way,” she said.

Ofcom’s latest report found that 22 per cent of children aged eight to 17 lie about being over 18 on social media apps.

But the investigation shows that even when a young person logs into an app with a child’s account, inappropriate content is pushed at them without their asking for it. TikTok was sent links to the harmful content displayed on the boy’s account and removed some of it, acknowledging that it had violated its safety rules.

A company spokesperson said: “TikTok has industry-leading safety settings for teens, including systems that block content that may not be appropriate for them, a default 60-minute daily usage time limit and tools that parents can use to set additional content restrictions.” Instagram did not comment, but recently launched Teen Accounts, which it advertises as having built-in protections for this age group. A government spokesperson said: “Children need to be protected online. Over the coming months, the Online Safety Act will put in place strong protections for children and hold social media companies accountable for the safety of their users.”

The i Paper used a new phone and created a new email address and new social media accounts for the fictional children used in the experiment. Their age was set to 13 across the apps, on the device and when the email address was created; users must be at least 13 to open an account on these apps. Neither the accounts nor the device was used to request specific content. Content shown to the accounts was watched or skipped depending on whether it would be likely to appeal to an average 13-year-old, based in part on Ofcom research into what is popular online with children of this age group. A total of 45 minutes was spent on TikTok and Instagram during several short sessions at different times when a child might use them, such as after school.
