Exposing the Dark Side: How Social Media Algorithms Harm Boys with Violent Content

In the ever-evolving digital landscape, social media platforms have become a powerful influence on the lives of young people. However, recent revelations have raised serious concerns about the content being shown to teenage boys on these platforms.

TikTok Analyst Discovers Troubling Content

Andrew Kaung, a former analyst at TikTok, has brought a disturbing trend to light: some 16-year-olds on the platform were being shown violent and pornographic content. His findings have sparked a broader conversation about the role of social media algorithms in shaping the experiences of young users.

Differentiated Content for Teenage Girls

Kaung, who previously worked at Meta, the parent company of Instagram, also found that teenage girls were recommended markedly different content based on their interests. While boys were being served violent and harmful videos, girls were steered towards other material, a stark disparity in how the algorithms target users.

AI Tools and Human Moderation

Social media companies like TikTok and Meta rely heavily on artificial intelligence (AI) tools to remove harmful content and to flag other content for human review. However, Kaung warned that younger users were being exposed to harmful videos before moderation could catch up: many videos were never flagged by AI at all, and were only reported by users after they had already been seen.
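
To make that gap concrete, here is a minimal sketch of how an AI-assisted triage pipeline might work. The classifier score, thresholds, and queue are hypothetical, not TikTok's or Meta's actual systems; the point is that anything routed to human review can remain visible in the meantime.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Video:
    video_id: str
    harm_score: float      # output of a hypothetical AI classifier, in [0, 1]
    user_reports: int = 0  # reports filed after users have already seen it

REMOVE_THRESHOLD = 0.95    # auto-remove: the classifier is highly confident
REVIEW_THRESHOLD = 0.60    # uncertain: send to a human moderator
REPORT_TRIGGER = 3         # enough user reports also forces a review

def triage(video: Video, review_queue: List[Video]) -> str:
    """Decide what happens to a video before (or after) it is shown."""
    if video.harm_score >= REMOVE_THRESHOLD:
        return "removed"
    if video.harm_score >= REVIEW_THRESHOLD or video.user_reports >= REPORT_TRIGGER:
        review_queue.append(video)  # human review happens asynchronously,
        return "pending_review"     # so users may see the video meanwhile
    return "published"

queue: List[Video] = []
print(triage(Video("v1", harm_score=0.97), queue))                   # removed
print(triage(Video("v2", harm_score=0.70), queue))                   # pending_review
print(triage(Video("v3", harm_score=0.10, user_reports=5), queue))   # pending_review
```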

Concerns Over AI Tools

Kaung also raised concerns about the effectiveness and cost of these AI tools. Despite improvements made by TikTok and Meta, younger users remain at risk. The issue has drawn the attention of regulators: the UK's Ofcom has said that major social media companies' algorithms recommend harmful content to children, even if unintentionally.

TikTok’s Safety Measures for Teens

In response to these concerns, TikTok has implemented several safety measures. The company says it employs over 40,000 people dedicated to user safety, plans to invest over $2 billion in safety this year, and proactively removes 98% of content that breaks its rules. Meta, for its part, points to more than 50 tools and resources designed to give teens positive and age-appropriate experiences.

Cai’s Experience with Harmful Content

Cai, an 18-year-old TikTok user, shared his struggle with the platform’s algorithm. Despite his efforts to limit his exposure to violent or misogynistic content, he continues to be recommended such videos. He has noticed that videos with millions of likes can be particularly persuasive to young men his age, further entrenching harmful views and behaviors.

How TikTok’s Algorithms Work

According to Andrew Kaung, the algorithms used by platforms like TikTok are designed to fuel engagement, regardless of whether the content is positive or negative. These algorithms serve up content based on users’ preferences and those of other users with similar demographics, including age and location.
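
As a rough illustration of that idea, the sketch below recommends whatever a user's demographic cohort engaged with most. The cohort keys, video names, and engagement counts are all hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical engagement logs, keyed by (age bracket, location) cohort.
engagement = defaultdict(Counter)
engagement[("16-17", "UK")].update(
    ["boxing_clip"] * 3 + ["fight_compilation"] * 2 + ["prank_video"]
)

def recommend(age_bracket: str, location: str, k: int = 2) -> list:
    """Recommend whatever the user's demographic cohort engaged with most."""
    cohort = engagement[(age_bracket, location)]
    return [video for video, _ in cohort.most_common(k)]

# A brand-new 16-year-old in the UK inherits the cohort's viewing habits,
# regardless of whether those habits are healthy.
print(recommend("16-17", "UK"))  # ['boxing_clip', 'fight_compilation']
```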

The Role of Reinforcement Learning

TikTok’s algorithm uses a technique known as “reinforcement learning”: it observes how users react to different videos and maximizes engagement by recommending more of whatever holds their attention. This approach doesn’t distinguish between harmful and harmless content, which can lead to the inadvertent promotion of violent and inappropriate material.
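
The sketch below illustrates the general principle with an epsilon-greedy bandit, one simple form of reinforcement learning. The content categories and watch times are invented, and TikTok's real system is far more sophisticated; the point is that the update rule sees only engagement, never harm.

```python
import random

categories = ["sports", "comedy", "violent"]
value = {c: 0.0 for c in categories}  # estimated engagement per category
count = {c: 0 for c in categories}
EPSILON = 0.1                         # exploration rate

def pick() -> str:
    if random.random() < EPSILON:
        return random.choice(categories)            # explore
    return max(categories, key=lambda c: value[c])  # exploit

def update(category: str, watch_time: float) -> None:
    """Incrementally average observed engagement (the only signal used)."""
    count[category] += 1
    value[category] += (watch_time - value[category]) / count[category]

# If violent clips happen to hold attention longest, the algorithm learns
# to serve more of them -- engagement is the sole objective.
for _ in range(1000):
    c = pick()
    watch = {"sports": 20, "comedy": 25, "violent": 40}[c] + random.gauss(0, 5)
    update(c, watch)

print(max(value, key=value.get))  # most likely: 'violent'
```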

Challenges in Algorithm Training

The teams responsible for training and coding TikTok’s algorithm don’t always know the exact nature of the videos it ends up recommending. This lack of visibility has contributed to the ongoing problem of harmful content reaching young audiences.

Proposals for Improved Moderation

In 2022, a former TikTok analyst proposed an update to the moderation system aimed at better handling violent and harmful content, including clearly labeling such videos and hiring more specialized moderators to review them. Since then, TikTok has hired more specialist moderators and separated harmful content into dedicated queues for review.
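
A labeled-queue system of this kind could be as simple as the following sketch, where each flagged video carries a harm label that routes it to a specialist review queue. The labels and queue names are illustrative, not TikTok's actual categories.

```python
from collections import deque

# Illustrative specialist queues; labels and names are hypothetical.
SPECIALIST_QUEUES = {
    "violence": deque(),
    "pornography": deque(),
    "general": deque(),  # everything without a dedicated specialist team
}

def route(video_id: str, label: str) -> str:
    """Send a labeled video to the matching specialist review queue."""
    queue_name = label if label in SPECIALIST_QUEUES else "general"
    SPECIALIST_QUEUES[queue_name].append(video_id)
    return queue_name

print(route("v101", "violence"))  # -> 'violence' (specialist moderators)
print(route("v102", "spam"))      # -> 'general'
```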

The growing influence of social media algorithms on young minds cannot be ignored. As platforms like TikTok and Instagram continue to evolve, it is crucial that they prioritize the safety of their users, particularly the younger demographic. While AI tools and human moderation are essential components of content management, there is a pressing need for more sophisticated solutions that can effectively differentiate between harmful and non-harmful content. Only by addressing these challenges head-on can social media companies ensure a safer and more positive online environment for all users.
