As disinformation and hate thrive online, YouTube quietly changed how it moderates content

YouTube, the world’s largest video platform, has recently changed its moderation policies to allow more rule-violating content to remain online. The change was reportedly made quietly in December, with training documents telling moderators that a video could stay up if the offending material made up no more than 50 per cent of its duration – double the threshold under the previous guidelines.

With 20 million videos uploaded each day, YouTube maintains that it regularly updates its guidance and applies exceptions when content is presented in an educational, documentary, scientific, or artistic context. Nicole Bell, a spokesperson for YouTube, stated that these exceptions are crucial for ensuring important content remains available on the platform.

However, in a landscape saturated with misinformation and conspiracy theories, concerns have been raised that YouTube’s relaxation of its moderation policies could lead to the spread of harmful content for profit. This move is not unique to YouTube, as other social media platforms like Meta, which owns Facebook and Instagram, have also scaled back their content moderation efforts.

Imran Ahmed, CEO of the Center for Countering Digital Hate, warns that this trend could result in a “race to the bottom” where hate and disinformation thrive. He believes that platforms prioritizing profits over online safety are contributing to the growth of harmful content.

While YouTube aims to protect free expression, critics argue that the platform’s leniency may allow problematic or false information to proliferate. Matt Hatfield, executive director of OpenMedia, acknowledges the difficulty of moderating content but emphasizes the need to strike a balance between removing harmful material and allowing for free expression.

YouTube’s most recent transparency report showed that nearly 2.9 million channels and more than 47 million videos were removed for violating its community guidelines in the first quarter. Most of these removals were for spam, violence, hateful or abusive material, and child safety concerns.

Ahmed advocates for government regulation to hold companies accountable for the content on their platforms. He points to Canada’s Online Harms Act, which aimed to address online abuse but was ultimately scrapped. Hatfield suggests that regulations should focus on addressing business models that incentivize the spread of harmful content.

While YouTube’s moderation policies have evolved to accommodate a wider range of content, concerns are growing about the consequences of leaving harmful material online. As the debate continues, striking a balance between free expression and protecting users remains a complex challenge for social media platforms.
