Britain Takes Bold Action Against Harmful Online Content on TikTok and Instagram

In a move to safeguard young people from harmful online content, Britain has brought social media giants TikTok and Instagram under regulatory scrutiny through new rules set by Ofcom. The regulations require platforms to rework their recommendation algorithms so that children are not served harmful material, such as content promoting self-harm, suicide, and eating disorders. The code of practice, drawn up by Ofcom under the Online Safety Act, represents a significant effort to clean up social media and search engines, with a particular focus on age verification measures to restrict minors' access to explicit content.

Failure to comply could carry severe consequences for these platforms: fines of up to £18 million or 10% of global revenue, the blocking of their services, and even criminal proceedings against senior executives. With the Online Safety Act, the UK aims to position itself as a global frontrunner in combating harmful online content and to become the safest place in the world to be online, going further than measures in the US and echoing similar legislation in Australia and Europe. Messaging services such as WhatsApp and Snapchat are also affected: minors must give consent before being added to group chats, and they gain more control over their online interactions, including the ability to block accounts and disable comments.

Ofcom's Chief Executive, Dame Melanie Dawes, stressed the weight of these measures, emphasizing tech firms' responsibility to keep children safe online by taming aggressive algorithms that push harmful content. The focus on algorithms follows investigations into their role in spreading dangerous material to minors; TikTok in particular has been criticized for an algorithmic feed that can quickly expose users to potentially harmful content. To strengthen age verification, platforms will need to implement strict measures such as facial recognition technology and photo ID checks to shield children from online risks.

Technology Secretary Michelle Donelan welcomed the measures as essential, underlining the need for platforms to carry out real-world age checks and to fix the algorithmic flaws that expose young people to harmful content. The draft code drew support from NSPCC chief executive Sir Peter Wanless and from Ian Russell, the father of Molly, a teenager who took her own life after encountering disturbing content online; both highlighted the urgent need for bold action to protect children from preventable harm.