The UK’s long-awaited Online Safety Act has officially come into force, requiring tech firms to implement stricter safety measures to protect users from illegal online harms. Ofcom, the UK’s communications regulator, has published its first codes of practice and guidance, marking a significant step in regulating the digital landscape. These new rules apply to social media platforms, search engines, gaming apps, messaging services, and other online platforms.
Effective immediately, tech companies must begin assessing risks related to illegal content such as terrorism, hate speech, fraud, child sexual abuse material (CSAM), and content that encourages self-harm or suicide. Platforms have until March 16, 2025, to complete their risk assessments. From March 17, 2025, they will be required to implement safety measures outlined in Ofcom’s codes of practice.
The codes focus heavily on protecting vulnerable groups, particularly children and women. Platforms are expected to use automated tools such as hash-matching and URL detection to identify and remove CSAM quickly. Additionally, children's profiles and personal information will be shielded from public view, and accounts outside a child's existing connections will be unable to send them direct messages.
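To illustrate the principle behind hash-matching: a platform computes a digest of each uploaded file and checks it against a list of digests of known illegal images, such as those supplied by bodies like the Internet Watch Foundation. The Python sketch below is a simplified illustration; the entry in `KNOWN_HASHES` is a placeholder, and production systems typically rely on perceptual hashes (for example, PhotoDNA) that survive resizing and re-encoding, rather than exact cryptographic digests, which a single changed pixel defeats.

```python
import hashlib

# Placeholder digest set standing in for a list of known-CSAM hashes.
# (This entry is just the SHA-256 of the string "test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_known_content(upload: bytes) -> bool:
    """True if the upload's digest appears in the known-content list."""
    return sha256_hex(upload) in KNOWN_HASHES

print(matches_known_content(b"test"))  # True: digest is in the placeholder set
```

In real deployments this file-level lookup is paired with URL detection, which checks shared links against blocklists of pages known to host CSAM, so that both uploads and links are screened.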
Ofcom has made it clear that it will take a tough stance on non-compliance. The regulator can fine companies up to £18 million or 10% of their qualifying worldwide revenue, whichever is greater. In severe cases, it can apply for court orders to block access to non-compliant platforms in the UK.
“For too long, sites and apps have been unregulated, unaccountable, and unwilling to prioritize people’s safety over profits. That changes from today,” said Dame Melanie Dawes, Chief Executive of Ofcom. “The safety spotlight is now firmly on tech firms, and it’s time for them to act. Those that come up short can expect Ofcom to use the full extent of our enforcement powers.”
This first set of codes and guidance is only the beginning. Ofcom plans further consultations in 2025, including proposals to block the accounts of users who share CSAM, to use AI to tackle illegal harms, and to establish crisis-response protocols for emergency events. Further protections for children, covering content that promotes suicide, self-harm, eating disorders, and cyberbullying, are also expected in 2025.
The Online Safety Act also empowers Ofcom to require platforms to develop or deploy specific technologies to combat child exploitation and terrorism-related content. A consultation on minimum standards of accuracy for such technologies is underway, with responses due by March 10, 2025.
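Ofcom has not said which metrics any accuracy standard would use. The sketch below simply illustrates, with made-up evaluation counts, the precision/recall pair commonly used to assess detection systems: precision is the share of flagged items that were truly illegal, and recall is the share of illegal items that were actually caught.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from true positives (tp),
    false positives (fp), and false negatives (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical evaluation of a detection tool: 950 items correctly
# flagged, 50 flagged in error, 100 illegal items missed.
p, r = precision_recall(tp=950, fp=50, fn=100)
print(f"precision={p:.2%}, recall={r:.2%}")  # precision=95.00%, recall=90.48%
```

A standard could, for instance, set floors on both figures, since a tool can trivially maximise one at the expense of the other.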