Published: 16 January 2026 · The English Chronicle Desk · The English Chronicle Online
TikTok is preparing to roll out a new generation of age-verification technology across the European Union in the coming weeks, marking a significant shift in how one of the world’s most influential social media platforms seeks to identify and protect underage users. The move comes at a moment of intensifying political, regulatory and public scrutiny, as governments across Europe and beyond debate whether existing safeguards are sufficient or whether outright bans on social media use by children and teenagers should be introduced.
Owned by Chinese technology company ByteDance, TikTok has long been popular with younger audiences, a fact that has fuelled repeated concerns about child safety, online harm and excessive screen time. Pressure has mounted not only from regulators but also from parents, educators and child welfare campaigners, many of whom argue that platforms have been too slow to take responsibility for identifying underage users who circumvent age limits with ease.
The new system, which has been quietly piloted in parts of the EU over the past year, represents TikTok’s most ambitious attempt yet to move beyond self-declared ages. Rather than relying solely on users entering their date of birth, the technology analyses a range of signals, including profile information, the nature of posted videos and patterns of user behaviour. By assessing how an account interacts with content and how it presents itself, the system aims to predict whether a user may be under the age of 13, the minimum age under TikTok’s terms of service and the lowest age of digital consent that EU data protection law allows member states to set.
Accounts flagged by the system will not be automatically removed. Instead, TikTok says they will be passed to specialist human moderators trained to assess age-related indicators and context. If moderators determine that an account does belong to an underage user, it may then be removed from the platform. The company argues that this approach balances child protection with fairness, reducing the risk of older users being wrongly excluded.
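To make the flagging pipeline described above concrete, the sketch below shows one plausible shape for such a system. It is a minimal illustration, not TikTok’s actual model: the signal names, weights and review threshold are invented for this example, and the only behaviour taken from the company’s description is that high-scoring accounts are routed to human moderators rather than removed automatically.

```python
from dataclasses import dataclass

# Hypothetical weights and threshold -- TikTok has not published its model,
# so every name and number here is illustrative only.
BIO_KEYWORD_WEIGHT = 0.35   # signals from profile text (e.g. a stated school year)
CONTENT_TAG_WEIGHT = 0.25   # signals from the nature of posted videos
BEHAVIOUR_WEIGHT = 0.40     # signals from interaction and usage patterns
REVIEW_THRESHOLD = 0.70     # scores at or above this go to human moderators


@dataclass
class AccountSignals:
    """A simplified bundle of the signal types the article describes."""
    bio_keyword_score: float   # 0.0-1.0, from profile information
    content_tag_score: float   # 0.0-1.0, from posted videos
    behaviour_score: float     # 0.0-1.0, from patterns of user behaviour


def under_13_likelihood(signals: AccountSignals) -> float:
    """Combine the weighted signals into a single likelihood estimate."""
    return (
        BIO_KEYWORD_WEIGHT * signals.bio_keyword_score
        + CONTENT_TAG_WEIGHT * signals.content_tag_score
        + BEHAVIOUR_WEIGHT * signals.behaviour_score
    )


def route_account(account_id: str, signals: AccountSignals,
                  review_queue: list[tuple[str, float]]) -> None:
    """Queue high-likelihood accounts for specialist human review; never auto-remove."""
    score = under_13_likelihood(signals)
    if score >= REVIEW_THRESHOLD:
        review_queue.append((account_id, score))


if __name__ == "__main__":
    queue: list[tuple[str, float]] = []
    route_account("acct_001", AccountSignals(0.9, 0.8, 0.85), queue)  # likely flagged
    route_account("acct_002", AccountSignals(0.1, 0.2, 0.15), queue)  # likely passes
    for account_id, score in queue:
        print(f"{account_id}: likelihood {score:.2f} -> send to human moderator")
```

The design choice worth noting is the threshold: because the automated score only queues an account for review, the system can tolerate false positives that outright removal could not, which is the trade-off TikTok invokes when it argues the approach balances child protection with fairness.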
A pilot of the technology in the UK has already led to the removal of thousands of accounts, according to TikTok, suggesting that automated analysis combined with human review could catch far more underage users than reliance on self-declared dates of birth alone. However, the scale of the challenge remains vast, given the platform’s hundreds of millions of users worldwide.
TikTok’s announcement comes amid growing calls for tougher action on social media use by children, inspired in part by developments in Australia. In December, Australia introduced a landmark ban on social media access for under-16s, placing the responsibility on platforms to enforce age limits rather than on parents or users themselves. This week, the country’s eSafety commissioner revealed that more than 4.7 million accounts had been removed across 10 platforms, including YouTube, TikTok, Instagram, Snapchat and Facebook, since the ban came into force on 10 December.
Those figures have intensified debate in Europe and the UK about whether similar measures should be considered. Proponents of bans argue that the Australian example demonstrates both the scale of underage social media use and the feasibility of enforcing restrictions when platforms are compelled to act. Critics, however, warn of unintended consequences, including the risk that teenagers could migrate to less regulated corners of the internet.
In the UK, the debate has reached the highest levels of government. Prime Minister Keir Starmer told MPs earlier this week that he was open to considering a ban on social media for young people, a notable shift in tone from his earlier scepticism. Speaking to Labour MPs, Starmer said he had become increasingly alarmed by reports of very young children spending hours a day on smartphones, and by mounting evidence of the impact of social media on the mental health and wellbeing of under-16s.
Starmer has previously argued that banning social media outright would be difficult to police and could drive young users towards the dark web or unregulated platforms. His latest comments suggest, however, that political resistance to tougher measures may be softening as public concern grows and international precedents emerge.
Across Europe, momentum is also building. The European parliament has been pushing for clearer age limits and stronger enforcement mechanisms, while Denmark has publicly backed the idea of banning social media use for children under 15. Regulators are increasingly focused on whether platforms’ existing age checks comply with the EU’s stringent data protection framework, particularly when it comes to collecting and processing information about minors.
TikTok says its new technology has been developed specifically with these regulatory requirements in mind. The company has worked closely with Ireland’s Data Protection Commission, which acts as TikTok’s lead privacy regulator in the EU, during the development process. By embedding compliance considerations into the system’s design, TikTok hopes to demonstrate that stronger age verification can be achieved without breaching privacy rules or collecting excessive personal data.
Other major platforms are also experimenting with more robust verification methods. Meta, the parent company of Facebook and Instagram, has partnered with the age-verification firm Yoti, which allows users to confirm their age through facial analysis or identity documents without permanently storing sensitive information. Despite these efforts, campaigners argue that enforcement remains inconsistent and that underage users continue to slip through the net.
Concerns about the real-world consequences of online harm have added urgency to the debate. Earlier this month, Ellen Roome, whose 14-year-old son Jools Sweeney died after an online challenge went wrong, called for parents to be given greater rights to access their children’s social media accounts in the event of a child’s death. Her appeal has resonated widely, highlighting how digital platforms can intersect with tragedy in deeply personal ways.
TikTok’s record on child safety has faced criticism in the past. A 2023 investigation by The Guardian found that some moderators had been instructed to allow under-13 users to remain on the platform if they claimed their parents were supervising their accounts, a practice that appeared to conflict with official policy. The company has since said it has tightened its rules and training, but the episode continues to inform scepticism about whether platforms can be trusted to police themselves.
The rollout of TikTok’s new age-verification system will therefore be closely watched by regulators, politicians and child safety advocates alike. Supporters see it as a step towards more meaningful enforcement in an industry long criticised for prioritising growth over protection. Critics caution that technological solutions alone may not be enough to address the deeper social and psychological issues associated with children’s online lives.
As governments weigh options ranging from enhanced verification to outright bans, TikTok’s move underscores a broader shift in the digital landscape. Social media companies are increasingly being forced to adapt to a world in which child protection is no longer treated as a secondary concern, but as a central test of their legitimacy and responsibility. Whether the new system will satisfy regulators and reassure parents remains to be seen, but it signals that the era of voluntary, light-touch age checks may be drawing to a close.