Published: 30 December 2025. The English Chronicle Desk. The English Chronicle Online.
A leading anti-hate organisation has accused Facebook of hosting posts celebrating the Bondi beach massacre and promoting Islamic State propaganda. The Community Security Trust (CST), which works to protect Jewish communities in the UK, said that content praising the December 14 attack remained visible on Facebook as late as 16 December, attracting likes, shares, and comments before removal. The posts included graphic imagery and statements glorifying the perpetrators, allegedly a father and son, who carried out the massacre during Hanukkah celebrations in Sydney, killing fifteen people.
One post contained video footage of the attack aftermath, captioned with religious praise for Allah, and had been liked over 100 times with multiple comments. Another post showed a gunman’s image alongside pro-IS messaging, receiving hundreds of interactions, according to CST. While Facebook said it had begun removing offending content after media inquiries, CST criticised the platform’s delayed response, describing the prevalence of terrorist-supporting accounts as “deeply alarming” and “utterly nauseating.”
Dave Rich, CST’s director of policy, said social media companies repeatedly fail to fulfil even basic responsibilities to prevent the spread of terrorist material, putting public safety at risk. CST intends to ask Ofcom, the UK communications regulator, to investigate Meta’s handling of illegal content and take robust action where possible. Ofcom stated that platforms must promptly remove content that is illegal under UK law, and that evidence suggests terrorist material persists across major social media sites.
Meta, which owns Facebook, did not answer detailed questions but confirmed some content had been removed for violating its policies regarding dangerous organisations and individuals. A Home Office spokesperson reiterated that social media content promoting terrorism or violence against communities is unacceptable and illegal, and platforms are obliged to act swiftly against such material.
The criticism comes amid heightened concern over pro-IS content on social media and ongoing threats to Jewish communities in the West. Two men were convicted in England on 22 December of attempting to obtain machine guns to target Jewish people in the north-west of England. Though not directly instructed by IS, they had overseas connections and planned to bypass community security measures and infiltrate Facebook groups to gather potential targeting information.
Rich described the English plot as “even more serious” than prior attacks on synagogues, reflecting increased sophistication and planning among extremist networks. Vicki Evans, senior national coordinator for counter-terrorism policing, reminded the public that threats evolve constantly and urged vigilance both online and in real-world communities, citing recent attacks in Manchester and Australia as clear warnings.
The CST report also raised concerns about the lack of proactive monitoring by social media platforms, stressing that content celebrating terrorist attacks fuels extremism and risks inspiring copycat actions. It warned that without rapid removal of extremist content, the digital space could inadvertently become a recruitment and coordination tool for terrorists. Facebook has faced previous criticism for slow responses to extremist content and for recommendation algorithms that inadvertently amplify divisive material.
Experts point out that terrorist groups exploit social media to spread propaganda, coordinate attacks, and recruit vulnerable individuals, often disguising content to avoid detection. Analysts have highlighted how digital platforms’ commercial priorities, including user engagement metrics, can conflict with content moderation duties, potentially delaying the removal of illegal posts. This tension has prompted calls for stricter regulatory oversight and greater transparency in reporting the number and nature of content removals.
In Australia, the Bondi beach massacre has led to renewed debate about online radicalisation, the accountability of social media firms, and the effectiveness of counter-terrorism measures. Government officials, law enforcement agencies, and anti-hate groups are pushing for more robust mechanisms to detect and remove extremist material before it can cause harm, emphasising collaboration between tech companies and national security bodies.
Facebook’s response, including the removal of some posts, has been welcomed but, according to CST, remains insufficient; the organisation continues to monitor platforms for extremist activity targeting Jewish communities. With terrorist plots growing more sophisticated and extremist material spreading more widely online, experts argue that platform accountability, public reporting, and international cooperation are essential to prevent future attacks.
The Bondi attack underscores the ongoing challenge of balancing free expression with public safety, highlighting the urgent need for technology firms to invest in proactive content moderation. CST calls for Meta and other platforms to implement real-time monitoring, enhance reporting systems, and provide transparency regarding the handling of terrorist and extremist content. Meanwhile, authorities in the UK and Australia continue investigations, with a focus on preventing future attacks and supporting affected communities.
As digital platforms remain central to extremist communication, the CST stresses that failure to act swiftly against content celebrating attacks can have deadly consequences. Observers warn that extremist networks will exploit any gaps in moderation policies, making it critical for regulators and tech companies alike to adopt more stringent, responsive approaches.
The Bondi beach massacre, coupled with the English plot, serves as a stark reminder of evolving terrorist threats and the dangers of online radicalisation. The CST maintains that while legal frameworks exist, their effective enforcement requires vigilance, swift platform response, and collaborative effort between governments, law enforcement, and social media companies to safeguard communities.



























































































