Published: 26 January 2026. The English Chronicle Desk. The English Chronicle Online.
The European Commission has opened a formal investigation into X, centred on its Grok AI chatbot and the harms it is alleged to have enabled. The inquiry examines whether the platform failed to prevent the generation of sexually explicit imagery and potential child abuse content. Officials say concerns emerged after verified research and widespread reporting across Europe and the United Kingdom. Regulators stress that user safety remains central to the Digital Services Act. The investigation also revisits X’s content recommendation systems and broader risk controls.
According to the Commission, the case centres on the AI chatbot Grok and its image generation tools. Regulators allege that these tools enabled users to create non-consensual sexualised images of women and children. Researchers from the Center for Countering Digital Hate reported millions of images generated rapidly. Their findings suggested a troubling volume depicted minors or highly realistic abuse scenarios. These claims prompted urgent regulatory scrutiny across EU institutions.
The Commission stated that it will assess whether X adequately identified and reduced foreseeable risks. Officials are reviewing safeguards, moderation practices, and internal testing procedures for Grok. They are also examining how swiftly the company responded once the risks became public. The probe is being conducted under the Digital Services Act, which imposes strict obligations on very large online platforms. The law aims to protect users from illegal content and systemic harms.
European officials have signalled dissatisfaction with X’s previous assurances and mitigation measures. An EU spokesperson explained that early responses did not demonstrate sufficient risk reduction. Investigators want evidence that safeguards were effective before and after Grok’s deployment. The inquiry will consider design choices that may have amplified harmful outputs. It will also analyse whether recommender systems increased the spread of illegal material.
Henna Virkkunen, the Commission’s lead official for tech sovereignty and democracy, addressed the investigation publicly. She described non-consensual sexual deepfakes as a violent form of degradation. Virkkunen stressed that women and children deserve robust protection in digital environments. She said the inquiry would determine whether X met its legal responsibilities, and whether user rights were treated as secondary to rapid innovation.
Members of the European Parliament have broadly welcomed the move. Regina Doherty, an Irish MEP, called enforcement essential when credible harm reports emerge. She argued that strong oversight builds trust in European digital markets. Doherty added that AI development must respect fundamental rights at every stage. Her comments reflect growing political pressure to regulate generative technologies.
The investigation also expands a pre-existing review of X’s algorithms. These systems shape what users see and how content spreads across the platform. Regulators worry that algorithmic amplification can worsen the impact of harmful material. Under the DSA, platforms must assess and mitigate systemic risks tied to their design. Failures can result in significant fines or mandated changes.
X responded by referencing a statement published earlier this month. The company reiterated a commitment to safety and zero tolerance for child exploitation. It said policies prohibit non-consensual nudity and unwanted sexual content. X also claimed ongoing investment in moderation tools and enforcement teams. However, regulators will independently verify these claims during the inquiry.
Civil society groups argue that the case highlights broader AI governance gaps. They warn that image generation tools can be misused at scale without strong guardrails. Campaigners say voluntary policies often lag behind technological capabilities. The inquiry may therefore set an important precedent for AI accountability, and its outcome could influence how platforms deploy similar tools worldwide.
Legal experts note that the Digital Services Act provides regulators with unprecedented powers. Authorities can demand data access, algorithmic explanations, and risk assessments. Non-compliance may trigger penalties reaching a percentage of global turnover. The law also allows interim measures if immediate risks persist. This framework marks a shift toward proactive digital regulation in Europe.
The case unfolds amid intense global debate about AI safety and ethics. Governments are balancing innovation benefits against real-world harms. Generative AI offers creative and commercial opportunities, yet also enables abuse. European regulators argue that strong rules create certainty for businesses and users alike. They believe clear standards encourage responsible innovation.
Industry observers suggest the inquiry could pressure other platforms to tighten controls. Many companies are racing to release AI features to remain competitive. The Grok investigation signals that speed cannot trump safety obligations. Firms may need to invest more heavily in testing and content moderation. Transparency around training data and outputs could also increase.
Public reaction across Europe has been marked by concern and anger. Advocacy groups for women and children have called for swift action. They argue that victims of deepfake abuse suffer lasting psychological harm. These groups want platforms to prioritise consent and dignity by design. The Commission says victim protection will remain a guiding principle.
As the investigation proceeds, timelines remain uncertain. The Commission has not set a public deadline for conclusions. Officials emphasise that thorough evidence gathering is essential. X will have opportunities to respond to findings and propose remedies. The probe may conclude with fines, binding orders, or compliance commitments.
Whatever the outcome, the case underscores Europe’s assertive regulatory stance. It reflects a broader effort to align technological progress with democratic values. For platforms operating in the EU, expectations are clear and enforceable. The inquiry into X may shape AI governance far beyond European borders. Its implications will be closely watched by regulators, companies, and users worldwide.