Published: 22 October 2025. The English Chronicle Desk. The English Chronicle Online.
The Duke and Duchess of Sussex have added their voices to a growing chorus of scientists, entrepreneurs, and Nobel laureates calling for an immediate ban on developing superintelligent artificial intelligence systems. The couple are among a broad coalition of global figures who warn that AI could have profound social, economic, and existential consequences if such systems are allowed to surpass human intelligence across all cognitive tasks without adequate checks.
Artificial superintelligence (ASI) refers to theoretical AI systems that would exceed human abilities in reasoning, problem-solving, and decision-making at every level. While such systems do not yet exist, experts argue that the pace of technological progress, combined with insufficient regulatory oversight, presents serious risks. Harry and Meghan have joined a high-profile statement demanding a moratorium on creating ASI. The statement sets two conditions for lifting that moratorium: widespread agreement among scientists that the technology can be developed safely and responsibly, and demonstrated public understanding of, and support for, its development.
The statement has been endorsed by some of the most prominent figures in AI research, including Geoffrey Hinton, often described as the “godfather of AI” and a Nobel laureate, along with fellow AI pioneer Yoshua Bengio. High-profile leaders from the tech and business worlds, such as Apple co-founder Steve Wozniak and British entrepreneur Richard Branson, also signed the document. Among the other notable signatories are Susan Rice, former US national security adviser under President Barack Obama, former Irish President Mary Robinson, and renowned British author and broadcaster Stephen Fry. Further signatories include Beatrice Fihn, who accepted the 2017 Nobel Peace Prize on behalf of the International Campaign to Abolish Nuclear Weapons, alongside Nobel laureates Frank Wilczek, John C. Mather, and economist Daron Acemoğlu.
The initiative, coordinated by the Future of Life Institute (FLI), a US-based non-profit organisation focused on AI safety, comes amid growing international concern about the societal implications of AI. FLI first called for a pause in the development of powerful AI systems in 2023, shortly after the launch of ChatGPT brought generative AI into mainstream public consciousness. The organisation emphasises that without rigorous oversight and ethical safeguards, AI development could accelerate beyond the ability of society to manage its consequences.
In recent months, tech industry leaders have made public statements acknowledging the potential for superintelligence. In July, Mark Zuckerberg, CEO of Meta, the parent company of Facebook, asserted that the development of ASI is “now within sight.” However, several experts have cautioned that such claims may reflect competitive positioning rather than genuine progress toward creating systems that truly surpass human cognitive abilities. Many argue that while companies are investing heavily in AI, the field remains several steps away from achieving artificial superintelligence.
Nonetheless, FLI and other AI safety advocates warn that the potential risks are substantial. Superintelligent systems could fundamentally alter global labour markets by automating nearly every job, concentrating power in the hands of a few organisations or governments, compromising civil liberties, and creating national security vulnerabilities. In extreme scenarios, poorly controlled AI could even pose existential threats to humanity by acting in ways that conflict with human interests or evade regulatory safeguards.
Public sentiment, according to FLI, strongly supports intervention. A recent national poll of 2,000 US adults conducted by the organisation found that approximately three-quarters of respondents believe robust regulation of advanced AI is necessary. Six in ten Americans indicated that AI systems capable of surpassing human intelligence should not be developed until they are proven safe or controllable, while only 5% of respondents expressed support for the current pace of rapid, unregulated AI development. The survey points to a growing public demand for accountability and precaution in AI research.
Leading AI companies, particularly in the United States, have explicitly prioritised the development of artificial general intelligence (AGI), a theoretical state in which machines can perform most cognitive tasks at human levels. While AGI represents a step below full superintelligence, experts warn that such systems could still self-improve, ultimately edging closer to ASI. The risks associated with self-improving AI include rapid, uncontrolled evolution of capabilities, as well as significant disruptions to employment, economic stability, and social cohesion.
Harry and Meghan’s involvement in the statement highlights the expanding intersection between technology advocacy and global public figures. Their engagement brings attention to the ethical and societal implications of AI at a time when governments and corporations are under increasing pressure to establish regulatory frameworks. The Duke and Duchess have consistently engaged with initiatives promoting responsible innovation, and their support for an ASI moratorium aligns with wider efforts to ensure that technological progress benefits humanity rather than exacerbating inequality or creating existential hazards.
The statement also calls on governments and lawmakers to develop international frameworks for the safe management of AI. It urges that any resumption of ASI research should only occur once there is a strong scientific consensus on safety protocols, rigorous monitoring mechanisms, and public trust that these technologies will be used responsibly. This call for cautious governance underscores the recognition that AI development is no longer a purely technical issue but a societal challenge requiring coordinated global oversight.
Experts argue that the conversation around AI ethics and governance has never been more urgent. With billions of dollars being invested annually into AI by major corporations, the trajectory of the technology could accelerate rapidly, making preemptive regulatory measures critical. Without proper oversight, AI systems capable of autonomous decision-making could amplify social inequities, manipulate information, and undermine democratic institutions, according to recent studies.
The involvement of Harry and Meghan, alongside globally recognised scientists and Nobel laureates, is expected to amplify media coverage of the AI safety debate and raise public awareness. By lending their platform to the campaign, the couple draw attention to the ethical questions surrounding AI and may help catalyse broader conversations about responsible technological stewardship.
FLI emphasises that AI systems already being deployed, such as ChatGPT, large language models, and generative tools, demonstrate remarkable capabilities, but they also carry inherent risks, including misinformation, bias, and security vulnerabilities. While the development of ASI remains hypothetical, the rapid proliferation of advanced AI tools makes proactive discussion and regulation essential. The Institute stresses that governments, private companies, and academic institutions must collaborate to prevent unintended consequences and ensure AI is aligned with human values.
The coalition calling for a superintelligence ban signals an important step toward global consensus on AI governance. By highlighting both the promise and peril of these technologies, it encourages policymakers, technologists, and the public to engage in a meaningful dialogue about the future of human-machine interaction. In the words of FLI, “superintelligent AI represents the most profound technological challenge facing humanity, and its development must be approached with caution, transparency, and ethical foresight.”
The ongoing debate around AI regulation reflects broader societal concerns about the pace of innovation, the concentration of technological power, and the potential impact on everyday life. As AI systems become increasingly capable, the need for international standards, public accountability, and ethical oversight has become a pressing priority. Harry and Meghan’s endorsement of the moratorium adds a notable voice to this global conversation and reinforces the importance of safeguarding humanity’s future.
While artificial superintelligence may still be years away, the potential consequences are so significant that preemptive caution is widely regarded as prudent. By supporting a temporary halt until the risks are better understood, the Duke and Duchess join a diverse coalition advocating for balanced, responsible, and equitable AI development, signalling a commitment to protecting society while embracing the benefits of technological advancement.