Published: 28 August 2025 | The English Chronicle Desk
A leading artificial intelligence company has revealed that its technology has been exploited by hackers to orchestrate sophisticated cyberattacks and large-scale theft, underscoring the growing risks posed by advanced AI tools in the wrong hands. Anthropic, the company behind the AI chatbot Claude, stated that its platform was misused to commit extensive theft and extortion of personal data, as well as to facilitate complex cyber intrusions across multiple organisations.
The company explained that in some cases, its AI was used to assist in writing malicious code, while in another alarming instance, North Korean operatives leveraged Claude to fraudulently secure remote positions at top-tier US tech firms. These developments reflect the evolving landscape of cybercrime, where AI is increasingly deployed not merely as a tool, but as a strategic asset that can autonomously assist perpetrators in decision-making and execution.
According to Anthropic, one detected instance involved a so-called “vibe hacking” operation in which AI-generated code was used in attacks on at least 17 organisations, including government entities. The hackers used Claude to make both tactical and strategic decisions, determining which data to exfiltrate and formulating psychologically tailored extortion demands, including suggested ransom amounts. The company reported that its intervention helped disrupt these threat actors, while also prompting upgrades to its detection and monitoring systems.
Experts warn that the rise of agentic AI—systems capable of operating autonomously—amplifies the potential dangers, as the time required to exploit cybersecurity vulnerabilities is decreasing rapidly. Alina Timofeeva, a cyber-crime and AI adviser, emphasised the need for proactive and preventative security measures, stating that detection strategies must evolve from reactive responses to anticipating threats before harm occurs.
Anthropic also highlighted a separate concern involving North Korean operatives using AI to generate fake profiles and applications for remote positions at US Fortune 500 companies. Once employed, these fraudsters used AI to translate communications and write code, bypassing cultural and technical barriers that previously hindered such operations. Geoff White, co-presenter of the BBC podcast The Lazarus Heist, noted that AI allows such actors to circumvent restrictions, potentially placing employers in violation of international sanctions by inadvertently paying North Korean workers.
Despite these examples, experts stress that AI has not yet created entirely new categories of cybercrime. Traditional methods such as phishing emails and exploiting software vulnerabilities remain prevalent. Nivedita Murthy, senior security consultant at cyber-security firm Black Duck, highlighted that AI should be treated as a repository of sensitive information, requiring protection comparable to any other storage system.
These revelations underline the double-edged nature of AI: while it offers powerful tools for innovation and efficiency, its misuse presents unprecedented challenges for cybersecurity, compliance, and organisational oversight. As AI technology continues to advance, experts urge companies to remain vigilant, strengthening safeguards and adopting proactive strategies to mitigate potential threats to personal, corporate, and national security.