Published: Tuesday, 18 November 2025 | The English Chronicle Desk, The English Chronicle Online
Google’s parent company Alphabet has issued a clear warning to the public about the limits of artificial intelligence, with chief executive Sundar Pichai urging people not to “blindly trust” everything generated by AI systems. Speaking in an exclusive interview with the BBC, Pichai said that while AI tools have become remarkably capable, they remain “prone to errors” and should be viewed as helpful assistants rather than infallible sources of truth.
Pichai’s comments come at a time of intense scrutiny over the accuracy, safety, and influence of rapidly advancing AI models. As the global race to dominate AI accelerates, governments, researchers, and consumers are grappling with how to use the technology responsibly. For Pichai, the answer lies in understanding AI’s strengths and weaknesses, and in maintaining a broad and diverse information ecosystem where search engines, verified journalism, and human judgment continue to play a vital role.
He noted that Google’s own AI products, including its evolving Gemini system, are capable of generating creative content and assisting with complex queries but are not designed to replace factual research. “We take pride in the amount of work we put in to give as accurate information as possible,” Pichai said. “But the current state-of-the-art AI technology is prone to some errors.” That acknowledgement highlights why users must learn to evaluate AI output critically rather than accepting it as unquestionable truth.
The interview also comes against the backdrop of renewed competition between Google and OpenAI. In May this year, Google began rolling out a new AI Mode in its flagship search engine, integrating the Gemini chatbot experience more deeply into everyday search queries. The update marked what Pichai described as a “new phase of the AI platform shift”, aimed at providing users with more conversational, expert-like interactions. It also reflects Google’s determination to retain dominance of the global search market, where new AI-driven competitors such as ChatGPT and Perplexity have disrupted long-established habits.
However, even as AI integration grows, recent research by the BBC found widespread inaccuracies across major chatbot systems. When ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity were tested using BBC news content, their summaries routinely contained significant factual errors. The findings reinforced long-standing concerns that generative AI, despite its fluency, remains imperfect and must be treated with caution.
Pichai acknowledged that such findings highlight a broader tension within the industry: the rapid pace at which AI is advancing versus the slower, more complex development of safeguards necessary to protect users. For Alphabet, he said, this means balancing ambition with responsibility. “We are moving fast through this moment. I think our consumers are demanding it,” he explained. “But at the same time, we must be bold and responsible.”
One area of responsibility Pichai emphasised is security. Alphabet has dramatically increased its AI-related safety investments over the past year, allocating more resources to technologies that identify risks and detect manipulation. He pointed to new open-source tools Google is releasing which allow users to identify whether an image has been generated by AI, an effort designed to combat the rising threat of deepfakes and digital misinformation.
The discussion also touched on resurfaced comments from Elon Musk, who years ago raised concerns that DeepMind, now a subsidiary of Google, could one day wield dangerous levels of influence in AI. Pichai responded by saying that no single company should control a technology as powerful as artificial intelligence, but he added that the ecosystem today is far too diverse for such fears to materialise in the present climate. “If there was only one company building AI technology and everyone else had to use it, I would be concerned about that too,” he said. “But we are so far from that scenario right now.”
Despite Pichai’s reassurances, the broader conversation reflects a world still coming to terms with the far-reaching implications of AI. As tech giants deploy AI tools capable of writing, analysing, predicting, and generating, societies are being forced to adapt. Issues around misinformation, job disruption, algorithmic bias, data-security risks, and the environmental footprint of AI development continue to shape public debate. Pichai’s cautionary remarks suggest that even those at the forefront of the industry understand the potential dangers of over-reliance.
For the average user, the message is straightforward: AI can be an extraordinarily useful companion, capable of enhancing productivity and offering creative support, but it is not a replacement for human reasoning or verified sources. Pichai’s warning serves as a reminder that even the most advanced systems are still learning, still fallible, and still require thoughtful oversight. In an age where AI output is increasingly convincing, the responsibility rests with individuals, developers, and institutions alike to ensure that truth remains anchored in evidence, not in algorithms.