The Union government is in discussions with Anthropic over concerns linked to its AI system ‘Mythos’, a senior official from the Ministry of Electronics and Information Technology (MeitY) said.
The engagement comes as India sharpens its oversight of emerging artificial intelligence tools, focusing on safety, accountability and potential risks to users.
Government Raises AI Safety, Accountability Questions
Officials indicated that the government has sought clarifications from Anthropic on the functioning and safeguards of the Mythos system. The concerns relate to how such AI tools process information, generate outputs and handle sensitive or misleading content.
The discussions are part of a broader regulatory approach that MeitY is shaping to ensure that AI systems deployed in India comply with safety norms.
Authorities are also examining whether proper checks are in place to prevent misuse, bias or unintended outcomes.
India has, in recent months, stepped up scrutiny of global AI firms as generative AI adoption accelerates across sectors, including governance, finance and media.
India Tightens Oversight as AI Regulation Evolves
The engagement with Anthropic reflects a broader policy shift in which the government is dealing directly with AI developers. Officials are aiming to establish clearer accountability frameworks, especially for systems that can influence public information or decision-making.
India’s approach combines consultation with companies and the possibility of future regulatory measures. The aim is to balance innovation with safety, ensuring AI tools operate within defined ethical and legal boundaries.
The development follows similar global moves, where governments are assessing the risks posed by advanced AI systems and exploring regulatory mechanisms to address them.