Google Blocks AI-Driven Cyberattack in First Known Attempt at Mass Exploitation
Google's security team thwarts a hacker group's AI-powered attack targeting two-factor authentication. The incident marks the first known use of AI to develop a zero-day exploit for mass exploitation.

By Samarjit Kaur

on May 12, 2026

Alphabet-owned Google has disclosed that it thwarted an artificial intelligence (AI)-powered cyberattack aimed at identifying and exploiting a previously unknown software vulnerability in what researchers described as a planned "mass exploitation event".

The incident marks the first known case where cyber-attackers used AI tools to uncover a zero-day flaw and prepare an exploit at scale, according to Google’s Threat Intelligence Group (GTIG).


AI Used to Discover Zero-Day Vulnerability

Google said the hackers targeted a widely used open-source system administration platform and attempted to bypass two-factor authentication mechanisms using the vulnerability.

The company did not reveal the name of the affected software or the identity of the attackers, but confirmed that the flaw was patched before any large-scale attack could occur.

"Traces inside the malicious code suggest that the exploit was probably developed with assistance from a large language model (LLM). Some markers included structured coding patterns and even an inaccurate vulnerability scoring reference, typically associated with AI-generated output," security researchers said.

John Hultquist, chief analyst at GTIG, described the incident as an early sign of a broader shift in cybercrime tactics. Researchers warned that AI systems are increasingly being used to automate vulnerability discovery, malware development, and parts of cyberattack operations.


Rising Concerns Over AI-Powered Cyber Threats

The development has further raised concerns among governments and cybersecurity firms over the growing use of generative AI in offensive cyber operations.

Industry experts said that advanced AI models are shortening the time between discovering software flaws and launching attacks, potentially making cyber threats faster and more scalable.

Google’s report also pointed to rising interest among state-linked hacking groups in countries such as China, Russia and North Korea in using AI for cyber operations. At the same time, technology firms are expanding efforts to build defensive AI systems capable of detecting vulnerabilities before attackers can exploit them.

The incident comes amid an industry-wide debate on AI safety and cybersecurity, after several technology firms recently unveiled AI models with advanced coding and vulnerability-testing capabilities.
