Google: AI used to craft zero-day bypassing 2FA

Google reports what it believes is the first case of hackers using an AI model to develop a zero-day that bypassed two-factor authentication on an open-source admin tool.

Google’s Threat Intelligence Group reported what it believes is the first case of hackers using an AI model to develop a zero-day exploit that bypassed two-factor authentication on a popular open-source, web-based system administration tool. The finding was published in a blog post on Tuesday.

The exploit required valid user credentials but bypassed the second authentication step, which often protects administrative access and cryptocurrency wallets.

Google described the bug as a high-level semantic logic flaw caused by a hardcoded trust assumption in the tool’s code, not an implementation error such as memory corruption.
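The report does not include the flawed code, but the pattern it describes is recognizable. The sketch below is a hypothetical illustration of a hardcoded trust assumption in an authentication path; the names, the network check and the logic are invented for illustration and are not the affected tool's code.

```python
# Hypothetical sketch of a hardcoded trust assumption that defeats a second
# factor. Every name and the network check here are invented for illustration.

USERS = {"admin": {"password": "hunter2", "totp": "492816"}}

def login(username: str, password: str, otp: str | None, source_ip: str):
    user = USERS.get(username)
    if user is None or user["password"] != password:
        return None  # the credential check itself works as intended

    # The flaw: requests from what the code assumes is a trusted internal
    # network skip the second factor entirely, so an attacker holding valid
    # credentials never has to present an OTP on that path.
    if source_ip.startswith("10."):
        return f"session-for-{username}"

    if otp is None or otp != user["totp"]:
        return None
    return f"session-for-{username}"

# Valid credentials plus a reachable "trusted" path: no OTP required.
print(login("admin", "hunter2", otp=None, source_ip="10.0.0.5"))
```

This matches Google's characterization in spirit: the credential check works correctly, and the bug lives entirely in the logic deciding when the second factor applies.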

The report expressed high confidence that attackers used a large language model to find and weaponize the vulnerability, pointing to elements of the exploit script including a hallucinated output and structural patterns consistent with AI training data.

The report did not name the affected tool or identify the actors, though it noted that state-aligned groups from China and North Korea have shown interest in using AI for vulnerability discovery in past operations. Google added that the observed campaign involved collaboration among prominent cybercrime actors and appeared aimed at high-volume exploitation. By contrast, researchers at Cambridge, Edinburgh and Strathclyde argued earlier this month that AI fuels spam and scams rather than advanced malware.

Google described other ways adversaries are using AI. Several malware families, which the report names PROMPTFLUX, HONESTCUE and CANFAIL, use LLMs to generate decoy or filler code that hides malicious logic and evades detection.
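Google does not describe its detection methods. As one crude, illustrative signal on the defender's side, a script padded with filler tends to contain code that is never used; the heuristic below is an assumption-laden sketch of that idea, not anything from the report.

```python
# Crude illustrative heuristic (not from Google's report): list functions that
# are defined but never referenced anywhere else in a script, one coarse
# signal that a file may be padded with machine-generated filler code.
import ast

def unused_functions(source: str) -> list[str]:
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    referenced = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return sorted(defined - referenced)

sample = "def real():\n    pass\n\ndef filler():\n    x = 1\n\nreal()\n"
print(unused_functions(sample))  # ['filler']
```

A real classifier would need far more than this, but dead, decorative code is exactly the artifact the decoy technique leaves behind.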

The company said abuse of LLM access has become industrialized. Threat actors are cycling through premium AI accounts, pooling API keys and using anti-detect browsers and account-pooling services to run high-volume, anonymized operations while trying to bypass safety guardrails.
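The report does not share provider-side countermeasures. A minimal sketch of one obvious heuristic, assuming request logs of (API key, client IP) pairs, is to flag keys used from unusually many networks, a common signal of key pooling or resale; the threshold and the coarse /16 bucketing below are arbitrary choices for illustration.

```python
# Illustrative provider-side heuristic (not from the report): flag API keys
# whose recent requests arrive from an unusually large set of networks.
from collections import defaultdict

def flag_pooled_keys(request_log, max_networks: int = 5) -> list[str]:
    """request_log: iterable of (api_key, client_ip) pairs."""
    networks = defaultdict(set)
    for api_key, ip in request_log:
        networks[api_key].add(".".join(ip.split(".")[:2]))  # coarse /16 bucket
    return [key for key, nets in networks.items() if len(nets) > max_networks]

log = [("key-A", f"203.{i}.0.7") for i in range(12)] + [("key-B", "198.51.100.4")]
print(flag_pooled_keys(log))  # ['key-A']
```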

Attackers are also targeting components that support AI systems in production, such as autonomous agent skills and third-party data connectors, which can create additional compromise paths if unsecured. The report added that adversaries have not demonstrated consistent methods to override core security controls of frontier AI models.
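The report names the attack surface rather than defenses. One plausible hardening step, sketched below under assumed names, is to pin third-party skills and connectors to an allowlist of known hashes before an agent loads them; ALLOWED_SKILLS, load_skill and the sample digest are all hypothetical.

```python
# Hypothetical hardening sketch: verify a third-party skill or connector
# against a pinned allowlist before handing it to the agent's real loader.
import hashlib

ALLOWED_SKILLS = {  # skill name -> expected SHA-256 of its package bytes
    "calendar-connector": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def load_skill(name: str, package: bytes) -> bytes:
    expected = ALLOWED_SKILLS.get(name)
    digest = hashlib.sha256(package).hexdigest()
    if expected is None or digest != expected:
        raise PermissionError(f"skill {name!r} is not on the pinned allowlist")
    return package  # only verified packages reach the loader

print(load_skill("calendar-connector", b"foo"))  # digest matches the pin
```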

The report referenced an AI developer’s recent model that reportedly found thousands of potential software vulnerabilities across major systems.

Google urged organizations to assume AI-aided discovery is a growing threat and to review application logic and how authentication is implemented.
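In the spirit of that advice, the fix for the flaw sketched earlier is to make the second factor unconditional. This is a minimal sketch reusing the hypothetical USERS table from above, not guidance from the report.

```python
# Minimal sketch of the fix: no branch may skip the second-factor check.
USERS = {"admin": {"password": "hunter2", "totp": "492816"}}

def login_fixed(username: str, password: str, otp: str | None, source_ip: str):
    user = USERS.get(username)
    if user is None or user["password"] != password:
        return None
    # The second factor is enforced for every session; source_ip no longer
    # grants an exemption and would be kept only for logging or rate limiting.
    if otp is None or otp != user["totp"]:
        return None
    return f"session-for-{username}"

print(login_fixed("admin", "hunter2", otp=None, source_ip="10.0.0.5"))  # None
```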
