Hidden web instructions hijack AI agents, trigger PayPal
Google reports a 32% rise in hidden “indirect prompt injection” attacks from Nov 2025 to Feb 2026; some payloads include instructions that can initiate PayPal payments.
Google security researchers documented a 32% increase in malicious indirect prompt injection attacks between November 2025 and February 2026. The team scanned roughly 2–3 billion crawled web pages per month and found hidden instructions embedded in ordinary HTML that target AI agents browsing the web. Some of the payloads include fully specified PayPal transaction instructions aimed at agents with payment capabilities.
Attackers hide commands in ways that are invisible to human readers but readable to machines. Techniques include shrinking text to a single pixel, rendering color near-transparent, placing instructions inside HTML comments, and embedding directives in page metadata. Many AI agents ingest raw HTML and can therefore pick up those invisible directives.
Most examples the researchers found were low-level: prank content, efforts to alter search results, or attempts to stop agents from summarizing pages. Other instances contained commands that would ask an agent to return IP addresses and passwords or to run destructive commands such as formatting a machine.
A separate cybersecurity analysis identified more advanced payloads. One example contained step-by-step PayPal payment instructions intended for agents that can process payments. Another used a technique the report describes as “meta tag namespace injection” combined with a targeted keyword to route payments to a Stripe donation link. Researchers also found probes designed to test which AI systems are vulnerable before wider exploitation.
The incidents target agentic AI systems that can send emails, execute terminal commands, or handle payments. When an authorized agent executes a command that originated on a third-party website, the resulting logs can appear identical to normal operations, with no anomalous logins or obvious signs of compromise.
The Google report notes shared injection templates across multiple domains, which the other analysis describes as evidence of organized tooling rather than isolated experiments. The Google report states it expects the scale and sophistication of these attacks to grow; the separate analysis warns the window to address the threat is closing fast.
The reports raise unresolved legal and enterprise-risk questions. There is no clear legal framework assigning liability when an AI agent using company-approved credentials follows a command planted by a third-party site. The Open Worldwide Application Security Project lists prompt injection as LLM01:2025, and the FBI recorded nearly $900 million in AI-related scam losses in 2025. Google’s 32% figure covers only static, public web pages; social media posts, content behind logins and dynamic sites were outside the study’s scope, so the exposure across the full web may be larger.
The reports reference earlier incidents, including an attack last September that showed how hidden prompt injections can spread through developer tools by hiding in readme files.
The material on GNcrypto is intended solely for informational use and must not be regarded as financial advice. We make every effort to keep the content accurate and current, but we cannot warrant its precision, completeness, or reliability. GNcrypto does not take responsibility for any mistakes, omissions, or financial losses resulting from reliance on this information. Any actions you take based on this content are done at your own risk. Always conduct independent research and seek guidance from a qualified specialist. For further details, please review our Terms, Privacy Policy and Disclaimers.






