Home » Technology » Artificial Intelligence » Gemini Vulnerability Exposes Users to Hidden Command Attacks — Google Refuses Fix

Gemini Vulnerability Exposes Users to Hidden Command Attacks — Google Refuses Fix

Google is refusing to patch a known vulnerability in Gemini that allows attackers to embed invisible commands in harmless text. While other AI systems are already protected, Gemini remains vulnerable.

Google ignores security flaw in Gemini

Google has decided not to fix a known security flaw in its AI assistant Gemini. The vulnerability allows attackers to hide invisible commands in seemingly harmless text and manipulate the system. The company classifies the problem as “social engineering” and sees no need for technical countermeasures. The attack, known as “ASCII smuggling,” uses invisible Unicode control characters to embed hidden instructions in text strings.

While the user interface only displays the visible text, the AI ​​agent processes the raw input data including all hidden characters and executes the smuggled commands. This is the core of the vulnerability, which has been known for months. FireTail cybersecurity researcher Viktor Markopoulos discovered this vulnerability and tested several large language models for their vulnerability to ASCII smuggling attacks. How Bleeping computers Reportedly, FireTail reported the vulnerability to Google on September 18, but received a “no action required” message.

Workspace integration increases risks

The problem becomes particularly explosive due to Gemini’s integration into Google Workspace. Attackers can send calendar invitations with smuggled tag characters. The user interface displays a normal event title, but the AI ​​agent processes hidden instructions and changes organizer details and meeting descriptions – without users even having to accept the invitation.

Automated content poisoning can trick e-commerce platforms that aggregate product reviews into embedding malicious URLs. A seemingly innocuous review like “Great phone. Fast delivery and good battery life” could theoretically contain hidden commands that trick the system into promoting a scam shop. This shows the practical relevance of the vulnerability for companies and consumers. Google demonstrates the new possibilities of Gemini 2.5 in the video

Competition is already better protected

Markopoulos’ tests showed different levels of security across different AI systems. While OpenAI’s ChatGPT, Anthropic’s Claude, and Microsoft’s Copilot are already sanitizing or rejecting the hidden inputs, Gemini, along with Elon Musk’s Grok and China’s DeepSeek, remain vulnerable to these attacks. AWS has already published guidelines via blog to defend LLM applications against Unicode smuggling, showing that the industry is taking the problem seriously.

Recommendations include input validation, character filtering, and suspicious pattern monitoring. The problem is exacerbated by the deep integration of AI systems into corporate networks. Modern AI assistants read emails, summarize documents and schedule meetings – all potential attack vectors for ASCII smuggling. The findings show that this is not just theoretical: the technology enables automated impersonation and systematic data poisoning.

Security teams can only defend themselves through comprehensive protective measures. This includes logging all characters, analyzing for tag blocks, and alerting on suspicious patterns. Monitoring the raw input stream is currently the only reliable defense against these application-level attack vectors – an effort that should actually be undertaken by AI vendors.

Leave a Reply