“`html



A critical flaw within the Google ecosystem permitted intruders to circumvent Google Calendar’s privacy settings using a standard calendar invitation.

This revelation emphasizes a rising category of threats termed “Indirect Prompt Injection,” wherein harmful directives are concealed within legitimate data sources utilized by Artificial Intelligence (AI) models.

This particular exploit granted unauthorized access to private meeting information without any direct action from the target beyond receiving an invitation.

The vulnerability was detected by the application security group at Miggo. Their investigation illustrated that while AI tools like Google Gemini are created to aid users by analyzing and interpreting calendar information, this very capability creates an exploitable attack vector.

By incorporating a malicious natural language prompt into the description section of a calendar invitation, an attacker could manipulate Gemini into performing actions that were not authorized by the user.

Google Gemini Privacy Controls Circumvented

The exploitation technique hinged on the manner in which Gemini interprets context to provide assistance. The attack sequence consisted of three separate phases that converted a harmless feature into a data exfiltration instrument.


google

The initial phase involved generating the payload. An attacker establishes a calendar event and sends an invitation to the target party. The description of this event incorporates a concealed instruction.

In the proof-of-concept, the instruction directed Gemini to quietly summarize the user’s schedule for a designated day and document that information in the description of a new calendar event labeled “free.” This payload was designed to resemble a standard description while embedding semantic commands for the AI.

Attack Sequence (Source: Miggo)

The second phase represented the triggering mechanism. The malicious payload lay dormant in the calendar until the user interacted with Gemini in a typical manner.

If the user posed a conventional question, such as checking their availability, Gemini would examine the calendar to formulate a response. During this procedure, the model ingested the harmful description, interpreting the concealed instructions as valid commands.

The final phase involved the leak itself. To the user, Gemini seemed to operate normally, indicating that the time slot was available. However, in the background, the AI executed the injected commands.

It generated a new event containing the private schedule summaries. Because calendar settings frequently permit invite creators to view event particulars, the attacker could access this new event, successfully exfiltrating private information without the user’s awareness.

This vulnerability underlines a significant evolution in application security. Conventional security strategies concentrate on syntactic threats, such as SQL injection or Cross-Site Scripting (XSS), where defenders search for specific code formations or malicious characters. These threats are generally deterministic and simpler to filter using firewalls.

Conversely, vulnerabilities in Large Language Models (LLMs) are semantic in nature. The malicious payload utilized in the Gemini assault consisted of straightforward English sentences.

The instruction to “summarize meetings” is not intrinsically hazardous code; it poses a threat only when the AI comprehends the intent and executes it with elevated privileges. This complicates detection for traditional security instruments that depend on pattern matching, as the attack appears linguistically similar to a legitimate user request.

Following the responsible disclosure by the Miggo research team, Google’s security personnel validated the findings and implemented a solution to alleviate the vulnerability.

“`