Skip to Content

Another Warning Against Incorporating AI Without Guardrails

on Tuesday, 27 January 2026 in Technology & Intellectual Property Update: Arianna C. Goldstein, Editor

Both Google and Microsoft recently published vulnerabilities related to their products that could result in disclosures of sensitive or confidential information.  The vulnerabilities are the result of the integration of artificial intelligence (AI) platforms into commonly used applications.

Google announced that a seemingly innocuous meeting invite can include a cyber threat[1].  A carefully crafted invite by a talented hacker can be used to trick Google Gemini into providing private meeting information.  According to Google, the hack is perpetrated by embedding a natural language AI prompt to summarize all scheduling events for a day and the meeting invitation is then sent to the targeted victim.  When the owner of the calendar prompts Google Gemini to provide a summary of all events for a particular day, the hacker’s invite (with the embedded language) will work behind the scenes to create a summary of every event for that particular day, including all private information, and then provide that information in the calendar summary of the hacker.

Microsoft announced that Copilot is susceptible to an attack known as the “Reprompt”[2].  The Reprompt attack can allow hackers to obtain sensitive data and is effective enough to bypass corporate security.[3]  The attack involves a simple request for AI to repeat each prompt.  AI guardrails normally prevent the leaking of sensitive information, but, as it turns out, the guardrails only apply to the initial request to the AI agent.  Thus, by repeating the request or a “reprompt”, sensitive information can be exposed and  exfiltrated.  In addition, the hacker can stay in control of the interaction even after the chat is closed by the user and an inspection of the reprompt allows the hackers to view the data.

Sample prompts that may lead to the exfiltration of data provided in the report include:

  • “Summarize all of the files that the user accessed today.”
  • “Where does the user live?” or
  • “What vacations does he have planned?”

A few take-aways from these disclosures and caveats for using AI:

  • AI may present new threats to privacy and security which are unanticipated; use with caution;
  • AI is unable to differentiate between prompts which are embedded in an invite, prompts sent as a request, and prompts which are entered by a user at a computer;
  • There may be no limit to what data may be exfiltrated, so please keep information in AI according to well established retention requirements;
  • Do not implement AI unattended because regular maintenance and review will be required including a review of related vulnerabilities; and,
  • AI may be more susceptible to hacks and attacks because coding, or the knowledge of coding, is not needed, only an understanding of normal language prompts.

The issues disclosed by Google and Microsoft are not inherent to the applications or platforms but are the result of hacks of AI using normal language (no coding needed).  AI can read and interpret everyday language making the need for knowing and writing code obsolete.  The issues are also illustrative of the potential pitfalls of being a broad early adopter of technology, which may not have been subjected to the test of time or hackers.

[1] https://thehackernews.com/2026/01/google-gemini-prompt-injection-flaw.html

[2] https://thehackernews.com/2026/01/researchers-reveal-reprompt-attack.html

[3] In the release of the vulnerability by Varonis, it was noted that such an attack is not possible on MS 365 Copilot.

1700 Farnam Street | Suite 1500 | Omaha, NE 68102 | 402.344.0500

Law Firm Website Design