New Prompt-Injection Attack on Microsoft Copilot Exposed with Just One Click

Varonis reveals a new prompt-injection method that compromises Microsoft Copilot, prompting a swift response from Microsoft.

Updated Jan 15, 2026
New Prompt-Injection Attack on Microsoft Copilot Exposed with Just One Click
Andrew Wallace

Andrew Wallace

Professional Tech Editor

Focuses on professional-grade hardware, software, and enterprise solutions.

  • Varonis discovers a new prompt-injection method via malicious URL parameters, termed “Reprompt.”
  • Attackers can deceive Generative AI tools into revealing sensitive information with a single click.
  • Microsoft has patched the vulnerability, preventing prompt injection attacks through URLs.

Security researchers at Varonis have identified Reprompt, a novel technique for executing prompt-injection attacks on Microsoft Copilot that does not rely on sending emails with hidden prompts or embedding malicious commands in compromised websites.

Similar to other prompt injection methods, this attack requires only a single click.

Prompt injection attacks involve cybercriminals injecting prompts into Generative AI tools, tricking them into disclosing sensitive data. These attacks exploit the tool's inability to differentiate between prompts meant for execution and data intended for reading.

Prompt Injection via URLs

Typically, prompt injection attacks occur when a victim uses an email client with embedded Generative AI (such as Gmail with Gemini). The victim receives an innocuous-looking email containing a concealed malicious prompt, which may be formatted in white text on a white background or reduced to a font size of zero.

When the victim instructs the AI to read the email (for instance, to summarize key points or check for meeting invitations), the AI inadvertently reads and executes the hidden prompt. These prompts can instruct the AI to exfiltrate sensitive data from the inbox to a server controlled by the attackers.

Varonis has discovered a similar method—prompt injection through URLs. Attackers can append a long series of detailed instructions in the form of a query parameter at the end of an otherwise legitimate link.

An example of such a link is: http://copilot.microsoft.com/?q=Hello

Copilot (and many other LLM-based tools) interpret URLs with a query parameter as input text, akin to user-typed prompts. In their experiments, Varonis successfully leaked sensitive data that the victim had previously shared with the AI.

Varonis reported its findings to Microsoft, which promptly addressed the issue, making prompt injection attacks via URLs no longer feasible.

React to this story

Related Posts