- OpenAI’s new “apps” feature enables ChatGPT to connect with external services like email and storage
- Radware discovered “ZombieAgent,” a prompt injection flaw allowing hidden commands to exfiltrate or propagate data
- Exploits include zero-click, one-click, persistence, and worm-like propagation; OpenAI patched it December 16
OpenAI recently launched a new feature for ChatGPT, which, unfortunately, increases the risk of data exfiltration and unauthorized access.
In December 2025, the feature called Connectors transitioned from beta to general availability, allowing ChatGPT to integrate with various applications like calendars, cloud storage, and email accounts, enhancing its contextual understanding and response relevance.
However, this feature, now referred to as ‘apps’, has introduced a significant vulnerability known as prompt injection attacks, as highlighted by security researchers at Radware.
Four Methods of Abuse
Radware has named this vulnerability 'ZombieAgent', which is similar to vulnerabilities observed in other GenAI tools.
By connecting ChatGPT to Gmail, for instance, the tool can read incoming emails and provide contextual responses regarding conversations, scheduled calls, and pending invitations.
However, an incoming email may contain a concealed malicious prompt, such as text written in white font on a white background or with a font size of zero. While invisible to the human eye, it remains detectable by the machine.
If a user requests ChatGPT to read that email, the tool could execute those hidden commands without the user's knowledge or consent. These commands could range from exfiltrating sensitive data to a third-party server to using the inbox for further propagation.
Radware identified four methods of exploiting ZombieAgent: a zero-click server-side attack (where the malicious prompt is embedded in the email, allowing ChatGPT to exfiltrate data before the user sees it), a one-click server-side attack (where the prompt is in a file that the user must upload), gaining persistence (a malicious command designed to be stored in ChatGPT’s memory), and propagation (where the malicious prompt spreads further, akin to a worm).
Radware reported that OpenAI addressed this issue on December 16, although specific details were not disclosed.




