Latest HubSpot AI = Breeze Agents
HubSpot customers using Breeze Customer Agent are resolving over 50% of support tickets automatically while spending nearly 40% less time closing the...
If you needed proof that we're moving too fast and breaking critical things, look no further than ShadowLeak—the latest AI security exploit that makes traditional cyberattacks look quaint by comparison. This week, security researchers at Radware demonstrated how a single malicious email can trick OpenAI's Deep Research agent into silently exfiltrating sensitive Gmail data with zero user interaction, zero network traces, and zero hope of detection by conventional security tools.
Welcome to the future, where your AI assistant can be turned against you by someone who simply knows how to write persuasive English. We're officially in the "move fast and break democracy" phase of AI development, and nobody seems interested in pumping the brakes.
ShadowLeak weaponizes what should be AI's greatest strength: the ability to follow complex instructions. The attack works by embedding malicious prompts in seemingly innocuous emails using invisible white-on-white text or microscopic fonts. When Deep Research—ChatGPT's autonomous research agent—processes the victim's inbox, it encounters these hidden instructions and dutifully executes them without the user's knowledge.
Here's the kicker: the entire operation happens on OpenAI's cloud infrastructure, not the victim's device. There's no suspicious network traffic from your laptop, no malware to detect, just a benign-looking query from you asking ChatGPT to "summarize today's emails." Your security team has no idea your data is being exfiltrated because the theft originates from OpenAI's own servers.
The researchers had to craft increasingly sophisticated social engineering prompts to overcome AI safeguards, complete with fake urgency, false authority claims, and detailed instructions for encoding stolen data. After extensive trial and error, they achieved a 100% success rate in exfiltrating Gmail data using the ShadowLeak method.
Don't think this stops at email. Deep Research integrates with Gmail, Google Drive, Dropbox, SharePoint, Outlook, Teams, GitHub, HubSpot, and Notion. The same attack vectors work across any of these connectors, meaning attackers can potentially exfiltrate contracts, meeting notes, customer records, API keys, and other sensitive business data.
As Pascal Geenens, Radware's director of cyber threat intelligence, noted: "Since it is an AI agent, once you can trick it into believing you, you can ask it to do pretty much anything. For example, one could ask the ChatGPT agent if it is running as Deep Research. If so, ask the agent if it has access to GitHub resources and if it does, compile a list of all API secret keys and post it to a website for review."
This isn't theoretical. We're talking about real attacks with immediate business consequences. And OpenAI just announced expanded beta support for Model Context Protocol (MCP) servers, potentially exposing AI agents to tens of thousands of community-provided data sources—each a potential attack vector.
Here's what really grinds my gears: this isn't a surprise. Prompt injection is the number one security vulnerability on the OWASP Top 10 for LLM Applications. We've known about these attacks since 2022, when researchers first demonstrated that AI models could be manipulated through carefully crafted instructions.
The problem is fundamental: prompt injections target the model's instruction-following logic itself, exploiting an intrinsic vulnerability where application instructions aren't fully separated from user input. Unlike traditional cybersecurity attacks that exploit code vulnerabilities, prompt injection requires no specialized technical skills—just the ability to craft persuasive language.
Yet the industry's response has been cosmetic at best. Microsoft 365 Copilot already had a high-severity vulnerability (CVE-2025-32711, CVSS score 9.3) involving AI command injection that could allow attackers to steal sensitive data over networks. It was patched in June, but the broader security implications remain unaddressed.
The shift toward autonomous AI agents represents a fundamental change in the threat landscape. As Nick Turley, VP of product for ChatGPT, noted in August, ChatGPT has 5 million paying business users. That's 5 million potential attack vectors where a single malicious email could compromise sensitive corporate data.
As Michael Bargury, CTO of Zenity, explained: "We demonstrated memory persistence and how attackers can silently hijack AI agents to exfiltrate sensitive data, impersonate users, manipulate critical workflows, and move across enterprise systems, bypassing the human entirely. Attackers can compromise your agent instead of targeting you, with similar consequences."
The problem compounds with every new integration. Recent research has shown how prompt injections can be used to hijack smart home systems, manipulate AI code editors, and trigger zero-click attacks across multiple enterprise tools simultaneously. We're creating digital entities with broad system access and fundamental security vulnerabilities—what could go wrong?
The speed of AI deployment is inversely proportional to our security preparedness. According to IBM research, 96% of business leaders believe that adopting generative AI makes a security breach more likely. Yet companies are racing to integrate AI agents with their most sensitive data sources.
Perplexity's Comet browser—launched just weeks ago—already has documented vulnerabilities that "underline the security challenges faced by agentic AI implementations." The company has attempted twice to fix the core prompt injection issues but still hasn't fully mitigated these attacks.
This is emblematic of the entire industry. Ship first, patch later, hope nobody notices the fundamental architectural problems that can't be fixed with a software update.
The ShadowLeak attack is "nearly impossible to detect by the impacted organization" because sensitive data leaks directly from the AI provider's infrastructure, not the client device. Enterprise defenses simply can't see attacks that originate from trusted cloud services.
As Palo Alto Networks researchers noted, AI agents inherit all the security risks of LLMs while adding new attack surfaces through external tool integrations. The combination creates "an expanded attack surface, combined with the agent's ability to interact with external systems or even the physical world."
Traditional security tools are designed for a world where threats come from outside your organization. When your own AI tools become the attack vector, those defenses become irrelevant.
Here's the part nobody wants to admit: prompt injections may be fundamentally unfixable. As IBM researchers noted, "prompt injections have proved impossible to prevent" because they exploit the core functionality of LLMs—their ability to follow natural language instructions.
The attack surface is constantly changing. Every new model release introduces fresh behaviors and new vulnerabilities. Attackers continue to refine their methods, making robust defenses essential, but the defensive approaches must "adapt in real time" to keep pace.
We're building mission-critical systems on technology with known, unfixable vulnerabilities. It's like constructing skyscrapers on quicksand and being surprised when they sink.
OpenAI patched the specific ShadowLeak vulnerability in September, but Pascal Geenens warned that "there is still a fairly large threat surface that remains undiscovered." This wasn't a one-off bug—it's a symptom of systemic problems with autonomous AI systems.
As Radware's Pascal Geenens explained: "Enterprises adopting AI cannot rely on built-in safeguards alone to prevent abuse. Our research highlights that the combination of AI autonomy, SaaS services and integration with customers' sensitive data sources introduces an entirely new class of risks."
The solution isn't more AI—it's treating AI agents like the high-risk, high-privilege systems they actually are. That means comprehensive access controls, continuous monitoring, and accepting that some integrations are simply too dangerous to implement.
ShadowLeak should be the industry's Chernobyl moment—a clear signal that we need to fundamentally rethink AI security before deploying these systems at scale. Instead, it will likely be treated as just another vulnerability to patch while the underlying problems remain unaddressed.
We need security-first AI development, not AI-first security solutions. We need to acknowledge that some AI capabilities are too dangerous to deploy without robust safeguards that don't exist yet. And we need to stop treating AI security as an afterthought to be addressed once we've captured market share.
The technology industry's "move fast and break things" mantra made sense when we were breaking websites. When we're breaking fundamental security assumptions that protect sensitive data and critical infrastructure, it becomes criminal negligence.
ShadowLeak isn't just another security vulnerability—it's a preview of the chaos we're creating by prioritizing capability over security. The question isn't whether we'll see more attacks like this, but whether we'll learn from them before it's too late.
Ready to secure your organization against AI-specific threats that traditional security can't catch? Winsome Marketing's growth experts help businesses develop comprehensive AI security strategies that actually work.
HubSpot customers using Breeze Customer Agent are resolving over 50% of support tickets automatically while spending nearly 40% less time closing the...
5 min read
Google's announcement of AI Ultra at $249.99 per month represents more than just another premium subscription tier—it's the smoking gun that...
Sam Altman just announced that OpenAI will roll out a personalization hub for ChatGPT within the next couple of days, consolidating previously...