Generative AI security is the set of practices and controls that keep large language models (LLMs) and other content-producing AI systems safe from misuse, manipulation, or data exposure. It focuses on protecting the algorithms, training data, and outputs so the technology…
The AI Supply Chain: Lessons from the Drift Incident
The first major AI-adjacent SaaS supply-chain breach has arrived. In August 2025, attackers exploited integrations tied to Salesloft’s Drift app, an AI chatbot and sales automation assistant, to compromise OAuth tokens and pivot into Salesforce and Google Workspace. This was not…
Acuvity vs SASE/CASB: Choosing the Right Tool for Securing Generative AI
Background As generative AI becomes embedded across modern enterprise workflows, organizations are under pressure to address a fast-evolving risk landscape. From employees using ChatGPT to AI agents operating autonomously, the security perimeter has shifted and traditional data governance tools are not…
OpenAI’s MCP Integration: Power Meets Peril in the Age of Connected AI
The Game-Changing Launch That Should Make Security Teams Nervous OpenAI just launched “Developer Mode” for ChatGPT, giving Plus and Pro subscribers full read-and-write access to external tools via the Model Context Protocol (MCP). The company itself describes the feature as “powerful…
What is Shadow AI?
Shadow AI refers to employees using artificial intelligence tools—often generative AI—without approval or oversight from IT, security, or compliance teams. These unsanctioned tools can expose sensitive data, create compliance gaps, and weaken security controls. Understanding what Shadow AI is, why it spreads, and how to manage it is now a critical priority for CIOs, CISOs, and governance leaders.
Lessons from Cloud Security: Why Detection Alone Fails for AI
Every major shift in enterprise technology brings a scramble to secure it. When companies moved to the cloud, security teams invested heavily in tools that promised visibility and control. What they got instead was an overwhelming flood of detections. Misconfigurations, anomalous…
Why Shadow AI Is a Compliance Problem
Your employees are already using AI tools at work. While you’re still figuring out your company’s AI strategy, they’ve moved ahead without you. And they’re creating serious security and compliance risks in the process. This blog explores the growing threat of…
AI Misuse in the Wild: Inside Anthropic’s August Threat Report
Anthropic released its August 2025 threat intelligence report, adding to a growing body of evidence that artificial intelligence is now deeply embedded in criminal operations. Security researchers have long anticipated this shift, but the specificity of the examples in this report makes…
Key Takeaways from IBM’s 2025 Cost of a Data Breach Report
For 20 years, IBM’s Cost of a Data Breach Report has been one of the industry’s most trusted sources on the financial and operational impact of security incidents. Each edition provides a rare combination of breadth, spanning hundreds of breaches across industries and geographies,…
Tool Poisoning: Hidden Instructions in MCP Tool Descriptions
Imagine installing a seemingly benign math tool on your AI assistant that simply adds two numbers. Unbeknownst to you, the tool’s description itself contains hidden directives intended for the AI model. These malicious instructions are invisible or inconspicuous to the user,…