What is Generative AI Security?

Generative AI security is the set of practices and controls that keep large language models (LLMs) and other content-producing AI systems safe from misuse, manipulation, or data exposure. It focuses on protecting the algorithms, training data, and outputs so the technology…

Read More

The AI Supply Chain: Lessons from the Drift Incident

The first major AI-adjacent SaaS supply-chain breach has arrived. In August 2025, attackers exploited integrations tied to Salesloft’s Drift app, an AI chatbot and sales automation assistant, to compromise OAuth tokens and pivot into Salesforce and Google Workspace. This was not…

Read More

Acuvity vs SASE/CASB: Choosing the Right Tool for Securing Generative AI

Background: As generative AI becomes embedded across modern enterprise workflows, organizations are under pressure to address a fast-evolving risk landscape. From employees using ChatGPT to AI agents operating autonomously, the security perimeter has shifted and traditional data governance tools are not…

Read More

OpenAI’s MCP Integration: Power Meets Peril in the Age of Connected AI

The Game-Changing Launch That Should Make Security Teams Nervous: OpenAI just launched “Developer Mode” for ChatGPT, giving Plus and Pro subscribers full read-and-write access to external tools via the Model Context Protocol (MCP). The company itself describes the feature as “powerful…

Read More

What is Shadow AI?

Shadow AI refers to employees using artificial intelligence tools—often generative AI—without approval or oversight from IT, security, or compliance teams. These unsanctioned tools can expose sensitive data, create compliance gaps, and weaken security controls. Understanding what Shadow AI is, why it spreads, and how to manage it is now a critical priority for CIOs, CISOs, and governance leaders.

Read More

Lessons from Cloud Security: Why Detection Alone Fails for AI

Every major shift in enterprise technology brings a scramble to secure it. When companies moved to the cloud, security teams invested heavily in tools that promised visibility and control. What they got instead was an overwhelming flood of detections. Misconfigurations, anomalous…

Read More

Why Shadow AI Is a Compliance Problem

Your employees are already using AI tools at work. While you’re still figuring out your company’s AI strategy, they’ve moved ahead without you. And they’re creating serious security and compliance risks in the process. This blog explores the growing threat of…

Read More

AI Misuse in the Wild: Inside Anthropic’s August Threat Report

Anthropic released its August 2025 threat intelligence report, adding to a growing body of evidence that artificial intelligence is now deeply embedded in criminal operations. Security researchers have long anticipated this shift, but the specificity of the examples in this report makes…

Read More

Key Takeaways from IBM’s 2025 Cost of a Data Breach Report

For 20 years, IBM’s Cost of a Data Breach Report has been one of the industry’s most trusted sources on the financial and operational impact of security incidents. Each edition offers a rare combination of breadth and depth, spanning hundreds of breaches across industries and geographies,…

Read More

Tool Poisoning: Hidden Instructions in MCP Tool Descriptions

Imagine installing a seemingly benign math tool on your AI assistant that simply adds two numbers. Unbeknownst to you, the tool’s description itself contains hidden directives intended for the AI model. These malicious instructions are invisible or inconspicuous to the user,…

Read More