## Why do we need Zero Trust to secure AI initiatives?
Traditional perimeter-based security tools like firewalls, IDS/IPS, and network DLP were designed for predictable, deterministic traffic patterns and on-premises environments. They struggle with modern AI scenarios for several reasons:
1. **AI traffic is encrypted and abstracted**
AI services typically use encrypted channels for privacy and security. This means network tools have limited visibility into what data is being sent, how it’s being used, or whether an AI interaction is risky. AI also operates at the data and application layer, not just the network layer, so you need controls that understand users, data, and business context.
2. **AI behavior is dynamic, not static**
Many AI systems, especially GenAI, generate different outputs in response to the same prompt. Signature- and pattern-based network defenses are tuned for repeatable behavior and static indicators, so they are poorly suited to monitoring or governing dynamic AI interactions.
3. **Data moves beyond the network perimeter**
AI applications and data now live across cloud services, SaaS platforms, mobile devices, and partner ecosystems. Network-centric controls can’t follow data onto USB drives, personal devices, or third‑party SaaS tools. A data-centric and asset-centric approach is needed.
Zero Trust addresses these gaps by:
- **Verifying explicitly**: Every access request is evaluated using all available signals (identity, device health, location, data sensitivity, behavior) rather than assuming anything inside a network is trusted.
- **Using least privilege access**: Access is limited to just-enough and just-in-time, often with adaptive policies. This reduces the blast radius if an identity, device, or app is compromised.
- **Assuming breach**: The model starts from the premise that attackers can and will get in somewhere—identity, device, app, or infrastructure—and designs controls to contain and detect that.
For AI specifically, Zero Trust’s **asset-centric and data-centric** focus is key. It:
- Protects AI applications from being used as an entry point or “beachhead” for broader attacks.
- Ensures that sensitive training data, prompts, and outputs are governed by classification, encryption, and access policies.
- Works consistently across cloud, on-premises, and hybrid environments.
In short, as AI reshapes how data is used and where it lives, Zero Trust provides a practical framework to secure AI applications and the data they depend on, in ways that traditional perimeter defenses cannot.
## How does AI change our data security and governance priorities?
AI, and particularly generative AI (GenAI), significantly raises the stakes for data security and governance. It doesn’t just add another use case for data; it reshapes how valuable that data is and how exposed it can become.
Here’s how AI changes the picture:
1. **Data becomes more valuable—to you and to attackers**
- GenAI can extract insights, patterns, and summaries from large volumes of data, turning existing data into a more direct driver of business outcomes and profitability.
- High‑quality, original enterprise data becomes a critical asset for training and tuning AI models, especially as public internet data becomes noisier and less reliable.
- That same data is now more attractive to attackers, who may want to train their own models, sell the data, or use it for fraud and extortion.
2. **Existing data security gaps get amplified**
Many organizations have postponed deep data classification and protection work in favor of other priorities (like cloud identity, DevOps security, or SOC modernization). AI brings those deferred tasks to the forefront:
- If sensitive documents (e.g., salary data, M&A plans, product roadmaps) are not properly classified and protected, AI applications may surface them to users who should not see them.
- For example, an internal user might ask an AI assistant about executive compensation or secret projects. If the underlying data isn’t labeled and access rules aren’t enforced, the AI could respond with details that were previously hard to find, even if technically accessible.
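This kind of exposure can be reduced by enforcing access rules at retrieval time, before any document ever reaches the model. The following is a simplified sketch in which unlabeled content fails closed; the label taxonomy and clearance mapping are illustrative assumptions, not a standard:

```python
# Illustrative sensitivity ranking; real deployments would use the
# organization's own label taxonomy and the user's actual permissions.
LABEL_RANK = {"public": 0, "internal": 1, "confidential": 2}

def filter_for_user(documents: list[dict], user_clearance: str) -> list[dict]:
    """Return only the documents the user is already entitled to see.

    Each document dict carries a 'label' key; anything unlabeled is
    treated as confidential (fail closed), so unclassified data is
    never surfaced by the AI assistant.
    """
    max_rank = LABEL_RANK[user_clearance]
    return [
        doc for doc in documents
        if LABEL_RANK.get(doc.get("label", "confidential"), 2) <= max_rank
    ]

docs = [
    {"id": "roadmap.pptx", "label": "confidential"},
    {"id": "handbook.pdf", "label": "internal"},
    {"id": "salaries.xlsx"},  # unlabeled: treated as confidential
]
visible = filter_for_user(docs, "internal")  # only handbook.pdf survives
```

Note that the unlabeled salary file is withheld even though no one explicitly marked it confidential; this is exactly why classification work cannot be deferred once AI makes data easy to find.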
3. **Discoverability increases risk**
One of GenAI’s strengths is making information easier to find. That’s a benefit for productivity, but it also means:
- Data that was “secure by obscurity” (hard to locate, buried in file shares or email) can suddenly become easily discoverable.
- Users who already had broad access but lacked time or search skills can now quickly surface sensitive content through natural language queries.
4. **Data ownership, lineage, and privacy become more complex**
- AI often combines multiple data sources, which can blur lines around who owns what, who is accountable, and how privacy and IP rights are enforced.
- Accidental disclosures can occur through model training or retrieval-augmented generation (RAG) if data lineage and access controls are not clear.
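One practical mitigation is to keep source and ownership metadata attached to every chunk that enters a retrieval pipeline, so any disclosure can be traced back to its origin and owner. A minimal sketch follows; the `Chunk` and `Answer` fields are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    source: str   # originating repository or file
    owner: str    # accountable data owner
    label: str = "internal"

@dataclass
class Answer:
    text: str
    citations: list[str] = field(default_factory=list)

def answer_with_lineage(chunks: list[Chunk], generated_text: str) -> Answer:
    # Record the lineage of every chunk used to ground the answer,
    # so reviewers can audit where each piece of content came from.
    return Answer(text=generated_text,
                  citations=[f"{c.source} (owner: {c.owner})" for c in chunks])
```

Carrying lineage through to the final answer makes accountability questions answerable after the fact, which is much harder to retrofit once sources have been blended.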
What this means for your priorities:
- **Elevate data classification and labeling** from a “nice to have” to a core prerequisite for AI. You need clear policies and consistent technical enforcement across repositories and AI applications.
- **Apply data-centric controls** such as encryption, sensitivity labels, and access policies that travel with the data, regardless of where it’s stored or accessed.
- **Align AI applications with your identity and permissions model** so that AI respects existing access rights, retention policies, and audit requirements.
- **Use AI to help with data security**: AI can assist in discovering sensitive data, identifying unusual data movement, and highlighting where access is too broad or misaligned with policy.
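The last point, spotting unusual data movement, can start as simply as baselining per-user transfer volumes and flagging outliers. This is a toy z-score check for illustration only; the threshold and approach are assumptions, and production systems would use richer behavioral models:

```python
from statistics import mean, stdev

def flag_unusual_movement(daily_mb: list[float], today_mb: float,
                          z_threshold: float = 3.0) -> bool:
    """Flag data movement far outside a user's historical baseline.

    daily_mb is the user's past daily transfer volumes in megabytes;
    today_mb is flagged when it sits more than z_threshold standard
    deviations above the historical mean.
    """
    if len(daily_mb) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(daily_mb), stdev(daily_mb)
    if sigma == 0:
        return today_mb > mu
    return (today_mb - mu) / sigma > z_threshold
```

Even this crude signal illustrates the principle: access that is technically permitted can still be anomalous, and anomaly detection complements (rather than replaces) classification and access policy.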
In practice, AI forces organizations to move data security and governance from the backlog to the front of the roadmap. The same capabilities that make AI valuable for the business also make robust data protection and governance non‑negotiable.
## What is the AI security shared responsibility model?
Securing AI is a joint effort between your organization and your AI providers, similar to the cloud shared responsibility model. You don’t own every control, and your provider doesn’t either. Understanding where responsibilities sit is essential for planning investments, controls, and governance.
You can think of AI security across three layers:
1. **AI platform** – The underlying infrastructure, models, and platform services (for example, Azure AI or the platform behind Copilot).
2. **AI application** – The software your organization builds or configures to use AI productively and securely (e.g., custom apps, plugins, integrations).
3. **AI usage** – How people in your organization actually use AI, including what data they provide and how they interpret and act on outputs.
How responsibilities typically break down by deployment model:
### 1. Infrastructure as a Service (IaaS) – “Build your own model”
- **Provider (e.g., Microsoft on Azure)**:
- Secures the core cloud infrastructure (compute, storage, networking).
- Provides baseline security capabilities and compliance controls.
- **Your organization**:
- Designs, trains, and secures the AI models themselves.
- Manages training data governance, including quality, lineage, and protection.
- Secures the applications that use the models (identity, access, input/output handling).
- Implements monitoring, incident response, and risk management for the AI stack.
### 2. Platform as a Service (PaaS) – “Build on Azure AI”
- **Provider**:
- Delivers and secures the AI platform and many embedded safety and security controls.
- Manages model hosting, scaling, and core platform reliability.
- **Your organization**:
- Builds and secures the custom application layer (business logic, plugins, integrations).
- Governs how data flows into and out of the AI services.
- Configures and enforces identity, access, and data protection policies.
- Trains users and defines acceptable use.
### 3. Software as a Service (SaaS) – “Use managed AI like Copilot”
- **Provider**:
- Operates and secures the full AI service, including model safety, platform controls, and core application features.
- Implements built‑in protections such as respecting identity models, sensitivity labels, retention policies, and audit capabilities (for example, how Microsoft Copilot is designed).
- **Your organization**:
- Decides how and where the service is used in the business.
- Manages user identities, roles, and access rights.
- Applies data security and governance policies to the content the service can reach.
- Provides user training, usage policies, and oversight.
Across all models, your organization consistently owns:
- **AI usage**: user training, acceptable use policies, and escalation paths for suspicious activity.
- **Identity and access management**: who can use which AI capabilities and what data they can reach.
- **Data security and governance**: classification, labeling, encryption, and lifecycle management of your data.
To operationalize this model, it helps to focus on three control pillars:
1. **Data access controls** – APIs, ACLs, and labeling to govern who and what can access which data.
2. **Application controls** – Policies and safeguards that manage how applications interact with data and models (e.g., plugin governance, input/output filtering).
3. **AI model controls** – Measures to prevent unintended disclosures or misuse at the model level, including safety systems and monitoring.
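As a concrete and deliberately simplified illustration of the second and third pillars, an AI application might gate plugin invocation through an allow-list and scrub model output for patterns that must never be disclosed. The plugin names and the redaction pattern below are hypothetical examples, not a prescribed control set:

```python
import re

# Application controls: only governed plugins may be invoked.
ALLOWED_PLUGINS = {"calendar", "ticketing"}

# AI model controls: patterns that should never appear in output,
# regardless of what the model generated (here, US SSN-like strings).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def call_plugin(name: str) -> bool:
    # Deny by default; anything outside the allow-list is rejected.
    return name in ALLOWED_PLUGINS

def redact_output(text: str) -> str:
    # Scrub disallowed patterns from model output before it reaches users.
    return SSN_PATTERN.sub("[REDACTED]", text)
```

Data access controls (the first pillar) would sit in front of both of these, deciding which content the application and model can reach in the first place.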
By mapping these responsibilities clearly with your AI providers, you can avoid gaps, reduce duplicated effort, and build AI systems that are both productive and resilient.