LouAdesida reviews Microsoft’s expanded security approach for AI, explaining how tools like Defender, Entra, and Purview are used to protect AI systems, data, and identities across modern organizations.

From Traditional Security to AI-Driven Cyber Resilience: Microsoft’s Approach to Securing AI

Author: LouAdesida

AI is transforming organizational workflows, enabling teams to work more efficiently—yet with adoption comes new security risks that traditional tools weren’t designed to address. This article outlines how Microsoft is shifting its security strategy to meet these new AI challenges.

Changing Security Risks with AI Adoption

As organizations integrate AI into coding, automation, fraud detection, and decision-making, security leaders find that legacy cybersecurity tools can’t fully address novel AI risks. Key distinctions include:

  • Unpredictable AI Behavior: AI models can misinterpret input, be manipulated through crafted prompts, leak sensitive training data, or act against business rules.
  • Agentic AI Risks: Agentic systems can act, decide, and modify other software or infrastructure, introducing new propagation risks if compromised.
  • Expanded Attack Surface: AI risks now touch data, networks, endpoints, applications, and cloud environments.

The Need for AI-Aware Cyber Resilience

To be resilient, organizations must:

  • Track use of sensitive data in AI training and inference
  • Govern digital (non-human) identities, including AI agents and copilots
  • Detect and mitigate AI misuse or manipulation
  • Align with emerging AI compliance and regulatory requirements (e.g., EU AI Act)

Microsoft’s Evolving Security Portfolio for AI

Microsoft’s approach builds on existing security products, extending them for AI usages:

1. Microsoft Defender: Protecting AI Workloads

  • Defender for Cloud: Secures AI workloads (Azure, AWS, GCP), monitors deployments, detects vulnerabilities.
  • Defender for Cloud Apps: Extends protection to AI-enabled edge applications.
  • Defender for APIs: Addresses prompt injection and model manipulation threats.
  • AI Red Teaming & Evaluation: Tools to test and continuously assess AI agent safety pre-deployment.

2. Microsoft Entra: Managing Non-Human Identities

  • Identity Oversight: Entra manages digital identities for AI agents and copilots.
  • Conditional Access: Restricts AI agent resource access based on context.
  • Privileged Identity Management: Monitors and controls high-risk privileges for automation agents.

3. Microsoft Purview: Safeguarding AI Data

  • Data Discovery & Classification: Identifies and tracks sensitive information in AI workflows.
  • Data Loss Prevention: Prevents leaks via Copilot or custom AI agents, especially using Azure AI Foundry.
  • Insider Risk Management: Detects unauthorized data exposure through AI systems.
  • Compliance Support: Extends regulatory policies to AI workloads for seamless governance.

Strategic Priority: Secure AI Proactively

Continuous AI evolution requires new security priorities. Microsoft recommends securing AI systems from the very first line of code, focusing on integrating identity, data, and endpoint protection to establish cyber resilience.

For more, see:


Updated Aug 13, 2025 — Version 1.0

Key Tags: Data Loss Prevention, Identity and Access Management, Microsoft Defender, Microsoft Entra, Microsoft Purview, AI Workloads, Compliance, Insider Risk Management, Red Teaming

This post appeared first on “Microsoft Tech Community”. Read the entire article here