Ground Your Agents Faster with Native Azure AI Search Indexing in Foundry
Farzad Sunavala explains how Azure AI Foundry now lets developers instantly create Azure AI Search vector indexes from major cloud data sources, simplifying agent grounding and accelerating AI projects.
Author: Farzad Sunavala
Overview
Azure AI Foundry introduces a streamlined process that allows developers to natively create an Azure AI Search vector index during the agent “Add knowledge” workflow. With this new capability, ingestion from Azure Blob Storage, ADLS Gen2, or Microsoft OneLake is seamless, requiring no pre-existing index or manual pipeline setup. The indexing flow tightly integrates embedding model selection and hybrid vector-plus-keyword retrieval, all protected by Azure RBAC and network isolation features.
Why This Matters
Previously, grounding an AI agent demanded manual setup of an Azure AI Search index, custom schema definition, and pipeline work, all of which slowed experimentation and integration. With Foundry’s enhancement, developers can:
- Choose a data source from supported Azure storage options
- Select an Azure OpenAI embedding model (e.g., text-embedding-*)
- Initiate automatic ingestion, document chunking, embedding, and index creation, all with a single click
- Enable their agents to answer grounded, enterprise-specific questions instantly
Key Capabilities
- Inline index creation: No need to bring or configure a pre-existing Azure AI Search index.
- Automatic ingestion: Content is pulled and prepared for embeddings automatically.
- Embedding model selection: Choose models during creation for tailored results.
- Hybrid-ready: Index supports combined vector and keyword search.
- Security: Built-in respect for Azure RBAC and network isolation of data.
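To make “hybrid-ready” concrete: a hybrid query runs a vector-similarity search and a keyword search in parallel, then merges the two rankings with Reciprocal Rank Fusion (RRF), which is the fusion method Azure AI Search uses. The sketch below is a toy illustration of that idea in plain Python, not the service’s implementation; the document shapes and scoring helpers are hypothetical.

```python
import math

def cosine(a, b):
    # Vector-similarity side of a hybrid query.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Keyword side: crude term-frequency overlap stands in for BM25.
    terms = query.lower().split()
    words = text.lower().split()
    return sum(words.count(t) for t in terms)

def hybrid_search(query_text, query_vector, docs, k=60):
    # Rank documents separately by each signal, then fuse the ranks
    # with RRF: each document earns 1/(k + rank) from every ranking.
    by_vec = sorted(docs, key=lambda d: cosine(query_vector, d["vector"]), reverse=True)
    by_kw = sorted(docs, key=lambda d: keyword_score(query_text, d["text"]), reverse=True)
    scores = {}
    for ranking in (by_vec, by_kw):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc["id"]] = scores.get(doc["id"], 0.0) + 1.0 / (k + rank)
    return sorted(docs, key=lambda d: scores[d["id"]], reverse=True)
```

The practical benefit is that a document can surface on either signal: semantically similar passages with no exact term overlap, or exact matches (IDs, product names) that embeddings sometimes miss.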
Supported Data Sources (Initial Wave)
- Azure Blob Storage
- Azure Data Lake Storage (ADLS) Gen2
- Microsoft OneLake (Fabric)
How To Use It
- Open or create an agent in Azure AI Foundry.
- Click Add knowledge.
- Choose your preferred data source (Blob / ADLS Gen2 / OneLake).
- Authorize the data source if required and select relevant containers or file paths.
- Select an Azure OpenAI embedding model.
- Hit Create index & ingest.
- Foundry ingests, chunks, embeds, and provisions an Azure AI Search index for your agent.
- Your agent is now immediately ready to answer grounded questions using your data.
No extra pipelines, no schema hand-editing—just connect, select, and deploy.
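Behind the “Create index & ingest” step, documents are split into overlapping chunks before each chunk is embedded and indexed. Foundry manages this for you, so the following is only a minimal sketch of the common fixed-size-with-overlap chunking strategy; the function name and parameter values are illustrative, not Foundry’s actual defaults.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Split text into fixed-size chunks that overlap, so sentences
    # cut at a boundary still appear whole in an adjacent chunk.
    # (Illustrative only; Foundry handles chunking automatically.)
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each resulting chunk would then be passed to the selected embedding model and written to the Azure AI Search index as a separate retrievable document.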
Additional Resources
- How to create an Azure AI Search index in Foundry (Tutorial)
- Azure AI Search Concepts
- Hybrid Retrieval Overview
- Embeddings Models in Foundry
- Agentic Retrieval Updates in Azure AI Search
Developers are encouraged to try out the new workflow and share their experiences and launches using #AzureAIFoundry.
This post appeared first on “Microsoft AI Foundry Blog”. Read the entire article here