Building Voice Agents with Azure Communication Services Voice Live API and Azure AI Agent Service

🎯 TL;DR: Real-time Voice Agent Implementation

This post walks through building a voice agent that connects traditional phone calls to Azure’s AI services. The system intercepts incoming calls via Azure Communication Services, streams audio in real-time to the Voice Live API, and processes conversations through pre-configured AI agents in Azure AI Studio. The implementation uses FastAPI for webhook handling, WebSocket connections for bidirectional audio streaming, and Azure Managed Identity for authentication (no API keys to manage). The architecture handles multiple concurrent calls on a single Python thread using asyncio.

Implementation details: audio resampling between 16kHz (ACS requirement) and 24kHz (Voice Live requirement), connection resilience for preview services, and production deployment considerations. Full source code and documentation are available here.
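
To make the resampling requirement concrete, here is a minimal sketch, not the project’s actual code, of converting between the two rates with the standard library’s audioop module (deprecated since Python 3.11 and removed in 3.13, so production code would use a dedicated resampler):

import audioop

ACS_RATE = 16_000         # Azure Communication Services streams PCM16 at 16 kHz
VOICE_LIVE_RATE = 24_000  # Voice Live expects PCM16 at 24 kHz
SAMPLE_WIDTH = 2          # bytes per sample (16-bit)
CHANNELS = 1              # mono

def upsample_to_voice_live(pcm16k: bytes, state=None):
    """Convert a chunk of 16 kHz PCM16 mono audio to 24 kHz.

    ratecv is stateful across chunks, so callers should feed the returned
    state back in with the next chunk of the same stream.
    """
    return audioop.ratecv(pcm16k, SAMPLE_WIDTH, CHANNELS,
                          ACS_RATE, VOICE_LIVE_RATE, state)

def downsample_to_acs(pcm24k: bytes, state=None):
    """Convert a chunk of 24 kHz PCM16 mono audio from Voice Live back to 16 kHz."""
    return audioop.ratecv(pcm24k, SAMPLE_WIDTH, CHANNELS,
                          VOICE_LIVE_RATE, ACS_RATE, state)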


Recently, I found myself co-leading an innovation project that pushed me into uncharted territory. The challenge? Developing a voice-based agentic solution with an ambitious goal - routing at least 25% of current contact center calls to AI voice agents. This was bleeding-edge stuff, with both the Azure Voice Live API and Azure AI Agent Service voice agents still in preview at the time of writing.

When you’re working with preview services, documentation is often sparse, and you quickly learn that reverse engineering network calls and maintaining close relationships with product teams become part of your daily routine. This blog post shares the practical lessons we learned and the working solution we built to integrate these cutting-edge services.

The Innovation Challenge

Building a voice agent system that could handle real customer interactions meant tackling several complex requirements:

  • Real-time voice processing with minimal latency
  • Natural conversation flow without awkward pauses
  • Integration with existing contact center infrastructure
  • Scalability to handle multiple concurrent calls
  • Reliability for production use cases

With both the Azure Voice Live API and the Azure AI Agent Service in preview, we were essentially building on shifting sands. But that’s what innovation is about - pushing boundaries and finding solutions where documentation doesn’t yet exist.

Understanding the Architecture

Our solution bridges Azure Communication Services (ACS) with Azure AI services to create an intelligent voice agent. Here’s how the pieces fit together:

graph TB
    subgraph "Phone Network"
        PSTN[📞 PSTN Number<br/>+1-555-123-4567]
    end
    subgraph "Azure Communication Services"
        ACS[🔗 ACS Call Automation<br/>Event Grid Webhooks]
        MEDIA[🎵 Media Streaming<br/>WebSocket Audio]
    end
    subgraph "Python FastAPI App"
        API["🐍 FastAPI Server<br/>localhost:49412"]
        WS[🔌 WebSocket Handler<br/>Audio Processing]
        HANDLER[⚡ Media Handler<br/>Audio Resampling]
    end
    subgraph "Azure OpenAI"
        VOICE[🤖 Voice Live API<br/>Agent Mode<br/>gpt-4o Realtime]
        AGENT[👤 Pre-configured Agent<br/>Azure AI Studio]
    end
    subgraph "Dev Infrastructure"
        TUNNEL[🚇 Dev Tunnel<br/>Public HTTPS Endpoint]
    end
    PSTN -->|Incoming Call| ACS
    ACS -->|Webhook Events| TUNNEL
    TUNNEL -->|HTTPS| API
    ACS -->|WebSocket Audio| WS
    WS -->|PCM 16kHz| HANDLER
    HANDLER -->|PCM 24kHz| VOICE
    VOICE -->|Agent Processing| AGENT
    AGENT -->|AI Response| VOICE
    VOICE -->|AI Response| HANDLER
    HANDLER -->|PCM 16kHz| WS
    WS -->|Audio Stream| ACS
    ACS -->|Audio| PSTN
    style PSTN fill:#ff9999
    style ACS fill:#87CEEB
    style API fill:#90EE90
    style VOICE fill:#DDA0DD
    style TUNNEL fill:#F0E68C

Core Components

  1. Azure Communication Services: Handles the telephony infrastructure, providing phone numbers and call routing
  2. Voice Live API: Enables real-time speech recognition and synthesis with WebRTC streaming
  3. Azure AI Agent Service: Provides the intelligence layer for understanding and responding to customer queries
  4. WebSocket Bridge: Our custom Python application that connects these services; a simplified skeleton follows below
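
To show how these pieces meet in code, here is a simplified skeleton of the bridge. The endpoint paths, the ACS JSON field names, and the commented-out VoiceLiveSession helper are illustrative stand-ins rather than the project’s actual API:

import base64
import json

from fastapi import FastAPI, Request, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.post("/api/callbacks")
async def call_events(request: Request):
    """ACS Call Automation posts call lifecycle events (call connected,
    media streaming started, ...) to this HTTPS webhook via the dev tunnel."""
    for event in await request.json():
        print("ACS event:", event.get("type"))
    return {"status": "ok"}

@app.websocket("/ws/audio")
async def audio_stream(ws: WebSocket):
    """ACS media streaming connects here and sends JSON frames whose
    AudioData payload carries base64-encoded 16 kHz PCM16 mono audio."""
    await ws.accept()
    # voice_live = await VoiceLiveSession.connect(...)  # hypothetical helper
    try:
        while True:
            frame = json.loads(await ws.receive_text())
            if frame.get("kind") == "AudioData":
                pcm16k = base64.b64decode(frame["audioData"]["data"])
                # Resample 16 kHz -> 24 kHz and forward to Voice Live here;
                # audio coming back takes the reverse path down to ACS.
    except WebSocketDisconnect:
        pass  # caller hung up; tear down the Voice Live session

Because each connected call is just a coroutine suspended on its own sockets, a single asyncio event loop on one thread can serve several concurrent calls, as noted in the TL;DR.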
Custom Voices in Azure OpenAI Realtime with Azure Speech Services

🎯 TL;DR: Hybrid GPT-4o Realtime with Azure Speech Services Custom Voices

This post demonstrates bypassing GPT-4o Realtime’s built-in voice limitations by creating a hybrid architecture that combines GPT-4o’s conversational intelligence with Azure Speech Services’ extensive voice catalog. The solution configures GPT-4o Realtime for text-only output (ContentModalities.Text) and routes responses through Azure Speech Services, enabling access to 400+ neural voices, custom neural voices (CNV), and SSML control. The implementation includes intelligent barge-in functionality using real-time audio amplitude monitoring, allowing users to interrupt the assistant naturally mid-response.

Technical implementation: C# application using the Azure.AI.OpenAI and Microsoft.CognitiveServices.Speech SDKs, NAudio for audio I/O, streaming text collection from GPT-4o responses, RMS-based speech detection with configurable thresholds, and concurrent audio management for seamless interruption handling. Complete C# source code with audio helpers is available here.
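
As a language-neutral illustration of that barge-in check (the post’s implementation is C# with NAudio), here is how RMS-based speech detection over 16-bit PCM microphone frames can look in Python; the threshold value is made up and would need tuning against your microphone and noise floor:

import math
import struct

BARGE_IN_RMS_THRESHOLD = 0.08  # illustrative; tune per device and environment

def rms_level(pcm16_frame: bytes) -> float:
    """Return the normalized (0.0-1.0) RMS amplitude of a PCM16 mono frame."""
    count = len(pcm16_frame) // 2
    if count == 0:
        return 0.0
    samples = struct.unpack(f"<{count}h", pcm16_frame)
    mean_square = sum(s * s for s in samples) / count
    return math.sqrt(mean_square) / 32768.0

def user_is_speaking(pcm16_frame: bytes) -> bool:
    """True when the frame crosses the barge-in threshold, at which point
    the app stops Azure Speech playback and lets the user take the turn."""
    return rms_level(pcm16_frame) > BARGE_IN_RMS_THRESHOLD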


Building realtime voice-enabled applications with Azure OpenAI’s GPT-4o Realtime model is incredibly powerful, but there’s one significant limitation that can be a deal-breaker for many use cases: you’re stuck with OpenAI’s predefined voices like “sage”, “alloy”, “echo”, “fable”, “onyx”, and “nova”.

What if you’re building a branded customer service bot that needs to match your company’s voice identity? Or developing a therapeutic application for children with autism where the voice quality and tone are crucial for engagement? What if your users need to interrupt the assistant naturally, just like in real human conversations?

In this comprehensive guide, I’ll show you exactly how I solved these challenges by building a hybrid solution that combines the conversational intelligence of GPT-4o Realtime with the voice flexibility of Azure Speech Services. We’ll dive deep into the implementation, covering everything from the initial problem to the complete working solution.

flowchart TD
    A[👤 User speaks] --> B[🎤 Microphone Input]
    B --> C{"Barge-in Detection<br/>Audio Level > Threshold?"}
    C -->|Yes| D[🛑 Stop Azure Speech]
    C -->|No| E[📡 Stream to GPT-4o Realtime]
    E --> F[🧠 GPT-4o Processing]
    F --> G[📝 Text Response<br/>ContentModalities.Text]
    G --> H[🗣️ Azure Speech Services<br/>Custom/Neural Voice]
    H --> I[🔊 Audio Output]
    D --> E
    I --> J[👂 User hears response]
    J --> A
    style A fill:#e1f5fe
    style D fill:#ffebee
    style G fill:#f3e5f5
    style H fill:#e8f5e8
    style I fill:#fff3e0
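
To make the G → H handoff in the diagram concrete, here is a minimal Python transliteration (the post’s actual code is C# using the Azure.AI.OpenAI and Microsoft.CognitiveServices.Speech SDKs). The key, region, and voice name are placeholders; the raw realtime protocol’s equivalent of ContentModalities.Text is requesting "modalities": ["text"] in the session configuration:

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<speech-key>",
                                       region="<region>")
# Any of the 400+ neural voices, or a custom neural voice (CNV) deployment.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

def speak(reply_text: str) -> None:
    """Render the assistant's collected text reply with the configured voice.
    On barge-in, synthesizer.stop_speaking_async() cuts playback short."""
    synthesizer.speak_text_async(reply_text).get()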