Q: Software Clarification Questions
Hello Support Team,
1.) With your software being AI-powered and having AI voice agents, does it provide an MCP server?
2.) Does your software have an OpenAPI spec, or is this feature on your roadmap? If so, when will it be available for use?
3.) Can your AI voice agents work with other software AI agents? If so, please provide instructions.
Avi_Dialora
Dec 17, 2025
A: Great questions — I’ll answer each one clearly and without marketing fluff.
1) Does Dialora provide an MCP server?
No. Dialora does not expose an MCP (Model Context Protocol) server today.
Dialora is not a model-hosting or agent-runtime platform in the sense MCP is designed for. We do not expose low-level model orchestration, token streaming, or context injection endpoints the way MCP servers do.
Instead, Dialora operates as an application-layer AI voice system:
• We manage the LLM orchestration
• We manage speech → reasoning → action → speech
• We manage telephony, latency, turn-taking, and reliability
So today:
• ❌ No MCP server
• ✅ High-level AI agent orchestration handled internally
2) Does Dialora have an OpenAPI? Is it on the roadmap?
Current state
Yes — Dialora already has API access, but it is:
• Event- and action-based
• Focused on practical integrations, not raw model control
Current API capabilities include:
• Triggering calls
• Receiving call events (completed, transferred, failed, etc.)
• Fetching call metadata, transcripts, summaries
• Creating leads / pushing data to CRMs
• Webhooks for real-time automation
This is exposed via:
• Native API
• Webhooks
• Make / Zapier / Pabbly / n8n-style connectors (Tier 3+)
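As a rough sketch of what "triggering calls" through the native API looks like in practice — note that the base URL, endpoint path, field names, and auth header below are illustrative assumptions, not Dialora's documented API:

```python
# Hypothetical sketch: build an authenticated request that asks the
# platform to place an outbound AI voice call. All endpoint and field
# names are placeholders, not Dialora's published API.
import json
import urllib.request

API_BASE = "https://api.dialora.example/v1"  # placeholder base URL


def build_call_request(phone_number: str, agent_id: str) -> urllib.request.Request:
    """Build a POST request that would trigger an outbound call."""
    body = json.dumps({"to": phone_number, "agent_id": agent_id}).encode()
    return urllib.request.Request(
        f"{API_BASE}/calls",
        data=body,
        headers={
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
            "Content-Type": "application/json",
        },
        method="POST",
    )


# To actually send it:
#   urllib.request.urlopen(build_call_request("+15551234567", "agent_42"))
# Call events (completed, transferred, failed, ...) then arrive on your
# webhook endpoint rather than in this response.
```

The same pattern applies to fetching transcripts or pushing leads: plain authenticated HTTP, with webhooks carrying the asynchronous events back to you.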
Roadmap
A public, fully documented OpenAPI spec is:
• ✅ Planned
• 🎯 Focused on agent control, analytics, and orchestration
• ⏳ Rolling out progressively (not a single “big bang” release)
We’re prioritizing:
• Agent lifecycle control
• Call execution APIs
• Usage + billing visibility
• Enterprise automation use cases
Exact date depends on stability milestones, but this is actively being built, not a “someday maybe”.
3) Can Dialora AI voice agents work with other AI agents?
Yes — at the workflow and event level (not at the raw model level).
Dialora agents can:
• Trigger other AI agents
• Be triggered by other AI agents
• Exchange structured data with them
How this works in practice
Dialora emits events, such as:
• Call started
• User intent detected
• Qualification complete
• Appointment booked
• Escalation required
You can use these events to:
• Call another AI agent (text, workflow, RAG, decision agent)
• Send data to an AI system that decides next steps
• Return instructions back to Dialora via webhook/API
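The event loop above can be sketched as a small routing function — the event names, payload fields, and action vocabulary here are illustrative assumptions standing in for whatever your external reasoning agent actually returns:

```python
# Hypothetical sketch: map a Dialora-style webhook event to an
# instruction your middleware would send back via webhook/API.
# Event types and fields are assumptions for illustration.

def decide_next_step(event: dict) -> dict:
    """Stand-in for an external reasoning agent deciding the next step."""
    call_id = event.get("call_id")
    if event.get("type") == "qualification_complete":
        # Hand qualified leads to a booking flow; end the call otherwise.
        if event.get("data", {}).get("qualified"):
            return {"action": "book_appointment", "call_id": call_id}
        return {"action": "end_call", "call_id": call_id}
    if event.get("type") == "escalation_required":
        return {"action": "transfer_to_human", "call_id": call_id}
    # Unrecognized events pass through untouched.
    return {"action": "noop", "call_id": call_id}
```

In production this function would sit behind your webhook endpoint (or inside a Make/n8n/Zapier scenario), with the returned instruction posted back to Dialora over its API.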
Example architecture
• Dialora = voice interface + real-time interaction
• External AI agent = reasoning, planning, decisioning
• CRM / backend = state + persistence
This is typically implemented via:
• Webhooks
• API calls
• Automation platforms (Make, Zapier, n8n, Pabbly)
• Custom middleware
What Dialora does not do (by design)
• We do not let external agents take over real-time turn-taking mid-call
• We do not expose token-level streaming control
That’s intentional — it keeps calls stable, low-latency, and production-safe.
Summary (plain English)
• ❌ MCP server: No
• ✅ APIs & webhooks: Yes
• ✅ OpenAPI-style public spec: On the roadmap
• ✅ Can work with other AI agents: Yes, via events, APIs, and workflows
• 🎯 Dialora’s role: AI voice execution + orchestration, not raw model hosting
If you’re trying to build agent swarms, decision trees, or AI-driven operations, Dialora fits cleanly as the voice interface layer in that stack.
If you want, tell me what you’re trying to build (architecture-level), and I can tell you whether Dialora fits — and how to wire it correctly.