What Is Contact Center Direct Inward Dialing (DID)?
A telephony feature that routes external callers to a specific extension, queue, or skill via a unique inbound number, used as a routing key in modern AI contact centers.
Contact Center Direct Inward Dialing (DID) is a telephony and AI-routing pattern where each inbound phone number maps to a specific extension, queue, campaign, skill, or voice agent. In production LLM contact centers, the DID is not just carrier metadata; it becomes trace context that selects the prompt, knowledge base, persona, guardrail set, and evaluation cohort before the caller speaks. FutureAGI treats the inbound DID as a routing field so teams can detect wrong-prompt and wrong-campaign failures per number.
Why Contact Center DID matters in production LLM and agent systems
DID-as-routing-key is where most outbound and inbound AI campaigns silently misbehave. A team buys 200 DIDs for a billing campaign and 50 for a renewals campaign; the LLM agent code branches on a hardcoded list; a marketing ops change adds 30 new DIDs without updating the branch; those 30 DIDs hit the default prompt and the bot answers as if it were the renewals agent on billing calls.
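The drift described above is mechanical to detect once the provisioned numbers and the routed numbers live in comparable structures. A hypothetical sketch (all DIDs, campaign names, and counts below are illustrative, not real numbers):

```python
# Detect DID drift by diffing the numbers the carrier has provisioned
# against the numbers the agent code actually routes. Any DID in the
# difference will silently fall through to the default prompt.
provisioned = {
    "billing":  {f"+1-555-01{i:02d}" for i in range(30)},  # ops added 10 new DIDs
    "renewals": {f"+1-555-02{i:02d}" for i in range(5)},
}
routed = {  # the stale hardcoded branch in agent code
    "billing":  {f"+1-555-01{i:02d}" for i in range(20)},
    "renewals": {f"+1-555-02{i:02d}" for i in range(5)},
}

def unrouted_dids(provisioned: dict, routed: dict) -> dict:
    """Return, per campaign, the DIDs that have no explicit route."""
    drift = {}
    for campaign, dids in provisioned.items():
        missing = dids - routed.get(campaign, set())
        if missing:
            drift[campaign] = missing
    return drift

drift = unrouted_dids(provisioned, routed)
```

Running this diff in CI whenever either the carrier inventory or the routing config changes turns the silent misbehavior into a build failure.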
The pain is felt unevenly. A voice engineer chases reports of bots “answering the wrong way” on a small subset of calls and finds DID drift. A compliance officer is asked whether the consent script the bot played matches the campaign rules — but the DID-to-campaign map was last reviewed eight months ago. An ops lead sees ConversationResolution slowly degrade and only finds the cause when a slice-by-DID dashboard shows it is concentrated in a single number. Customers experience it as the bot “not understanding what I called about.”
In 2026, voice AI stacks built on LiveKit, Pipecat, Twilio, and Telnyx all expose the inbound DID as call metadata, but few teams pipe it onto the LLM trace as a span attribute. Unlike Twilio Voice Insights, which centers on carrier and call-quality telemetry, an LLM reliability trace must preserve DID-to-prompt decisions. Without that, you cannot slice eval-fail-rate by DID, and you cannot detect drift.
How FutureAGI handles DID routing
FutureAGI’s approach is to make the inbound DID a first-class span attribute on every voice trace. The traceAI livekit and pipecat integrations capture voice.inbound_did and voice.outbound_did; teams can also tag derived values like campaign_id and prompt_template_version resolved from the DID. ConversationResolution, ASRAccuracy, and Groundedness run on voice spans, while the dashboard renders eval-fail-rate-by-DID so a single drifting number does not hide inside the campaign average. Agent Command Center’s routing-policies resource can condition the model, prompt template, fallback route, or guardrail set on DID, so a legal-collections DID can require a stricter pre-guardrail than a feedback-survey DID.
A concrete example: a multi-product SaaS runs eight inbound campaigns, each on a separate DID pool, all hitting one LLM voice agent. Its FutureAGI workflow registers a DID-to-campaign mapping as a fi.kb.KnowledgeBase entry. Every voice span carries voice.inbound_did plus the resolved campaign_id. The dashboard reveals one DID — a recently added Australian support line — with a ConversationResolution score of 0.42 vs. baseline 0.79. The cause: that DID was missing from the prompt-routing config and was hitting the US-default system prompt. The engineer fixes the config, re-runs the regression eval, and documents the DID mapping change for audit.
How to measure contact center DID routing
DID routing health is measurable per inbound number, not just in aggregate. Track the DID as both raw call metadata and resolved business context so metrics can explain whether the carrier, ASR layer, prompt route, or LLM behavior caused the failure:
- ConversationResolution by voice.inbound_did: returns whether the call reached the intended outcome for that number.
- ASRAccuracy by DID and region: separates wrong routing from transcript loss on local accents or telephony quality.
- prompt_template_version per DID: missing, null, or stale values indicate the number is bypassing the intended route.
- DID-to-campaign mapping freshness: timestamp the mapping and alert when it is older than the routing-change SLA.
- Eval-fail-rate-by-DID time series: a spike on one number is usually a routing, prompt, or campaign operations change.
- Escalation rate by resolved campaign_id: human handoffs often rise before aggregate customer-satisfaction metrics move.
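The fail-rate-by-DID slice above reduces to a small aggregation over scored spans. A minimal sketch, assuming spans carry the inbound DID and a ConversationResolution score; the records and the 0.6 pass threshold are illustrative:

```python
# Compute eval-fail-rate per DID so one drifting number cannot hide
# inside the campaign average.
from collections import defaultdict

spans = [
    {"inbound_did": "+61-2-555-0100", "conversation_resolution": 0.42},
    {"inbound_did": "+61-2-555-0100", "conversation_resolution": 0.40},
    {"inbound_did": "+1-555-0100",    "conversation_resolution": 0.81},
    {"inbound_did": "+1-555-0100",    "conversation_resolution": 0.77},
]

def fail_rate_by_did(spans, threshold=0.6):
    totals, fails = defaultdict(int), defaultdict(int)
    for span in spans:
        did = span["inbound_did"]
        totals[did] += 1
        if span["conversation_resolution"] < threshold:
            fails[did] += 1
    return {did: fails[did] / totals[did] for did in totals}

rates = fail_rate_by_did(spans)
```

Here the Australian DID shows a 100% fail rate while the campaign-wide average would sit at 50%, which is exactly the signal an aggregate-only dashboard loses.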
Minimal Python:
from fi.evals import ConversationResolution

evaluator = ConversationResolution()
result = evaluator.evaluate(
    input="Inbound call on DID +61-2-555-0100, billing campaign",
    output=conversation_transcript,  # transcript of the call to score
)
print(result.score, result.reason)
Common mistakes
- Hardcoding DID lists in agent code. Marketing and telecom teams add numbers faster than code review; keep DID maps in versioned config with owners.
- Dropping voice.inbound_did after the SIP gateway. If the span lacks the number, evals cannot separate routing bugs from model failures.
- Using one prompt across all DIDs. Billing, renewals, collections, and support campaigns need distinct instructions, disclaimers, and escalation paths.
- Alerting only on global conversation quality. A single degraded DID can stay invisible when the rest of the call pool performs well.
- Changing DID routing without regression evals. Re-run representative calls before and after a DID-to-prompt change, especially for regulated campaigns.
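Two of the mistakes above (hardcoded DID lists and stale mappings) share one fix: keep the map in versioned config with an owner and a last-reviewed timestamp, and check that timestamp against the routing-change SLA. A sketch with illustrative field names and an assumed 90-day SLA:

```python
# Versioned DID map with an owner and a freshness check. A stale map
# is the precondition for the eight-months-old compliance question
# described earlier.
from datetime import datetime, timedelta, timezone

did_map_config = {
    "version": "2026-01-12",
    "owner": "voice-platform-team",
    "last_reviewed": "2025-05-01T00:00:00+00:00",
    "routes": {"+61-2-555-0100": "au-support"},
}

def mapping_is_stale(config, sla_days=90, now=None):
    """True when the map has not been reviewed within the SLA window."""
    now = now or datetime.now(timezone.utc)
    reviewed = datetime.fromisoformat(config["last_reviewed"])
    return now - reviewed > timedelta(days=sla_days)

stale = mapping_is_stale(did_map_config,
                         now=datetime(2026, 1, 15, tzinfo=timezone.utc))
```

Wiring this check into the same alerting path as the eval metrics means a forgotten review surfaces before a regulator, or a customer, finds it.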
Frequently Asked Questions
What is Direct Inward Dialing (DID)?
DID is a telephony feature giving external callers a unique inbound number that routes directly to a specific extension, queue, or skill — bypassing a main switchboard. Contact centers use DID pools to route by region, campaign, or language.
How does DID interact with an AI voice agent?
The DID a caller dials carries intent and persona context. The voice agent reads the DID at call start and picks the matching system prompt, knowledge base, and tone — so the bot is correctly configured before the caller says anything.
How do you evaluate DID-driven AI routing?
FutureAGI tags every voice trace with the inbound DID and runs ConversationResolution and ASRAccuracy sliced by DID. Routing regressions surface as a fail-rate spike on a specific DID cohort.