Who is Delta for?
Organizations that run language models in production — without letting governance become the bottleneck. Typical buyers are cross-functional teams spanning engineering, security, and compliance in regulated industries such as finance, insurance, legal, and the public sector. Delta pairs delivery speed with defensible control: engineering keeps shipping while auditors and internal controls get traceability. Book a short demo and we’ll map it to your architecture.
How do we integrate Delta — and how do teams use it day to day?
Fastest path: keep your existing SDK and point `baseURL` at the gateway. Every request passes through policy checks, PII protection, and auditing before it reaches your chosen model provider — no rewrite of your business logic. For a concrete walkthrough, open the product demo or book a session and we’ll align integration and rollout with your environment.
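As a minimal sketch of what "only the base URL changes": the request body and headers stay in the standard OpenAI-compatible shape, and only the endpoint moves to the gateway. The URL, key placeholder, and model name below are illustrative, not real Delta values.

```python
import json
import urllib.request

# Hypothetical gateway endpoint -- substitute your Delta gateway URL.
GATEWAY_BASE_URL = "https://gateway.example.com/v1"

def build_chat_request(model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request aimed at the gateway.

    Only the base URL changes versus calling the provider directly; the
    payload shape is unchanged, so existing business logic is untouched.
    """
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{GATEWAY_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_GATEWAY_KEY",  # scoped key issued by the gateway
        },
        method="POST",
    )

req = build_chat_request("gpt-4o", [{"role": "user", "content": "Hello"}])
print(req.full_url)  # targets the gateway, not the provider
```

The same substitution works with official SDKs that accept a `baseURL` / `base_url` option, which is why no application rewrite is needed.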
Can we route traffic to a different LLM or provider easily?
Yes — centralized routing is core to Delta. Model choice, failover, and routing rules live in the gateway instead of being duplicated across services. That lets you optimize for quality, cost, latency, or regulatory requirements — and switch cleanly on outages or rate limits. We define the right rule set for your organization in a pilot or strategy session.
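Conceptually, a centralized routing table looks like the sketch below. The rule names, tags, and model identifiers are purely illustrative and are not Delta's actual configuration schema.

```python
# Hypothetical routing table -- fields and values are illustrative only.
ROUTING_RULES = [
    {"match": "pii", "model": "eu-hosted-model"},       # regulatory constraint
    {"match": "default", "model": "gpt-4o", "fallback": "claude-sonnet"},
]

def select_model(request_tags: set, primary_available: bool = True) -> str:
    """Pick a model centrally per request tags, with failover on outage.

    Because this lives in the gateway, every service gets the same
    behavior without duplicating routing logic in application code.
    """
    for rule in ROUTING_RULES:
        if rule["match"] in request_tags:
            return rule["model"]
    default = ROUTING_RULES[-1]
    return default["model"] if primary_available else default["fallback"]
```

Swapping providers then means editing one table, not redeploying every service.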
Is there a chat interface?
Yes. The chat UI is for teams that want to work through the gateway directly — for example business units, enablement, or internal support scenarios. Your API remains the primary integration path for products and tools: same models, policies, and evidence for both chat and code. One stack, one governance model — no split between surface and integration.
Can we pass context so the model answers in our company voice or follows internal guardrails?
Yes. Use the usual chat API building blocks: system and context messages plus clear roles in `messages`. Delta adds governance on top: what may leave your perimeter, what gets redacted, and what is logged for audits. Recurring instructions can be centralized as org-wide defaults or templates when you’re ready — we align depth and rollout in conversation.
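A sketch of the "org-wide defaults" idea: a centralized system message is prepended to every per-request conversation. The prompt text and helper function are illustrative assumptions, not a Delta API.

```python
# Hypothetical org-wide default; in practice this would live centrally
# in the gateway, not in application code.
ORG_SYSTEM_PROMPT = (
    "You are AcmeCorp's assistant. Answer formally and follow internal guardrails."
)

def with_org_defaults(user_messages: list) -> list:
    """Prepend the centralized system message to a conversation.

    The result is a standard `messages` array: a system role for voice
    and guardrails, followed by the caller's own turns.
    """
    return [{"role": "system", "content": ORG_SYSTEM_PROMPT}, *user_messages]

messages = with_org_defaults([{"role": "user", "content": "Draft a client email."}])
```

Redaction and audit logging then apply to this payload at the gateway, independent of what each team puts in its messages.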
Can we enforce prompts or response templates?
Yes — mature AI operations need consistent standards, less drift between teams, and clear approvals. That maps naturally to policies and routing: anything enforced at the gateway is centrally controlled and auditable. We scope the detail to your approval workflows; the demo shows the direction we take.
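One way a gateway-level policy can enforce a response template is a simple conformance check on model output before it is returned. The template and check below are a hypothetical illustration, not Delta's policy engine.

```python
import re

# Illustrative approved template: responses must open with a summary
# line and include a sources line. Not an actual Delta policy format.
RESPONSE_TEMPLATE = re.compile(r"^Summary:.*\nSources:.*", re.DOTALL)

def conforms_to_template(response_text: str) -> bool:
    """Gateway-side check: does the output match the approved template?

    Centralizing this check means every team's responses are held to
    the same standard, with violations visible in the audit trail.
    """
    return bool(RESPONSE_TEMPLATE.match(response_text))
```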
Are controls more granular than what we get from the model vendor alone?
The vendor delivers the model; Delta delivers the controls around it — API keys with clear scope, traceability per request, policy and routing rules, and audit-grade telemetry. That’s where legal and security get the granularity they need while engineering keeps familiar SDK workflows.
Can we build agents or tool-calling workflows?
Yes. Keep agents and orchestration in your stack; Delta is the secured, observable path to models. Streaming, tool use, and function calling pass through while policy, PII protection, and logging apply before the provider. Workflows stay flexible and risk stays manageable.
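To make "tool use passes through" concrete: the standard OpenAI-style tool schema travels through the gateway unchanged, and your stack keeps executing the tools. The tool name, schema, and stubbed dispatcher below are illustrative, not part of Delta.

```python
import json

# Standard OpenAI-style tool definition; the function and fields are
# illustrative examples, not a Delta API.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_customer",
        "description": "Fetch a customer record by ID.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

def dispatch_tool_call(tool_call: dict) -> dict:
    """Execute a tool call returned by the model (stubbed locally).

    Orchestration stays in your stack; the gateway only governs the
    traffic between you and the provider.
    """
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "lookup_customer":
        return {"customer_id": args["customer_id"], "status": "active"}  # stub
    raise ValueError("unknown tool")

result = dispatch_tool_call(
    {"function": {"name": "lookup_customer", "arguments": '{"customer_id": "c-42"}'}}
)
```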
Where is data processed — do you offer EU or self-hosted deployments?
Processing runs in Frankfurt, Germany. Additional regions and on-premises deployments are available by arrangement. We are GDPR-aligned; details are in our privacy policy.
Isn’t this “just another AI gateway”?
Many vendors can proxy traffic and meter tokens. Delta is policy-first: surface risk early, protect sensitive data, and document decisions in an audit-ready way. Regulated organizations choose us for EU AI Act–oriented telemetry combined with an operational gateway — not a generic proxy.
What does EU AI Act support mean for us in practice?
Delta helps you operationalize risk classification, traceability, and audit trails around model traffic — the technical evidence auditors and internal controls often ask for. It supports your documentation duties; it does not replace legal classification of your system. Scope depends on your use case and contracts; align it with your legal team, with us contributing the technical substance.
Do you support streaming?
Yes. Streaming and modern API behaviors are part of the OpenAI-compatible path teams already expect — the same governance, PII handling, and telemetry apply along the stream as for non-streaming calls.
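On the client side, OpenAI-compatible streaming arrives as the standard `data: {...}` / `data: [DONE]` server-sent-events protocol; a minimal sketch of reassembling the text from delta chunks:

```python
import json

def accumulate_stream(sse_lines: list) -> str:
    """Reassemble assistant text from OpenAI-style SSE delta chunks.

    The gateway applies governance per chunk; the client-side wire
    format is the familiar 'data: {...}' / 'data: [DONE]' protocol.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # role-only chunks have no content
    return "".join(parts)
```

Because governance happens at the gateway, the same policies cover a streamed answer chunk by chunk and a non-streaming answer as a whole.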