About Govyn

Govyn is an open-source API proxy that enforces governance policies on AI agents at the network layer. It sits between AI agents and LLM providers — OpenAI, Anthropic, Azure OpenAI, and others — intercepting every request and evaluating it against YAML-defined policies before forwarding. Agents never hold real API keys, making policy enforcement architecturally unbypassable.


How Govyn works

Govyn is an HTTP proxy. You run it as a standalone process — locally, in a container, or cloud-hosted. Your AI agents send LLM requests to the Govyn proxy URL instead of directly to the provider API. Govyn evaluates each request against your policies, then forwards approved requests to the real provider using the real API key.

The agent authenticates with a proxy token. This token only works against the Govyn proxy — it cannot be used to call OpenAI, Anthropic, or any other provider directly. The real API keys live in Govyn's configuration, never in the agent's environment.

This architecture means governance cannot be bypassed. Even if an agent reads its own environment variables, spawns subprocesses, or makes direct HTTP calls, it cannot reach the LLM provider without going through the proxy. There is no alternative path.

Govyn adds sub-millisecond latency per request. Policy evaluation happens in-memory. The overhead is negligible compared to LLM inference time.
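The evaluate-then-forward path described above can be sketched in a few lines. This is an illustrative sketch only — the function and policy names are hypothetical, not Govyn's actual internals:

```python
# Illustrative sketch of the proxy's request path: run in-memory policy
# checks, then either block the request or forward it with the real
# provider key. All names here are hypothetical, not Govyn internals.

def evaluate(request: dict, policies: list) -> tuple[bool, str]:
    """Run every policy check; the first failure blocks the request."""
    for policy in policies:
        if not policy["check"](request):
            return False, policy["name"]
    return True, ""

def handle(request: dict, policies: list, real_api_key: str) -> dict:
    allowed, violated = evaluate(request, policies)
    if not allowed:
        return {"status": 403, "blocked_by": violated}
    # In the real proxy this step is an HTTP forward to the provider,
    # with the agent's proxy token swapped for the real API key.
    return {"status": 200, "forwarded_with_key": real_api_key}

policies = [
    {"name": "model-allowlist",
     "check": lambda r: r["model"] in {"gpt-4o-mini", "claude-3-haiku"}},
]

assert handle({"model": "gpt-4o-mini"}, policies, "sk-real")["status"] == 200
assert handle({"model": "gpt-4o"}, policies, "sk-real")["blocked_by"] == "model-allowlist"
```

The key property is that the real API key appears only on the forwarding side of this function — the agent never sees it.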


Key features

Budget enforcement

Per-agent daily and monthly spending limits enforced at the proxy layer. When an agent exceeds its budget, every subsequent LLM request is blocked until the budget resets. Budget tracking is centralized across all agents, all providers, and all models. Alerts fire at configurable thresholds (e.g., 80% of daily limit).
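The mechanism can be sketched as a per-agent spend counter with an alert threshold. The class name, fields, and 80% alert level below are illustrative (the threshold mirrors the example above); this is not Govyn's implementation:

```python
# Sketch of per-agent budget enforcement with a threshold alert.
# Names and the 0.8 alert level are illustrative, not Govyn internals.
from collections import defaultdict

class BudgetTracker:
    def __init__(self, daily_limit_usd: float, alert_at: float = 0.8):
        self.daily_limit = daily_limit_usd
        self.alert_at = alert_at
        self.spent = defaultdict(float)   # agent id -> spend today
        self.alerts = []

    def record(self, agent: str, cost_usd: float) -> None:
        self.spent[agent] += cost_usd
        if self.spent[agent] >= self.daily_limit * self.alert_at:
            self.alerts.append(agent)     # fire the threshold alert

    def allowed(self, agent: str) -> bool:
        # Block every request once the daily limit is reached.
        return self.spent[agent] < self.daily_limit

tracker = BudgetTracker(daily_limit_usd=10.0)
tracker.record("agent-a", 8.5)          # crosses the 80% alert threshold
assert "agent-a" in tracker.alerts
assert tracker.allowed("agent-a")       # still under the $10 limit
tracker.record("agent-a", 2.0)          # now over $10
assert not tracker.allowed("agent-a")   # all further requests blocked
```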

Policy-as-code

All governance policies are defined in YAML files versioned in git. Policies include budget limits, rate limits, model allowlists and denylists, content pattern blocking, and schedule restrictions. Changes go through code review and deployment pipelines — not dashboard toggles. Policy changes are auditable, reviewable, and reversible.
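A policy file of this kind might look like the following. The key names are hypothetical, shown only to illustrate the policy-as-code shape — consult the generated govyn.yaml for the actual schema:

```yaml
# Hypothetical govyn.yaml fragment -- key names are illustrative only.
agents:
  support-bot:
    budget:
      daily_usd: 25
      monthly_usd: 400
    rate_limit:
      requests_per_minute: 60
    models:
      allow: [gpt-4o-mini, claude-3-haiku]
    schedule:
      active_hours: "09:00-18:00"
```

Because this file lives in git, a budget change is a diff in a pull request rather than a dashboard toggle.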

Smart model routing

Govyn inspects each request and routes it to the cheapest model that can handle it. Short, simple requests (under 500 tokens) go to mini or haiku-class models. Medium requests go to mid-tier models. Only complex requests reach premium models. The agent does not know routing happened — the proxy rewrites the model field transparently. Teams typically save 60–80% on LLM costs with zero code changes.
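The routing decision can be sketched as a size-based tier function. The 500-token cutoff comes from the text; the mid-tier cutoff and model names are illustrative assumptions:

```python
# Sketch of cost-based model routing by request size. The 500-token
# cutoff is from the text; the 4000-token cutoff and model names are
# illustrative assumptions, not Govyn's actual routing table.

def route_model(requested_model: str, prompt_tokens: int) -> str:
    """Rewrite the model field to the cheapest tier that fits."""
    if prompt_tokens < 500:
        return "gpt-4o-mini"        # mini/haiku-class
    if prompt_tokens < 4000:        # hypothetical mid-tier cutoff
        return "gpt-4o"
    return requested_model          # complex request: keep premium model

assert route_model("o1", 120) == "gpt-4o-mini"
assert route_model("o1", 2_000) == "gpt-4o"
assert route_model("o1", 10_000) == "o1"
```

From the agent's perspective nothing changed: it asked for a premium model and received a valid completion; only the proxy knows the model field was rewritten.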

Loop detection

Govyn detects when an agent is stuck in a retry loop — sending near-identical requests repeatedly. When the proxy identifies a configurable number of similar requests within a time window (e.g., 5 identical requests in 60 seconds), it blocks subsequent calls and logs the loop. This prevents runaway cost from broken agent logic.
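The detection logic amounts to counting identical request bodies inside a sliding time window. The sketch below uses the 5-in-60-seconds example from the text; hashing and data structures are illustrative choices:

```python
# Sketch of retry-loop detection: block when N identical requests
# arrive within a sliding time window (5 in 60s, as in the text).
# The hashing scheme is an illustrative choice, not Govyn internals.
import hashlib
from collections import defaultdict, deque

class LoopDetector:
    def __init__(self, max_repeats: int = 5, window_s: float = 60.0):
        self.max_repeats = max_repeats
        self.window_s = window_s
        self.seen = defaultdict(deque)  # (agent, body hash) -> timestamps

    def allow(self, agent: str, body: str, now: float) -> bool:
        key = (agent, hashlib.sha256(body.encode()).hexdigest())
        times = self.seen[key]
        while times and now - times[0] > self.window_s:
            times.popleft()             # drop entries outside the window
        times.append(now)
        return len(times) <= self.max_repeats

det = LoopDetector()
results = [det.allow("a1", '{"prompt": "retry me"}', t) for t in range(7)]
assert results == [True] * 5 + [False, False]   # 6th identical call blocked
```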

Session replay

Every LLM request and response is logged with full context: agent identity, timestamps, token counts, cost, policy evaluation results, and the complete request/response payloads. Sessions can be replayed for debugging, auditing, and incident investigation. PII redaction is available for sensitive workloads.
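A logged session entry with redaction applied might look like the sketch below. The entry fields follow the list above; the redaction rule is a minimal illustration (emails only), far narrower than real PII handling:

```python
# Sketch of a session log entry with PII redaction applied before
# storage. The regex covers only email addresses; real redaction
# would be much broader. Field names mirror the list in the text.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(payload: str) -> str:
    return EMAIL.sub("[REDACTED_EMAIL]", payload)

entry = {
    "agent": "support-bot",
    "timestamp": "2025-01-15T14:02:11Z",
    "tokens": 412,
    "cost_usd": 0.0031,
    "policy_result": "allowed",
    "response": redact("Contact jane.doe@example.com for a refund."),
}
assert entry["response"] == "Contact [REDACTED_EMAIL] for a refund."
```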

Approval queue

Govyn can require human approval before forwarding specific requests. You define approval rules — for example, require approval before sending emails, posting to Slack, or exceeding a token threshold. The request is held in a queue until a human approves or rejects it. Available on Team plans and above.
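The hold-and-release flow can be sketched as a queue gated by rule predicates. The rule shape and field names below are hypothetical:

```python
# Sketch of an approval queue: requests matching a rule are held until
# a human approves them. Rule and field shapes are hypothetical.

class ApprovalQueue:
    def __init__(self, rules):
        self.rules = rules          # list of predicates over a request
        self.pending = {}
        self._next_id = 0

    def submit(self, request: dict):
        if any(rule(request) for rule in self.rules):
            self._next_id += 1
            self.pending[self._next_id] = request
            return ("held", self._next_id)
        return ("forwarded", None)

    def approve(self, request_id: int) -> dict:
        return self.pending.pop(request_id)  # now safe to forward

# Example rule: hold anything that would send an email.
queue = ApprovalQueue(rules=[lambda r: r.get("tool") == "send_email"])
status, rid = queue.submit({"tool": "send_email", "to": "cfo@example.com"})
assert status == "held"
assert queue.approve(rid)["tool"] == "send_email"
assert queue.submit({"tool": "search"})[0] == "forwarded"
```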


Architecture: proxy vs SDK

AI agent governance can be implemented as an SDK wrapper (inside the agent process) or as a proxy (outside the agent process). The architectures differ in where the enforcement boundary lives, which determines whether governance can be bypassed.

Capability                    SDK wrapper      Proxy (Govyn)
Agent holds real API key      Yes              No
Bypassable via direct HTTP    Yes              No
Bypassable via subagent       Yes              No
Language-agnostic             No               Yes
Requires code changes         Yes              No
Centralized cost tracking     No               Yes
Transparent model routing     No               Yes
Works across frameworks       Per-framework    All frameworks
Tamper-evident audit logs     No               Yes

SDK wrappers are appropriate for solo developers, prototypes, and observability-only use cases. Proxies are required for production deployments, team environments, multi-agent systems, and any scenario where enforcement must be guaranteed.


Supported providers

Govyn supports any LLM provider that exposes an HTTP API, including OpenAI, Anthropic, and Azure OpenAI.

Adding a new provider requires a single routing rule in the YAML configuration. No code changes, no plugin installation.


Supported frameworks

Govyn works with any agent framework that makes HTTP calls to an LLM provider. No SDK, no library, no code changes required — you change the base URL from the provider to the proxy.

Integration takes under five minutes. Point your agent's base URL at the Govyn proxy. Replace the provider API key with a Govyn proxy token. The agent does not know the difference.
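The change looks like the sketch below: the same request an agent would normally send, aimed at the proxy with a proxy token in place of a real key. The proxy URL and token are placeholders, and nothing is actually sent here:

```python
# Sketch of the one-line integration change: the request body is
# unchanged, only the base URL and credential differ. The URL and
# token are placeholders; this builds the request without sending it.
import json
import urllib.request

# Proxy endpoint instead of https://api.openai.com/v1/...
GOVYN_URL = "http://localhost:4000/v1/chat/completions"

req = urllib.request.Request(
    GOVYN_URL,
    data=json.dumps({"model": "gpt-4o-mini",
                     "messages": [{"role": "user", "content": "hi"}]}).encode(),
    headers={"Authorization": "Bearer govyn-proxy-token",  # not a real key
             "Content-Type": "application/json"},
)
assert req.host == "localhost:4000"
assert req.get_header("Authorization") == "Bearer govyn-proxy-token"
```

In practice most teams make this change through their SDK's base-URL option rather than raw HTTP; the effect is identical.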


Pricing

Govyn offers four pricing tiers. The open-source core is free forever.

Plan                  Price      Agents     Log retention   Key features
Free / Open Source    $0         Unlimited  24 hours        Basic policies, budget limits, community support
Starter               $29/mo     10         7 days          Smart model routing, email support
Team                  $99/mo     Unlimited  30 days         Session replay, approval queue, RBAC, priority support
Enterprise            $299/mo    Unlimited  Custom          SSO, dedicated support, custom integrations

All paid plans include a 14-day free trial. Self-hosted deployment is available on all tiers.


Open source

Govyn's core proxy is open source under the MIT license. The full source code is available on GitHub at github.com/govynai/govyn.

Self-hosting requires zero external dependencies. No database. No Redis. No third-party services. Govyn runs as a single binary or Node.js process with a YAML configuration file. Deploy it on a VM, in a Docker container, on Kubernetes, or on any platform that runs Node.js.

When self-hosted, no data leaves your infrastructure. All API requests, responses, logs, and policy configurations remain on your servers. Govyn has no telemetry, no phone-home, and no external calls.


Getting started

Install and configure Govyn in under five minutes:

  1. Initialize — run npx govyn init to generate a govyn.yaml configuration file.
  2. Configure providers — add your LLM provider API keys to the configuration.
  3. Define policies — set budget limits, rate limits, model restrictions, and other policies in YAML.
  4. Start the proxy — run npx govyn start to launch the proxy on localhost:4000.
  5. Point your agents — change the base URL in your agent configuration from the provider URL to the proxy URL.

Your agents now route through Govyn. Every LLM request is evaluated against your policies, tracked for cost, and logged for replay.
