Documentation
Everything you need to set up and run Govyn — from your first proxy in five minutes to production-grade governance policies. Whether you are evaluating Govyn for a single agent prototype or deploying it across a fleet of autonomous systems, these docs cover installation, configuration, policy authoring, API usage, and deployment options.
Getting started
The fastest path to a working Govyn proxy is the Quickstart Guide. It walks you through installing Govyn, adding your first LLM provider, creating a budget policy, and routing your first agent request through the proxy. The entire process takes under five minutes and requires only Node.js 18+ and an LLM provider API key.
If you already have Govyn running and want to explore specific capabilities, use the resource cards below to jump directly to integration guides, policy templates, or the feature reference.
Quickstart Guide
Install Govyn, configure your first policy, and proxy your first LLM request in under five minutes.
Integration Guides
Step-by-step setup for LangChain, CrewAI, OpenAI Agents SDK, Ollama, Azure OpenAI, and more.
Policy Templates
Ready-to-use YAML policies for budget control, loop detection, PII protection, smart routing, and approval workflows.
Feature Reference
Dashboard, approval workflows, semantic caching, anomaly detection, BYOK, team management, and alerts.
Cloud vs Self-Hosted
Compare deployment options. Same governance engine — managed SaaS or self-hosted on your infrastructure.
GitHub Repository
Source code, README, changelog, and issue tracker. MIT licensed.
Core concepts
Understanding five core concepts will help you get the most out of Govyn. These concepts appear throughout the documentation, configuration files, and API responses.
Proxy architecture
Govyn is a network-level proxy that sits between your AI agents and LLM providers like OpenAI and Anthropic. Agents send requests to the Govyn proxy URL instead of directly to the provider API. The proxy evaluates each request against your policies, then forwards approved requests to the real provider using the real API key. The agent authenticates with a proxy token that only works against Govyn — it cannot be used to call the provider directly. This means governance cannot be bypassed, even if the agent reads its own environment, spawns subprocesses, or makes raw HTTP calls.
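As a sketch of the drop-in change on the agent side, the two settings that move are the base URL and the key. The proxy URL, the workspace slug (`acme`), and the token variable name below are illustrative assumptions, not values prescribed by Govyn:

```typescript
// Hypothetical before/after: the agent's provider settings without and with Govyn.
// The slug "acme" and env var names are illustrative assumptions.
const direct = {
  baseURL: "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY, // real provider key on the agent host
};

const viaGovyn = {
  baseURL: "http://localhost:4000/acme/v1", // Govyn proxy; "acme" stands in for :slug
  apiKey: process.env.GOVYN_PROXY_TOKEN,    // proxy token; useless against OpenAI directly
};
```

Because the provider key never reaches the agent, revoking the proxy token is enough to cut the agent off.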
Policies
Policies are YAML-defined rules that control what agents can and cannot do. Each policy has a name, a type (budget, rate_limit, model_filter, loop_detection, approval, model_route), and a rule configuration block. Policies are evaluated in priority order for every incoming request. When a policy blocks a request, the proxy returns a structured JSON error with the policy name, the reason, and the agent identity. Policies are version-controlled in git alongside your application code — changes go through code review, not dashboard toggles.
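A blocked request might produce a structured error along these lines. The field names (type, message, policy, agent) match the error format described in the API reference below; the field values, including the type string, are hypothetical:

```json
{
  "type": "policy_violation",
  "message": "Daily budget limit of $10.00 reached",
  "policy": "budget-limit",
  "agent": "support-bot"
}
```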
Agents
An agent is any application that calls an LLM API through the Govyn proxy. Agents are identified by the X-Govyn-Agent header included with each request. This header value is a string label you choose — for example, support-bot, code-reviewer, or research-agent. Budget limits, rate limits, and activity logs are tracked per agent. On paid plans, an "active agent" is any unique agent name that made at least one request during the current billing period.
Sessions
Sessions group related requests together for debugging and replay. A session is identified by the X-Govyn-Session header. You choose when to start and end sessions — typically one session per user conversation, task execution, or workflow run. Sessions enable timeline visualization, step-by-step replay, and cross-request analysis. Session replay is available on Team plans and above.
API targets
API targets are provider API keys stored in Govyn. When using Govyn Cloud, you bring your own keys (BYOK) through the dashboard or API. Keys are encrypted with AES-256-GCM before storage and only decrypted in memory during request forwarding. No key material is ever logged or exposed in API responses. When self-hosting, provider keys live in your govyn.yaml configuration file on your own infrastructure.
Configuration reference
Govyn is configured through a single govyn.yaml file in your project root. The configuration defines which LLM providers to connect to, which policies to enforce, and how the proxy server behaves. Running npx govyn init generates a starter configuration with sensible defaults.
Here is an example configuration that connects two providers and defines two policies:
```yaml
# govyn.yaml
port: 4000
providers:
  openai:
    api_key: $OPENAI_API_KEY
  anthropic:
    api_key: $ANTHROPIC_API_KEY
policies:
  - name: budget-limit
    type: budget
    rule:
      daily_limit: 10.00
      monthly_limit: 100.00
  - name: model-allowlist
    type: model_filter
    rule:
      allowed: [gpt-4o-mini, claude-sonnet-4-20250514]
```
The providers section maps provider names to their API credentials. Use environment variable references ($OPENAI_API_KEY) to avoid hardcoding secrets. The policies section defines an ordered list of governance rules. Each policy specifies a type that determines what the rule does and a rule block with type-specific configuration.
Available policy types:
- budget — daily and monthly spending limits per agent or globally
- rate_limit — maximum requests per minute, hour, or day
- model_filter — allowlist or denylist of permitted models
- loop_detection — detect and block repetitive request patterns
- approval — require human approval before forwarding certain requests
- model_route — smart routing to cheaper models based on request complexity
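To illustrate how another type slots into the same list, here is a sketch of a rate_limit policy. The rule key (requests_per_minute) is an assumption for illustration, not the verified schema; see the Policy Templates page for exact field names:

```yaml
# Sketch only — the rule key below is assumed, not taken from the Govyn schema.
policies:
  - name: request-throttle
    type: rate_limit
    rule:
      requests_per_minute: 60
```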
See the Policy Templates page for ready-to-use configurations covering common governance scenarios.
API reference
The Govyn proxy exposes two categories of endpoints: proxy endpoints that forward LLM requests to upstream providers, and management endpoints that provide cost data, logs, and health information.
Proxy endpoints
These endpoints accept LLM requests and forward them to the configured provider after policy evaluation. They follow the same request and response format as the upstream provider API, so existing client libraries work without modification.
- POST /:slug/v1/chat/completions — OpenAI-compatible chat completions endpoint. Accepts the same request body as the OpenAI API. Supports streaming via SSE.
- POST /:slug/v1/messages — Anthropic-compatible messages endpoint. Accepts the same request body as the Anthropic API. Supports streaming via SSE.
Include the X-Govyn-Agent header with every request to identify the calling agent. Optionally include X-Govyn-Session to group requests into a session for replay.
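A minimal sketch of such a request built with the fetch API that ships with Node 18+. The slug (`acme`), token variable, and session id are illustrative assumptions; the endpoint path, body shape, and header names come from the reference above:

```typescript
// Build (but don't send) an OpenAI-compatible request aimed at a local Govyn proxy.
// The slug "acme", the token fallback, and the session id are illustrative assumptions.
const req = new Request("http://localhost:4000/acme/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.GOVYN_PROXY_TOKEN ?? "gv_example"}`,
    "Content-Type": "application/json",
    "X-Govyn-Agent": "support-bot",      // identifies the calling agent
    "X-Govyn-Session": "conv-2024-0001", // optional: groups requests for replay
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  }),
});
```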
Management endpoints
These endpoints provide read-only access to cost data, logs, budget status, and proxy health.
- GET /api/costs?agent=X&period=today — returns cost aggregates for the specified agent and time period. Supported periods: today, 7d, 30d.
- GET /api/logs?limit=N&agent=X — returns recent action logs, optionally filtered by agent name. Supports pagination via cursor.
- GET /api/budget/status?agent=X — returns the current budget status for the specified agent, including remaining daily and monthly allowance.
- GET /health — returns proxy health status. Use this for load balancer health checks and monitoring.
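Query strings for these endpoints can be assembled with URLSearchParams before calling fetch; the proxy host below is an illustrative assumption:

```typescript
// Compose a management-endpoint URL; the host is an illustrative assumption.
const base = "http://localhost:4000";
const params = new URLSearchParams({ agent: "support-bot", period: "7d" });
const costsUrl = `${base}/api/costs?${params}`;
// costsUrl → "http://localhost:4000/api/costs?agent=support-bot&period=7d"
```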
All management endpoints return JSON responses. Error responses follow a consistent format with type, message, policy, and agent fields.
Need help?
If you run into issues or have questions that are not covered in the documentation, here are the best ways to get help:
- GitHub Issues — open an issue at github.com/govynai/govyn/issues for bug reports, feature requests, and general questions. The maintainers and community are active and responsive.
- GitHub Discussions — for longer-form questions, architecture advice, and community conversation, use GitHub Discussions.
- Paid support — Starter plans include email support. Team plans include priority support. Enterprise plans include dedicated support with SLA. See pricing for details.
Frequently asked questions
What are the system requirements for Govyn?
Govyn requires Node.js 18 or later. It runs on Linux, macOS, and Windows. For self-hosted deployments, any environment that runs Node.js works — VMs, Docker containers, Kubernetes pods, or serverless platforms with long-running process support. No database, Redis, or external services are required.
How do I install Govyn?
Run npx govyn init to generate a govyn.yaml configuration file. Add your LLM provider API keys to the config. Start the proxy with npx govyn start. The proxy starts on localhost:4000 by default. The entire setup takes under five minutes. See the Quickstart Guide for a detailed walkthrough.
What LLM providers does Govyn support?
Govyn supports OpenAI, Anthropic, Azure OpenAI, Google Gemini, Mistral, Cohere, Ollama, and any OpenAI-compatible API including vLLM, llama.cpp, and LocalAI. Adding a new provider requires a single routing rule in govyn.yaml.
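As a sketch of what such a rule might look like for a local Ollama instance, the following assumes base_url and api_key are the relevant keys; these key names and the provider entry are assumptions, so consult the configuration reference for the exact schema:

```yaml
# Sketch only — key names are assumed, not the verified Govyn schema.
providers:
  ollama:
    base_url: http://localhost:11434/v1
    api_key: ollama   # many OpenAI-compatible servers accept any placeholder key
```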
Do I need to change my agent code to use Govyn?
No. Govyn is a drop-in proxy. You change two things in your agent configuration: the base URL (point it at the Govyn proxy instead of the provider) and the API key (use a Govyn proxy token instead of the provider key). No code changes, no SDK imports, no library installations.
What is the difference between Govyn Cloud and self-hosted?
Both run the same governance engine with the same policies. Govyn Cloud adds a managed dashboard, team management, RBAC, Stripe billing, anomaly detection alerts, and hosted proxy infrastructure. Self-hosted is free, MIT licensed, and keeps all data on your infrastructure with zero external dependencies. See the Cloud vs Self-Hosted comparison for a detailed breakdown.
How are policies defined in Govyn?
Policies are defined in YAML files that you version in git alongside your application code. Each policy has a name, type, and rule configuration. Types include budget, rate_limit, model_filter, loop_detection, approval, and model_route. Policies are evaluated in priority order for every incoming request. See the Policy Templates page for ready-to-use examples.
Can I use Govyn with multiple LLM providers at the same time?
Yes. You can configure multiple providers in a single govyn.yaml file. The proxy routes requests to the correct provider based on the model name in each request. Policies apply across all providers — a budget limit of $10/day applies to the total spend across OpenAI, Anthropic, and any other configured provider.
Where can I get help with Govyn?
Open an issue on the GitHub repository for bug reports and feature requests. The README includes detailed setup instructions and configuration reference. Paid plans include email support (Starter), priority support (Team), and dedicated support with SLA (Enterprise).