LangChain + Govyn — Govern Your LangChain Agents

LangChain agents calling OpenAI can burn through your API budget in minutes with recursive tool calls. Without a governance layer, a single runaway chain can generate hundreds of completions before you notice — and there's no way to enforce spending limits or model restrictions at the framework level.

How it works

[Diagram] LangChain (your agents) → HTTPS → Govyn Proxy (policy · budget · logs) → API → OpenAI API (LLM provider)

Step-by-step setup

1. Start the Govyn proxy

```bash
npx govyn start --config govyn.yaml
```
2. Point LangChain at Govyn

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    base_url="http://localhost:4111/v1",  # Govyn proxy instead of api.openai.com
    api_key="gvn_agent_langchain_01",     # your Govyn agent key, not an OpenAI key
)
```
3. Run your chain as usual

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate

# `tools` is your existing list of LangChain tools, defined elsewhere.
# The prompt must include an agent_scratchpad placeholder for tool calls.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({"input": "Summarize Q4 revenue"})
```

Example policy

Define governance rules for your LangChain agents in a simple YAML file.

govyn.yaml:

```yaml
agents:
  langchain_01:
    budget:
      daily: $5.00
      monthly: $100.00
    models:
      allow: [gpt-4o, gpt-4o-mini]
      deny: [gpt-4-32k]
    rate_limit:
      requests_per_minute: 30
    logging:
      replay: true
      redact_pii: true
```
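To make the policy semantics concrete, here is a toy sketch of the kind of allow/deny and budget check a policy like the one above implies (purely illustrative; this is not Govyn's actual engine):

```python
# Toy policy check mirroring the YAML above -- illustrative only.
POLICY = {
    "allow": ["gpt-4o", "gpt-4o-mini"],
    "deny": ["gpt-4-32k"],
    "daily_budget": 5.00,  # dollars
}

def check_request(model: str, spent_today: float) -> bool:
    """Return True if a request passes the model and budget rules."""
    if model in POLICY["deny"]:
        return False          # explicitly blocked model
    if model not in POLICY["allow"]:
        return False          # not on the allowlist
    if spent_today >= POLICY["daily_budget"]:
        return False          # daily budget exhausted
    return True
```

In the real proxy, a failed check would reject the request before it ever reaches OpenAI, so a runaway chain stops at the policy boundary rather than at your invoice.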

Why use Govyn with LangChain?

- Per-agent daily and monthly budget caps
- Model allowlists — block expensive models
- Full request/response replay for debugging chains
- Rate limiting to prevent runaway tool loops
- PII redaction in logged completions
- Zero code changes — just swap the base URL
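Because the swap is only a base URL change, it can also live entirely in deployment configuration rather than code: langchain_openai reads the standard OpenAI environment variables at client creation. A sketch, assuming the proxy address and key from the setup steps above:

```python
import os

# Route every OpenAI call in this process through the Govyn proxy by
# setting the environment variables the client reads at startup.
os.environ["OPENAI_API_BASE"] = "http://localhost:4111/v1"
os.environ["OPENAI_API_KEY"] = "gvn_agent_langchain_01"

# Existing code then needs no edits at all:
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model="gpt-4o")   # inherits the proxy settings
```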

Get started in 5 minutes

Add governance to your LangChain agents with a single config change. No code rewrites.


Frequently asked questions

Do I need to change my LangChain code to use Govyn?
No. You only change the base URL and API key when constructing your ChatOpenAI instance. All LangChain chains, agents, and tools continue to work exactly as before — Govyn is transparent at the HTTP level.
Does Govyn add latency to LangChain agent calls?
Govyn adds sub-millisecond overhead per request. Policy evaluation happens in-memory and the proxy forwards to OpenAI in a single async hop. This is negligible compared to LLM inference time, especially for multi-step chains.
Can I set different budgets for different LangChain agents?
Yes. Each agent gets a unique Govyn API key with its own budget, model restrictions, and rate limits. You can run dozens of LangChain agents through a single proxy, each with independent governance rules.
