LangChain + Govyn — Govern Your LangChain Agents
LangChain agents calling OpenAI can burn through your API budget in minutes with recursive tool calls. Without a governance layer, a single runaway chain can generate hundreds of completions before you notice — and there's no way to enforce spending limits or model restrictions at the framework level.
How it works
Step-by-step setup
Start the Govyn proxy
```shell
npx govyn start --config govyn.yaml
```

Point LangChain at Govyn
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:4111/v1",
    openai_api_key="gvn_agent_langchain_01",
)
```

Run your chain as usual
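If you would rather not touch constructor arguments, the same routing can be applied before running the chain through the OpenAI SDK's standard environment variables, which langchain_openai picks up when it builds its client. A minimal sketch, reusing the proxy URL and agent key from the step above:

```python
import os

# Route the OpenAI SDK (and anything built on it, such as langchain_openai)
# through the Govyn proxy without changing ChatOpenAI's arguments.
os.environ["OPENAI_BASE_URL"] = "http://localhost:4111/v1"  # Govyn proxy endpoint
os.environ["OPENAI_API_KEY"] = "gvn_agent_langchain_01"     # Govyn agent key, not a real OpenAI key
```

Set these before the first model call in the process; the SDK reads them at client construction time.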
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent

agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({"input": "Summarize Q4 revenue"})
```

Example policy
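When picking the daily and monthly caps below, it helps to estimate what a single agent run costs. A rough sketch in plain Python; the per-token prices here are illustrative placeholders, not current OpenAI rates:

```python
# Illustrative USD prices per 1M tokens (input, output). These are NOT
# current OpenAI rates; check the official pricing page before relying on them.
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough per-request cost estimate from token counts."""
    price_in, price_out = PRICES[model]
    return prompt_tokens / 1e6 * price_in + completion_tokens / 1e6 * price_out
```

Multiply the per-call estimate by your expected call volume (remember that a single agent run can make many completions) to sanity-check a daily cap.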
Define governance rules for your LangChain agents in a simple YAML file.
```yaml
agents:
  langchain_01:
    budget:
      daily: $5.00
      monthly: $100.00
    models:
      allow: [gpt-4o, gpt-4o-mini]
      deny: [gpt-4-32k]
    rate_limit:
      requests_per_minute: 30
    logging:
      replay: true
      redact_pii: true
```

Why use Govyn with LangChain?
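To make the policy's budget semantics concrete, here is a minimal sketch of the kind of spend accounting a governance proxy performs per agent. This is a hypothetical illustration, not Govyn's actual implementation:

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class BudgetTracker:
    """Toy per-agent spend ledger with daily and monthly caps."""
    daily_limit: float
    monthly_limit: float
    daily_spend: float = 0.0
    monthly_spend: float = 0.0
    day: datetime.date = field(default_factory=datetime.date.today)

    def allow(self, estimated_cost: float) -> bool:
        """Would this request stay inside both caps?"""
        today = datetime.date.today()
        if today != self.day:  # new day: reset the daily counter
            self.daily_spend = 0.0
            self.day = today
        return (self.daily_spend + estimated_cost <= self.daily_limit
                and self.monthly_spend + estimated_cost <= self.monthly_limit)

    def record(self, actual_cost: float) -> None:
        """Charge a completed request against both counters."""
        self.daily_spend += actual_cost
        self.monthly_spend += actual_cost
```

Because the check runs in the proxy, a runaway chain is rejected mid-loop: the first request that would cross a cap is refused before it ever reaches OpenAI.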
Get started in 5 minutes
Add governance to your LangChain agents with a single config change. No code rewrites.
Read the docs

Frequently asked questions
Do I need to change my LangChain code to use Govyn?
Does Govyn add latency to LangChain agent calls?
Can I set different budgets for different LangChain agents?
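On that last question: per-agent budgets map directly onto the policy file, since each key under agents: carries its own limits. A sketch following the schema above (the agent names here are illustrative):

```yaml
agents:
  langchain_research:
    budget:
      daily: $10.00
      monthly: $200.00
  langchain_support:
    budget:
      daily: $2.00
      monthly: $40.00
```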
Related integrations
CrewAI + Govyn: Add budget controls and policy enforcement to CrewAI multi-agent crews using OpenAI. Govern every agent in your crew independently.
Python + Govyn: Add governance to any Python AI agent. Works with requests, httpx, and the OpenAI SDK. Budget limits, policy enforcement, full replay.
LangChain + Anthropic: Govern LangChain agents using Claude models. Enforce budgets, restrict models, and log every completion with Govyn's proxy.
Explore more
SDK wrappers are door locks. Proxies are walls. A deep technical comparison of both governance architectures for AI agents in production.
FROM OUR BLOG: How smart model routing through a proxy cut our OpenAI and Anthropic bill from $2,140/mo to $578/mo. Zero code changes. Just YAML.
POLICY TEMPLATE: Set daily and monthly spending limits for AI agents. Prevent runaway costs with hard budget caps enforced at the proxy level.
POLICY TEMPLATE: Automatically route AI agent requests to cheaper models when possible. Cut LLM costs by 60-80% with smart model routing policies.