LangChain + Govyn — Govern Your LangChain Agents
LangChain agents powered by Anthropic's Claude models can rack up costs quickly, especially with long-context completions. Without centralized control, you're trusting each agent to self-regulate — and there's no built-in way to enforce spending limits or audit what your agents are sending to Claude.
How it works
Step-by-step setup
Start the Govyn proxy
npx govyn start --config govyn.yaml
Point LangChain at Govyn
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    anthropic_api_url="http://localhost:4111",
    anthropic_api_key="gvn_agent_langchain_claude_01",
)
Run your chain as usual
from langchain.agents import AgentExecutor, create_tool_calling_agent
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({"input": "Draft a contract summary"})
Example policy
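The example policy on this page allows both a Sonnet and a Haiku model for this agent. A hedged client-side sketch of taking advantage of that: the `pick_model` helper and its 2,000-character threshold are our own illustration, not part of Govyn or LangChain, and Govyn's allow/deny policy still applies server-side regardless of what the client picks.

```python
# Hypothetical client-side router: choose the cheaper allowed model for
# short prompts before the request ever reaches the proxy. Model IDs
# match the allow-list in the example policy; the threshold is arbitrary.
ALLOWED_MODELS = {
    "cheap": "claude-haiku-4-5-20251001",
    "capable": "claude-sonnet-4-20250514",
}

def pick_model(prompt: str, threshold: int = 2000) -> str:
    """Route short prompts to Haiku, longer ones to Sonnet."""
    if len(prompt) < threshold:
        return ALLOWED_MODELS["cheap"]
    return ALLOWED_MODELS["capable"]
```

The returned ID can then be passed as the `model` argument when constructing `ChatAnthropic`.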
Define governance rules for your LangChain agents in a simple YAML file.
agents:
  langchain_claude_01:
    budget:
      daily: $10.00
      monthly: $200.00
    models:
      allow: [claude-sonnet-4-20250514, claude-haiku-4-5-20251001]
      deny: [claude-opus-4-20250514]
    rate_limit:
      requests_per_minute: 20
    context:
      max_input_tokens: 50000
    logging:
      replay: true
Why use Govyn with LangChain?
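With `max_input_tokens: 50000` in the policy above, oversized requests are stopped at the proxy. A rough client-side pre-check can avoid the wasted round-trip; this sketch is our own illustration and uses the common four-characters-per-token heuristic, not Claude's actual tokenizer.

```python
# Rough pre-flight check against the policy's max_input_tokens cap.
# The 4-chars-per-token ratio is a heuristic, not a real tokenizer.
MAX_INPUT_TOKENS = 50_000

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4

def fits_policy(text: str, cap: int = MAX_INPUT_TOKENS) -> bool:
    """True if the estimated token count is within the policy cap."""
    return estimate_tokens(text) <= cap

def truncate_to_cap(text: str, cap: int = MAX_INPUT_TOKENS) -> str:
    """Trim input so the estimate stays at or under the cap."""
    return text[: cap * 4]
```

Because the estimate is approximate, it is best used with some headroom below the configured cap.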
Get started in 5 minutes
Add governance to your LangChain agents with a single config change. No code rewrites.
Read the docs
Frequently asked questions
Does Govyn support Claude's tool-use API through LangChain?
Can I limit the context window size for cost control?
Can I switch between OpenAI and Anthropic without code changes?
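On the provider-swap question: because LangChain's chat model classes accept base-URL and API-key overrides at construction time, switching providers can be reduced to a config lookup. A hedged sketch under that assumption; the provider table, the OpenAI model ID, and the `gvn_agent_langchain_openai_01` key below are illustrative, not Govyn's actual API.

```python
# Illustrative provider table: each entry holds the constructor kwargs
# needed to point a LangChain chat model at the Govyn proxy.
GOVYN_URL = "http://localhost:4111"

PROVIDERS = {
    "anthropic": {
        "class_name": "ChatAnthropic",  # from langchain_anthropic
        "kwargs": {
            "model": "claude-sonnet-4-20250514",
            "anthropic_api_url": GOVYN_URL,
            "anthropic_api_key": "gvn_agent_langchain_claude_01",
        },
    },
    "openai": {
        "class_name": "ChatOpenAI",  # from langchain_openai
        "kwargs": {
            "model": "gpt-4o",  # example model ID
            "base_url": GOVYN_URL,
            "api_key": "gvn_agent_langchain_openai_01",  # hypothetical key
        },
    },
}

def llm_config(provider: str) -> dict:
    """Return the constructor kwargs for the chosen provider."""
    return PROVIDERS[provider]["kwargs"]
```

The calling code then does `ChatAnthropic(**llm_config("anthropic"))` or the OpenAI equivalent, so the agent logic itself never changes.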
Related integrations
Govern CrewAI multi-agent crews using Claude. Set per-agent budgets, enforce model policies, and replay every conversation.
Add budget limits, policy enforcement, and full replay to LangChain agents using OpenAI. Five-minute setup, zero code changes.
Govern OpenClaw agents using Claude. Add budget enforcement, model policies, and conversation replay to your OpenClaw workflows.
Explore more
SDK wrappers are door locks. Proxies are walls. A deep technical comparison of both governance architectures for AI agents in production.
POLICY TEMPLATE
Set daily and monthly spending limits for AI agents. Prevent runaway costs with hard budget caps enforced at the proxy level.
POLICY TEMPLATE
Automatically route AI agent requests to cheaper models when possible. Cut LLM costs by 60-80% with smart model routing policies.
COMPARISON
Compare Govyn and LiteLLM for AI agent governance. See how a governance-first proxy differs from a multi-provider routing gateway.