Policy Templates

Ready-to-use YAML policies for common governance scenarios. Copy, paste, and customize for your agents.

Budget Control

Enforce hard daily and monthly spending caps on each AI agent. When an agent reaches its budget limit, Govyn rejects further requests before they reach the LLM provider — no overspend is possible. Configure separate limits per agent, team, or environment.
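A minimal sketch of what a budget policy could look like. The key names (`limits`, `on_limit`, and so on) are illustrative assumptions, not Govyn's documented schema — check the actual template for the real field names:

```yaml
# Hypothetical schema — field names are illustrative, not Govyn's actual syntax.
policy: budget-control
scope:
  agent: billing-agent       # can also scope by team or environment
limits:
  daily_usd: 50
  monthly_usd: 1000
on_limit: reject             # block requests before they reach the provider
```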

View template
Business Hours Only

Restrict AI agent access to LLM APIs to specific hours and days. Agents outside the allowed window receive a clear rejection — preventing overnight runaway costs and ensuring agents only operate when human oversight is available. Supports timezone-aware scheduling and per-agent overrides.
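A sketch of how a schedule policy might be expressed. The structure below (a `schedule` block with an IANA timezone, plus per-agent `overrides`) is an assumption for illustration only:

```yaml
# Hypothetical schema — illustrative only.
policy: business-hours
schedule:
  timezone: America/New_York   # IANA tz identifier
  days: [Mon, Tue, Wed, Thu, Fri]
  hours: "09:00-18:00"
outside_window: reject
overrides:
  - agent: on-call-agent       # this agent is exempt from the window
    hours: "00:00-24:00"
```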

View template
Compliance Audit

Maintain a complete, tamper-evident audit trail for every AI agent interaction. Log every request, response, policy decision, approval, and budget event with timestamps and agent identity. Essential for SOC 2, GDPR Article 30, HIPAA, and EU AI Act compliance requirements. All logs are stored on your infrastructure with configurable retention.
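A rough sketch of an audit configuration covering the event types and retention the paragraph describes. All key names here are assumptions, not Govyn's real schema:

```yaml
# Hypothetical schema — illustrative only.
policy: compliance-audit
log:
  events: [request, response, policy_decision, approval, budget]
  include: [timestamp, agent_id]
storage:
  location: self-hosted        # logs stay on your infrastructure
  retention_days: 365          # configurable retention
```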

View template
Emergency Lockdown

Instantly halt all AI agent API access with a single command or API call. The emergency lockdown policy gives you a kill switch that blocks every request at the proxy level — no agent can reach any LLM provider until the lockdown is lifted. Essential for incident response when an agent is behaving unexpectedly.
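One plausible shape for a kill-switch policy — a single flag that, when flipped (in config or via an API call), blocks every request at the proxy. The fields are illustrative assumptions:

```yaml
# Hypothetical schema — illustrative only.
policy: emergency-lockdown
enabled: false                 # flip to true to block all agent traffic
action: reject_all
message: "All agent API access is locked down pending incident review."
```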

View template
External Comms Approval

Require human approval before AI agents can trigger external communications like emails, Slack messages, API calls to third-party services, or any action that leaves your system boundary. The agent's request is held in a pending state until a human reviewer approves or rejects it — ensuring no unvetted content reaches customers, partners, or external systems.
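A sketch of how such an approval gate might be declared: match outbound actions, hold them pending review, and fail closed on timeout. The action names and keys are hypothetical:

```yaml
# Hypothetical schema — illustrative only.
policy: external-comms-approval
match:
  actions: [send_email, slack_message, third_party_api_call]
require_approval:
  reviewers: [ops-team]
  pending_timeout_minutes: 60
  on_timeout: reject           # fail closed if no reviewer responds
```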

View template
Loop Detection

Automatically detect when an AI agent enters an infinite loop or recursive call pattern and stop it before it burns through your budget. Govyn tracks request patterns per agent and triggers a circuit breaker when repetitive or runaway behavior is detected — no changes to your agent code required.
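A sketch of loop-detection thresholds under an assumed schema: count similar requests within a sliding window and trip a circuit breaker when the count is exceeded. Names and numbers are illustrative:

```yaml
# Hypothetical schema — illustrative only.
policy: loop-detection
detect:
  window_seconds: 300          # sliding window for pattern tracking
  max_similar_requests: 20     # trip threshold within the window
circuit_breaker:
  action: block
  cooldown_minutes: 15         # how long the agent stays blocked
```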

View template
PII Protection

Automatically detect and redact personally identifiable information (PII) from AI agent requests before they reach the LLM provider, and from responses before they're logged. Protect email addresses, phone numbers, social security numbers, and custom patterns. Essential for GDPR, HIPAA, and SOC 2 compliance.
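One way a redaction policy might look, combining built-in detectors with a custom regex pattern. The detector names and keys are assumptions for illustration:

```yaml
# Hypothetical schema — illustrative only.
policy: pii-protection
redact:
  patterns: [email, phone, ssn]      # assumed built-in detectors
  custom:
    - name: employee_id
      regex: "EMP-[0-9]{6}"
apply_to: [requests, logged_responses]
```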

View template
Production Safety

Lock down AI agents in production with strict model allowlists, rate limits, and approval requirements for sensitive operations. This policy ensures agents can only use approved models, stay within safe request rates, and obtain human approval before performing high-risk actions.
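A sketch combining the three controls above in one policy. The model names and field names are placeholders, not Govyn's documented syntax:

```yaml
# Hypothetical schema — illustrative only.
policy: production-safety
environment: production
models:
  allow: [approved-model-a, approved-model-b]   # placeholder model IDs
rate_limit:
  requests_per_minute: 60
approvals:
  required_for: [data_export, bulk_delete]      # assumed action names
```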

View template
Smart Model Routing

Automatically route AI agent requests to the most cost-effective model based on task complexity, token count, or agent role. Simple queries go to fast, cheap models while complex tasks use premium models. Cut your LLM API costs by 60-80% without changing a single line of agent code.
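A sketch of what a routing table might look like: ordered rules with a condition and a target model, falling through to a default. The `when` conditions and model names are illustrative assumptions:

```yaml
# Hypothetical schema — illustrative only.
policy: smart-model-routing
routes:
  - when: { estimated_tokens_below: 500 }   # simple, short queries
    model: cheap-fast-model
  - when: { agent_role: research }          # complex work by role
    model: premium-model
default_model: cheap-fast-model             # fallback when no rule matches
```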

View template