
Runtime governance guardrails #2775

@imran-siddique

Description


Summary

We've built a governance integration for the OpenAI Agents SDK in the Agent Governance Toolkit (MIT-licensed, 6,100+ tests). The adapter lives at packages/agentmesh-integrations/openai-agents-trust/.

What it provides (distinct from prompt-level guardrails)

| Capability | Description |
| --- | --- |
| Policy enforcement | Deterministic allow/deny rules evaluated before tool execution (<0.1 ms) |
| Trust guardrails | Cryptographic agent identity with trust scoring (0–1000) |
| Governance hooks | Pre-/post-execution hooks for policy and audit |
| Audit logging | Hash-chained audit trail for every agent action |

This is complementary to the SDK's existing guardrails — those focus on prompt/output safety, while this handles runtime governance (which tools can be called, by which agents, with what permissions).
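To make the "hash-chained audit trail" row concrete, here is a generic sketch of the idea — each log entry commits to the previous entry's hash, so any later tampering breaks verification. The `AuditLog` class and its fields are invented for illustration, not the toolkit's actual API:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous one via a hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, action: dict) -> str:
        record = {"action": action, "prev_hash": self._last_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        entry_hash = hashlib.sha256(payload).hexdigest()
        self.entries.append({**record, "hash": entry_hash})
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any edit to any past entry breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(
                {"action": e["action"], "prev_hash": e["prev_hash"]},
                sort_keys=True,
            ).encode()
            if e["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"agent": "planner", "tool": "search", "allowed": True})
log.append({"agent": "planner", "tool": "write_file", "allowed": False})
assert log.verify()
log.entries[0]["action"]["tool"] = "something_else"  # tamper with history
assert not log.verify()
```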

Integration approach

The adapter wraps the Agent class with governance middleware, intercepting tool calls for policy evaluation. No changes to the SDK are needed.

pip install openai-agents-trust
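The wrapping pattern described above can be sketched roughly as follows. This is a hypothetical illustration of the middleware idea (the real adapter's API will differ; `Policy`, `GovernedTool`, and the rule shape are invented here). The key point is that the allow/deny decision is a plain data lookup — no model call — which is why it can be deterministic and sub-millisecond:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Deterministic allow/deny rules: which tools an agent may call."""
    allowed_tools: set = field(default_factory=set)

    def check(self, agent_id: str, tool_name: str) -> bool:
        # No model call, just a set lookup, so evaluation is fast and repeatable.
        return tool_name in self.allowed_tools

class GovernedTool:
    """Wraps a tool callable, consulting the policy before every execution."""

    def __init__(self, agent_id: str, tool, policy: Policy, audit: list):
        self.agent_id = agent_id
        self.tool = tool
        self.policy = policy
        self.audit = audit  # audit sink; a plain list stands in for the real trail

    def __call__(self, *args, **kwargs):
        allowed = self.policy.check(self.agent_id, self.tool.__name__)
        self.audit.append(
            {"agent": self.agent_id, "tool": self.tool.__name__, "allowed": allowed}
        )
        if not allowed:
            raise PermissionError(
                f"{self.agent_id} is not permitted to call {self.tool.__name__}"
            )
        return self.tool(*args, **kwargs)

def search(query: str) -> str:
    return f"results for {query}"

audit = []
governed = GovernedTool("agent-1", search, Policy(allowed_tools={"search"}), audit)
result = governed("weather")  # allowed: passes through to the tool
assert result == "results for weather"
assert audit[-1]["allowed"] is True
```

Denied calls raise before the tool runs, and both outcomes land in the audit sink, so enforcement and logging stay in one interception point.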

Why this matters

  • Enterprise deployment — runtime policy enforcement and audit trails are prerequisites for production
  • OWASP coverage — addresses OWASP Agentic Top 10 risks at the runtime layer
  • Handoff governance — trust-gated agent handoffs with accountability
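A trust-gated handoff can be sketched in a few lines. This is an invented illustration of the concept (the scores, threshold, and function are hypothetical), using the 0–1000 trust-score range mentioned above — control only transfers to agents whose score clears a threshold:

```python
# Hypothetical trust registry; in practice scores would come from the
# toolkit's cryptographic identity and scoring layer, not a literal dict.
TRUST_SCORES = {"researcher": 850, "new-agent": 120}
HANDOFF_THRESHOLD = 500  # minimum score (on the 0-1000 scale) to accept a handoff

def can_hand_off(target_agent: str) -> bool:
    """Deny handoffs to unknown agents or agents below the trust threshold."""
    return TRUST_SCORES.get(target_agent, 0) >= HANDOFF_THRESHOLD

assert can_hand_off("researcher")        # 850 >= 500: handoff allowed
assert not can_hand_off("new-agent")     # 120 < 500: handoff denied
assert not can_hand_off("unknown")       # unregistered agents default to 0
```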

Open question

Would there be interest in listing this as a community integration or documenting a governance middleware pattern?

Metadata


Labels: question (Question about using the SDK)
