OpenAI adapter

Wraps the standard tool_calls execution path of OpenAI's function-calling interface. Each call from the model is evaluated by the policy gate before the underlying function runs. Denied calls return a structured error message in the tool result so the loop continues and the model can react.

No changes to your client config, model selection, message history, or prompt — only tool execution is intercepted.

Install

bash
pip install kitelogik[openai]    # or: pip install kitelogik openai

Setup

Same setup as every other adapter — build a PolicyGate and a SessionContext once per session.

python
from kitelogik import OPAClient, PolicyGate, SessionContext
from kitelogik.adapters.openai import OpenAIAdapter

gate = PolicyGate(opa_client=OPAClient())          # default base_url
context = SessionContext(
    session_id="sess_001",
    user_role="support_agent",
    session_scopes=["read_customer", "approve_refund_under_100"],
)

adapter = OpenAIAdapter(gate=gate, context=context)

Register tools

register(name, fn, schema=None, action=None). The schema is OpenAI's function schema — the contents of function inside a tool definition. Only tools that register a schema are returned by openai_tool_schemas().

python
async def get_customer_record(customer_id: str) -> str:
    return f"Customer {customer_id}: Acme Corp"

async def approve_refund(customer_id: str, amount: float) -> str:
    return f"Refunded ${amount:.2f} to {customer_id}"

adapter.register(
    "get_customer_record",
    get_customer_record,
    schema={
        "name": "get_customer_record",
        "description": "Look up a customer by ID.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
)

adapter.register(
    "approve_refund",
    approve_refund,
    schema={
        "name": "approve_refund",
        "description": "Approve a refund for a customer.",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_id": {"type": "string"},
                "amount": {"type": "number"},
            },
            "required": ["customer_id", "amount"],
        },
    },
)
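The schemas you register are the inner function objects; chat.completions.create() expects each one wrapped in a tool envelope. A minimal sketch of that wrapping (illustrative only — the adapter does this for you via openai_tool_schemas()):

```python
def to_openai_tools(schemas):
    # Wrap each registered function schema in OpenAI's tool envelope,
    # which is the shape chat.completions.create(tools=...) expects.
    return [{"type": "function", "function": s} for s in schemas]

schema = {
    "name": "get_customer_record",
    "description": "Look up a customer by ID.",
    "parameters": {
        "type": "object",
        "properties": {"customer_id": {"type": "string"}},
        "required": ["customer_id"],
    },
}

tools = to_openai_tools([schema])
print(tools[0]["type"])              # function
print(tools[0]["function"]["name"])  # get_customer_record
```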

action= overrides the OPA action name when your Python tool name differs from what your Rego policies match against.
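Conceptually, the lookup works like this sketch (not the adapter's source; "refund.approve" is a hypothetical Rego action name, and the fallback-to-tool-name default is an assumption based on the description above):

```python
# Hypothetical mapping from Python tool names to OPA action names.
action_overrides = {"approve_refund": "refund.approve"}

def opa_action(tool_name: str) -> str:
    # Fall back to the tool name itself when no action= override exists.
    return action_overrides.get(tool_name, tool_name)

print(opa_action("approve_refund"))       # refund.approve
print(opa_action("get_customer_record"))  # get_customer_record
```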

Drop into your existing loop

python
import openai

client = openai.AsyncOpenAI()
messages = [{"role": "user", "content": "Refund $50 to cust_001"}]

while True:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=adapter.openai_tool_schemas(),     # ← only added line
    )
    choice = response.choices[0]

    if choice.finish_reason != "tool_calls":
        break

    messages.append(choice.message)
    tool_results = await adapter.execute_all(choice.message.tool_calls)
    messages.extend(tool_results)

print(response.choices[0].message.content)

The only additions are tools=adapter.openai_tool_schemas() in the create() call and routing the existing tool_calls execution through adapter.execute_all(...).

What happens on a deny

A denied tool call returns an OpenAI-shaped tool result message:

json
{
  "role": "tool",
  "tool_call_id": "call_abc123",
  "content": "{\"blocked\": true, \"reason\": \"Action blocked by governance policy.\"}"
}

The agent loop appends this to messages and continues — the model sees the denial and can either retry with different args, escalate to the user, or give up. To customise the message, pass deny_message= to the adapter constructor:

python
adapter = OpenAIAdapter(
    gate=gate,
    context=context,
    deny_message="Refunds over your authorized scope require manager approval.",
)
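Because a deny is just another tool message, the loop needs no special handling. If you want to log or audit denials on your side, you can parse the content JSON — a sketch, assuming the payload shape shown above:

```python
import json

# A denied call arrives as an ordinary tool-result message.
tool_result = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"blocked": true, "reason": "Action blocked by governance policy."}',
}

# Detect the denial by parsing the structured content.
payload = json.loads(tool_result["content"])
if payload.get("blocked"):
    print(f"denied {tool_result['tool_call_id']}: {payload['reason']}")
```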

Methods at a glance

| Method | Use |
| --- | --- |
| register(name, fn, schema=None, action=None) | Register a tool. Chainable. |
| openai_tool_schemas() | Returns the tools=[…] list for chat.completions.create(). |
| await execute(tool_call) | Run one tool call, return the tool-result message. |
| await execute_all(tool_calls) | Run all tool calls concurrently, return list of tool-result messages in order. |
| execute_sync(tool_call) | Synchronous variant. Safe inside Jupyter / FastAPI loops. |
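The ordering guarantee of execute_all matches what asyncio.gather gives for free: results come back in argument order even when individual calls finish out of order. An illustrative sketch (not the adapter's code):

```python
import asyncio

async def slow_tool(x: int) -> int:
    # Later inputs sleep less, so they finish first.
    await asyncio.sleep(0.03 - 0.01 * x)
    return x

async def run_all(args):
    # gather preserves argument order, so results line up with the
    # input list regardless of completion order.
    return await asyncio.gather(*(slow_tool(a) for a in args))

print(asyncio.run(run_all([0, 1, 2])))  # [0, 1, 2]
```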

Constructor options

| Param | Default | Purpose |
| --- | --- | --- |
| gate | required | The PolicyGate instance |
| context | required | The SessionContext for this session |
| sanitize | True | Run prompt-injection sanitiser on string returns |
| deny_message | "Action blocked by governance policy." | Surfaced to the model on deny |

Source

kitelogik/adapters/openai.py on GitHub.

Released under the Apache 2.0 License.