
Agent Integration

Learn how to connect your existing AI agents to the Scenario testing framework

Scenario is designed to be framework-agnostic, supporting any agent architecture through simple adapter patterns.

Overview


Scenario works with any agent implementation through the AgentAdapter interface. This adapter pattern allows you to:

  • Integrate existing agents without modifying their code
  • Support multiple response formats (strings, OpenAI messages, etc.)
  • Handle both stateless and stateful agents
  • Work with any LLM or agent framework

The AgentAdapter Interface

The AgentAdapter is a simple contract that allows Scenario to work with any agent implementation. You only need to provide:

  • role: The agent’s role in the scenario (defaults to AGENT)
  • call: An async function that receives the full conversation history and returns the agent’s next response

You can wrap any agent (OpenAI, AgentKit, Mastra, custom, etc.) as an AgentAdapter by implementing this interface. The AgentAdapter receives an AgentInput and returns AgentReturnTypes.

When writing a Scenario test, you pass your agents as AgentAdapter objects. Scenario will call your agent’s call function whenever it’s their turn in the conversation, passing in the latest state and message history.

python
import scenario
 
class MyAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Your integration logic here
        pass
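
Once wrapped, your adapter is passed to a scenario run together with the simulated user and judge. The sketch below follows the framework’s quickstart pattern; configure, UserSimulatorAgent, JudgeAgent, and the exact run signature should be verified against the getting-started guide:

python
import pytest
import scenario
 
scenario.configure(default_model="openai/gpt-4o-mini")  # model used by the simulator and judge
 
@pytest.mark.asyncio
async def test_my_agent():
    result = await scenario.run(
        name="my agent scenario",
        description="User asks the agent for help with a simple task",
        agents=[
            MyAgent(),                      # the adapter defined above
            scenario.UserSimulatorAgent(),  # simulates the user's side of the conversation
            scenario.JudgeAgent(criteria=["Agent responds helpfully to the request"]),
        ],
    )
    assert result.success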

AgentInput

python
class AgentInput:
    thread_id: str                                  # Unique conversation ID
    messages: List[ChatCompletionMessageParam]      # Full conversation history
    new_messages: List[ChatCompletionMessageParam]  # New messages since last call
    judgment_request: bool                          # Whether this is a judge request
    scenario_state: ScenarioState                   # Current scenario state
 
    # Convenience methods
    def last_new_user_message(self) -> ChatCompletionUserMessageParam
    def last_new_user_message_str(self) -> str

AgentReturnTypes

python
AgentReturnTypes = Union[
    str,                                   # Simple text response
    ChatCompletionMessageParam,            # Single OpenAI message
    List[ChatCompletionMessageParam],      # Multiple messages
    ScenarioResult                         # Direct test result (for judges)
]

Integration Patterns

1. Simple String Agents

python
def my_simple_agent(question: str) -> str:
    """Your existing agent function"""
    return f"Here's my response to: {question}"
 
class SimpleAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> str:
        user_message = input.last_new_user_message_str()
        return my_simple_agent(user_message)

2. OpenAI Message Agents

python
import litellm
 
class LiteLLMAgent(scenario.AgentAdapter):
    def __init__(self, model: str = "openai/gpt-4o-mini"):
        self.model = model
 
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        response = await litellm.acompletion(
            model=self.model,
            messages=input.messages
        )
        return response.choices[0].message

3. Stateful Agents

python
class StatefulAgent(scenario.AgentAdapter):
    def __init__(self):
        self.agent = MyAgent()
 
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # We only send new_messages here because this agent keeps the history across calls
        response = await self.agent.call(messages=input.new_messages, thread_id=input.thread_id)
        return response

4. Multi-Step Responses

python
class MultiStepAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        user_request = input.last_new_user_message_str()
 
        if "complex task" in user_request.lower():
            return [
                {"role": "assistant", "content": "I'll help you with that complex task."},
                {"role": "assistant", "content": "Let me break this down into steps..."},
                {"role": "assistant", "content": "Here's your complete solution:"}
            ]
 
        return await self.simple_response(user_request)

5. Error Handling Agents

python
class ErrorAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        raise Exception("Simulated agent failure")
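
The adapter above simply raises, which is useful for exercising how a scenario run reports agent failures. A production-style integration will usually catch the failure and return a recoverable message instead; a minimal sketch, assuming a wrapped underlying agent whose call may raise:

python
class GracefulAgent(scenario.AgentAdapter):
    def __init__(self, agent):
        self.agent = agent  # any underlying agent whose call may raise
 
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        try:
            return await self.agent.call(messages=input.messages)
        except Exception as error:
            # Report the failure in-conversation instead of crashing the run
            return {
                "role": "assistant",
                "content": f"I ran into an internal error: {error}. Please try again.",
            }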

6. Tool-Using Agents

Tool use follows the same pattern as the OpenAI Message Agents example above: pass OpenAI-style tool definitions to litellm, execute any requested tool calls yourself, and return the resulting messages, as shown in the sketch below.
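
A minimal sketch, assuming a hypothetical local get_weather tool; the tool schema and tool_calls handling follow the OpenAI tools format that litellm mirrors:

python
import json
 
import litellm
import scenario
 
def get_weather(city: str) -> str:
    # Hypothetical local tool implementation used for illustration
    return f"It is currently sunny in {city}."
 
class ToolUsingAgent(scenario.AgentAdapter):
    def __init__(self, model: str = "openai/gpt-4o-mini"):
        self.model = model
        self.tools = [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ]
 
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        response = await litellm.acompletion(
            model=self.model,
            messages=input.messages,
            tools=self.tools,
        )
        message = response.choices[0].message
 
        # No tool requested: return the assistant message as-is
        if not message.tool_calls:
            return message
 
        # Execute each requested tool call locally
        tool_messages = []
        for tool_call in message.tool_calls:
            if tool_call.function.name == "get_weather":
                arguments = json.loads(tool_call.function.arguments)
                tool_messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": get_weather(**arguments),
                })
 
        # Let the model produce a final answer based on the tool results
        followup = await litellm.acompletion(
            model=self.model,
            messages=[*input.messages, message, *tool_messages],
        )
        return [message, *tool_messages, followup.choices[0].message]

Because AgentReturnTypes accepts a list of messages, the assistant tool-call message, the tool results, and the final reply can all be returned in a single turn.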

Caching Integration

python
class CachedAgent(scenario.AgentAdapter):
    @scenario.cache()
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # This call will be cached for deterministic testing
        return await self.expensive_llm_call(input.messages)

Framework Integrations

Scenario provides dedicated integration guides for popular agent frameworks; see the framework-specific pages in the documentation.

Next Steps

Now that you understand agent integration, explore the framework-specific guides for your stack.