
JSON Response Pattern

Test agents that return complete JSON responses via HTTP POST requests. This is the most common pattern for REST APIs. See Blackbox Testing for the testing philosophy.

When to Use

  • Standard REST APIs
  • Agents that return complete responses (not streaming)
  • Simple request/response pattern
  • Most production deployments

Complete Working Example

This example demonstrates:

  • Starting a test HTTP server that stands in for a deployed agent endpoint
  • Building an adapter that POSTs messages and parses the JSON response
  • Running a full scenario test with an LLM user simulator and judge (OpenAI GPT-4o-mini)
  • Proper server setup and teardown
# Source: https://github.com/langwatch/scenario/blob/main/python/examples/test_testing_remote_agents_json.py
"""
Example: Testing an agent that returns JSON responses
 
This test demonstrates handling agents that return complete JSON responses via HTTP POST.
"""
 
from aiohttp import web
import aiohttp
import pytest
import pytest_asyncio
import scenario
 
# Base URL for the test server (set during server startup)
base_url = ""
 
 
class JsonAgentAdapter(scenario.AgentAdapter):
    """
    Adapter for testing agents that return JSON responses.
 
    This adapter:
    1. Extracts the most recent user message from conversation history
    2. Makes an HTTP POST request to the agent endpoint
    3. Parses the JSON response and returns the agent's message
    """
 
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Extract the most recent user message content
        last_message = input.messages[-1]
        content = last_message["content"]  # type: ignore[typeddict-item]
 
        # For this example, we assume content is a string
        if not isinstance(content, str):
            raise ValueError("This example only handles string content")
 
        # Make HTTP POST request to your agent's endpoint
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{base_url}/chat",
                json={"message": content},
                timeout=aiohttp.ClientTimeout(total=30),
            ) as response:
                # Parse JSON response and return the agent's message
                result = await response.json()
                return result["response"]
 
 
async def chat_handler(request: web.Request) -> web.Response:
    """
    HTTP endpoint that receives a message and returns a response.
 
    This simulates a production agent endpoint.
    """
    data = await request.json()
    message = data["message"]
 
    # In a real application, you would call your LLM here
    # For this example, we return a simple response
    response_text = f"The weather is sunny and 72°F. Your query was: {message}"
 
    # Return JSON response
    return web.json_response({"response": response_text})
 
 
@pytest_asyncio.fixture
async def test_server():
    """
    Start a test HTTP server before tests and shut it down after.
 
    This server simulates a deployed agent endpoint.
    """
    global base_url
 
    # Create web application
    app = web.Application()
    app.router.add_post("/chat", chat_handler)
 
    # Start server on random available port
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "localhost", 0)
    await site.start()
 
    # Get the actual port assigned
    server = site._server
    assert server is not None
    port = server.sockets[0].getsockname()[1]  # type: ignore[union-attr]
    base_url = f"http://localhost:{port}"
 
    yield
 
    # Cleanup: stop server
    await runner.cleanup()
 
 
@pytest.mark.asyncio
async def test_json_response(test_server):
    """
    Test agent via HTTP endpoint with JSON response.
 
    This test verifies:
    - Adapter correctly calls HTTP endpoint
    - JSON response is properly parsed
    - Agent provides relevant weather information
    - Full scenario flow works end-to-end
    """
    result = await scenario.run(
        name="JSON weather inquiry",
        description="User asks about weather and receives JSON response",
        agents=[
            scenario.UserSimulatorAgent(model="openai/gpt-4o-mini"),
            JsonAgentAdapter(),
            scenario.JudgeAgent(
                model="openai/gpt-4o-mini",
                criteria=[
                    "Agent should provide weather information",
                    "Response should be relevant to the query",
                ],
            ),
        ],
        script=[
            scenario.user("What's the weather like today?"),
            scenario.agent(),
            scenario.judge(),
        ],
        set_id="python-examples",
    )
 
    assert result.success

Key Points

  1. Extract last message: Get the most recent user message from the conversation history
  2. Handle content types: Support both string and multipart content (images, files); see the sketch after this list
  3. HTTP POST: Make a standard HTTP request with a JSON body
  4. Parse response: Extract the agent's message from the JSON response
  5. Adjust field names: Change result["response"] (result.response in TypeScript) to match your API (it could be message, text, etc.)
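
For point 2, the Python adapter above rejects non-string content. A minimal sketch of handling multipart content (OpenAI-style content parts) might look like the following; extract_text is a hypothetical helper for illustration, not part of the Scenario API:

def extract_text(content) -> str:
    """Return the text of a message whose content is either a string or a list of parts."""
    if isinstance(content, str):
        return content
    # Multipart content: keep only the text parts; image and file parts are skipped
    return " ".join(
        part.get("text", "")
        for part in content
        if isinstance(part, dict) and part.get("type") == "text"
    )

Inside the adapter, content = extract_text(input.messages[-1]["content"]) would then replace the isinstance check.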

Adapter Pattern

The adapter is the bridge between Scenario and your HTTP API:

const jsonAgentAdapter: AgentAdapter = {
  role: AgentRole.AGENT,
  call: async (input) => {
    // 1. Extract latest message
    const lastMessage = input.messages[input.messages.length - 1];
    const content =
      typeof lastMessage.content === "string"
        ? lastMessage.content
        : (lastMessage.content.find((part) => part.type === "text") as any)
            ?.text || "";
 
    // 2. Make HTTP request
    const response = await fetch(`${baseUrl}/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: content }),
    });
 
    // 3. Parse and return
    const result = await response.json();
    return result.response; // Adjust to match your API
  },
};

Testing Your Own Agent

To test your deployed agent:

  1. Replace the server URL: Change baseUrl to your agent's URL
  2. Adjust request format: Match your API's expected JSON structure
  3. Update response parsing: Extract the message from your API's response format
  4. Add authentication: Include headers like Authorization if needed
const myAgentAdapter: AgentAdapter = {
  role: AgentRole.AGENT,
  call: async (input) => {
    const response = await fetch("https://my-agent.com/api/chat", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.API_KEY}`,
      },
      body: JSON.stringify({
        prompt: input.messages[input.messages.length - 1].content,
      }),
    });
 
    const result = await response.json();
    return result.output; // Your API's response field
  },
};
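
The same adjustments in Python, as a rough sketch: the URL, the prompt request field, the output response field, and the API_KEY environment variable are placeholders you would replace with your own API's details.

import os

import aiohttp
import scenario


class MyAgentAdapter(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Latest user message (string content assumed for brevity)
        content = input.messages[-1]["content"]  # type: ignore[typeddict-item]

        async with aiohttp.ClientSession() as session:
            async with session.post(
                "https://my-agent.com/api/chat",  # placeholder: your agent's URL
                json={"prompt": content},  # placeholder: your API's request schema
                headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},  # placeholder auth
                timeout=aiohttp.ClientTimeout(total=30),
            ) as response:
                response.raise_for_status()
                result = await response.json()
                return result["output"]  # placeholder: your API's response field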

See Also