# Scenario: Agent Testing with Simulation-Based Workflows

> Test AI agents with simulation-based testing. LLM-powered user simulators validate agent behavior, tool calling, and multi-turn conversations in LangGraph, CrewAI, and Pydantic AI.

## Docs

- [Agent Integration](/agent-integration.md): Learn how to connect your existing AI agents to the Scenario testing framework
- [Community & Support](/community-support.md): Join the Scenario community to connect with other users, get help, and explore the agent development ecosystem
- [Visualizing Simulations](/visualizations.md): The Simulations visualizer in LangWatch provides a powerful way to inspect and analyze the results of your agent tests built with the Scenario library. It offers a user-friendly interface for digging into simulation runs, helping you debug your agent’s behavior and collaborate with your team.
- [Blackbox Testing](/testing-guides/blackbox-testing.md): Testing via public interfaces
- [Fixtures](/testing-guides/fixtures.md): Static test assets for deterministic scenarios
- [Mocks](/testing-guides/mocks.md): Simulating external dependencies for deterministic testing
- [Testing Tool Calls in Scenarios](/testing-guides/tool-calling.md): Tool calls are a core part of modern agent workflows. This guide covers how to write scenario tests that verify tool usage, how to assert on tool call behavior, and how to mock or script tool call results for robust, deterministic tests.
- [Getting Started - Your First Scenario](/introduction/getting-started.md): A scenario is a test for a situation that your agent must be able to handle (a minimal test sketch appears after this index)
- [Simulation-Based Testing](/introduction/simulation-based-testing.md): Multi-agent applications require a new approach to testing
- [Testing Remote Agents](/examples/testing-remote-agents.md): Adapters for HTTP-deployed agents
- [JSON Response Pattern](/examples/testing-remote-agents/json.md): Test [agents](/agent-integration) that return complete JSON responses via HTTP POST requests. This is the most common pattern for REST APIs. See [Blackbox Testing](/testing-guides/blackbox-testing) for the testing philosophy.
- [Server-Sent Events (SSE) Pattern](/examples/testing-remote-agents/sse.md): Test [agents](/agent-integration) that stream responses using the Server-Sent Events format. SSE uses `data:`-prefixed lines and ends with `data: [DONE]` (see the parsing sketch after this index). See [Blackbox Testing](/testing-guides/blackbox-testing) for the testing philosophy.
- [Stateful Pattern with Thread ID](/examples/testing-remote-agents/stateful.md): Test [agents](/agent-integration) that maintain conversation history server-side using thread identifiers. The [adapter](/agent-integration) sends only the latest message plus a thread ID, and the server looks up the full history (see the adapter sketch after this index). See [Blackbox Testing](/testing-guides/blackbox-testing) for the testing philosophy.
- [Streaming Response Pattern](/examples/testing-remote-agents/streaming.md): Test [agents](/agent-integration) that stream responses in chunks using HTTP chunked transfer encoding. This provides progressive output as the LLM generates text. See [Blackbox Testing](/testing-guides/blackbox-testing) for the testing philosophy.
- [Audio → Audio Testing](/examples/multimodal/audio-to-audio.md): Test agents that listen to audio input and reply with audio responses. This pattern is ideal for voice assistants, conversational AI, and any agent that needs to respond in a natural spoken voice.
- [Audio → Text Testing](/examples/multimodal/audio-to-text.md): Test agents that receive audio files (like WAV or MP3) and respond with textual answers. This pattern is ideal for transcription services, audio-based Q&A, or any agent that processes voice input but responds in text.
- [Multimodal File Analysis (Coming Soon)](/examples/multimodal/multimodal-files.md): This page will demonstrate how to build Scenario tests where the user provides **files** (PDF, CSV, etc.) as part of the conversation and the agent must parse and respond appropriately.
- [Multimodal Image Generation (Coming Soon)](/examples/multimodal/multimodal-image-generation.md): This page will demonstrate how to build Scenario tests where the agent generates **images from text prompts** and the judge evaluates both the prompt quality and the generated image quality.
- [Multimodal Image Analysis](/examples/multimodal/multimodal-images.md): Test agents that receive **images** as input and must analyze them correctly
- [Multimodal Use Cases – Overview](/examples/multimodal/overview.md): Many modern agents must process more than just text. Scenario supports tests where your agent receives **images**, **files**, **audio**, and other modalities – individually or combined.
- [Testing Voice Agents](/examples/multimodal/testing-voice-agents.md): Scenario lets you write **end-to-end tests** for agents that listen to audio, think, and respond with either text *or* audio.
- [Voice-to-Voice Conversation Testing](/examples/multimodal/voice-to-voice.md): Test complete voice conversations where both the user simulator *and* the agent speak over multiple turns. This pattern is perfect for testing complex dialogues, multi-turn reasoning, and natural conversation flow.
- [Domain-Driven TDD](/best-practices/domain-driven-tdd.md): Using scenarios to drive implementation
- [The Agent Testing Pyramid](/best-practices/the-agent-testing-pyramid.md): Ever since we put tools in the hands of LLMs, we have faced a persistent challenge: how do we systematically ensure that our agents really work? How do evals fit into the picture? How do we know an agent is reliable enough?
- [The Vibe-Eval Loop: TDD for Agents](/best-practices/the-vibe-eval-loop.md): Most of the industry today builds AI agents on **vibes only**. That works for quick POCs, but is very hard to keep evolving past the initial demo stage. The biggest challenge is capturing all the edge cases people identify along the way, fixing them, and proving that the agent works better afterwards.
- [Cache](/basics/cache.md): Make your scenario tests deterministic and faster by caching LLM calls and other non-deterministic operations
- [Scenario Basics](/basics/concepts.md): Learn the core concepts and capabilities of the Scenario agent testing framework
- [Configuration](/basics/configuration.md): Scenario supports flexible configuration via several different methods. Not all methods are available in every implementation; the page includes a table showing which methods your implementation supports.
- [Custom Clients](/basics/custom-clients.md): Advanced configuration for custom LLM clients and parameters
- [Debug Mode](/basics/debug-mode.md): Step through scenarios interactively
- [Scripted Simulations](/basics/scripted-simulations.md): Precise control over the conversation flow (see the scripting sketch after this index)
- [Test Runner Integration](/basics/test-runner-integration.md): Scenario integrates seamlessly with popular test runners in both JavaScript/TypeScript and Python; the guide has a section for each language.
- [Writing Scenarios](/basics/writing-scenarios.md): Learn how to create effective scenario tests
- [Inngest AgentKit Integration](/agent-integration/agentkit.md): Learn how to integrate AgentKit agents with the Scenario testing framework
- [Agno Integration](/agent-integration/agno.md): Learn how to integrate Agno agents with the Scenario testing framework
- [CrewAI Integration](/agent-integration/crewai.md): Learn how to integrate CrewAI crews with the Scenario testing framework
- [Google ADK Integration](/agent-integration/google-adk.md): Learn how to integrate Google ADK agents with the Scenario testing framework
- [HTTPS Integration](/agent-integration/https.md): Learn how to test agents deployed as HTTP/HTTPS services
- [LangGraph Integration](/agent-integration/langgraph.md): Learn how to integrate LangGraph agents with the Scenario testing framework
- [LiteLLM Integration](/agent-integration/litellm.md): Learn how to integrate LiteLLM agents with the Scenario testing framework
- [Mastra Integration](/agent-integration/mastra.md): Learn how to integrate Mastra agents with the Scenario testing framework
- [OpenAI Integration](/agent-integration/openai.md): Learn how to integrate OpenAI agents with the Scenario testing framework
- [Pydantic AI Integration](/agent-integration/pydantic-ai.md): Learn how to integrate Pydantic AI agents with the Scenario testing framework
- [Custom Clients](/advanced/custom-clients.md): Advanced configuration for custom LLM clients and parameters
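
## Example Sketches

The sketches below make a few of the entries above concrete. They are illustrative rather than authoritative: agent stubs, model names, endpoints, and judge criteria are placeholders, so refer to the linked pages for the exact APIs.

For [Getting Started](/introduction/getting-started.md), a minimal sketch of a first scenario test in Python. It assumes the `scenario` package's pytest style, with an `AgentAdapter` wrapping your agent, an LLM-powered user simulator, and a judge evaluating against criteria:

```python
import pytest
import scenario

# Configure the model used by the user simulator and judge
# (the model name here is only an example).
scenario.configure(default_model="openai/gpt-4.1-mini")


class VegetarianRecipeAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Call your real agent here; this stub returns a canned reply.
        return "How about a lentil curry? Here is the recipe: ..."


@pytest.mark.agent_test
@pytest.mark.asyncio
async def test_vegetarian_recipe_agent():
    result = await scenario.run(
        name="dinner idea",
        description="User is looking for a vegetarian dinner recipe.",
        agents=[
            VegetarianRecipeAgent(),
            scenario.UserSimulatorAgent(),  # simulated user drives the conversation
            scenario.JudgeAgent(criteria=["The suggested recipe is vegetarian"]),
        ],
    )
    assert result.success
```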
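For [Scripted Simulations](/basics/scripted-simulations.md), a hedged sketch of pinning down the conversation flow with a `script`. The `scenario.user` / `scenario.agent` / `scenario.judge` step helpers follow the Python library's scripting API, while the messages, agent stub, and criteria are made up for illustration:

```python
import scenario


class OrderStatusAgent(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        # Stand-in for your real agent.
        return "Happy to help! Could you share your order number?"


async def test_scripted_order_status():
    result = await scenario.run(
        name="order status",
        description="User asks about the status of an existing order.",
        agents=[
            OrderStatusAgent(),
            scenario.UserSimulatorAgent(),
            scenario.JudgeAgent(criteria=["The agent asks for the order number"]),
        ],
        script=[
            scenario.user("Where is my order?"),  # fixed opening message
            scenario.agent(),                     # let the agent respond once
            scenario.judge(),                     # ask the judge for a verdict now
        ],
    )
    assert result.success
```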
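The [SSE pattern](/examples/testing-remote-agents/sse.md) entry fixes the wire format: `data:`-prefixed lines terminated by `data: [DONE]`. Here is a minimal sketch of collecting such a stream into a single reply string, assuming an `httpx`-based client; the `/chat` endpoint and the JSON payload shape are hypothetical:

```python
import httpx


async def stream_sse_completion(url: str, payload: dict) -> str:
    """Accumulate `data:` lines from an SSE stream until `data: [DONE]`."""
    chunks: list[str] = []
    async with httpx.AsyncClient() as client:
        async with client.stream("POST", url, json=payload) as response:
            async for line in response.aiter_lines():
                if not line.startswith("data:"):
                    continue  # skip blank keep-alive lines and SSE comments
                data = line[len("data:"):].strip()
                if data == "[DONE]":
                    break  # end-of-stream sentinel
                chunks.append(data)
    return "".join(chunks)
```

A remote-agent adapter can call a helper like this from its `call` method and return the accumulated text, so the simulator and judge see one complete message per turn.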
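For the [stateful pattern](/examples/testing-remote-agents/stateful.md), a hedged adapter sketch: only the newest user message plus a thread ID go over the wire, and the server reconstructs the history. The endpoint, the request/response JSON shape (including the `reply` field), and the UUID thread scheme are all assumptions to adjust for your deployment:

```python
import uuid

import httpx
import scenario


class StatefulHttpAdapter(scenario.AgentAdapter):
    """Adapter for a server that stores conversation history per thread ID."""

    def __init__(self, url: str):
        self.url = url
        self.thread_id = str(uuid.uuid4())  # one server-side conversation per run

    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                self.url,
                json={
                    "thread_id": self.thread_id,
                    # Only the latest user message is sent; the server
                    # looks up the rest of the history by thread ID.
                    "message": input.last_new_user_message_str(),
                },
            )
            response.raise_for_status()
            return response.json()["reply"]  # response shape is an assumption
```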