LangGraph Integration
Learn how to integrate LangGraph agents with the Scenario testing framework
LangGraph agents work seamlessly with Scenario through the `scenario.AgentAdapter`
interface. The key is handling LangGraph's streaming responses and converting its messages to the OpenAI format that Scenario expects.
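To see what that message conversion amounts to, here is a minimal, framework-free sketch: the `Msg` dataclass below is a hypothetical stand-in for LangChain's message classes (it is not a real LangChain API), used only to show how LangChain-style roles map onto OpenAI chat roles.

```python
from dataclasses import dataclass

# Hypothetical stand-in for LangChain's HumanMessage / AIMessage classes,
# used only to illustrate the shape of the conversion.
@dataclass
class Msg:
    role: str  # "human" or "ai" in LangChain terms
    content: str

def to_openai_format(messages: list[Msg]) -> list[dict]:
    # LangChain's "human"/"ai" roles map to OpenAI's "user"/"assistant"
    role_map = {"human": "user", "ai": "assistant"}
    return [{"role": role_map[m.role], "content": m.content} for m in messages]

history = [
    Msg("human", "Where is my order?"),
    Msg("ai", "Let me check that for you."),
]
converted = to_openai_format(history)
print(converted[0])  # → {'role': 'user', 'content': 'Where is my order?'}
```

In the real integration shown below, `langchain_core.messages.convert_to_openai_messages` does this work for you, including tool calls and tool results.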
Basic Integration Pattern
Given this LangGraph agent example:
```python
import os

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import InMemorySaver

llm = init_chat_model(
    "openai:gpt-4.1",
    temperature=0,
    api_key=os.getenv("OPENAI_API_KEY"),
)

# In-memory checkpointer so the agent keeps conversation state across turns
checkpointer = InMemorySaver()

agent = create_react_agent(
    model=llm,
    prompt=SYSTEM_PROMPT,
    tools=[
        get_customer_order_history,
        get_order_status,
        get_company_policy,
        get_troubleshooting_guide,
        escalate_to_human,
    ],
    checkpointer=checkpointer,
)
```
You can integrate it with Scenario like this:
```python
import scenario
from langchain_core.messages import HumanMessage, convert_to_openai_messages


class LangGraphAgentAdapter(scenario.AgentAdapter):
    async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
        result = []

        # Stream the agent's execution, one graph node at a time
        for chunk in agent.stream(
            {
                "messages": [
                    HumanMessage(content=input.last_new_user_message_str()),
                ]
            },
            {
                "configurable": {
                    # Reuse Scenario's thread id so the checkpointer keeps
                    # conversation history across turns
                    "thread_id": input.thread_id,
                }
            },
        ):
            # Each chunk maps a node name to its state update; collect the messages
            for _, node_update in chunk.items():
                result += node_update["messages"]

        # Convert LangChain messages to OpenAI format for Scenario
        return convert_to_openai_messages(result)
```
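Each item yielded by `agent.stream` is a dict mapping the graph node that just ran (for example the model node or a tool node) to its state update. The accumulation loop above can be exercised with plain dicts standing in for those chunks; the node names and message strings here are made up for illustration:

```python
# Hypothetical stream chunks mimicking the shape LangGraph yields:
# {node_name: {"messages": [...]}}
chunks = [
    {"agent": {"messages": ["<ai: calling get_order_status>"]}},
    {"tools": {"messages": ["<tool: order A-1001 is shipped>"]}},
    {"agent": {"messages": ["<ai: your order has shipped!>"]}},
]

result = []
for chunk in chunks:
    # Collect the messages from every node update, in execution order
    for _, node_update in chunk.items():
        result += node_update["messages"]

print(len(result))  # → 3
```

Returning the full collected list (rather than only the final reply) lets Scenario see intermediate tool calls and tool results, which is useful when a judge agent evaluates the conversation.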
Full Example Project
For a complete working example with a customer support agent, including tools, system prompts, and comprehensive tests, check out the create-agent-app project.
Next Steps
- Explore Scenario Basics for advanced testing patterns
- Learn about Scripted Simulations for precise control
- Check out more Agent Integration patterns for other frameworks