Module scenario.config
Configuration module for Scenario.
This module provides all configuration classes for customizing the behavior of the Scenario testing framework, including model settings, scenario execution parameters, and LangWatch integration.
Classes
ModelConfig: Configuration for LLM model settings
ScenarioConfig: Main configuration for scenario execution
LangWatchSettings: Configuration for LangWatch API integration

Example

```python
from scenario.config import ModelConfig, ScenarioConfig, LangWatchSettings

# Configure LLM model
model_config = ModelConfig(
    model="openai/gpt-4.1-mini",
    temperature=0.1
)

# Configure scenario execution
scenario_config = ScenarioConfig(
    default_model=model_config,
    max_turns=15,
    verbose=True
)

# Configure LangWatch integration
langwatch_settings = LangWatchSettings()  # Reads from environment
```
Expand source code

````python
"""
Configuration module for Scenario.

This module provides all configuration classes for customizing the behavior
of the Scenario testing framework, including model settings, scenario execution
parameters, and LangWatch integration.

Classes:
    ModelConfig: Configuration for LLM model settings
    ScenarioConfig: Main configuration for scenario execution
    LangWatchSettings: Configuration for LangWatch API integration

Example:
    ```
    from scenario.config import ModelConfig, ScenarioConfig, LangWatchSettings

    # Configure LLM model
    model_config = ModelConfig(
        model="openai/gpt-4.1-mini",
        temperature=0.1
    )

    # Configure scenario execution
    scenario_config = ScenarioConfig(
        default_model=model_config,
        max_turns=15,
        verbose=True
    )

    # Configure LangWatch integration
    langwatch_settings = LangWatchSettings()  # Reads from environment
    ```
"""

from .model import ModelConfig
from .scenario import ScenarioConfig
from .langwatch import LangWatchSettings

__all__ = [
    "ModelConfig",
    "ScenarioConfig",
    "LangWatchSettings",
]
````
Sub-modules

scenario.config.langwatch
- LangWatch configuration for Scenario …

scenario.config.model
- Model configuration for Scenario …

scenario.config.scenario
- Scenario configuration for Scenario …
Classes
class LangWatchSettings (**values: Any)
-
Configuration for LangWatch API integration.
This class handles configuration for connecting to LangWatch services, automatically reading from environment variables with the LANGWATCH_ prefix.
Attributes
endpoint
- LangWatch API endpoint URL
api_key
- API key for LangWatch authentication
Environment Variables:
LANGWATCH_ENDPOINT: LangWatch API endpoint (defaults to https://app.langwatch.ai)
LANGWATCH_API_KEY: API key for authentication (defaults to empty string)

Example

```python
# Using environment variables
# export LANGWATCH_ENDPOINT="https://app.langwatch.ai"
# export LANGWATCH_API_KEY="your-api-key"
settings = LangWatchSettings()
print(settings.endpoint)  # https://app.langwatch.ai
print(settings.api_key)   # your-api-key

# Or override programmatically
settings = LangWatchSettings(
    endpoint="https://custom.langwatch.ai",
    api_key="your-api-key"
)
```
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

`self` is explicitly positional-only to allow `self` as a field name.

Expand source code
````python
class LangWatchSettings(BaseSettings):
    """
    Configuration for LangWatch API integration.

    This class handles configuration for connecting to LangWatch services,
    automatically reading from environment variables with the LANGWATCH_ prefix.

    Attributes:
        endpoint: LangWatch API endpoint URL
        api_key: API key for LangWatch authentication

    Environment Variables:
        LANGWATCH_ENDPOINT: LangWatch API endpoint (defaults to https://app.langwatch.ai)
        LANGWATCH_API_KEY: API key for authentication (defaults to empty string)

    Example:
        ```
        # Using environment variables
        # export LANGWATCH_ENDPOINT="https://app.langwatch.ai"
        # export LANGWATCH_API_KEY="your-api-key"
        settings = LangWatchSettings()
        print(settings.endpoint)  # https://app.langwatch.ai
        print(settings.api_key)   # your-api-key

        # Or override programmatically
        settings = LangWatchSettings(
            endpoint="https://custom.langwatch.ai",
            api_key="your-api-key"
        )
        ```
    """

    model_config = SettingsConfigDict(env_prefix="LANGWATCH_", case_sensitive=False)

    endpoint: HttpUrl = Field(
        default=HttpUrl("https://app.langwatch.ai"),
        description="LangWatch API endpoint URL",
    )
    api_key: str = Field(default="", description="API key for LangWatch authentication")
````
Ancestors
- pydantic_settings.main.BaseSettings
- pydantic.main.BaseModel
Class variables
var api_key : str
var endpoint : pydantic.networks.HttpUrl
var model_config : ClassVar[pydantic_settings.main.SettingsConfigDict]
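The prefixed, case-insensitive environment lookup that `BaseSettings` performs can be illustrated with a small stdlib-only sketch. `read_prefixed_settings` is a hypothetical helper written for illustration; it is not part of the Scenario or pydantic-settings APIs:

```python
import os

def read_prefixed_settings(prefix: str = "LANGWATCH_") -> dict:
    """Illustrative helper: collect environment variables carrying a given
    prefix, case-insensitively, and map them to lowercase field names, the
    way pydantic-settings resolves BaseSettings fields."""
    settings = {}
    for key, value in os.environ.items():
        if key.upper().startswith(prefix):
            # Strip the prefix and lowercase the rest to match a field name.
            settings[key.upper()[len(prefix):].lower()] = value
    return settings

# Simulate the environment the class would read at construction time.
os.environ["LANGWATCH_API_KEY"] = "your-api-key"
os.environ["langwatch_endpoint"] = "https://custom.langwatch.ai"  # case-insensitive

print(read_prefixed_settings())
```

Note that both the upper- and lowercase variable names resolve to the same fields, mirroring `case_sensitive=False` in the settings config above.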
class ModelConfig (**data: Any)
-
Configuration for LLM model settings.
This class encapsulates all the parameters needed to configure an LLM model for use with user simulator and judge agents in the Scenario framework.
Attributes
model
- The model identifier (e.g., "openai/gpt-4.1", "anthropic/claude-3-sonnet")
api_base
- Optional base URL where the model is hosted
api_key
- Optional API key for the model provider
temperature
- Sampling temperature for response generation (0.0 = deterministic, 1.0 = creative)
max_tokens
- Maximum number of tokens to generate in responses
Example
```python
model_config = ModelConfig(
    model="openai/gpt-4.1",
    api_base="https://api.openai.com/v1",
    api_key="your-api-key",
    temperature=0.1,
    max_tokens=1000
)
```
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

`self` is explicitly positional-only to allow `self` as a field name.

Expand source code
````python
class ModelConfig(BaseModel):
    """
    Configuration for LLM model settings.

    This class encapsulates all the parameters needed to configure an LLM model
    for use with user simulator and judge agents in the Scenario framework.

    Attributes:
        model: The model identifier (e.g., "openai/gpt-4.1", "anthropic/claude-3-sonnet")
        api_base: Optional base URL where the model is hosted
        api_key: Optional API key for the model provider
        temperature: Sampling temperature for response generation (0.0 = deterministic, 1.0 = creative)
        max_tokens: Maximum number of tokens to generate in responses

    Example:
        ```
        model_config = ModelConfig(
            model="openai/gpt-4.1",
            api_base="https://api.openai.com/v1",
            api_key="your-api-key",
            temperature=0.1,
            max_tokens=1000
        )
        ```
    """

    model: str
    api_base: Optional[str] = None
    api_key: Optional[str] = None
    temperature: float = 0.0
    max_tokens: Optional[int] = None
````
Ancestors
- pydantic.main.BaseModel
Class variables
var api_base : str | None
var api_key : str | None
var max_tokens : int | None
var model : str
var model_config
var temperature : float
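Model identifiers in the examples above follow a `provider/model-name` convention. A hedged sketch of splitting such an identifier; `split_model_id` is a hypothetical helper for illustration and is not part of `ModelConfig`:

```python
def split_model_id(model: str):
    """Hypothetical helper: split a "provider/model" identifier into its
    provider and model-name parts. Identifiers without a slash are
    returned with provider=None."""
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    return None, model

print(split_model_id("openai/gpt-4.1"))  # ('openai', 'gpt-4.1')
print(split_model_id("anthropic/claude-3-sonnet"))
```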
class ScenarioConfig (**data: Any)
-
Global configuration class for the Scenario testing framework.
This class allows users to set default behavior and parameters that apply to all scenario executions, including the LLM model to use for simulator and judge agents, execution limits, and debugging options.
Attributes
default_model
- Default LLM model configuration for agents (can be string or ModelConfig)
max_turns
- Maximum number of conversation turns before scenario times out
verbose
- Whether to show detailed output during execution (True/False or verbosity level)
cache_key
- Key for caching scenario results to ensure deterministic behavior
debug
- Whether to enable debug mode with step-by-step interaction
Example
```python
# Configure globally for all scenarios
scenario.configure(
    default_model="openai/gpt-4.1-mini",
    max_turns=15,
    verbose=True,
    cache_key="my-test-suite-v1",
    debug=False
)

# Or create a specific config instance
config = ScenarioConfig(
    default_model=ModelConfig(
        model="openai/gpt-4.1",
        temperature=0.2
    ),
    max_turns=20
)
```
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

`self` is explicitly positional-only to allow `self` as a field name.

Expand source code
````python
class ScenarioConfig(BaseModel):
    """
    Global configuration class for the Scenario testing framework.

    This class allows users to set default behavior and parameters that apply
    to all scenario executions, including the LLM model to use for simulator
    and judge agents, execution limits, and debugging options.

    Attributes:
        default_model: Default LLM model configuration for agents (can be string or ModelConfig)
        max_turns: Maximum number of conversation turns before scenario times out
        verbose: Whether to show detailed output during execution (True/False or verbosity level)
        cache_key: Key for caching scenario results to ensure deterministic behavior
        debug: Whether to enable debug mode with step-by-step interaction

    Example:
        ```
        # Configure globally for all scenarios
        scenario.configure(
            default_model="openai/gpt-4.1-mini",
            max_turns=15,
            verbose=True,
            cache_key="my-test-suite-v1",
            debug=False
        )

        # Or create a specific config instance
        config = ScenarioConfig(
            default_model=ModelConfig(
                model="openai/gpt-4.1",
                temperature=0.2
            ),
            max_turns=20
        )
        ```
    """

    default_model: Optional[Union[str, ModelConfig]] = None
    max_turns: Optional[int] = 10
    verbose: Optional[Union[bool, int]] = True
    cache_key: Optional[str] = None
    debug: Optional[bool] = False

    default_config: ClassVar[Optional["ScenarioConfig"]] = None

    @classmethod
    def configure(
        cls,
        default_model: Optional[str] = None,
        max_turns: Optional[int] = None,
        verbose: Optional[Union[bool, int]] = None,
        cache_key: Optional[str] = None,
        debug: Optional[bool] = None,
    ) -> None:
        """
        Set global configuration settings for all scenario executions.

        This method allows you to configure default behavior that will be
        applied to all scenarios unless explicitly overridden in individual
        scenario runs.

        Args:
            default_model: Default LLM model identifier for user simulator and judge agents
            max_turns: Maximum number of conversation turns before timeout (default: 10)
            verbose: Enable verbose output during scenario execution
            cache_key: Cache key for deterministic scenario behavior across runs
            debug: Enable debug mode for step-by-step execution with user intervention
        """
        existing_config = cls.default_config or ScenarioConfig()
        cls.default_config = existing_config.merge(
            ScenarioConfig(
                default_model=default_model,
                max_turns=max_turns,
                verbose=verbose,
                cache_key=cache_key,
                debug=debug,
            )
        )

    def merge(self, other: "ScenarioConfig") -> "ScenarioConfig":
        """
        Merge this configuration with another configuration.

        Values from the other configuration override values in this
        configuration where they are not None.
        """
        return ScenarioConfig(
            **{
                **self.items(),
                **other.items(),
            }
        )

    def items(self):
        """
        Get configuration items as a dictionary, excluding None values.
        """
        return {k: getattr(self, k) for k in self.model_dump(exclude_none=True).keys()}
````
Ancestors
- pydantic.main.BaseModel
Class variables
var cache_key : str | None
var debug : bool | None
var default_config : ClassVar[ScenarioConfig | None]
var default_model : str | ModelConfig | None
var max_turns : int | None
var model_config
var verbose : bool | int | None
Static methods
def configure(default_model: str | None = None, max_turns: int | None = None, verbose: bool | int | None = None, cache_key: str | None = None, debug: bool | None = None) ‑> None
-
Set global configuration settings for all scenario executions.
This method allows you to configure default behavior that will be applied to all scenarios unless explicitly overridden in individual scenario runs.
Args
default_model
- Default LLM model identifier for user simulator and judge agents
max_turns
- Maximum number of conversation turns before timeout (default: 10)
verbose
- Enable verbose output during scenario execution
cache_key
- Cache key for deterministic scenario behavior across runs
debug
- Enable debug mode for step-by-step execution with user intervention
Example
```python
import scenario

# Set up default configuration
scenario.configure(
    default_model="openai/gpt-4.1-mini",
    max_turns=15,
    verbose=True,
    debug=False
)

# All subsequent scenario runs will use these defaults
result = await scenario.run(
    name="my test",
    description="Test scenario",
    agents=[my_agent, scenario.UserSimulatorAgent(), scenario.JudgeAgent()]
)
```
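The accumulate-into-class-default behavior of `configure()` can be sketched with plain stdlib code. `GlobalDefaults` below is a hypothetical stand-in (not the Scenario API) illustrating the key rule: only non-None overrides replace stored defaults:

```python
class GlobalDefaults:
    """Hypothetical stand-in for the ClassVar-based default store:
    configure() folds non-None keyword overrides into a class-level dict,
    so repeated calls accumulate rather than reset settings."""
    _defaults: dict = {}

    @classmethod
    def configure(cls, **overrides):
        # Drop None values so unset arguments never clobber earlier settings.
        cls._defaults = {
            **cls._defaults,
            **{k: v for k, v in overrides.items() if v is not None},
        }

GlobalDefaults.configure(max_turns=15, verbose=True)
GlobalDefaults.configure(debug=False, verbose=None)  # verbose stays True
print(GlobalDefaults._defaults)  # {'max_turns': 15, 'verbose': True, 'debug': False}
```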
Methods
def items(self)
-
Get configuration items as a dictionary.
Returns
Dictionary of configuration key-value pairs, excluding None values
Example
```python
config = ScenarioConfig(max_turns=15, verbose=True)
items = config.items()
# Result: {"max_turns": 15, "verbose": True}
```
Expand source code

```python
def items(self):
    """
    Get configuration items as a dictionary.

    Returns:
        Dictionary of configuration key-value pairs, excluding None values
    """
    return {k: getattr(self, k) for k in self.model_dump(exclude_none=True).keys()}
```
def merge(self, other: ScenarioConfig) ‑> ScenarioConfig
-
Merge this configuration with another configuration.
Values from the other configuration will override values in this configuration where they are not None.
Args
other
- Another ScenarioConfig instance to merge with
Returns
A new ScenarioConfig instance with merged values
Example
```python
base_config = ScenarioConfig(max_turns=10, verbose=True)
override_config = ScenarioConfig(max_turns=20)
merged = base_config.merge(override_config)
# Result: max_turns=20, verbose=True
```
Expand source code

```python
def merge(self, other: "ScenarioConfig") -> "ScenarioConfig":
    """
    Merge this configuration with another configuration.

    Values from the other configuration will override values in this
    configuration where they are not None.

    Args:
        other: Another ScenarioConfig instance to merge with

    Returns:
        A new ScenarioConfig instance with merged values
    """
    return ScenarioConfig(
        **{
            **self.items(),
            **other.items(),
        }
    )
```
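The merge semantics (both sides drop None values via `items()`, then the override's remaining keys win) can be mirrored with plain dicts. `merge_configs` is an illustrative sketch under that assumption, not part of the library:

```python
def merge_configs(base: dict, override: dict) -> dict:
    """Sketch of ScenarioConfig.merge(): both sides drop None values
    (mirroring items()/model_dump(exclude_none=True)), then the
    override's remaining keys take precedence."""
    def drop_none(d: dict) -> dict:
        return {k: v for k, v in d.items() if v is not None}

    return {**drop_none(base), **drop_none(override)}

base = {"max_turns": 10, "verbose": True, "cache_key": None}
override = {"max_turns": 20, "verbose": None}
print(merge_configs(base, override))  # {'max_turns': 20, 'verbose': True}
```

Because None is filtered on both sides, an override field left unset never erases a value already present in the base configuration.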