@langwatch/scenario

    Interface TestingAgentInferenceConfig

    Configuration for the inference parameters of a testing agent.

    interface TestingAgentInferenceConfig {
        maxTokens?: number;
        model?: LanguageModelV1;
        temperature?: number;
    }


    Properties

    maxTokens?: number

    The maximum number of tokens to generate.

    model?: LanguageModelV1

    The language model to use for generating responses. If not provided, a default model will be used.

    temperature?: number

    The temperature for the language model. Defaults to 0.
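    A minimal sketch of constructing such a config. All three properties are optional, so `model` can be omitted to fall back to the library's default; the interface and the `LanguageModelV1` placeholder below are redeclared locally for illustration (in real use, `LanguageModelV1` comes from the Vercel AI SDK and the interface is imported from `@langwatch/scenario`):

    ```typescript
    // Placeholder for the AI SDK's LanguageModelV1 type (assumption, for a
    // self-contained example; import the real type in actual code).
    type LanguageModelV1 = unknown;

    // Shape copied from the interface documented above.
    interface TestingAgentInferenceConfig {
        maxTokens?: number;
        model?: LanguageModelV1;
        temperature?: number;
    }

    // model is omitted, so the default model would be used.
    const config: TestingAgentInferenceConfig = {
        maxTokens: 512,
        temperature: 0, // matches the documented default
    };

    console.log(config.maxTokens, config.temperature, config.model);
    ```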