Configuration for the inference parameters of a testing agent. All fields are optional:

- The maximum number of tokens to generate.
- The language model to use for generating responses. If not provided, a default model is used.
- The temperature for the language model. Defaults to 0.
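The fields above can be sketched as a small configuration object. This is a minimal illustration, not the library's actual class: the class name `InferenceConfig` and the field names `model`, `temperature`, and `max_tokens` are assumptions inferred from the descriptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferenceConfig:
    """Inference parameters for a testing agent (names are illustrative)."""
    model: Optional[str] = None       # language model; None means a default model is used
    temperature: float = 0.0          # sampling temperature; defaults to 0
    max_tokens: Optional[int] = None  # maximum number of tokens to generate

# Override only the parameters you need; the rest keep their defaults.
cfg = InferenceConfig(temperature=0.7, max_tokens=256)
```

With a dataclass like this, omitting a field falls back to the documented default (e.g. temperature 0, a default model).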