What is the LLM Tab?

The LLM Tab is where you select and configure the intelligence behind your voice AI agent. Choose your language model provider, adjust response parameters, and connect knowledge bases for enhanced conversations.
[Screenshot: LLM Tab showing model selection, parameters, and knowledge base options]

Configuration Options

Choose LLM Model

Select your AI provider and model for conversation intelligence.
[Screenshot: Choose LLM model with Azure provider and gpt-4.1-mini cluster selected]

Provider Selection

Choose from providers such as Azure, OpenAI, Anthropic, and Groq.

Model Selection

Pick the specific model (e.g., gpt-4.1-mini cluster).
Connect your own provider keys in Providers to reduce costs and access more models.

Model Parameters

Fine-tune how your agent generates responses.
[Screenshot: Model Parameters section showing Tokens and Temperature sliders with Knowledge Base dropdown]
Parameter        | Description                    | Recommended
Tokens Generated | Max tokens per LLM output      | 300-500 for concise responses
Temperature      | Controls creativity/randomness | 0.3-0.5 for balanced responses
Keep temperature low (0.3-0.5) if you want consistent, controlled responses. Higher temperature increases creativity but may cause deviation from your prompt instructions.
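The two sliders above map onto standard LLM sampling parameters. A minimal sketch of representing and sanity-checking these settings in code; the field names (`provider`, `model`, `max_tokens`, `temperature`) are illustrative assumptions, not this platform's actual configuration schema:

```python
# Hypothetical agent LLM configuration mirroring the LLM Tab settings.
# Field names are assumptions for illustration, not the platform's real API.
llm_config = {
    "provider": "azure",       # Azure, OpenAI, Anthropic, Groq, ...
    "model": "gpt-4.1-mini",   # the specific model selection
    "max_tokens": 400,         # 300-500 keeps voice replies concise
    "temperature": 0.4,        # 0.3-0.5 for consistent, controlled output
}

def validate_llm_config(cfg: dict) -> list[str]:
    """Return warnings for values outside the ranges recommended above."""
    warnings = []
    if not 300 <= cfg["max_tokens"] <= 500:
        warnings.append("max_tokens outside the recommended 300-500 range")
    if not 0.3 <= cfg["temperature"] <= 0.5:
        warnings.append("temperature outside the recommended 0.3-0.5 range")
    return warnings
```

A check like this is useful if you manage agent settings in version control and want to catch drift from the recommended ranges before deploying.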

Add Knowledge Base

Connect your knowledge bases to give your agent accurate, contextual information.
[Screenshot: Knowledge base dropdown showing connected URLs and PDFs with Add new knowledgebase option]
1. Click the Dropdown

Open the “Select knowledge bases” multi-select dropdown.

2. Select Knowledge Bases

Check one or more knowledge bases (PDFs, URLs) to connect.

3. Create New (Optional)

Click “Add new knowledgebase” to create and upload new content.
Knowledge bases enable your agent to answer questions with accurate, up-to-date information from your documents and URLs. Connect multiple knowledge bases for comprehensive coverage.
Create knowledge bases in the Knowledge Base section by uploading PDFs or adding URLs.
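The steps above can be sketched in code. The `Agent` class and `attach_knowledge_base` method here are hypothetical stand-ins for the platform's actual interface; the sketch only illustrates that multiple knowledge bases attach to one agent:

```python
# Hypothetical sketch of connecting knowledge bases to an agent.
# Class and method names are illustrative, not the platform's real API.
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.knowledge_bases: list[str] = []

    def attach_knowledge_base(self, kb_id: str) -> None:
        # Multiple knowledge bases can be connected for broader coverage;
        # attaching the same one twice is a no-op.
        if kb_id not in self.knowledge_bases:
            self.knowledge_bases.append(kb_id)

agent = Agent("support-agent")
agent.attach_knowledge_base("pricing-pdf")     # an uploaded PDF
agent.attach_knowledge_base("docs-site-urls")  # crawled URLs
```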

Add FAQs & Guardrails

Create structured responses and safety controls for your agent.
[Screenshot: Add FAQs and Guardrails section with button to add new blocks]

FAQs

Pre-defined answers to common questions that bypass LLM generation for faster, consistent responses

Guardrails

Safety rules that control inappropriate content and maintain professional boundaries
Click “Add a new block for FAQs & Guardrails” to open the configuration modal:
[Screenshot: Modal for adding FAQs and Guardrails with Name, Response, Threshold, and Utterances fields]
1. Name Your Block

Give a descriptive name (e.g., “Pricing Questions”, “Off-Topic Deflection”).

2. Set the Response

Define the forced response when this rule triggers.

3. Configure Threshold

Set matching sensitivity (0.9 = strict; lower values match more broadly but may trigger unintentionally).

4. Add Utterances

Add up to 20 example phrases that should trigger this response.
Lower thresholds increase matching likelihood but may cause false triggers. Start with 0.8-0.9 and adjust based on testing.
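Threshold matching like this is typically implemented as a similarity score between the caller's utterance and your example phrases. A minimal sketch using word-overlap (Jaccard) similarity; the platform's actual scoring method is not documented here, so treat this purely as an illustration of how the threshold behaves:

```python
def similarity(a: str, b: str) -> float:
    """Jaccard word-overlap score: a stand-in for the platform's real scorer."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def block_triggers(user_text: str, utterances: list[str], threshold: float = 0.9) -> bool:
    """True if any example utterance clears the threshold, forcing the canned response."""
    best = max((similarity(user_text, u) for u in utterances), default=0.0)
    return best >= threshold

utterances = ["how much does it cost", "what is your pricing"]
block_triggers("how much does it cost", utterances, threshold=0.9)    # near-exact wording matches
block_triggers("tell me about the weather", utterances, threshold=0.9)  # unrelated input does not
```

Lowering `threshold` makes `block_triggers` fire on looser paraphrases, which is exactly why an overly low value can hijack unrelated turns.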
Learn more about Guardrails → for guidance on maintaining professionalism, ensuring compliance, and protecting your brand during AI conversations.

Next Steps