List all Voice AI agents under your account, along with their names, statuses, and creation dates, using Bolna APIs.

Example request:

curl --request GET \
  --url https://api.bolna.ai/agent/all \
  --header 'Authorization: Bearer <token>'

Example response:

[
  {
    "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
    "agent_name": "Alfred",
    "agent_type": "other",
    "agent_status": "processed",
    "created_at": "2024-01-23T01:14:37Z",
    "updated_at": "2024-01-29T18:31:22Z",
    "tasks": [
      {
        "task_type": "conversation",
        "tools_config": {
          "llm_agent": {
            "model": "gpt-3.5-turbo",
            "max_tokens": 100,
            "agent_flow_type": "streaming",
            "family": "openai",
            "provider": "openai",
            "base_url": "https://api.openai.com/v1",
            "temperature": 0.1,
            "request_json": false,
            "routes": {
              "embedding_model": "snowflake/snowflake-arctic-embed-m",
              "routes": [
                {
                  "route_name": "politics",
                  "utterances": [
                    "Who do you think will win the elections?",
                    "Whom would you vote for?"
                  ],
                  "response": "Hey, thanks but I do not have opinions on politics",
                  "score_threshold": 0.9
                }
              ]
            }
          },
          "synthesizer": {
            "provider": "polly",
            "provider_config": {
              "voice": "Matthew",
              "engine": "generative",
              "language": "en-US",
              "sampling_rate": "8000"
            },
            "stream": true,
            "buffer_size": 150,
            "audio_format": "wav"
          },
          "transcriber": {
            "provider": "deepgram",
            "model": "nova-2",
            "language": "en",
            "stream": true,
            "sampling_rate": 16000,
            "encoding": "linear16",
            "endpointing": 100
          },
          "input": {
            "provider": "twilio",
            "format": "wav"
          },
          "output": {
            "provider": "twilio",
            "format": "wav"
          },
          "api_tools": null
        },
        "toolchain": {
          "execution": "parallel",
          "pipelines": [
            [
              "transcriber",
              "llm",
              "synthesizer"
            ]
          ]
        },
        "task_config": {
          "hangup_after_silence": 10,
          "incremental_delay": 400,
          "number_of_words_for_interruption": 2,
          "hangup_after_LLMCall": false,
          "call_cancellation_prompt": null,
          "backchanneling": false,
          "backchanneling_message_gap": 5,
          "backchanneling_start_delay": 5,
          "ambient_noise": false,
          "ambient_noise_track": "office-ambience",
          "call_terminate": 90,
          "voicemail": false,
          "inbound_limit": -1,
          "whitelist_phone_numbers": "<array>",
          "disallow_unknown_numbers": false
        }
      }
    ],
    "agent_prompts": {
      "task_1": {
        "system_prompt": "What is the Ultimate Question of Life, the Universe, and Everything?"
      }
    }
  }
]
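The same request can be made from code. Below is a minimal sketch using Python and the requests library; the BOLNA_API_KEY environment variable is an assumption for where your token lives, while the URL, header, and response fields come from the example above.

import os
import requests

# Fetch every agent on the account and print name, status, and creation date.
resp = requests.get(
    "https://api.bolna.ai/agent/all",
    headers={"Authorization": f"Bearer {os.environ['BOLNA_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()

for agent in resp.json():
    print(agent["id"], agent["agent_name"], agent["agent_status"], agent["created_at"])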
Authorization: Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Response: a list of agents. Each agent object contains the following fields.

id: Unique identifier for the agent.
agent_name: Human-readable agent name. Example: "Alfred"
agent_type: Type of agent. Example: "other"
agent_status: Current status of the agent. One of: seeding, processed. Example: "processed"
created_at: Timestamp of agent creation. Example: "2024-01-23T01:14:37Z"
updated_at: Timestamp of the last update to the agent. Example: "2024-01-29T18:31:22Z"
tasks: An array of tasks that the agent can perform.
  task_type: Type of task. One of: conversation, extraction, summarization, webhook
  tools_config: Configuration of the multiple tools that form a task.
    llm_agent: Configuration of the LLM model for the agent task.
      model: Example: "gpt-3.5-turbo"
      agent_flow_type: One of: streaming, preprocessed
      family: Example: "openai"
      provider: Example: "openai"
      base_url: Example: "https://api.openai.com/v1"
      temperature: Example: 0.1
      routes: Semantic routing layer.
        embedding_model: Since we use fastembed, all models supported by fastembed are supported here. Example: "snowflake/snowflake-arctic-embed-m"
        routes: Predefined routes that can be used to answer FAQs, set basic guardrails, or make a static function call.
          route_name: Example: "politics"
          utterances: An array of utterances which, when spoken, should trigger the static response. Example: ["Who do you think will win the elections?", "Whom would you vote for?"]
          response: Either a standalone string or an array of responses. If it is an array, its length must equal the number of utterances, and the response at the matched utterance's index is returned. Example: "Hey, thanks but I do not have opinions on politics"
          score_threshold: Similarity score threshold. Example: 0.9
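For instance, a route can pair each utterance with its own canned reply by making response an array of the same length, as described above. A minimal sketch in Python (the field names mirror the response example; the reply strings are purely illustrative):

# Hypothetical routes configuration with an array response: each utterance
# maps to the reply at the same index, so the two lists must be equal length.
politics_route = {
    "route_name": "politics",
    "utterances": [
        "Who do you think will win the elections?",
        "Whom would you vote for?",
    ],
    "response": [
        "I don't make election predictions, but I can help with your account.",
        "I don't have political opinions, but I'm happy to help with anything else.",
    ],
    "score_threshold": 0.9,
}

routes_config = {
    "embedding_model": "snowflake/snowflake-arctic-embed-m",
    "routes": [politics_route],
}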
    synthesizer: Configuration of the Synthesizer model for the agent task.
      provider: One of: polly, elevenlabs, deepgram, styletts
      provider_config:
        voice: Name of the voice. Example: "Matthew"
        engine: Engine of the voice. Example: "generative"
        language: Language of the voice. Example: "en-US"
        sampling_rate: Sampling rate of the voice. One of: 8000, 16000
      buffer_size: Example: 150
      audio_format: Example: "wav"
    transcriber: Configuration of the Transcriber model for the agent task.
      provider: Transcription provider. Example: "deepgram"
      model: One of: nova-2, nova-2-meeting, nova-2-phonecall, nova-2-finance, nova-2-conversationalai, nova-2-medical, nova-2-drivethru, nova-2-automotive. Example: "nova-2"
      language: One of: en, hi, es, fr. Example: "en"
      sampling_rate: Example: 16000
      encoding: Example: "linear16"
      endpointing: Example: 100
    api_tools: API tools you'd like the agents to have access to.
      tools: Description of all the tools you'd like to add to the agent. It needs to be a JSON string, as it will be passed to the LLM. Each tool definition includes:
        A unique name for the function tool (the available option is transfer_call). Example: "transfer_call_support"
        A description of when to use the tool. Example: "Use this tool to transfer the call"
        A parameter schema of type "object", with properties such as call_sid and a required list, e.g. [["call_sid"]]
      Tool parameters: parameters for each tool, where keys must match the name field in the tools array. Each entry includes:
        Type of request. One of: POST, GET. Example: "POST"
        Link of the URL that controls the transferring of the call. Example: null
        API token, in case the URL needs authentication. Example: null
        Stringified JSON of the tool schema. Example: "{\"call_transfer_number\": \"+19876543210\",\"call_sid\": \"%(call_sid)s\"}"
  toolchain: The agent will execute these tools in the specified order.
  task_config: Should be used only in the conversation task for now; it consists of all the configuration required for conversational nuances.
    hangup_after_silence: Time to wait, in seconds, before hanging up if the user doesn't speak at all. Example: 10
    incremental_delay: Since we work with interim results, this dictates the linear delay to add before speaking each time we get a partial transcript from the ASR. Example: 400
    number_of_words_for_interruption: To avoid accidental interruptions, how many words to wait for before interrupting. Example: 2
    hangup_after_LLMCall: Whether to use the LLM prompt to hang up. This will soon be replaced by a predefined function. Example: false
    call_cancellation_prompt: Example: null
    backchanneling: Enables the agent to acknowledge the user while they are speaking long sentences. Example: false
    backchanneling_message_gap: Gap between successive acknowledgements. A random jitter is added to this value to make it sound more natural. Example: 5
    backchanneling_start_delay: Basic delay after which backchanneling should start. Example: 5
    ambient_noise: Toggle to add ambient noise to the call for more naturalism. Example: false
    ambient_noise_track: Track to use for ambient noise. One of: office-ambience, coffee-shop, call-center
    call_terminate: The call automatically disconnects after reaching this limit. Example: 90
    voicemail: Enable voicemail detection. The agent automatically disconnects the call if a voicemail is detected.
    inbound_limit: The number of times each phone number is allowed to call. Use -1 to allow unlimited calls.
    whitelist_phone_numbers: Phone numbers that should never be restricted by the call limits (ideal for internal or testing numbers).
    disallow_unknown_numbers: Only allow incoming calls from the numbers you've sourced using IngestSourceConfig.
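Putting a few of these together, a hypothetical task_config for a patient, low-interruption agent might look like the sketch below. Field names follow the response example above; the values are illustrative, not recommendations.

# Illustrative task_config values; comments restate the documented behavior.
task_config = {
    "hangup_after_silence": 10,              # hang up after 10 s without any user speech
    "incremental_delay": 400,                # wait 400 ms after each partial ASR transcript
    "number_of_words_for_interruption": 2,   # require at least 2 words before treating speech as an interruption
    "backchanneling": True,                  # acknowledge the user during long sentences...
    "backchanneling_start_delay": 5,         # ...starting 5 s in,
    "backchanneling_message_gap": 5,         # ...roughly every 5 s (plus random jitter)
    "ambient_noise": True,
    "ambient_noise_track": "office-ambience",
    "call_terminate": 90,                    # hard cap: disconnect after 90
    "inbound_limit": -1,                     # unlimited inbound calls per number
}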
agent_prompts: Prompts to be provided to the agent. It can contain multiple tasks, using keys of the form task_<task_id>.
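For example, an agent with two tasks could carry one prompt per task. A small sketch follows; only task_1 and its system_prompt key appear in the response example above, while task_2 and the prompt texts are hypothetical.

# Hypothetical agent_prompts with one prompt per task.
agent_prompts = {
    "task_1": {"system_prompt": "You are Alfred, a helpful voice assistant."},
    "task_2": {"system_prompt": "Summarize the call in two sentences."},
}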