API Reference¶
This page provides the complete API reference for openwebui-chat-client, automatically generated from the source code docstrings.
OpenWebUIClient¶
The main client class for synchronous operations.
OpenWebUIClient(base_url, token, default_model_id, skip_model_refresh=False)
¶
An intelligent, stateful Python client for the Open WebUI API. Supports single/multi-model chats, tagging, and RAG with both direct file uploads and knowledge base collections, matching the backend format.
This refactored version uses a modular architecture with specialized managers while maintaining 100% backward compatibility with the original API.
Initialize the OpenWebUI client with modular architecture.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `base_url` | `str` | The base URL of the OpenWebUI instance | required |
| `token` | `str` | Authentication token | required |
| `default_model_id` | `str` | Default model identifier to use | required |
| `skip_model_refresh` | `bool` | If True, skip initial model refresh (useful for testing) | `False` |
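A minimal initialization sketch, assuming the import path follows the package name; the URL, token, and model ID are placeholders:

```python
from openwebui_chat_client import OpenWebUIClient  # import path assumed from the package name

client = OpenWebUIClient(
    base_url="http://localhost:3000",   # placeholder Open WebUI instance URL
    token="sk-your-api-token",          # placeholder API token
    default_model_id="gpt-4.1",         # placeholder default model ID
)
```

Passing `skip_model_refresh=True` skips the initial model refresh, which can be convenient in tests.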
chat(question, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_follow_up=False, enable_auto_tagging=False, enable_auto_titling=False)
¶
Send a chat message with a single model.
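An illustrative call, reusing the client from the constructor sketch above; the question, title, and tags are placeholders, and the returned keys follow the Chat Operations example later on this page:

```python
result = client.chat(
    question="Summarize the latest release notes.",
    chat_title="Release Notes Summary",
    tags=["release", "summary"],
)
if result:
    print(result["response"])  # generated answer
    print(result["chat_id"])   # ID of the chat that was created or reused
```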
stream_chat(question, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_follow_up=False, cleanup_placeholder_messages=False, placeholder_pool_size=30, min_available_messages=10, wait_before_request=10.0, enable_auto_tagging=False, enable_auto_titling=False)
¶
Initiates a streaming chat session. Yields content chunks as they are received. At the end of the stream, returns the full response content, sources, and follow-up suggestions.
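Because the method yields chunks and then returns a final payload, one way to consume it (a sketch, not the only pattern) is to drive the generator manually and read the return value from `StopIteration`:

```python
stream = client.stream_chat(
    question="Explain retrieval-augmented generation in two paragraphs.",
    chat_title="RAG Explainer",
)
try:
    while True:
        chunk = next(stream)
        print(chunk, end="", flush=True)  # print each chunk as it arrives
except StopIteration as done:
    final = done.value  # full response content, sources, and follow-up suggestions
```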
parallel_chat(question, chat_title, model_ids, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_follow_up=False, enable_auto_tagging=False, enable_auto_titling=False)
¶
Send a chat message to multiple models in parallel.
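A sketch with hypothetical model IDs; the `responses` and `message_ids` keys follow the Parallel Chat example later on this page:

```python
result = client.parallel_chat(
    question="Compare Rust and Go for writing CLI tools.",
    chat_title="Rust vs Go",
    model_ids=["gpt-4.1", "llama3:latest"],  # hypothetical model IDs
)
if result:
    for model_id, answer in result["responses"].items():
        print(f"--- {model_id} ---\n{answer}\n")
```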
continuous_chat(initial_question, num_questions, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_auto_tagging=False, enable_auto_titling=False)
¶
Perform continuous conversation with automatic follow-up questions.
continuous_parallel_chat(initial_question, num_questions, chat_title, model_ids, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_auto_tagging=False, enable_auto_titling=False)
¶
Perform continuous conversation with multiple models in parallel.
continuous_stream_chat(initial_question, num_questions, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_auto_tagging=False, enable_auto_titling=False)
¶
Perform continuous conversation with streaming responses.
deep_research(topic, chat_title=None, num_steps=3, general_models=None, search_models=None)
¶
Performs an advanced, autonomous, multi-step research process on a given topic using intelligent model routing.
The agent will iteratively plan questions and decide which type of model to use (general vs. search-capable), with the entire process being visible as a multi-turn chat in the UI.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `topic` | `str` | The topic to be researched. | required |
| `chat_title` | `Optional[str]` | Optional title for the research chat. If not provided, it will be generated from the topic. | `None` |
| `num_steps` | `int` | The number of research steps (plan -> execute cycles). | `3` |
| `general_models` | `Optional[List[str]]` | A list of model IDs for general reasoning and summarization. If not provided, the client's default model will be used. | `None` |
| `search_models` | `Optional[List[str]]` | A list of model IDs with search capabilities. If not provided, the agent will not have the option to use a search model. | `None` |
Returns:
| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | A dictionary containing the research results and chat information, or None if it fails. |
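A sketch of a research run; the model IDs are hypothetical, and the exact keys of the returned dictionary beyond "research results and chat information" are not specified here:

```python
research = client.deep_research(
    topic="Impact of quantization on LLM inference latency",
    num_steps=3,
    general_models=["gpt-4.1"],       # hypothetical general-purpose model
    search_models=["gpt-4o-search"],  # hypothetical search-capable model
)
if research:
    print(research)  # research results and chat information
```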
process_task(question, model_id, tool_server_ids, knowledge_base_name=None, max_iterations=25, summarize_history=False, decision_model_id=None)
¶
Processes a task using an AI model and a tool server in a multi-step process.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `question` | `str` | The task to process. | required |
| `model_id` | `str` | The ID of the model to use for task execution. | required |
| `tool_server_ids` | `Union[str, List[str]]` | The ID(s) of the tool server(s) to use. | required |
| `knowledge_base_name` | `Optional[str]` | The name of the knowledge base to use. | `None` |
| `max_iterations` | `int` | The maximum number of iterations to attempt. | `25` |
| `summarize_history` | `bool` | If True, the conversation history will be summarized. | `False` |
| `decision_model_id` | `Optional[str]` | Optional model ID for automatic decision-making when the AI presents multiple options. If provided, this model will analyze the options and select the best one automatically, eliminating the need for user input when choices arise. | `None` |
Returns:
| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | A dictionary containing: `solution` (the final answer or error message), `conversation_history` (either the full message list or a summarized string if `summarize_history=True`), and `todo_list` (the last parsed to-do list from the agent's thought process). Returns None if initialization fails. |
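A sketch using the documented return keys; the model ID and tool server ID are placeholders:

```python
result = client.process_task(
    question="Check the weather API and draft a morning briefing.",
    model_id="gpt-4.1",                 # placeholder model ID
    tool_server_ids=["weather-tools"],  # placeholder tool server ID(s)
    summarize_history=True,
)
if result:
    print(result["solution"])   # final answer or error message
    print(result["todo_list"])  # last parsed to-do list
```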
stream_process_task(question, model_id, tool_server_ids, knowledge_base_name=None, max_iterations=25, summarize_history=False, decision_model_id=None)
¶
Processes a task in a streaming fashion, yielding results for each step.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `question` | `str` | The task to process. | required |
| `model_id` | `str` | The ID of the model to use for task execution. | required |
| `tool_server_ids` | `Union[str, List[str]]` | The ID(s) of the tool server(s) to use. | required |
| `knowledge_base_name` | `Optional[str]` | The name of the knowledge base to use. | `None` |
| `max_iterations` | `int` | The maximum number of iterations to attempt. | `25` |
| `summarize_history` | `bool` | If True, the conversation history will be summarized. | `False` |
| `decision_model_id` | `Optional[str]` | Optional model ID for automatic decision-making when the AI presents multiple options. If provided, this model will analyze the options and select the best one automatically. | `None` |
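The method yields per-step results; since their exact shape is not documented here, a conservative consumption sketch simply prints each step:

```python
for step in client.stream_process_task(
    question="Audit the open tickets and propose a fix for each.",
    model_id="gpt-4.1",              # placeholder model ID
    tool_server_ids="ticket-tools",  # placeholder tool server ID
):
    print(step)  # per-step result; exact structure depends on the agent's progress
```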
rename_chat(chat_id, new_title)
¶
Rename an existing chat.
set_chat_tags(chat_id, tags)
¶
Set tags for a chat conversation.
update_chat_metadata(chat_id, regenerate_tags=False, regenerate_title=False, title=None, tags=None, folder_name=None)
¶
Regenerates and updates the tags and/or title for an existing chat based on its history.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `chat_id` | `str` | The ID of the chat to update. | required |
| `regenerate_tags` | `bool` | If True, new tags will be generated and applied. | `False` |
| `regenerate_title` | `bool` | If True, a new title will be generated and applied. | `False` |
| `title` | `Optional[str]` | Direct title to set (alternative to regenerate_title) | `None` |
| `tags` | `Optional[List[str]]` | Direct tags to set (alternative to regenerate_tags) | `None` |
| `folder_name` | `Optional[str]` | Folder to move the chat to | `None` |
Returns:
| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | A dictionary containing the 'suggested_tags' and/or 'suggested_title' that were updated, or None if the chat could not be found or no action was requested. |
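A sketch that regenerates both tags and title for an existing chat; the chat ID is assumed to come from an earlier `chat()` call:

```python
updated = client.update_chat_metadata(
    chat_id=result["chat_id"],  # chat ID returned by an earlier chat() call
    regenerate_tags=True,
    regenerate_title=True,
)
if updated:
    print(updated.get("suggested_title"))
    print(updated.get("suggested_tags"))
```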
switch_chat_model(chat_id, model_ids)
¶
Switch the model(s) for an existing chat.
list_chats(page=None)
¶
List all chats for the current user.
get_chats_by_folder(folder_id)
¶
Get all chats in a specific folder.
archive_chat(chat_id)
¶
Archive a chat conversation.
archive_chats_by_age(days_since_update=30, folder_name=None, dry_run=False)
¶
Archive chats that haven't been updated for a specified number of days.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `days_since_update` | `int` | Number of days since last update (default: 30) | `30` |
| `folder_name` | `Optional[str]` | Optional folder name to filter chats. If None, only archives chats NOT in folders. If provided, only archives chats IN that folder. | `None` |
| `dry_run` | `bool` | If True, only shows what would be archived without actually archiving | `False` |
Returns:
| Type | Description |
|---|---|
| `Dict[str, Any]` | Dictionary with archive results including counts and details |
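A dry-run sketch that reports what would be archived without changing anything:

```python
report = client.archive_chats_by_age(
    days_since_update=60,
    folder_name=None,  # only consider chats that are not in any folder
    dry_run=True,      # report only; nothing is archived
)
print(report)          # counts and details of the chats that would be archived
```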
delete_all_chats()
¶
Delete ALL chat conversations for the current user.
⚠️ WARNING: This is a DESTRUCTIVE operation! This method will permanently delete ALL chats associated with the current user account. This action CANNOT be undone. Use with extreme caution.
This method is useful for:

- Cleaning up test data after integration tests
- Resetting an account to a clean state
- Bulk cleanup operations
Returns:
| Type | Description |
|---|---|
| `bool` | True if deletion was successful, False otherwise |
create_folder(name)
¶
Create a new folder for organizing chats.
get_folder_id_by_name(name, suppress_log=False)
¶
Get folder ID by folder name.
move_chat_to_folder(chat_id, folder_id)
¶
Move a chat to a specific folder.
list_models()
¶
Lists all available models for the user, including base models and user-created custom models. Excludes disabled base models. This corresponds to the model list shown in the top left of the chat page.
list_base_models()
¶
Lists all base models that can be used to create variants. Includes disabled base models. Corresponds to the model list in the admin settings page, including PIPE type models.
list_custom_models()
¶
Lists custom models that users can use or have created (not base models). Returns a list of custom models available in the user's workspace.
list_groups()
¶
Lists all available groups from the Open WebUI instance.
get_model(model_id)
¶
Fetches the details of a specific model by its ID.
create_model(model_id, name, base_model_id=None, description=None, params=None, permission_type='public', group_identifiers=None, user_ids=None, profile_image_url='/static/favicon.png', suggestion_prompts=None, tags=None, capabilities=None, is_active=True)
¶
Creates a new model configuration with detailed metadata. This method delegates directly to the ModelManager.
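A sketch of creating a custom model variant; the base model ID, prompts, and tag format are placeholders and assumptions:

```python
model = client.create_model(
    model_id="support-assistant",
    name="Support Assistant",
    base_model_id="gpt-4.1",  # placeholder base model ID
    description="Customer-support variant of the base model.",
    suggestion_prompts=["How do I reset my password?"],
    tags=["support"],         # tag format assumed to be a plain list of names
)
```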
update_model(model_id, name=None, base_model_id=None, description=None, params=None, permission_type=None, group_identifiers=None, user_ids=None, profile_image_url=None, suggestion_prompts=None, tags=None, capabilities=None, is_active=None)
¶
Updates an existing model configuration with detailed metadata. This method delegates directly to the ModelManager.
delete_model(model_id)
¶
Deletes a model configuration.
batch_update_model_permissions(models=None, permission_type='public', group_identifiers=None, user_ids=None, max_workers=5, model_identifiers=None, model_keyword=None)
¶
Updates permissions for multiple models in parallel.
get_knowledge_base_by_name(name)
¶
Get a knowledge base by its name.
create_knowledge_base(name, description='')
¶
Create a new knowledge base.
add_file_to_knowledge_base(file_path, knowledge_base_name)
¶
Add a file to a knowledge base.
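A small end-to-end sketch: create a knowledge base, attach a file, then reference it from `chat()` via `rag_collections`; the file path and names are placeholders:

```python
client.create_knowledge_base("project-docs", description="Project documentation")
client.add_file_to_knowledge_base("docs/architecture.md", "project-docs")

answer = client.chat(
    question="What does the architecture document say about caching?",
    chat_title="Docs Q&A",
    rag_collections=["project-docs"],  # reference the knowledge base by name
)
```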
delete_knowledge_base(kb_id)
¶
Deletes a knowledge base by its ID.
delete_all_knowledge_bases()
¶
Deletes all knowledge bases for the current user.
delete_knowledge_bases_by_keyword(keyword, case_sensitive=False)
¶
Deletes knowledge bases whose names contain a specific keyword.
create_knowledge_bases_with_files(kb_configs, max_workers=3)
¶
Creates multiple knowledge bases with files in parallel.
get_notes()
¶
Get all notes for the current user.
get_notes_list()
¶
Get a simplified list of notes with only id, title, and timestamps.
create_note(title, data=None, meta=None, access_control=None)
¶
Create a new note.
get_note_by_id(note_id)
¶
Get a specific note by its ID.
update_note_by_id(note_id, title=None, data=None, meta=None, access_control=None)
¶
Update an existing note by its ID.
delete_note_by_id(note_id)
¶
Delete a note by its ID.
get_prompts()
¶
Get all prompts for the current user.
get_prompts_list()
¶
Get a detailed list of prompts with user information.
create_prompt(command, title, content, access_control=None)
¶
Create a new prompt.
get_prompt_by_command(command)
¶
Get a specific prompt by its command.
update_prompt_by_command(command, title=None, content=None, access_control=None)
¶
Update an existing prompt by its command (title/content only).
replace_prompt_by_command(old_command, new_command, title, content, access_control=None)
¶
Replace a prompt completely including command (delete + recreate).
delete_prompt_by_command(command)
¶
Delete a prompt by its command.
search_prompts(query=None, by_command=False, by_title=True, by_content=False)
¶
Search prompts by various criteria.
extract_variables(content)
¶
Extract variable names from prompt content.
substitute_variables(content, variables, system_variables=None)
¶
Substitute variables in prompt content.
get_system_variables()
¶
Get current system variables for substitution.
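A sketch of the variable helpers; the double-brace placeholder syntax shown here is an assumption, as is the exact set of system variables:

```python
content = "Hello {{name}}, today is {{CURRENT_DATE}}."  # placeholder syntax is an assumption

names = client.extract_variables(content)               # e.g. ["name", "CURRENT_DATE"]
rendered = client.substitute_variables(
    content,
    variables={"name": "Ada"},                           # user-supplied values
    system_variables=client.get_system_variables(),      # e.g. current date/time values
)
print(rendered)
```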
batch_create_prompts(prompts_data, continue_on_error=True)
¶
Create multiple prompts in batch.
batch_delete_prompts(commands, continue_on_error=True)
¶
Delete multiple prompts by their commands.
get_users(skip=0, limit=50)
¶
Get a list of all users.
get_user_by_id(user_id)
¶
Get a specific user by their ID.
update_user_role(user_id, role)
¶
Update a user's role (admin/user).
delete_user(user_id)
¶
Delete a user.
AsyncOpenWebUIClient¶
The async client class for asynchronous operations.
AsyncOpenWebUIClient(base_url, token, default_model_id, timeout=60.0, **kwargs)
¶
Asynchronous Python client for the Open WebUI API.
close()
async
¶
Close the client.
delete_all_chats()
async
¶
Delete ALL chat conversations for the current user.
⚠️ WARNING: This is a DESTRUCTIVE operation! This method will permanently delete ALL chats associated with the current user account. This action CANNOT be undone. Use with extreme caution.
This method is useful for:

- Cleaning up test data after integration tests
- Resetting an account to a clean state
- Bulk cleanup operations
Returns:
| Type | Description |
|---|---|
| `bool` | True if deletion was successful, False otherwise |
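A usage sketch for the async client, assuming the same import path convention; only `close()` and `delete_all_chats()` are documented here, so the example sticks to those:

```python
import asyncio

from openwebui_chat_client import AsyncOpenWebUIClient  # import path assumed


async def main() -> None:
    client = AsyncOpenWebUIClient(
        base_url="http://localhost:3000",
        token="sk-your-api-token",
        default_model_id="gpt-4.1",
    )
    try:
        # DESTRUCTIVE: permanently removes every chat for this account.
        deleted = await client.delete_all_chats()
        print("all chats deleted:", deleted)
    finally:
        await client.close()


asyncio.run(main())
```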
Return Value Examples¶
Chat Operations¶
```python
{
    "response": "Generated response text",
    "chat_id": "chat-uuid-string",
    "message_id": "message-uuid-string",
    "sources": [...]  # For RAG operations
}
```
Parallel Chat¶
```python
{
    "responses": {
        "model-1": "Response from model 1",
        "model-2": "Response from model 2"
    },
    "chat_id": "chat-uuid-string",
    "message_ids": {
        "model-1": "message-uuid-1",
        "model-2": "message-uuid-2"
    }
}
```
Knowledge Base / Notes¶
```python
{
    "id": "resource-uuid",
    "name": "Resource Name",
    "created_at": "2024-01-01T00:00:00Z",
    "updated_at": "2024-01-01T00:00:00Z",
    ...
}
```
Process Task¶
```python
{
    "solution": "Final solution text",
    "conversation_history": [...],  # or summarized string
    "todo_list": [
        {"task": "Research topic", "status": "completed"},
        {"task": "Write summary", "status": "completed"}
    ]
}
```
Quick Reference Tables¶
Chat Operations¶
| Method | Description |
|---|---|
| `chat()` | Single-model conversation with optional features |
| `stream_chat()` | Streaming conversation with real-time updates |
| `parallel_chat()` | Multi-model parallel conversation |
| `continuous_chat()` | Continuous conversation with follow-ups |
| `process_task()` | Autonomous multi-step task processing |
| `deep_research()` | Multi-step research agent |
Chat Management¶
| Method | Description |
|---|---|
| `rename_chat()` | Rename an existing chat |
| `set_chat_tags()` | Apply tags to a chat |
| `update_chat_metadata()` | Regenerate tags and/or title |
| `switch_chat_model()` | Switch model for existing chat |
| `list_chats()` | Get list of user's chats |
| `archive_chat()` | Archive a specific chat |
| `archive_chats_by_age()` | Bulk archive old chats |
| `create_folder()` | Create a chat folder |
Model Management¶
| Method | Description |
|---|---|
| `list_models()` | List available models |
| `list_base_models()` | List base models |
| `list_custom_models()` | List custom models |
| `get_model()` | Get model details |
| `create_model()` | Create a custom model |
| `update_model()` | Update a model |
| `delete_model()` | Delete a model |
| `batch_update_model_permissions()` | Batch update permissions |
Knowledge Base Operations¶
| Method | Description |
|---|---|
| `create_knowledge_base()` | Create a knowledge base |
| `add_file_to_knowledge_base()` | Add file to KB |
| `get_knowledge_base_by_name()` | Get KB by name |
| `delete_knowledge_base()` | Delete a KB |
| `delete_all_knowledge_bases()` | Delete all KBs |
| `create_knowledge_bases_with_files()` | Batch create KBs |
Notes API¶
| Method | Description |
|---|---|
| `get_notes()` | Get all notes |
| `create_note()` | Create a note |
| `get_note_by_id()` | Get note by ID |
| `update_note_by_id()` | Update a note |
| `delete_note_by_id()` | Delete a note |
Prompts API¶
| Method | Description |
|---|---|
| `get_prompts()` | Get all prompts |
| `create_prompt()` | Create a prompt |
| `get_prompt_by_command()` | Get prompt by command |
| `update_prompt_by_command()` | Update a prompt |
| `delete_prompt_by_command()` | Delete a prompt |
| `extract_variables()` | Extract prompt variables |
| `substitute_variables()` | Replace variables |
User Management¶
| Method | Description |
|---|---|
| `get_users()` | List users |
| `get_user_by_id()` | Get user details |
| `update_user_role()` | Update user role |
| `delete_user()` | Delete a user |