API Reference

This page provides the complete API reference for openwebui-chat-client, automatically generated from source-code docstrings.


OpenWebUIClient

The main client class for synchronous operations.

OpenWebUIClient(base_url, token, default_model_id, skip_model_refresh=False)

An intelligent, stateful Python client for the Open WebUI API. Supports single/multi-model chats, tagging, and RAG with both direct file uploads and knowledge base collections, matching the backend format.

This refactored version uses a modular architecture with specialized managers while maintaining 100% backward compatibility with the original API.

Initialize the OpenWebUI client with modular architecture.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| base_url | str | The base URL of the Open WebUI instance | required |
| token | str | Authentication token | required |
| default_model_id | str | Default model identifier to use | required |
| skip_model_refresh | bool | If True, skip the initial model refresh (useful for testing) | False |
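A minimal construction sketch. The URL, token, and model ID below are placeholders, and the import guard keeps the snippet readable even where the package is not installed:

```python
# Assumes the openwebui-chat-client package is installed and an Open WebUI
# server is reachable; skip_model_refresh=True avoids a startup API call.
try:
    from openwebui_chat_client import OpenWebUIClient
except ImportError:  # package not installed; treat this as a sketch
    OpenWebUIClient = None

def make_client(base_url: str, token: str, model_id: str):
    """Build a client without the initial model refresh (handy in tests)."""
    if OpenWebUIClient is None:
        return None
    return OpenWebUIClient(base_url, token, model_id, skip_model_refresh=True)

client = make_client("http://localhost:3000", "sk-your-token", "gpt-4.1")
```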

chat(question, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_follow_up=False, enable_auto_tagging=False, enable_auto_titling=False)

Send a chat message with a single model.
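A small helper sketch around a single-model call; the "response" key it reads matches the chat return-value example later on this page:

```python
def ask_once(client, question: str, chat_title: str = "API Demo"):
    """Send one question with the default model; return the reply text,
    or None if the call failed."""
    result = client.chat(question=question, chat_title=chat_title)
    return result["response"] if result else None
```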

stream_chat(question, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_follow_up=False, cleanup_placeholder_messages=False, placeholder_pool_size=30, min_available_messages=10, wait_before_request=10.0, enable_auto_tagging=False, enable_auto_titling=False)

Initiates a streaming chat session. Yields content chunks as they are received. At the end of the stream, returns the full response content, sources, and follow-up suggestions.
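A sketch of consuming the generator: chunks arrive incrementally, and the full final content travels in the generator's return value (via StopIteration), so a simple loop like this only sees the chunks:

```python
def stream_to_string(client, question: str, chat_title: str = "Stream Demo") -> str:
    """Collect streamed chunks as they arrive and return the joined text."""
    parts = []
    for chunk in client.stream_chat(question=question, chat_title=chat_title):
        parts.append(chunk)
    return "".join(parts)
```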

parallel_chat(question, chat_title, model_ids, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_follow_up=False, enable_auto_tagging=False, enable_auto_titling=False)

Send a chat message to multiple models in parallel.
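A fan-out sketch: one question to several models at once. The "responses" mapping it reads mirrors the parallel-chat return-value example later on this page:

```python
def compare_models(client, question: str, model_ids, chat_title="Model Comparison"):
    """Return {model_id: response_text} for every model queried."""
    result = client.parallel_chat(
        question=question, chat_title=chat_title, model_ids=model_ids
    )
    return result["responses"] if result else {}
```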

continuous_chat(initial_question, num_questions, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_auto_tagging=False, enable_auto_titling=False)

Perform continuous conversation with automatic follow-up questions.

continuous_parallel_chat(initial_question, num_questions, chat_title, model_ids, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_auto_tagging=False, enable_auto_titling=False)

Perform continuous conversation with multiple models in parallel.

continuous_stream_chat(initial_question, num_questions, chat_title, model_id=None, folder_name=None, image_paths=None, tags=None, rag_files=None, rag_collections=None, tool_ids=None, enable_auto_tagging=False, enable_auto_titling=False)

Perform continuous conversation with streaming responses.

deep_research(topic, chat_title=None, num_steps=3, general_models=None, search_models=None)

Performs an advanced, autonomous, multi-step research process on a given topic using intelligent model routing.

The agent will iteratively plan questions and decide which type of model to use (general vs. search-capable), with the entire process being visible as a multi-turn chat in the UI.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| topic | str | The topic to be researched. | required |
| chat_title | Optional[str] | Optional title for the research chat. If not provided, it will be generated from the topic. | None |
| num_steps | int | The number of research steps (plan -> execute cycles). | 3 |
| general_models | Optional[List[str]] | A list of model IDs for general reasoning and summarization. If not provided, the client's default model will be used. | None |
| search_models | Optional[List[str]] | A list of model IDs with search capabilities. If not provided, the agent will not have the option to use a search model. | None |

Returns:

| Type | Description |
| --- | --- |
| Optional[Dict[str, Any]] | A dictionary containing the research results and chat information, or None if it fails. |
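A hedged sketch of a research run. The search-capable model ID is an illustrative placeholder, and the None result documented above is handled explicitly:

```python
def run_research(client, topic: str, steps: int = 3):
    """Run a multi-step research session; return the result dict or a
    failure marker when the agent could not complete."""
    result = client.deep_research(
        topic, num_steps=steps, search_models=["searxng-model"]  # placeholder ID
    )
    if result is None:
        return "research failed"
    return result
```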

process_task(question, model_id, tool_server_ids, knowledge_base_name=None, max_iterations=25, summarize_history=False, decision_model_id=None)

Processes a task using an AI model and a tool server in a multi-step process.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| question | str | The task to process. | required |
| model_id | str | The ID of the model to use for task execution. | required |
| tool_server_ids | Union[str, List[str]] | The ID(s) of the tool server(s) to use. | required |
| knowledge_base_name | Optional[str] | The name of the knowledge base to use. | None |
| max_iterations | int | The maximum number of iterations to attempt. | 25 |
| summarize_history | bool | If True, the conversation history will be summarized. | False |
| decision_model_id | Optional[str] | Optional model ID for automatic decision-making when the AI presents multiple options. If provided, this model will analyze the options and select the best one automatically, eliminating the need for user input when choices arise. | None |

Returns:

| Type | Description |
| --- | --- |
| Optional[Dict[str, Any]] | A dictionary containing: solution (the final answer or error message), conversation_history (either the full message list, or a summarized string when summarize_history=True), and todo_list (the last parsed to-do list from the agent's thought process). Returns None if initialization fails. |
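A sketch of unpacking the documented result keys; the tool-server ID is a placeholder:

```python
def run_task(client, task: str, model_id: str):
    """Run a task and return (solution, todo_list); (None, []) on failure."""
    result = client.process_task(
        question=task,
        model_id=model_id,
        tool_server_ids="my-tool-server",  # placeholder ID
        summarize_history=True,
    )
    if result is None:  # initialization failed
        return None, []
    return result["solution"], result.get("todo_list") or []
```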

stream_process_task(question, model_id, tool_server_ids, knowledge_base_name=None, max_iterations=25, summarize_history=False, decision_model_id=None)

Processes a task in a streaming fashion, yielding results for each step.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| question | str | The task to process. | required |
| model_id | str | The ID of the model to use for task execution. | required |
| tool_server_ids | Union[str, List[str]] | The ID(s) of the tool server(s) to use. | required |
| knowledge_base_name | Optional[str] | The name of the knowledge base to use. | None |
| max_iterations | int | The maximum number of iterations to attempt. | 25 |
| summarize_history | bool | If True, the conversation history will be summarized. | False |
| decision_model_id | Optional[str] | Optional model ID for automatic decision-making when the AI presents multiple options. If provided, this model will analyze the options and select the best one automatically. | None |

rename_chat(chat_id, new_title)

Rename an existing chat.

set_chat_tags(chat_id, tags)

Set tags for a chat conversation.

update_chat_metadata(chat_id, regenerate_tags=False, regenerate_title=False, title=None, tags=None, folder_name=None)

Regenerates and updates the tags and/or title for an existing chat based on its history.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| chat_id | str | The ID of the chat to update. | required |
| regenerate_tags | bool | If True, new tags will be generated and applied. | False |
| regenerate_title | bool | If True, a new title will be generated and applied. | False |
| title | Optional[str] | Direct title to set (alternative to regenerate_title) | None |
| tags | Optional[List[str]] | Direct tags to set (alternative to regenerate_tags) | None |
| folder_name | Optional[str] | Folder to move the chat to | None |

Returns:

| Type | Description |
| --- | --- |
| Optional[Dict[str, Any]] | A dictionary containing the 'suggested_tags' and/or 'suggested_title' that were updated, or None if the chat could not be found or no action was requested. |

switch_chat_model(chat_id, model_ids)

Switch the model(s) for an existing chat.

list_chats(page=None)

List all chats for the current user.

get_chats_by_folder(folder_id)

Get all chats in a specific folder.

archive_chat(chat_id)

Archive a chat conversation.

archive_chats_by_age(days_since_update=30, folder_name=None, dry_run=False)

Archive chats that haven't been updated for a specified number of days.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| days_since_update | int | Number of days since last update | 30 |
| folder_name | Optional[str] | Optional folder name to filter chats. If None, only archives chats NOT in folders. If provided, only archives chats IN that folder. | None |
| dry_run | bool | If True, only shows what would be archived without actually archiving | False |

Returns:

| Type | Description |
| --- | --- |
| Dict[str, Any] | Dictionary with archive results, including counts and details |
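A cautious sketch built around the dry_run flag above: preview first, and only archive when explicitly asked:

```python
def archive_stale(client, days: int = 30, really: bool = False):
    """Preview (default) or actually perform archiving of chats that have
    been idle for `days` days."""
    return client.archive_chats_by_age(days_since_update=days, dry_run=not really)
```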

delete_all_chats()

Delete ALL chat conversations for the current user.

⚠️ WARNING: This is a DESTRUCTIVE operation! This method will permanently delete ALL chats associated with the current user account. This action CANNOT be undone. Use with extreme caution.

This method is useful for:

- Cleaning up test data after integration tests
- Resetting an account to a clean state
- Bulk cleanup operations

Returns:

| Type | Description |
| --- | --- |
| bool | True if deletion was successful, False otherwise |

Example
# ⚠️ WARNING: This will delete ALL your chats!
success = client.delete_all_chats()
if success:
    print("All chats have been permanently deleted")

create_folder(name)

Create a new folder for organizing chats.

get_folder_id_by_name(name, suppress_log=False)

Get folder ID by folder name.

move_chat_to_folder(chat_id, folder_id)

Move a chat to a specific folder.

list_models()

Lists all available models for the user, including base models and user-created custom models. Excludes disabled base models. This corresponds to the model list shown in the top left of the chat page.

list_base_models()

Lists all base models that can be used to create variants. Includes disabled base models. Corresponds to the model list in the admin settings page, including PIPE type models.

list_custom_models()

Lists custom models that users can use or have created (not base models). A list of custom models available in the user's workspace.

list_groups()

Lists all available groups from the Open WebUI instance.

get_model(model_id)

Fetches the details of a specific model by its ID.

create_model(model_id, name, base_model_id=None, description=None, params=None, permission_type='public', group_identifiers=None, user_ids=None, profile_image_url='/static/favicon.png', suggestion_prompts=None, tags=None, capabilities=None, is_active=True)

Creates a new model configuration with detailed metadata. This method delegates directly to the ModelManager.

update_model(model_id, name=None, base_model_id=None, description=None, params=None, permission_type=None, group_identifiers=None, user_ids=None, profile_image_url=None, suggestion_prompts=None, tags=None, capabilities=None, is_active=None)

Updates an existing model configuration with detailed metadata. This method delegates directly to the ModelManager.

delete_model(model_id)

Deletes a model configuration.

batch_update_model_permissions(models=None, permission_type='public', group_identifiers=None, user_ids=None, max_workers=5, model_identifiers=None, model_keyword=None)

Updates permissions for multiple models in parallel.

get_knowledge_base_by_name(name)

Get a knowledge base by its name.

create_knowledge_base(name, description='')

Create a new knowledge base.

add_file_to_knowledge_base(file_path, knowledge_base_name)

Add a file to a knowledge base.
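An idempotent setup sketch combining the three knowledge-base methods above: reuse the knowledge base if it already exists, create it otherwise, then attach every file:

```python
def build_kb(client, name: str, file_paths):
    """Ensure a knowledge base exists and add each file to it."""
    kb = client.get_knowledge_base_by_name(name)
    if kb is None:
        kb = client.create_knowledge_base(name, description="Built by script")
    for path in file_paths:
        client.add_file_to_knowledge_base(path, name)
    return kb
```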

delete_knowledge_base(kb_id)

Deletes a knowledge base by its ID.

delete_all_knowledge_bases()

Deletes all knowledge bases for the current user.

delete_knowledge_bases_by_keyword(keyword, case_sensitive=False)

Deletes knowledge bases whose names contain a specific keyword.

create_knowledge_bases_with_files(kb_configs, max_workers=3)

Creates multiple knowledge bases with files in parallel.

get_notes()

Get all notes for the current user.

get_notes_list()

Get a simplified list of notes with only id, title, and timestamps.

create_note(title, data=None, meta=None, access_control=None)

Create a new note.

get_note_by_id(note_id)

Get a specific note by its ID.

update_note_by_id(note_id, title=None, data=None, meta=None, access_control=None)

Update an existing note by its ID.

delete_note_by_id(note_id)

Delete a note by its ID.

get_prompts()

Get all prompts for the current user.

get_prompts_list()

Get a detailed list of prompts with user information.

create_prompt(command, title, content, access_control=None)

Create a new prompt.

get_prompt_by_command(command)

Get a specific prompt by its command.

update_prompt_by_command(command, title=None, content=None, access_control=None)

Update an existing prompt by its command (title/content only).

replace_prompt_by_command(old_command, new_command, title, content, access_control=None)

Replace a prompt completely including command (delete + recreate).

delete_prompt_by_command(command)

Delete a prompt by its command.

search_prompts(query=None, by_command=False, by_title=True, by_content=False)

Search prompts by various criteria.

extract_variables(content)

Extract variable names from prompt content.

substitute_variables(content, variables, system_variables=None)

Substitute variables in prompt content.
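A sketch of the extract-then-substitute round trip: unknown variables fall back to an empty string so substitution never raises, and system variables are merged in from the client:

```python
def render_prompt(client, content: str, user_vars: dict) -> str:
    """Fill every variable found in a prompt's content."""
    names = client.extract_variables(content)
    merged = {name: user_vars.get(name, "") for name in names}
    return client.substitute_variables(
        content, merged, system_variables=client.get_system_variables()
    )
```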

get_system_variables()

Get current system variables for substitution.

batch_create_prompts(prompts_data, continue_on_error=True)

Create multiple prompts in batch.

batch_delete_prompts(commands, continue_on_error=True)

Delete multiple prompts by their commands.

get_users(skip=0, limit=50)

Get a list of all users.

get_user_by_id(user_id)

Get a specific user by their ID.

update_user_role(user_id, role)

Update a user's role (admin/user).

delete_user(user_id)

Delete a user.


AsyncOpenWebUIClient

The asynchronous client class for non-blocking operations.

AsyncOpenWebUIClient(base_url, token, default_model_id, timeout=60.0, **kwargs)

Asynchronous Python client for the Open WebUI API.
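Only close() and delete_all_chats() are documented below; this sketch shows a generic cleanup pattern that guarantees close() always runs after one awaited operation:

```python
import asyncio

async def with_client(client, coro_fn):
    """Run one awaitable operation against the client, then close it,
    even if the operation raises."""
    try:
        return await coro_fn(client)
    finally:
        await client.close()
```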

close() async

Close the client.

delete_all_chats() async

Delete ALL chat conversations for the current user.

⚠️ WARNING: This is a DESTRUCTIVE operation! This method will permanently delete ALL chats associated with the current user account. This action CANNOT be undone. Use with extreme caution.

This method is useful for:

- Cleaning up test data after integration tests
- Resetting an account to a clean state
- Bulk cleanup operations

Returns:

| Type | Description |
| --- | --- |
| bool | True if deletion was successful, False otherwise |

Example
# ⚠️ WARNING: This will delete ALL your chats!
success = await client.delete_all_chats()
if success:
    print("All chats have been permanently deleted")

Return Value Examples

Chat Operations

{
    "response": "Generated response text",
    "chat_id": "chat-uuid-string",
    "message_id": "message-uuid-string",
    "sources": [...]  # present for RAG operations
}

Parallel Chat

{
    "responses": {
        "model-1": "Response from model 1",
        "model-2": "Response from model 2"
    },
    "chat_id": "chat-uuid-string",
    "message_ids": {
        "model-1": "message-uuid-1",
        "model-2": "message-uuid-2"
    }
}

Knowledge Bases / Notes

{
    "id": "resource-uuid",
    "name": "Resource name",
    "created_at": "2024-01-01T00:00:00Z",
    "updated_at": "2024-01-01T00:00:00Z",
    ...
}

Task Processing

{
    "solution": "Final solution text",
    "conversation_history": [...],  # or a summary string
    "todo_list": [
        {"task": "Research the topic", "status": "completed"},
        {"task": "Write the summary", "status": "completed"}
    ]
}

Quick Reference

Chat Operations

| Method | Description |
| --- | --- |
| chat() | Single-model conversation with optional features |
| stream_chat() | Streaming conversation with real-time updates |
| parallel_chat() | Multi-model conversation in parallel |
| continuous_chat() | Continuous conversation with follow-up questions |
| process_task() | Autonomous multi-step task processing |
| deep_research() | Multi-step research agent |

Chat Management

| Method | Description |
| --- | --- |
| rename_chat() | Rename an existing chat |
| set_chat_tags() | Apply tags to a chat |
| update_chat_metadata() | Regenerate tags and/or title |
| switch_chat_model() | Switch models for an existing chat |
| list_chats() | Get the user's chat list |
| archive_chat() | Archive a specific chat |
| archive_chats_by_age() | Bulk-archive old chats |
| create_folder() | Create a chat folder |

Model Management

| Method | Description |
| --- | --- |
| list_models() | List available models |
| list_base_models() | List base models |
| list_custom_models() | List custom models |
| get_model() | Get model details |
| create_model() | Create a custom model |
| update_model() | Update a model |
| delete_model() | Delete a model |
| batch_update_model_permissions() | Bulk-update permissions |

Knowledge Base Operations

| Method | Description |
| --- | --- |
| create_knowledge_base() | Create a knowledge base |
| add_file_to_knowledge_base() | Add a file to a knowledge base |
| get_knowledge_base_by_name() | Get a knowledge base by name |
| delete_knowledge_base() | Delete a knowledge base |
| delete_all_knowledge_bases() | Delete all knowledge bases |
| create_knowledge_bases_with_files() | Bulk-create knowledge bases |

Notes API

| Method | Description |
| --- | --- |
| get_notes() | Get all notes |
| create_note() | Create a note |
| get_note_by_id() | Get a note by ID |
| update_note_by_id() | Update a note |
| delete_note_by_id() | Delete a note |

Prompts API

| Method | Description |
| --- | --- |
| get_prompts() | Get all prompts |
| create_prompt() | Create a prompt |
| get_prompt_by_command() | Get a prompt by command |
| update_prompt_by_command() | Update a prompt |
| delete_prompt_by_command() | Delete a prompt |
| extract_variables() | Extract prompt variables |
| substitute_variables() | Substitute variables |

User Management

| Method | Description |
| --- | --- |
| get_users() | List users |
| get_user_by_id() | Get user details |
| update_user_role() | Update a user's role |
| delete_user() | Delete a user |