TUUI is a desktop MCP client designed as a tool unitary utility integration, accelerating AI adoption through the Model Context Protocol (MCP) and enabling cross-vendor LLM API orchestration.

Features:

- Accelerate AI tool integration via MCP
- Orchestrate cross-vendor LLM APIs through dynamic configuration
- Automated application testing support
- TypeScript support
- Multilingual support
- Basic layout manager
- Global state management through the Pinia store
- Quick support via the GitHub community and official documentation
Zero accounts · Full control · Open source · Download and Run

![Windows](https://img.shields.io/badge/Windows-blue?logo=icloud) ![Linux](https://img.shields.io/badge/Linux-orange?logo=linux) ![macOS](https://img.shields.io/badge/macOS-lightgrey?logo=apple)
![MCP Client](https://badge.mcpx.dev?type=client) ![Vue3](https://img.shields.io/badge/Vue3-brightgreen.svg) ![Vuetify](https://img.shields.io/badge/Vuetify-blue.svg) [![LICENSE](https://img.shields.io/github/license/AI-QL/tuui)](LICENSE) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/AI-QL/tuui)
This repository is essentially an LLM chat desktop application based on MCP. It also represents a bold experiment in creating a complete project using AI. Many components within the project have been directly converted or generated from the prototype project through AI.
Given concerns about the quality and safety of AI-generated content, this project employs strict syntax checks and naming conventions. For any further development, please use the linting tools I've set up to check and automatically fix syntax issues.
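As an illustration only (the script names and flags below are assumptions of a conventional ESLint + Prettier setup, not necessarily this repository's actual configuration), the lint workflow for an Electron + Vue + TypeScript project typically looks like this in package.json:

```json
{
  "scripts": {
    "lint": "eslint . --ext .ts,.vue",
    "lint:fix": "eslint . --ext .ts,.vue --fix",
    "format": "prettier --write ."
  }
}
```

Check the repository's package.json for the authoritative commands before contributing.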
You can quickly get started with the project through a variety of options tailored to your role and needs:
- To explore the project, visit the wiki page: [TUUI.com](https://www.tuui.com)
- To download and use the application directly, go to the releases page: [Releases](https://github.com/AI-QL/tuui/releases/latest)
- For developer setup, refer to the installation guide: Getting Started (English) | Quick Start (Chinese)
- To ask the AI directly about the project, visit: [TUUI@DeepWiki](https://deepwiki.com/AI-QL/tuui)
To use MCP-related features, ensure the following preconditions are met for your environment:
- Set up an LLM backend (e.g., ChatGPT, Claude, Qwen, or self-hosted) that supports tool/function calling.
- For NPX/NODE-based servers: Install Node.js to execute JavaScript/TypeScript tools.
- For UV/UVX-based servers: Install Python and the UV library.
- For Docker-based servers: Install Docker.
- For macOS/Linux systems: You may need to modify the default MCP configuration (e.g., adjust CLI paths or permissions).
Refer to the MCP Server Issues section below for guidance.
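For example, a minimal mcp.json entry for an NPX-based server might look like the sketch below (the filesystem server and the allowed directory path are illustrative choices, not defaults shipped with TUUI):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```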
For guidance on configuring the LLM, refer to the template (e.g., Qwen):
```json
{
  "name": "Qwen",
  "apiKey": "",
  "url": "https://dashscope.aliyuncs.com/compatible-mode",
  "path": "/v1/chat/completions",
  "model": "qwen-turbo",
  "modelList": ["qwen-turbo", "qwen-plus", "qwen-max"],
  "maxTokensValue": "",
  "mcp": true
}
```
The configuration accepts either a JSON object (for a single chatbot) or a JSON array (for multiple chatbots):
```json
[
  {
    "name": "Openrouter && Proxy",
    "apiKey": "",
    "url": "https://api3.aiql.com",
    "urlList": ["https://api3.aiql.com", "https://openrouter.ai/api"],
    "path": "/v1/chat/completions",
    "model": "openai/gpt-4.1-mini",
    "modelList": [
      "openai/gpt-4.1-mini",
      "openai/gpt-4.1",
      "anthropic/claude-sonnet-4",
      "google/gemini-2.5-pro-preview"
    ],
    "maxTokensValue": "",
    "mcp": true
  },
  {
    "name": "DeepInfra",
    "apiKey": "",
    "url": "https://api.deepinfra.com",
    "path": "/v1/openai/chat/completions",
    "model": "Qwen/Qwen3-32B",
    "modelList": [
      "Qwen/Qwen3-32B",
      "Qwen/Qwen3-235B-A22B",
      "meta-llama/Meta-Llama-3.1-70B-Instruct"
    ],
    "mcp": true
  }
]
```
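A self-hosted backend that exposes an OpenAI-compatible endpoint can be configured the same way. The sketch below is an assumption-laden example (a local vLLM server on port 8000; your URL, path, and model name will differ), reusing the fields from the Qwen template:

```json
{
  "name": "Local vLLM",
  "apiKey": "",
  "url": "http://localhost:8000",
  "path": "/v1/chat/completions",
  "model": "Qwen/Qwen3-32B",
  "mcp": true
}
```

As the examples above suggest, url and path together form the full chat-completions endpoint.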
| Configuration | Description | Location | Note |
|---|---|---|---|
| LLM Endpoints | Default LLM chatbot configs | llm.json | Full config types can be found in llm.d.ts |
| MCP Servers | Default MCP server configs | mcp.json | For configuration syntax, see [MCP Servers](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#using-an-mcp-client) |
| Startup Screen | Default news on the startup screen | startup.json | |
| Popup Screen | Default prompts on the popup screen | popup.json | |
For the unpacked package, you can also modify the default configuration of the built release:
For example, src/main/assets/config/llm.json will be located at resources/assets/config/llm.json.
Once you modify or import configurations, they are stored in your localStorage by default.
Alternatively, you can clear all configurations from the Tray Menu by selecting Clear Storage.
You can utilize Cloudflare's recommended [mcp-remote](https://github.com/geelen/mcp-remote) to implement the full suite of remote MCP server functionality (including auth). For example, simply add the following to your mcp.json file:
```json
{
  "mcpServers": {
    "cloudflare": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://YOURDOMAIN.com/sse"]
    }
  }
}
```
In this example, I have provided a test remote server, https://YOURDOMAIN.com, on [Cloudflare](https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/). This server will always approve your authentication requests.
If you encounter issues such as the common HTTP 400 error (try to keep OAuth auto-redirect enabled, as callback delays can cause failures), you can resolve them by clearing your browser cache on the authentication page and then attempting verification again.
When launching the MCP server, if you encounter any issues, first ensure that the corresponding command can run on your current system (for example, uv/uvx, npx, etc.).
When launching the MCP server, if you encounter spawn errors like ENOENT, try running the corresponding MCP server locally and invoking it using an absolute path, as in the sketch after this list.
If the command works but MCP initialization still returns spawn errors, this may be a known issue:
- Windows: The MCP SDK includes a workaround specifically for Windows systems, as documented in [ISSUE 101](https://github.com/modelcontextprotocol/typescript-sdk/issues/101). Details: [ISSUE 40 - MCP servers fail to connect with npx on Windows](https://github.com/modelcontextprotocol/servers/issues/40) (fixed)
- macOS: The issue remains unresolved on other platforms, specifically macOS. Although several workarounds are available, this ticket consolidates the most effective ones and highlights the simplest method: [How to configure MCP on macOS](https://github.com/AI-QL/tuui/issues/2). Details: [ISSUE 64 - MCP Servers Don't Work with NVM](https://github.com/modelcontextprotocol/servers/issues/64) (still open)
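As a sketch of the absolute-path workaround (the paths and server below are hypothetical; substitute the output of `which npx` on macOS/Linux or `where npx` on Windows, and your own server package):

```json
{
  "mcpServers": {
    "filesystem-macos": {
      "command": "/Users/yourname/.nvm/versions/node/v20.11.0/bin/npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/yourname/data"]
    },
    "filesystem-windows": {
      "command": "C:\\Program Files\\nodejs\\npx.cmd",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "C:\\data"]
    }
  }
}
```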
If initialization takes too long and triggers the 90-second timeout protection, it may be because the uv/uvx/npx runtime libraries are being installed or updated for the first time.
When your connection to the respective pip or npm repository is slow, installation can take a long time.
In such cases, first complete the installation manually with pip or npm in the relevant directory, and then start the MCP server again.
Use Cases:

- Fast SQL client for PostgreSQL, MySQL, and SQL Server with an AI assistant that converts natural language to queries. Features Monaco editor, ERD diagrams, query plans, and inline editing. Built with Electron and React.
- Convert websites into desktop apps with Electron. Features multi-account support, global hotkey switching, custom JavaScript injection, and portable packaging for Windows, macOS, and Linux.
- Open-source AI meeting assistant built with Tauri at 10 MB. Features real-time transcription with OpenAI Whisper; support for GPT-4, Claude, Gemini, and Grok; a translucent overlay; and remains undetectable in video calls.
- Cross-platform M3U8/MPD video downloader built with PySide6 and QFluentWidgets, featuring multi-threaded downloads, task management, a fluent-design GUI, FFmpeg and N_m3u8DL-RE integration, a Python 3.11 conda environment, and deployment support for Windows/macOS/Linux under the GPL-3.0 license.
- Flutter AI voice assistant for Android and iOS with real-time conversation, Live2D characters, echo cancellation, multi-service support for Xiaozhi, Dify, and OpenAI, and image messaging.
- GitHub starred-repository manager with AI-powered auto-sync, semantic search, automatic categorization, release tracking, one-click downloads, smart asset filters, bilingual wiki integration, and a cross-platform Electron client for Windows/macOS/Linux with 100% local data storage and an MIT license.