super-agent-party
Transform an LLM into an agent with access to knowledge bases, internet connectivity, MCP services, deep thinking, and in-depth research, usable directly through OpenAI API calls or via the web and desktop apps.
Introduction
If you want to transform a large model into an intelligent agent that can access knowledge bases, connect to the internet, use MCP and A2A services, perform deep thinking and in-depth research, and also be usable via OpenAI API calls or directly through web and desktop applications, then this project is for you.
Demo
https://github.com/user-attachments/assets/1118302b-139a-4b33-ac08-adbde647f573
Features
- Knowledge Base: Enables large models to answer based on information within the knowledge base. If there are multiple knowledge bases, the model will proactively query the relevant one based on the question.
- Internet Connectivity: Allows large models to proactively search for information online based on the question's requirements. Currently supported:
  - DuckDuckGo (completely free, but not accessible from mainland China's network environment)
  - SearxNG (can be deployed locally with Docker)
  - Tavily (requires applying for an API key)
  - Jina (web scraping; usable without an API key)
  - Crawl4AI (web scraping; can be deployed locally with Docker)
- MCP Services: Enables large models to proactively invoke MCP services based on the question's requirements. Three invocation methods are currently supported: standard input/output (stdio), Server-Sent Events (SSE), and WebSocket.
- A2A Services: Enables large models to proactively invoke A2A services based on the question's requirements.
- Deep Thinking: Transplants the reasoning capability of a reasoning model into a tool-invoking or multimodal model, so the model can reason with the reasoning model before invoking tools. For example, DeepSeek-V3 can invoke tools while the reasoning model DeepSeek-R1 cannot; transplanting DeepSeek-R1's reasoning capability into DeepSeek-V3 lets it reason with DeepSeek-R1 before invoking tools.
- In-depth Research: Converts a user question into a task, then iteratively analyzes and reasons, invokes tools, and checks the output, continuing until the task is completed (a sketch of this loop follows the list).
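To make the in-depth research loop concrete, here is a minimal, purely illustrative sketch in Python; every name in it (`plan`, `tools`, the step dict) is hypothetical and does not reflect this project's actual internals.

```python
# Hypothetical sketch of the analyze -> invoke -> check cycle described
# above; not the project's actual code.
from typing import Any, Callable

def research(question: str, plan: Callable, tools: dict[str, Callable],
             max_steps: int = 10) -> str:
    history: list[tuple[str, dict, Any]] = []
    for _ in range(max_steps):
        step = plan(question, history)        # analyze and reason over history
        if step.get("finished"):              # check: is the task complete?
            return step["answer"]
        tool, args = step["tool"], step["args"]
        result = tools[tool](**args)          # invoke the chosen tool
        history.append((tool, args, result))  # feed the result into the next round
    return "Stopped after max_steps without completing the task."
```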
Usage
Windows Desktop Installation
If you are using Windows, you can click here to download the Windows desktop version and follow the prompts to install it.
Docker Deployment
- Obtain the Docker image (choose one):
  - Pull the official image from Docker Hub:

    ```bash
    docker pull ailm32442/super-agent-party:latest
    docker run -d -p 3456:3456 ailm32442/super-agent-party:latest
    ```

  - Build the image from source:

    ```bash
    git clone https://github.com/heshengtao/super-agent-party.git
    cd super-agent-party
    docker pull python:3.12-slim
    docker build -t super-agent-party .
    docker run -d -p 3456:3456 super-agent-party:latest
    ```

- Access at http://localhost:3456/
Source Code Deployment
- Download the repository:

  ```bash
  git clone https://github.com/heshengtao/super-agent-party.git
  cd super-agent-party
  ```

- Install dependencies (choose one):
  - Windows: run the `install.bat` script
  - macOS/Linux: run the `install.sh` script
  - Or install the dependencies manually:

    ```bash
    python -m venv super
    super\Scripts\activate  # Windows
    # source super/bin/activate  # MacOS/Linux
    pip install -r requirements.txt
    npm install
    ```
Configuration
- Click System Settings in the left sidebar to set the language, choose the system theme, and open this application in web mode.
- Navigate to the Tools interface in the left sidebar to configure utilities such as current time, in-depth research, and pseudo-reasoning. If you want to pin the language the agent uses, configure it here.
- Access the Model Services interface from the left sidebar to configure your preferred cloud providers, such as OpenAI or DeepSeek. Select a provider and enter the corresponding API key, then click the magnifying-glass button at the top right to fetch that provider's model list and select the desired model to complete the setup.
- Go to the Agents interface in the left sidebar to configure an agent's system prompt. The system prompt dictates the agent's behavior and can be customized to your needs. When an agent is created, it snapshots all current configuration, including model services, knowledge bases, internet access, MCP services, tools, and the system prompt.
- Click the Primary Model and Inference Model interfaces in the left sidebar to configure your models more precisely. By default, the first model from the model service provider is selected, but you can choose another. Note: the primary model must support tool invocation (most reasoning models do not), while the inference model must support reasoning.
- Enter the MCP Services interface from the left sidebar to configure MCP services. Two calling methods are currently supported: standard input/output (stdio) and Server-Sent Events (SSE). The stdio method requires configuring the MCP server's parameters; if errors occur, make sure the required package managers (e.g., uv, npm) are installed locally. The SSE method only requires the MCP server's address. A hypothetical stdio configuration sketch follows this list.
- Use the Internet Access interface in the left sidebar to configure search engines and webpage-to-markdown tools. Three search engines are currently supported (DuckDuckGo, SearxNG, Tavily) and two webpage-to-markdown tools (Jina, Crawl4AI). DuckDuckGo and Jina require no configuration, SearxNG and Crawl4AI each require the URL of a local Docker deployment, and Tavily requires an API key.
- Access the Knowledge Base interface from the left sidebar to configure knowledge bases. Before doing so, you must configure a word-embedding model in the Model Services interface.
- Click the Invocation Methods interface in the left sidebar to call agents created in this application through the OpenAI-compatible API. If the model name is `super-model`, the currently configured agent is invoked; if the model name is an Agent ID created in the Agents interface, that specific agent is invoked (see the example after this list).
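For the stdio method mentioned in the MCP Services step, here is a hypothetical server entry written as a Python dict in the `command`/`args` shape commonly used by MCP clients; the exact fields this application's form expects may differ, and the filesystem server and path below are placeholders.

```python
# Hypothetical MCP server parameters; not this app's confirmed schema.
mcp_servers = {
    "filesystem": {                # stdio method: command + arguments
        "command": "npx",          # requires npm/npx installed locally
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
    },
    "my-sse-server": {             # SSE method: only the server address
        "url": "http://localhost:8000/sse",
    },
}
```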
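As an illustration of the invocation method above, here is a minimal sketch using the official `openai` Python client. The `/v1` base path and the placeholder API key are assumptions based on typical OpenAI-compatible servers, and the port assumes the default deployment shown earlier; check the app's Invocation Methods page for the exact values.

```python
# Minimal sketch: call a super-agent-party agent via its
# OpenAI-compatible endpoint. Base path and api_key are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3456/v1",
    api_key="not-needed-locally",  # placeholder; see the app's settings
)

response = client.chat.completions.create(
    model="super-model",  # or an Agent ID from the Agents interface
    messages=[{"role": "user", "content": "Summarize today's AI news."}],
)
print(response.choices[0].message.content)
```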
Disclaimer:
This open-source project and its contents (hereinafter referred to as the "Project") are provided for reference only and come with no express or implied warranty. Project contributors are not responsible for the completeness, accuracy, reliability, or applicability of the Project. Any reliance on the content of the Project is undertaken at your own risk. Under no circumstances will the project contributors be liable for any indirect, special, or consequential damages arising out of the use of the Project content.
Support:
Join the Community
If you run into issues with the project or have other questions, feel free to join our community.
- QQ group: 931057213
- WeChat group: we_glm (add the assistant's WeChat first, then join the group)
- Discord: Discord Link
Follow Us
- To stay updated with the latest features of this project, follow the Bilibili account: 派酱
Donate Support
If my work has brought you value, please consider buying me a coffee! Your support not only energizes the project but also warms the creator's heart.☕💖 Every cup counts!