Keboola MCP Server

A Model Context Protocol (MCP) server for interacting with Keboola Connection. The server provides tools for listing and accessing data through the Keboola Storage API.

Requirements

  • Python 3.10 or newer
  • Keboola Storage API token
  • A Snowflake or BigQuery read-only workspace

Installation

Installing via Smithery

To install Keboola Explorer for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install keboola-mcp-server --client claude

Manual Installation

First, clone the repository and create a virtual environment:

git clone https://github.com/keboola/keboola-mcp-server.git
cd keboola-mcp-server
python3 -m venv .venv
source .venv/bin/activate
pip3 install -U pip 

Install the package in development mode:

pip3 install -e .

For development dependencies:

pip3 install -e ".[dev]"
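
To sanity-check the installation, you can invoke the server's CLI module from the activated virtual environment. Assuming the entry point follows standard argparse conventions (an assumption, though the --transport and --api-url flags used in the configurations below suggest it does), --help should list the available options:

python3 -m keboola_mcp_server --help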

Claude Desktop Setup

To use this server with Claude Desktop, follow these steps:

  1. Create or edit the Claude Desktop configuration file:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
  2. Add the following configuration (adjust paths according to your setup):

{
  "mcpServers": {
    "keboola": {
      "command": "/path/to/keboola-mcp-server/.venv/bin/python",
      "args": [
        "-m",
        "keboola_mcp_server",
        "--api-url",
        "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your-keboola-storage-token",
        "KBC_WORKSPACE_SCHEMA": "your-workspace-schema"
      }
    }
  }
}

Replace:

  • /path/to/keboola-mcp-server with your actual path to the cloned repository
  • YOUR_REGION with your Keboola region (e.g., north-europe.azure). If your project URL is simply connection.keboola.com, omit the region segment and use https://connection.keboola.com
  • your-keboola-storage-token with your Keboola Storage API token
  • your-workspace-schema with your Snowflake schema or BigQuery dataset of your workspace

Note: If you are using a specific version of Python (e.g. 3.11, due to package compatibility issues), update the command to point at that specific interpreter, e.g. /path/to/keboola-mcp-server/.venv/bin/python3.11

Note: The workspace can be created in your Keboola project, the same project where you obtained your Storage API token. The workspace provides all the necessary connection parameters, including the schema or dataset name.
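
If you are unsure what absolute path to put in the command field, you can print the interpreter path from the activated virtual environment:

cd /path/to/keboola-mcp-server
source .venv/bin/activate
python3 -c "import sys; print(sys.executable)"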

  3. After updating the configuration:
    • Completely quit Claude Desktop (don't just close the window)
    • Restart Claude Desktop
    • Look for the hammer icon in the bottom right corner, indicating the server is connected

Troubleshooting

If you encounter connection issues:

  1. Check the logs in Claude Desktop for any error messages
  2. Verify your Keboola Storage API token is correct
  3. Ensure all paths in the configuration are absolute paths
  4. Confirm the virtual environment is properly activated and all dependencies are installed
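
You can also run the server manually in a terminal to see any errors directly instead of digging through Claude Desktop's logs. A minimal smoke test, using the same placeholder values as the configuration above:

export KBC_STORAGE_TOKEN="your-keboola-storage-token"
export KBC_WORKSPACE_SCHEMA="your-workspace-schema"
/path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server --api-url https://connection.YOUR_REGION.keboola.com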

Cursor AI Setup

To use this server with Cursor AI, you can configure the transport in one of two ways: Server-Sent Events (SSE) or standard I/O (stdio). A WSL variant of the stdio setup is shown as option 2b.

  1. Create or edit the Cursor AI configuration file:

    • Location: ~/.cursor/mcp.json
  2. Add one of the following configurations, based on your preferred transport method:

Option 1: Using Server-Sent Events (SSE)

{
  "mcpServers": {
    "keboola": {
      "url": "http://localhost:8000/sse?storage_token=YOUR-KEBOOLA-STORAGE-TOKEN&workspace_schema=YOUR-WORKSPACE-SCHEMA"
    }
  }
}
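
Note: This configuration only points Cursor AI at an already-running server; it does not start one. Start the server yourself before opening Cursor AI (see step 2 after the options below), e.g.:

/path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server --transport sse --api-url https://connection.YOUR_REGION.keboola.com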

Option 2a: Using Standard I/O (stdio)

{
  "mcpServers": {
    "keboola": {
      "command": "/path/to/keboola-mcp-server/.venv/bin/python",
      "args": [
        "-m",
        "keboola_mcp_server",
        "--transport",
        "stdio",
        "--api-url",
        "https://connection.YOUR_REGION.keboola.com"
      ],
      "env": {
        "KBC_STORAGE_TOKEN": "your-keboola-storage-token",
        "KBC_WORKSPACE_SCHEMA": "your-workspace-schema"
      }
    }
  }
}

Option 2b: Using WSL Standard I/O (wsl stdio)

Use this configuration when running the MCP server from the Windows Subsystem for Linux with Cursor AI.

{
  "mcpServers": {
    "keboola": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "source /wsl_path/to/keboola-mcp-server/.env && /wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --transport stdio"
      ]
    }
  }
}

  • where the /wsl_path/to/keboola-mcp-server/.env file contains the environment variables:
export KBC_STORAGE_TOKEN="your-keboola-storage-token"
export KBC_WORKSPACE_SCHEMA="your-workspace-schema"
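
For example, you can create that file from a WSL shell (a sketch; adjust the path and values to your setup):

cat > /wsl_path/to/keboola-mcp-server/.env <<'EOF'
export KBC_STORAGE_TOKEN="your-keboola-storage-token"
export KBC_WORKSPACE_SCHEMA="your-workspace-schema"
EOF
chmod 600 /wsl_path/to/keboola-mcp-server/.env  # keep the token readable only by you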

Replace:

  • /path/to/keboola-mcp-server with your actual path to the cloned repository
  • YOUR_REGION with your Keboola region (e.g., north-europe.azure). If your project URL is simply connection.keboola.com, omit the region segment and use https://connection.keboola.com
  • your-keboola-storage-token with your Keboola Storage API token
  • your-workspace-schema with your Snowflake schema or BigQuery dataset of your workspace

After updating the configuration:

  1. Restart Cursor AI
  2. If you use the SSE transport, make sure your MCP server is running first. You can start it by running this in the activated virtual environment where you built the server:
    /path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server --transport sse --api-url https://connection.YOUR_REGION.keboola.com

  3. Cursor AI should automatically detect your MCP server and enable it.

BigQuery support

If your Keboola project uses the BigQuery backend, you will need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable in addition to KBC_STORAGE_TOKEN and KBC_WORKSPACE_SCHEMA.

  1. Go to your Keboola BigQuery workspace and display its credentials (click the Connect button).
  2. Download the credentials file to your local disk. It is a plain JSON file.
  3. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path of the downloaded JSON credentials file.

This will give your MCP server instance permissions to access your BigQuery workspace in Google Cloud.
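
For clients configured through JSON (Claude Desktop, or Cursor AI's stdio option above), the variable can be passed in the same env block as the other two; a minimal sketch, assuming you saved the credentials file to /path/to/credentials.json:

"env": {
  "KBC_STORAGE_TOKEN": "your-keboola-storage-token",
  "KBC_WORKSPACE_SCHEMA": "your-workspace-schema",
  "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/credentials.json"
}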

Available Tools

The server provides the following tools for interacting with Keboola Connection:

  • List buckets and tables
  • Get bucket and table information
  • Preview table data
  • Export table data to CSV
  • List components and configurations

Development

Run tests:

pytest

Format code:

black .
isort .

Type checking:

mypy .

License

MIT License - see LICENSE file for details.
