
MCP Pinecone Vector Database Server

This project implements a Model Context Protocol (MCP) server that reads and writes vectorized data in a Pinecone vector database. It is designed to work with both RAG-processed PDF data and Confluence data.

Features

  • Search for similar documents using text queries
  • Add new vectors to the database with custom metadata
  • Process and upload Confluence data in batch
  • Delete vectors by ID
  • Basic database statistics (temporarily disabled)

Prerequisites

  • Bun runtime
  • Pinecone API key
  • OpenAI API key (for generating embeddings)

Installation

  1. Clone this repository

  2. Install dependencies:

    bun install
    
  3. Create a .env file with the following content:

    PINECONE_API_KEY=your-pinecone-api-key
    OPENAI_API_KEY=your-openai-api-key
    PINECONE_HOST=your-pinecone-host
    PINECONE_INDEX_NAME=your-index-name
    DEFAULT_NAMESPACE=your-namespace
    
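As a rough sketch of how startup might consume these settings, the snippet below validates the required `.env` keys and applies a fallback namespace. The `requireEnv` and `loadConfig` helpers are illustrative, not part of the actual codebase, and the `"default"` fallback value is an assumption:

```typescript
// Hypothetical config loader: fails fast when a required key is missing.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

function loadConfig(env: Record<string, string | undefined>) {
  return {
    pineconeApiKey: requireEnv(env, "PINECONE_API_KEY"),
    openaiApiKey: requireEnv(env, "OPENAI_API_KEY"),
    pineconeHost: requireEnv(env, "PINECONE_HOST"),
    indexName: requireEnv(env, "PINECONE_INDEX_NAME"),
    // DEFAULT_NAMESPACE is treated as optional here (assumption).
    namespace: env.DEFAULT_NAMESPACE ?? "default",
  };
}
```

In the real server you would call `loadConfig(process.env)` once at startup so that misconfiguration surfaces immediately rather than on the first tool call.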

Usage

Running the MCP Server

Start the server:

```sh
bun src/index.ts
```

The server will start and listen for MCP commands via stdio.
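MCP clients speak JSON-RPC 2.0 over that stdio channel; an SDK normally handles the framing, but the message shape is worth seeing. The sketch below builds the kind of `tools/call` request a client would write to the server's stdin (the query and `topK` values are illustrative):

```typescript
// Illustrative JSON-RPC 2.0 envelope for invoking an MCP tool.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "search-vectors",
    arguments: {
      query: "how do I rotate API keys?",
      topK: 5,
    },
  },
};

// Messages travel as serialized JSON over the stdio transport.
const wire = JSON.stringify(request);
```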

Running the Example Client

Test the server with the example client:

```sh
bun examples/client.ts
```

Processing Confluence Data

The Confluence processing script provides detailed logging and verification:

```sh
bun src/scripts/process-confluence.ts <file-path> [collection] [scope]
```

Parameters:

  • file-path: Path to your Confluence JSON file (required)
  • collection: Document collection name (defaults to "documentation")
  • scope: Document scope (defaults to "documentation")

Example:

```sh
bun src/scripts/process-confluence.ts ./data/confluence-export.json "tech-docs" "engineering"
```

The script will:

  1. Validate input parameters
  2. Process and vectorize the content
  3. Upload vectors in batches
  4. Verify successful upload
  5. Provide detailed logs of the process
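Step 3 above uploads vectors in batches rather than one request per vector. A minimal sketch of a generic batching helper is below; `chunkIntoBatches` is hypothetical, and the batch size is whatever the script chooses, not a documented value:

```typescript
// Hypothetical helper: split an array into fixed-size batches for upload.
function chunkIntoBatches<T>(items: T[], batchSize: number): T[][] {
  if (batchSize <= 0) {
    throw new Error("batchSize must be positive");
  }
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```

Each batch would then be sent to Pinecone in a single upsert call, which keeps request counts low and makes partial-failure retries tractable.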

Available Tools

The server provides the following tools:

  1. search-vectors - Search for similar documents with parameters:

    • query: string (search query text)
    • topK: number (1-100, default: 5)
    • filter: object (optional filter criteria)
  2. add-vector - Add a single document with parameters:

    • text: string (content to vectorize)
    • metadata: object (vector metadata)
    • id: string (optional custom ID)
  3. process-confluence - Process Confluence JSON data with parameters:

    • filePath: string (path to JSON file)
    • namespace: string (optional, defaults to "capella-document-search")
  4. delete-vectors - Delete vectors with parameters:

    • ids: string[] (list of vector IDs)
    • namespace: string (optional, defaults to "capella-document-search")
  5. get-stats - Get database statistics (temporarily disabled)
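The `search-vectors` parameters above can be mirrored as a TypeScript type. The interface follows the README's names and defaults; `withSearchDefaults` is a hypothetical normalizer, not an exported function of this server:

```typescript
interface SearchVectorsParams {
  query: string;                     // search query text
  topK?: number;                     // 1-100, default 5
  filter?: Record<string, unknown>;  // optional filter criteria
}

// Hypothetical normalizer: applies the documented default and clamps
// topK into the documented 1-100 range.
function withSearchDefaults(params: SearchVectorsParams) {
  const topK = Math.min(100, Math.max(1, Math.floor(params.topK ?? 5)));
  return { ...params, topK };
}
```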

Database Configuration

The server requires a Pinecone vector database. Configure the connection details in your .env file:

```
PINECONE_API_KEY=your-api-key
PINECONE_HOST=your-host
PINECONE_INDEX_NAME=your-index
DEFAULT_NAMESPACE=your-namespace
```

Metadata Schema

Confluence Documents

```
ID: confluence-[page-id]-[item-id]
title: [title]
pageId: [page-id]
spaceKey: [space-key]
type: [type]
content: [text-content]
author: [author-name]
source: "confluence"
collection: "documentation"
scope: "documentation"
...
```
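The schema above translates naturally into a TypeScript shape. The interface and the `buildConfluenceId` helper below are illustrative (the trailing `...` in the schema means additional fields may exist beyond these):

```typescript
// Sketch of the documented Confluence metadata fields; not the
// server's actual type definition.
interface ConfluenceVectorMetadata {
  title: string;
  pageId: string;
  spaceKey: string;
  type: string;
  content: string;
  author: string;
  source: "confluence";
  collection: string; // e.g. "documentation"
  scope: string;      // e.g. "documentation"
}

// Hypothetical helper implementing the documented ID pattern
// "confluence-[page-id]-[item-id]".
function buildConfluenceId(pageId: string, itemId: string): string {
  return `confluence-${pageId}-${itemId}`;
}
```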

Contributing

  1. Fork the repository
  2. Create your feature branch: `git checkout -b feature/my-new-feature`
  3. Commit your changes: `git commit -am 'Add some feature'`
  4. Push to the branch: `git push origin feature/my-new-feature`
  5. Submit a pull request

License

MIT
