
Crawl4AI Web Scraper MCP Server

License: MIT

This project provides an MCP (Model Context Protocol) server that uses the crawl4ai library to perform web scraping and intelligent content extraction tasks. It allows AI agents (like Claude, or agents built with LangChain/LangGraph) to interact with web pages, retrieve content, search for specific text, and perform LLM-based extraction based on natural language instructions.

This server uses:

  • FastMCP: For creating the MCP server endpoint.
  • crawl4ai: For the core web crawling and extraction logic.
  • dotenv: For managing API keys via a .env file.
  • (Optional) Docker: For containerized deployment, bundling Python and dependencies.

Features

  • Exposes MCP tools for web interaction:
    • scrape_url: Get the full content of a webpage in Markdown format.
    • extract_text_by_query: Find specific text snippets on a page based on a query.
    • smart_extract: Use an LLM (currently Google Gemini) to extract structured information based on instructions.
  • Configurable via environment variables (API keys).
  • Includes Docker configuration (Dockerfile) for easy, self-contained deployment.
  • Communicates over Server-Sent Events (SSE) on port 8002 by default.

Exposed MCP Tools

scrape_url

Scrape a webpage and return its content in Markdown format.

Arguments:

  • url (str, required): The URL of the webpage to scrape.

Returns:

  • (str): The webpage content in Markdown format, or an error message.
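
crawl4ai performs the crawl and Markdown conversion internally. Purely as an illustration of what "content in Markdown format" means, here is a minimal stdlib-only sketch of HTML-to-Markdown conversion for a few tags — this is not the library's implementation, which is far more complete:

```python
from html.parser import HTMLParser

class TinyMarkdown(HTMLParser):
    """Naive HTML-to-Markdown converter handling headings, paragraphs,
    and links only. Illustrative sketch, not crawl4ai's real conversion."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            # <h2> becomes "## ", etc.
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "p":
            self.out.append("\n")
        elif tag == "a":
            self.href = dict(attrs).get("href")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag == "a" and self.href:
            self.out.append(f"]({self.href})")
            self.href = None

    def handle_data(self, data):
        self.out.append(data)

def html_to_markdown(html: str) -> str:
    parser = TinyMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()
```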

extract_text_by_query

Extract relevant text snippets from a webpage that contain a specific search query. Returns up to the first 5 matches found.

Arguments:

  • url (str, required): The URL of the webpage to search within.
  • query (str, required): The text query to search for (case-insensitive).
  • context_size (int, optional): The number of characters to include before and after the matched query text in each snippet. Defaults to 300.

Returns:

  • (str): A formatted string containing the found text snippets or a message indicating no matches were found, or an error message.
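
The snippet behaviour described above — case-insensitive matching, `context_size` characters of context on each side, at most 5 matches — can be sketched in plain Python. This is an illustrative reimplementation, not the server's actual code:

```python
def find_snippets(text: str, query: str, context_size: int = 300,
                  max_matches: int = 5) -> list[str]:
    """Return up to max_matches snippets of `text` that contain `query`
    (case-insensitive), each padded with context_size chars of context."""
    snippets = []
    haystack = text.lower()
    needle = query.lower()
    start = 0
    while len(snippets) < max_matches:
        i = haystack.find(needle, start)
        if i == -1:
            break
        lo = max(0, i - context_size)
        hi = min(len(text), i + len(needle) + context_size)
        snippets.append(text[lo:hi])
        start = i + len(needle)  # continue searching after this match
    return snippets
```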

smart_extract

Intelligently extract specific information from a webpage using the configured LLM (currently requires a Google Gemini API key), based on a natural language instruction.

Arguments:

  • url (str, required): The URL of the webpage to analyze and extract from.
  • instruction (str, required): Natural language instruction specifying what information to extract (e.g., "List all the speakers mentioned on this page", "Extract the main contact email address", "Summarize the key findings").

Returns:

  • (str): The extracted information (often formatted as JSON or structured text based on the instruction) or a message indicating no relevant information was found, or an error message (including if the required API key is missing).
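
Because smart_extract depends on GOOGLE_API_KEY, the missing-key error path comes from a guard along these lines. This is an illustrative sketch only; the real tool hands the instruction off to crawl4ai's LLM extraction strategy:

```python
import os

def smart_extract_guard(url: str, instruction: str) -> str:
    """Sketch of the API-key check smart_extract performs before calling
    the LLM. Only the error path is modelled; the actual crawl and LLM
    extraction (done via crawl4ai) are omitted."""
    api_key = os.environ.get("GOOGLE_API_KEY")
    if not api_key:
        return ("Error: GOOGLE_API_KEY is not set. "
                "smart_extract requires a Google Gemini API key.")
    # With a key present, the server would configure an LLM extraction
    # strategy using `instruction` and crawl `url` (omitted here).
    return f"Would extract from {url} using instruction: {instruction!r}"
```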

Setup and Running

You can run this server either locally or using the provided Docker configuration.

Option 1: Running with Docker (Recommended for Deployment)

This method bundles Python and all necessary libraries. You only need Docker installed on the host machine.

  1. Install Docker: Download and install Docker Desktop for your OS. Start Docker Desktop.
  2. Clone Repository:
    git clone https://github.com/your-username/your-repo-name.git # Replace with your repo URL
    cd your-repo-name
    
  3. Create .env File: Create a file named .env in the project root directory and add your API keys:
    # Required for the smart_extract tool
    GOOGLE_API_KEY=your_google_ai_api_key_here
    
    # Optional, checked by server but not currently used by tools
    # OPENAI_API_KEY=your_openai_key_here
    # MISTRAL_API_KEY=your_mistral_key_here
    
  4. Build the Docker Image:
    docker build -t crawl4ai-mcp-server .
    
  5. Run the Container: This starts the server, making port 8002 available on your host machine. It uses --env-file to securely pass the API keys from your local .env file into the container's environment.
    docker run -it --rm -p 8002:8002 --env-file .env crawl4ai-mcp-server
    
    • -it: Runs interactively.
    • --rm: Removes container on exit.
    • -p 8002:8002: Maps host port 8002 to container port 8002.
    • --env-file .env: Loads environment variables from your local .env file into the container. Crucial for API keys.
    • crawl4ai-mcp-server: The name of the image you built.
  6. Server is Running: Logs will appear, indicating the server is listening on SSE (http://0.0.0.0:8002).
  7. Connecting Client: Configure your MCP client (e.g., LangChain agent) to connect to http://127.0.0.1:8002/sse with transport: "sse".
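
The connection settings from step 7 usually live in the client's MCP configuration. The exact schema depends on the client, but for clients that support SSE servers it looks roughly like this (the server name `crawl4ai` is an arbitrary label, not required by the server):

```json
{
  "mcpServers": {
    "crawl4ai": {
      "transport": "sse",
      "url": "http://127.0.0.1:8002/sse"
    }
  }
}
```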

Option 2: Running Locally

This requires Python and manual installation of dependencies on your host machine.

  1. Install Python: Ensure Python >= 3.9 (check crawl4ai requirements if needed, 3.10+ recommended).
  2. Clone Repository:
    git clone https://github.com/your-username/your-repo-name.git # Replace with your repo URL
    cd your-repo-name
    
  3. Create Virtual Environment (Recommended):
    python -m venv venv
    source venv/bin/activate # Linux/macOS
    # venv\Scripts\activate # Windows
    
    (Or use Conda: conda create --name crawl4ai-env python=3.11 -y && conda activate crawl4ai-env)
  4. Install Dependencies:
    pip install -r requirements.txt
    
  5. Create .env File: Create a file named .env in the project root directory and add your API keys (same content as in Docker setup step 3).
  6. Run the Server:
    python your_server_script_name.py # e.g., python webcrawl_mcp_server.py
    
  7. Server is Running: It will listen on http://127.0.0.1:8002/sse.
  8. Connecting Client: Configure your MCP client to connect to http://127.0.0.1:8002/sse.

Environment Variables

The server uses the following environment variables, typically loaded from a .env file:

  • GOOGLE_API_KEY: Required for the smart_extract tool to function (uses Google Gemini). Get one from Google AI Studio.
  • OPENAI_API_KEY: Checked for existence but not currently used by any tool in this version.
  • MISTRAL_API_KEY: Checked for existence but not currently used by any tool in this version.
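
The .env format is plain KEY=VALUE lines. The server loads it with python-dotenv's load_dotenv(); a minimal stdlib sketch of the parsing (illustrative only — python-dotenv additionally handles quoting and variable expansion) looks like this:

```python
def parse_env(text: str) -> dict:
    """Minimal .env parser sketch: KEY=VALUE lines, with '#' comment
    lines and blank lines ignored. Not a python-dotenv replacement."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```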

Example Agent Interaction

# Example session with an MCP-enabled agent CLI

You: scrape_url https://example.com
Agent: Thinking...
[Agent calls scrape_url tool]
Agent: [Markdown content of example.com]
------------------------------
You: extract text from https://en.wikipedia.org/wiki/Web_scraping using the query "ethical considerations"
Agent: Thinking...
[Agent calls extract_text_by_query tool]
Agent: Found X matches for 'ethical considerations' on the page. Here are the relevant sections:
Match 1:
... text snippet ...
---
Match 2:
... text snippet ...
------------------------------
You: Use smart_extract on https://blog.google/technology/ai/google-gemini-ai/ to get the main points about Gemini models
Agent: Thinking...
[Agent calls smart_extract tool with Google API Key]
Agent: Successfully extracted information based on your instruction:
{
  "main_points": [
    "Gemini is Google's most capable AI model family (Ultra, Pro, Nano).",
    "Designed to be multimodal, understanding text, code, audio, image, video.",
    "Outperforms previous models on various benchmarks.",
    "Being integrated into Google products like Bard and Pixel."
  ]
}

Files

  • your_server_script_name.py: The main Python script for the MCP server (e.g., webcrawl_mcp_server.py).
  • Dockerfile: Instructions for building the Docker container image.
  • requirements.txt: Python dependencies.
  • .env.example: (Recommended) An example environment file showing needed keys. Do not commit your actual .env file.
  • .gitignore: Specifies intentionally untracked files for Git (should include .env).
  • README.md: This file.

Contributing

(Add contribution guidelines if desired)

License

(Specify your license, e.g., MIT License)
