HYDRA-AI Frontend

HYDRA-AI is a modern AI chat application built with React, TypeScript, and Tailwind CSS. The project utilizes the latest web technology stack to provide a smooth, responsive user experience for blockchain project analysis and AI assistant interactions.

Features

  • 🔐 User authentication system with token-based authorization and refresh functionality
  • 💬 Real-time AI chat functionality with SSE (Server-Sent Events) streaming responses
  • 🔍 Blockchain project analysis capabilities through specialized AI assistants
  • 📊 Multi-stage processing visualization for complex analysis tasks
  • 💾 Local chat history management with persistence
  • 🔔 Toast notifications and error handling system
  • 🎨 Modern UI using Tailwind CSS and Shadcn components
  • 📱 Responsive design with specialized layouts for different devices (including iOS)
  • 🚀 Built with Vite, providing a fast development experience
  • 👛 Wallet connectivity features for blockchain interactions
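
The local chat history persistence mentioned above could be sketched as follows. This is an illustrative sketch, not the project's actual code: the `ChatMessage` shape, key name, and function names are assumptions, and the storage interface is injectable so the logic can be exercised without a browser `localStorage`.

```typescript
// Hypothetical sketch of local chat history persistence.
export interface ChatMessage {
    role: "system" | "user" | "assistant";
    content: string;
}

// Minimal subset of the Web Storage API, so tests can supply an in-memory store.
export interface StringStore {
    getItem(key: string): string | null;
    setItem(key: string, value: string): void;
}

const HISTORY_KEY = "hydra-chat-history"; // assumed key name

// Serialize the conversation into the store.
export function saveHistory(store: StringStore, messages: ChatMessage[]): void {
    store.setItem(HISTORY_KEY, JSON.stringify(messages));
}

// Load history, falling back to an empty conversation on missing or corrupt data.
export function loadHistory(store: StringStore): ChatMessage[] {
    const raw = store.getItem(HISTORY_KEY);
    if (raw === null) return [];
    try {
        return JSON.parse(raw) as ChatMessage[];
    } catch {
        return [];
    }
}
```

In the browser, `window.localStorage` satisfies the `StringStore` interface directly.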

Tech Stack

  • Framework: React 18
  • Routing: React Router v7
  • State Management: React Context API
  • Styling: Tailwind CSS
  • UI Components: Radix UI / Shadcn
  • Build Tool: Vite
  • Package Manager: pnpm
  • Language: TypeScript
  • Animation: Framer Motion
  • API Communication: Fetch API with SSE support

Development Environment Setup

Prerequisites

  • Node.js (Recommended v18+)
  • pnpm (v10+)

Installation

  1. Clone the repository

     git clone https://github.com/hydra-mcp/hydra-mcp-solana.git
     cd hydra-mcp-solana

  2. Install dependencies

     pnpm install

  3. Configure environment variables

     Create a .env.local file (or edit the existing one):

     VITE_API_BASE_URL=your_api_endpoint

  4. Start the development server

     pnpm dev

The application will run on http://localhost:5173.

Application Structure

The application is organized into the following key components:

Core Components

  • Chat Interface: A full-featured chat UI with message history, streaming responses, and context management
  • Authentication System: Login page with token-based authentication
  • Wallet Integration: Connection to blockchain wallets for crypto interactions
  • Error Handling: Global error boundary and API error handling system
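
One piece an API error handling system typically needs is a normalizer that turns raw HTTP failures into user-facing errors. The sketch below is an assumption about how such a layer might look (the `ApiError` shape and `normalizeApiError` name are illustrative, not the project's actual API).

```typescript
// Illustrative normalization of HTTP failures into a uniform error shape.
export interface ApiError {
    status: number;
    message: string;
    retryable: boolean;
}

export function normalizeApiError(status: number, body?: { detail?: string }): ApiError {
    // Prefer a server-provided detail message when present.
    const message =
        body?.detail ??
        (status === 401 ? "Session expired, please log in again"
            : status >= 500 ? "Server error, please try again later"
            : "Request failed");
    // 401 can trigger the token refresh flow; 5xx responses are safe to retry.
    return { status, message, retryable: status === 401 || status >= 500 };
}
```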

Pages

  • Home: Landing page for the application
  • ChatPage: Main chat interface with AI assistant
  • WalletFinder: Interface for connecting to blockchain wallets
  • IOSDesktop: Specialized interface for iOS devices

Build and Deployment

Production Build

pnpm build

The built files will be located in the dist directory.

Deployment Process

Using Caddy Server (Recommended)

  1. Install Caddy Server

    Refer to the official Caddy documentation for installation instructions.

  2. Configure Caddyfile

    Create or edit the Caddyfile:

    your-domain.com {
      root * /path/to/hydra-front/dist
    
      # Set up SPA routing
      try_files {path} {path}/ /index.html
    
      # Define static resource matcher
      @static {
        path *.css *.js *.ico *.gif *.jpg *.jpeg *.png *.svg *.webp *.woff *.woff2
      }
    
      # Static resource cache settings
      header @static Cache-Control "public, max-age=31536000, immutable"
    
      # HTML file cache settings
      @html {
        path *.html
      }
      header @html Cache-Control "no-cache, no-store, must-revalidate"
    
      # API proxy settings (if needed)
      reverse_proxy /api/* your_backend_api_server
    
      # Enable file server
      file_server
    }
    
  3. Start Caddy Server

    caddy run
    

Using Docker

  1. Build with the Dockerfile

    The project includes a Dockerfile that can be built directly:

    docker build -t hydra-front .
    docker run -d -p 80:80 hydra-front
    
  2. Use Docker Compose

    The project provides a docker-compose.yml file for deploying the frontend and backend together:

    # Start the service
    docker-compose up -d
    
    # View logs
    docker-compose logs -f
    
    # Stop the service
    docker-compose down
    

    Note: Adjust the configuration in docker-compose.yml to match your environment before use.

Environment Variables

  • VITE_API_BASE_URL: API server base URL
  • VITE_BASE_URL: Optional alternative API base URL (for development/testing)
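
In a Vite app, these values are read from `import.meta.env` at build time. A small helper like the one below can join the configured base URL with endpoint paths without producing double or missing slashes; the `apiUrl` name is an assumption for illustration, and the base is passed in explicitly so the logic is testable outside Vite.

```typescript
// Join an API base URL and an endpoint path, normalizing slashes.
// In app code the base would come from import.meta.env.VITE_API_BASE_URL.
export function apiUrl(base: string, path: string): string {
    return `${base.replace(/\/+$/, "")}/${path.replace(/^\/+/, "")}`;
}
```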

Project Structure

src/
├── components/        # Reusable UI components
│   ├── chat/          # Chat-related components
│   ├── ui/            # Core UI components (Shadcn)
│   ├── phantom/       # Wallet connection components
│   └── ios/           # iOS-specific components
├── contexts/          # React contexts for state management
├── hooks/             # Custom React hooks
├── layouts/           # Layout components
├── lib/               # Utility functions and API clients
│   ├── api.ts         # API communication layer
│   ├── sse.ts         # Server-Sent Events implementation
│   └── utils.ts       # General utility functions
├── pages/             # Application pages
├── types/             # TypeScript type definitions
├── App.tsx            # Main application component with routes
├── Login.tsx          # Authentication page
├── index.css          # Global styles
└── main.tsx           # Application entry point

API Documentation

The HYDRA-AI frontend interacts with the AI assistant through the /agent/chat/completions endpoint, which implements the blockchain project analysis functionality. The endpoint's structure is similar to OpenAI's Chat Completions API, so it should feel familiar to frontend developers who have worked with LLM APIs. For the complete reference, see the API documentation.

Endpoint

POST /agent/chat/completions

Authentication

  • Requires an authenticated user session
  • Uses JWT authentication (managed by the get_current_active_user dependency)

Request Format

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Analyze the project at address 0x123..."
    }
  ],
  "stream": true,
  "temperature": 0.7,
  "max_tokens": 1024,
  "project_context": {
    "additional_context": "any relevant context"
  }
}
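
A request body like the one above might be assembled with a small builder before being POSTed with fetch. This is a hedged sketch: the field names mirror the example payload, but the defaults and the bearer-token handling shown in the comment are assumptions.

```typescript
export interface ChatMessage { role: string; content: string; }

// Build the JSON payload for POST /agent/chat/completions.
export function buildChatRequest(
    messages: ChatMessage[],
    opts: { model?: string; stream?: boolean; temperature?: number } = {},
) {
    return {
        model: opts.model ?? "gpt-4",
        messages,
        stream: opts.stream ?? true,
        temperature: opts.temperature ?? 0.7,
    };
}

// Usage in app code (illustrative):
// await fetch(`${baseUrl}/agent/chat/completions`, {
//     method: "POST",
//     headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
//     body: JSON.stringify(buildChatRequest([{ role: "user", content: "Analyze ..." }])),
// });
```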

Response Types

The API provides two response modes:

  1. Non-streaming response - the full response is returned in a single payload
  2. Streaming response - delivered in Server-Sent Events (SSE) format, with the following event types:
    • Stage event - marks the different stages of the analysis process
    • Content event - carries the actual content blocks
    • Error event - carries error information
    • Done event - marks the end of the stream
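
On the client, each SSE message block (lines separated by `\n`, blocks by a blank line) has to be parsed into one of these event types. The sketch below assumes the standard SSE wire format with `event:` and `data:` fields; the exact framing the backend emits may differ.

```typescript
export interface SseEvent {
    event: string; // e.g. "stage" | "content" | "error" | "done"
    data: string;
}

// Parse one SSE message block into an event; returns null for
// comment/keep-alive blocks that carry neither an event name nor data.
export function parseSseBlock(block: string): SseEvent | null {
    let event: string | null = null;
    const dataLines: string[] = [];
    for (const line of block.split("\n")) {
        if (line.startsWith("event:")) event = line.slice("event:".length).trim();
        else if (line.startsWith("data:")) dataLines.push(line.slice("data:".length).trim());
    }
    if (event === null && dataLines.length === 0) return null;
    return { event: event ?? "message", data: dataLines.join("\n") };
}
```

A streaming reader would buffer the fetch response body, split it on blank lines, and feed each block through this parser, appending `content` events to the visible message until a `done` event arrives.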

Contribution

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add some amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

License

Apache 2.0
