
Lara Translate MCP Server
A Model Context Protocol (MCP) Server for Lara Translate API, enabling powerful translation capabilities with support for language detection, context-aware translations and translation memories.
📚 Table of Contents
- 📖 Introduction
- 🛠 Available Tools
- 🚀 Getting Started
- 🧩 Installation Engines
- 💻 Popular Clients that Support MCP
- 🆘 Support
📖 Introduction
What is MCP?
Model Context Protocol (MCP) is an open standardized communication protocol that enables AI applications to connect with external tools, data sources, and services. Think of MCP like a USB-C port for AI applications - just as USB-C provides a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources and tools.
Lara Translate MCP Server enables AI applications to access Lara Translate's powerful translation capabilities through this standardized protocol.
More info about Model Context Protocol on: https://modelcontextprotocol.io/
How Lara Translate MCP Works
Lara Translate MCP Server implements the Model Context Protocol to provide seamless translation capabilities to AI applications. The integration follows this flow:
- Connection Establishment: When an MCP-compatible AI application starts, it connects to configured MCP servers, including the Lara Translate MCP Server
- Tool & Resource Discovery: The AI application discovers available translation tools and resources provided by the Lara Translate MCP Server
- Request Processing: When translation needs are identified:
- The AI application formats a structured request with text to translate, language pairs, and optional context
- The MCP server validates the request and transforms it into Lara Translate API calls
- The request is securely sent to Lara Translate's API using your credentials
- Translation & Response: Lara Translate processes the translation using advanced AI models
- Result Integration: The translation results are returned to the AI application, which can then incorporate them into its response
This integration architecture allows AI applications to access professional-grade translations without implementing the API directly, while maintaining the security of your API credentials and offering flexibility to adjust translation parameters through natural language instructions.
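As a concrete illustration of the request-processing step, the sketch below shows roughly what a tool invocation looks like on the wire. MCP is built on JSON-RPC 2.0, so the client issues a tools/call request naming the tool and its arguments; the exact payload your client sends may differ, and the values here are purely illustrative:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "translate",
    "arguments": {
      "text": [
        { "text": "Hello world", "translatable": true }
      ],
      "target": "it-IT",
      "context": "Greeting shown on the application's landing page"
    }
  }
}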
Why Use Lara inside an LLM
Integrating Lara with LLMs creates a powerful synergy that significantly enhances translation quality for non-English languages.
Why General LLMs Fall Short in Translation
While large language models possess broad linguistic capabilities, they often lack the specialized expertise and up-to-date terminology required for accurate translations in specific domains and languages.
Lara’s Domain-Specific Advantage
Lara overcomes this limitation by leveraging Translation Language Models (T-LMs) trained on billions of professionally translated segments. These models provide domain-specific machine translation that captures cultural nuances and industry terminology that generic LLMs may miss. The result: translations that are contextually accurate and sound natural to native speakers.
Designed for Non-English Strength
Lara has a strong focus on non-English languages, addressing the performance gap found in models such as GPT-4. The dominance of English in datasets such as Common Crawl and Wikipedia results in lower quality output in other languages. Lara helps close this gap by providing higher quality understanding, generation, and restructuring in a multilingual context.
Faster, Smarter Multilingual Performance
By offloading complex translation tasks to specialized T-LMs, Lara reduces computational overhead and minimizes latency, a common issue for LLMs handling non-English input. Its architecture processes translations in parallel with the LLM, enabling real-time, high-quality output without compromising speed or efficiency.
Cost-Efficient Translation at Scale
Lara also lowers the cost of using models like GPT-4 in non-English workflows. Because LLM tokenization (and therefore pricing) is optimized for English, Lara can translate content into English before it reaches the LLM, so only the translated English text is tokenized. This improves cost efficiency and supports competitive scalability for global enterprises.
🛠 Available Tools
Translation Tools
translate - Translate text between languages
Inputs:
- text (array): An array of text blocks to translate, each with:
  - text (string): The text content
  - translatable (boolean): Whether this block should be translated
- source (optional string): Source language code (e.g., 'en-EN')
- target (string): Target language code (e.g., 'it-IT')
- context (optional string): Additional context to improve translation quality
- instructions (optional string[]): Instructions to adjust translation behavior
- source_hint (optional string): Guidance for language detection
Returns: Translated text blocks maintaining the original structure
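The translatable flag is what lets you send mixed content, such as prose interleaved with markup or placeholders, in a single call and have only the prose translated, with the other blocks presumably passed through untouched. A minimal, illustrative arguments object using only the parameters listed above (all values are made up):
{
  "text": [
    { "text": "Click ", "translatable": true },
    { "text": "<b>Save</b>", "translatable": false },
    { "text": " to store your changes.", "translatable": true }
  ],
  "source": "en-EN",
  "target": "it-IT",
  "instructions": ["Use a formal tone"]
}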
Translation Memories Tools
list_memories - List saved translation memories
Returns: Array of memories and their details
create_memory - Create a new translation memory
Inputs:
- name (string): Name of the new memory
- external_id (optional string): ID of the memory to import from MyMemory (e.g., 'ext_my_[MyMemory ID]')
Returns: Created memory data
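For example, creating a memory and optionally seeding it from MyMemory would take arguments shaped like this (the memory name is invented and the MyMemory ID placeholder is left as documented above):
{
  "name": "product-docs-en-it",
  "external_id": "ext_my_[MyMemory ID]"
}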
update_memory - Update translation memory name
Inputs:
- id (string): ID of the memory to update
- name (string): The new name for the memory
Returns: Updated memory data
delete_memory - Delete a translation memory
Inputs:
- id (string): ID of the memory to delete
Returns: Deleted memory data
add_translation - Add a translation unit to memory
Inputs:
- id (string | string[]): ID or IDs of the memories to add the translation unit to
- source (string): Source language code
- target (string): Target language code
- sentence (string): The source sentence
- translation (string): The translated sentence
- tuid (optional string): Translation Unit unique identifier
- sentence_before (optional string): Context sentence before
- sentence_after (optional string): Context sentence after
Returns: Added translation details
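For instance, a single translation unit could be pushed into two memories at once with arguments along these lines (memory IDs, language codes, and sentences are purely illustrative; the optional context fields help disambiguate short segments):
{
  "id": ["mem_example_one", "mem_example_two"],
  "source": "en-US",
  "target": "it-IT",
  "sentence": "Your order has shipped.",
  "translation": "Il tuo ordine è stato spedito.",
  "sentence_before": "Thank you for your purchase.",
  "sentence_after": "You can track it from your account."
}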
delete_translation - Delete a translation unit from memory
Inputs:
- id (string): ID of the memory
- source (string): Source language code
- target (string): Target language code
- sentence (string): The source sentence
- translation (string): The translated sentence
- tuid (optional string): Translation Unit unique identifier
- sentence_before (optional string): Context sentence before
- sentence_after (optional string): Context sentence after
Returns: Removed translation details
import_tmx - Import a TMX file into a memory
Inputs:
- id (string): ID of the memory to update
- tmx (file path): The path of the TMX file to upload
- gzip (boolean): Indicates if the file is compressed (.gz)
Returns: Import details
check_import_status - Checks the status of a TMX file import
Inputs:
- id (string): The ID of the import job
Returns: Import details
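Since imports run as a job, a typical flow is to call import_tmx and then poll check_import_status with the returned job ID. Illustrative arguments for the two calls (the memory ID and file path are made up, and the job ID placeholder stands in for whatever import_tmx returns):
import_tmx:
{
  "id": "mem_example_one",
  "tmx": "/path/to/glossary.tmx.gz",
  "gzip": true
}
check_import_status:
{
  "id": "<IMPORT_JOB_ID>"
}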
🚀 Getting Started
📋 Requirements
- Lara Translate API Credentials
- To get them you can refer to the Official Documentation
- An LLM client that supports Model Context Protocol (MCP), such as Claude Desktop, Cursor, or GitHub Copilot
- NPX or Docker (depending on your preferred installation method)
🔌 Installation
Introduction
The installation process is standardized across all MCP clients. It involves manually adding a configuration object to your client's MCP configuration JSON file.
If you're unsure how to configure an MCP with your client, please refer to your MCP client's official documentation.
Lara Translate MCP supports multiple installation methods, including NPX and Docker.
Below, we'll use NPX as an example.
Installation & Configuration
Step 1: Open your client's MCP configuration JSON file with a text editor, then copy and paste the following snippet:
{
"mcpServers": {
"lara-translate": {
"command": "npx",
"args": [
"-y",
"@translated/lara-mcp@latest"
],
"env": {
"LARA_ACCESS_KEY_ID": "<YOUR_ACCESS_KEY_ID>",
"LARA_ACCESS_KEY_SECRET": "<YOUR_ACCESS_KEY_SECRET>"
}
}
}
}
Step 2: Replace <YOUR_ACCESS_KEY_ID> and <YOUR_ACCESS_KEY_SECRET> with your Lara Translate API credentials (refer to the Official Documentation for details).
Step 3: Restart your MCP client.
Verify Installation
After restarting your MCP client, you should see Lara Translate MCP in the list of available MCPs.
The method for viewing installed MCPs varies by client. Please consult your MCP client's documentation.
To verify that Lara Translate MCP is working correctly, try translating with a simple prompt:
Translate with Lara "Hello world" to Spanish
Your MCP client will begin generating a response. If Lara Translate MCP is properly installed and configured, your client will either request approval for the action or display a notification that Lara Translate is being used.
🧩 Installation Engines
Option 1: Using NPX
This option requires Node.js to be installed on your system.
- Add the following to your MCP configuration file:
{
"mcpServers": {
"lara-translate": {
"command": "npx",
"args": ["-y", "@translated/lara-mcp@latest"],
"env": {
"LARA_ACCESS_KEY_ID": "<YOUR_ACCESS_KEY_ID>",
"LARA_ACCESS_KEY_SECRET": "<YOUR_ACCESS_KEY_SECRET>"
}
}
}
}
- Replace <YOUR_ACCESS_KEY_ID> and <YOUR_ACCESS_KEY_SECRET> with your actual Lara API credentials.
Option 2: Using Docker
This option requires Docker to be installed on your system.
- Add the following to your MCP configuration file:
{
"mcpServers": {
"lara-translate": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"LARA_ACCESS_KEY_ID",
"-e",
"LARA_ACCESS_KEY_SECRET",
"translatednet/lara-mcp:latest"
],
"env": {
"LARA_ACCESS_KEY_ID": "<YOUR_ACCESS_KEY_ID>",
"LARA_ACCESS_KEY_SECRET": "<YOUR_ACCESS_KEY_SECRET>"
}
}
}
}
- Replace <YOUR_ACCESS_KEY_ID> and <YOUR_ACCESS_KEY_SECRET> with your actual Lara API credentials.
Option 3: Building from Source
Using Node.js
- Clone the repository:
git clone https://github.com/translated/lara-mcp.git
cd lara-mcp
- Install dependencies and build:
# Install dependencies
pnpm install
# Build
pnpm run build
- Add the following to your MCP configuration file:
{
"mcpServers": {
"lara-translate": {
"command": "node",
"args": ["<FULL_PATH_TO_PROJECT_FOLDER>/dist/index.js"],
"env": {
"LARA_ACCESS_KEY_ID": "<YOUR_ACCESS_KEY_ID>",
"LARA_ACCESS_KEY_SECRET": "<YOUR_ACCESS_KEY_SECRET>"
}
}
}
}
- Replace:
  - <FULL_PATH_TO_PROJECT_FOLDER> with the absolute path to your project folder
  - <YOUR_ACCESS_KEY_ID> and <YOUR_ACCESS_KEY_SECRET> with your actual Lara API credentials.
Building a Docker Image
- Clone the repository:
git clone https://github.com/translated/lara-mcp.git
cd lara-mcp
- Build the Docker image:
docker build -t lara-mcp .
- Add the following to your MCP configuration file:
{
"mcpServers": {
"lara-translate": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"LARA_ACCESS_KEY_ID",
"-e",
"LARA_ACCESS_KEY_SECRET",
"lara-mcp"
],
"env": {
"LARA_ACCESS_KEY_ID": "<YOUR_ACCESS_KEY_ID>",
"LARA_ACCESS_KEY_SECRET": "<YOUR_ACCESS_KEY_SECRET>"
}
}
}
}
- Replace <YOUR_ACCESS_KEY_ID> and <YOUR_ACCESS_KEY_SECRET> with your actual credentials.
💻 Popular Clients that Support MCP
For a complete list of MCP clients and their feature support, visit the official MCP clients page.
| Client | Description |
|---|---|
| Claude Desktop | Desktop application for Claude AI |
| Cursor | AI-first code editor |
| Cline for VS Code | VS Code extension for AI assistance |
| GitHub Copilot MCP | VS Code extension for GitHub Copilot MCP integration |
| Windsurf | AI-powered code editor and development environment |
🆘 Support
- For issues with Lara Translate API: Contact Lara Support
- For issues with this MCP Server: Open an issue on GitHub