Use Lina and local HY-MT1.5-1.8B to translate localization entries
This guide describes a localization translation practice: deploy a translation-specific small model locally, expose it as an OpenAI-compatible service, and configure it for Lina to translate localization entries in batches.
This approach is suitable for translating many system entries, plugin text, menus, collection titles, and field labels. Compared with online models, a local model is not subject to an external API's rate limits (requests or tokens per minute) or concurrency caps, and concurrency can be tuned to the machine and the model's capability.
Overview
This guide uses:
- Model: `tencent/HY-MT1.5-1.8B-GGUF`
- Inference service: `llama-server`
- Integration: OpenAI-compatible API
- AI Employee: Lina
- Entry point: Localization Management page
HY-MT1.5-1.8B is a translation-specific small model. It is more suitable for short entries, UI text, and batch translation. General chat models are not recommended as the first choice for localization tasks.
Prerequisites
Before starting, prepare:
- The Localization Management plugin is enabled.
- Target language is enabled.
- Localization entries have been synchronized.
- The local machine or server can run `llama-server`.
- The NocoBase service can access the HTTP address of `llama-server`.
Deploy HY-MT GGUF
Install llama.cpp
On macOS, you can install it with Homebrew:
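For example, assuming the Homebrew `llama.cpp` formula is available:

```shell
# Installs llama-cli, llama-server, and related tools
brew install llama.cpp

# Verify the server binary is on the PATH
llama-server --version
```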
You can also use a prebuilt llama.cpp binary or build it from source; all that matters is that `llama-server` ends up available on the machine.
Start an OpenAI-compatible service
Start the service with the GGUF model from Hugging Face:
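A typical invocation, assuming a recent llama.cpp build with Hugging Face download support (`-hf`); the port, parallelism, and context values are examples to adjust for your hardware:

```shell
# Download the GGUF model from Hugging Face (cached locally) and serve it
# with an OpenAI-compatible API on port 8080, using 2 parallel slots
llama-server -hf tencent/HY-MT1.5-1.8B-GGUF \
  --port 8080 \
  -np 2 \
  -c 4096
```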
If server resources are limited, start with -np 1 or -np 2, then increase gradually after verifying stability.
Test the Model Service
After llama-server starts, check service health:
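For example, assuming the default port 8080:

```shell
# Should return an OK status once the model has finished loading
curl http://127.0.0.1:8080/health
```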
Then test translation through the OpenAI-compatible API:
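A minimal request sketch; `hy-mt1.5-1.8b` is a placeholder model name and the prompt is only an example:

```shell
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hy-mt1.5-1.8b",
    "messages": [
      {"role": "user", "content": "Translate into French: Save"}
    ]
  }'
```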
If you start from a local model file, change model to the actual model name returned or configured by the service.
If a request does not respond for a long time, the model may be too slow, concurrency may be too high, or context may be too large. Lower -np and NocoBase translation concurrency first, then observe response time.
Configure an LLM Service in NocoBase
Go to System Settings -> AI Employees -> LLM service and add an LLM service.
Example configuration:
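A minimal sketch of the form values, assuming `llama-server` listens on port 8080 on the same host; field names and the model name may differ in your environment:

```
Provider:  OpenAI-compatible API
Base URL:  http://127.0.0.1:8080/v1
API Key:   any non-empty placeholder (llama-server does not require one by default)
Model:     the model name returned by GET /v1/models
```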
After configuration, use Test flight to verify the model.
If NocoBase runs in Docker, 127.0.0.1 points to the container itself and may not access the host service. Use the host IP, container network address, or host.docker.internal.
Configure Lina's Dedicated Model
Go to System Settings -> AI Employees -> AI employees, open Lina, and switch to Model settings.
- Enable `Enable dedicated model configuration`.
- Select the HY-MT local model in `Models`.
- Save the configuration.
After this, Lina uses this model for localization translation tasks, preventing users or tasks from switching to general chat models.
For details, see Configure AI Employee Models.
Configure Translation Concurrency
Localization translation task concurrency is controlled by AI_LOCALIZATION_CONCURRENCY:
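For example, in the NocoBase `.env` file (the value `5` is illustrative):

```
AI_LOCALIZATION_CONCURRENCY=5
```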
Rules:
- Default: `10`
- Minimum: `1`
- Maximum: `20`
- Values outside the range fall back to the default.
The best concurrency depends on CPU, GPU, memory, model quantization, and llama-server -np. If the default concurrency causes issues:
- Start with `AI_LOCALIZATION_CONCURRENCY=1` and verify single-entry translation.
- Set both `llama-server -np` and `AI_LOCALIZATION_CONCURRENCY` to `2` or `4`.
- Observe response time, CPU/GPU usage, and task progress.
- Increase concurrency gradually only if the service stays stable.
Do not set concurrency too high at the beginning. If concurrency exceeds actual model capacity, tasks may become slower due to queuing, timeout, or service stalls.
Execute Localization Translation
Go to System Management -> Localization Management.
- Switch to the target language.
- Click `Synchronize` to ensure entries are synchronized.
- Click Lina's avatar.
- Choose a task scope:
  - `Incremental translation`: translate entries without translations.
  - `Selected translation`: translate the entries selected in the table.
  - `Full translation`: translate all entries in the current language.
- Check entry count, provider, and model in the confirmation dialog.
- Confirm to create the async task.
- Wait for completion, review translations, and publish.
Start with Selected translation for a few entries to verify output style and speed before running incremental or full translation.
How Lina Builds Translation Requests
Lina builds requests from entries and reference translations. For short entries, existing references are used to improve consistency:
- Built-in entries prefer Chinese translations as references.
- Non-built-in entries prefer the system default language as references.
- If an English reference exists, English is used as source text.
- Translation results are written to the target language but are not published automatically.
Prompt semantics are similar to:
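The exact prompt is internal to Lina; the following is only an illustrative paraphrase of the rules above, not the actual prompt text:

```
Translate the following UI entries into {target language}.
Use the English text as the source when an English reference exists.
Use the provided reference translations (Chinese for built-in entries,
the system default language otherwise) for terminology consistency.
Output only the translations, preserving variables, placeholders,
and HTML tags.
```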
Troubleshooting
No progress after creating a task
Check whether llama-server received requests. View service logs or call /v1/chat/completions with curl.
If the model receives requests but does not return, reduce:
- `AI_LOCALIZATION_CONCURRENCY`
- `llama-server -np`
- `llama-server -c`
The model returns explanations instead of translations
Local translation models are usually more stable than general chat models. If explanations still appear, test the same prompt with curl first to verify the model's output style.
You can also translate shorter entries first or reduce sampling parameters such as temperature.
NocoBase cannot connect to the model service
Check:
- Whether Base URL includes `/v1`.
- Whether the NocoBase runtime environment can access the address.
- Whether firewall or container networking blocks the port.
- Whether `llama-server` is still running.
Review Before Publishing
After AI translation finishes, review before publishing:
- Filter by module and check short entries such as menus, buttons, field names, and statuses.
- Check variables, placeholders, HTML tags, and formatting symbols.
- Check key business terminology consistency.
- If built-in entry translations are overwritten, resynchronize in Localization Management and select `Reset system built-in entry translations` to restore defaults. To contribute default translations for the system and official plugins, see Translation Contribution.
- Publish in a test environment first, then sync to production.

