Plugin Development
This guide covers plugin development for NocoBase cluster mode (an Enterprise Edition feature).

Background
In a single-node environment, plugins can typically fulfill their requirements through in-process state, events, or tasks. In cluster mode, however, the same plugin may run on multiple instances simultaneously, which raises the following typical issues:
- State consistency: If configuration or runtime data is stored only in memory, it is difficult to synchronize between instances, leading to dirty reads or duplicate executions.
- Task scheduling: Without a clear queuing and confirmation mechanism, long-running tasks can be executed concurrently by multiple instances.
- Race conditions: Operations involving schema changes or resource allocation need to be serialized to avoid conflicts caused by concurrent writes.
The NocoBase core provides various middleware interfaces at the application layer so that plugins can reuse unified capabilities in a cluster environment. The following sections introduce the usage and best practices of caching, sync messaging, message queues, and distributed locks, with source-code references.
Solutions
Cache Component
For data that needs to be stored in memory, it is recommended to use the system's built-in cache component for management.
- Get the default cache instance via `app.cache`. `Cache` provides basic operations such as `set`/`get`/`del`/`reset`, supports `wrap` and `wrapWithCondition` to encapsulate caching logic, and offers batch methods such as `mset`/`mget`/`mdel`.
- When deploying in a cluster, place shared data in persistent storage (such as Redis) so it is not lost on instance restart, and set a reasonable `ttl`.
Example: Cache initialization and usage in plugin-auth
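To make the `wrap` pattern concrete, here is a minimal runnable sketch (not NocoBase source code): the `Cache` type and the `createMemoryCache` stand-in below are hypothetical, modeling only the interface described above; in a real plugin you would use `this.app.cache` instead.

```typescript
// Minimal stand-in for the Cache interface described above (hypothetical).
type Cache = {
  set(key: string, value: unknown, ttl?: number): Promise<void>;
  get<T>(key: string): Promise<T | undefined>;
  del(key: string): Promise<void>;
  wrap<T>(key: string, fn: () => Promise<T>, ttl?: number): Promise<T>;
};

// In-memory implementation so the sketch runs outside NocoBase.
function createMemoryCache(): Cache {
  const store = new Map<string, unknown>();
  return {
    async set(key, value) { store.set(key, value); },
    async get<T>(key: string) { return store.get(key) as T | undefined; },
    async del(key) { store.delete(key); },
    // wrap: return the cached value if present, otherwise compute and cache it
    async wrap<T>(key: string, fn: () => Promise<T>) {
      if (store.has(key)) return store.get(key) as T;
      const value = await fn();
      store.set(key, value);
      return value;
    },
  };
}

async function demo() {
  const cache = createMemoryCache(); // in a plugin: this.app.cache
  let dbCalls = 0;
  // The expensive lookup runs once; the second call is served from the cache.
  const loadOptions = async () => { dbCalls++; return { enabled: true }; };
  await cache.wrap('auth:options', loadOptions, 60);
  await cache.wrap('auth:options', loadOptions, 60);
  return dbCalls;
}
```

The key design point is that the caching logic lives behind `wrap`, so plugin code never checks for cache hits by hand.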
Sync Message Manager
Some in-memory state cannot be moved into a distributed cache (for example, because it cannot be serialized). When such state changes due to a user action, the change must be broadcast to the other instances via a sync signal to keep state consistent.
- The plugin base class already implements `sendSyncMessage`, which internally calls `app.syncMessageManager.publish` and automatically prefixes the channel with the application name to avoid conflicts. `publish` accepts a `transaction` option, in which case the message is sent only after the database transaction commits, keeping state and messages in sync.
- Implement `handleSyncMessage` to process messages from other instances. Subscribing during the `beforeLoad` phase is well suited to scenarios such as configuration changes and schema synchronization.
Example: plugin-data-source-main uses sync messages to maintain schema consistency across multiple nodes
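The pattern can be sketched as follows; this is a runnable simulation, not NocoBase code. `FakeBroker` is a hypothetical in-process stand-in for `app.syncMessageManager`, and `FakeInstance` plays the role of one plugin instance per node, with the local-change-then-broadcast flow of `sendSyncMessage`/`handleSyncMessage`:

```typescript
type Handler = (message: any) => void;

// Hypothetical broker standing in for app.syncMessageManager: it delivers
// a published message to every subscriber except the sender, mirroring
// how sync messages go to *other* instances.
class FakeBroker {
  private subs: Handler[] = [];
  subscribe(h: Handler) { this.subs.push(h); }
  publish(sender: Handler, message: any) {
    for (const h of this.subs) if (h !== sender) h(message);
  }
}

// One "instance" holding non-serializable in-memory state (here a Map).
class FakeInstance {
  state = new Map<string, string>();
  private handler: Handler;
  constructor(private broker: FakeBroker) {
    this.handler = (m) => this.handleSyncMessage(m);
    broker.subscribe(this.handler); // analogous to subscribing in beforeLoad
  }
  // A local change triggered by a user action: apply it, then broadcast it
  // (analogous to calling this.sendSyncMessage in a plugin).
  setOption(key: string, value: string) {
    this.state.set(key, value);
    this.broker.publish(this.handler, { type: 'optionChanged', key, value });
  }
  // Analogous to the plugin's handleSyncMessage: apply a peer's change.
  handleSyncMessage(message: any) {
    if (message.type === 'optionChanged') this.state.set(message.key, message.value);
  }
}
```

After `a.setOption('theme', 'dark')`, every other instance's in-memory state reflects the change without any shared storage.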
Pub/Sub Manager
Message broadcasting is the underlying component of sync signals and can also be used directly whenever you need to broadcast messages between instances.
- `app.pubSubManager.subscribe(channel, handler, { debounce })` subscribes to a channel across instances; the `debounce` option prevents frequent callbacks caused by repeated broadcasts. `publish` supports `skipSelf` (default `true`) and `onlySelf` to control whether the message is also delivered to the current instance.
- An adapter (such as Redis or RabbitMQ) must be configured before the application starts; otherwise the application does not connect to an external messaging system.
Example: plugin-async-task-manager uses PubSub to broadcast task cancellation events
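The effect of the `debounce` option can be demonstrated with a small runnable stand-in; `FakePubSub` below is hypothetical and only models the subscribe/publish shape described above, not the real adapter-backed `app.pubSubManager`:

```typescript
type Message = unknown;

// Hypothetical in-memory pub/sub: repeated publishes within the debounce
// window collapse into a single handler invocation.
class FakePubSub {
  private handlers = new Map<string, Array<(m: Message) => void>>();

  subscribe(
    channel: string,
    handler: (m: Message) => void,
    opts: { debounce?: number } = {},
  ) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const wrapped = opts.debounce
      ? (m: Message) => {
          clearTimeout(timer); // drop the pending call and restart the window
          timer = setTimeout(() => handler(m), opts.debounce);
        }
      : handler;
    const list = this.handlers.get(channel) ?? [];
    list.push(wrapped);
    this.handlers.set(channel, list);
  }

  publish(channel: string, message: Message) {
    for (const h of this.handlers.get(channel) ?? []) h(message);
  }
}
```

Publishing three times in quick succession with `{ debounce: 20 }` results in the handler firing once, with the last message, which is the behavior you want for bursty events such as repeated cancellation broadcasts.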
Event Queue Component
The message queue is used to schedule asynchronous tasks, suitable for handling long-running or retryable operations.
- Declare a consumer with `app.eventQueue.subscribe(channel, { idle, process, concurrency })`. `process` returns a `Promise`, and `AbortSignal.timeout` can be used to enforce timeouts.
- `publish` automatically adds the application name as a channel prefix and supports options such as `timeout` and `maxRetries`. The default adapter is an in-memory queue, which can be swapped for an extended adapter such as RabbitMQ as needed.
- In a cluster, ensure all nodes use the same adapter; otherwise tasks may be fragmented between nodes.
Example: plugin-async-task-manager uses EventQueue to schedule tasks
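The consume-with-retry flow can be sketched with a runnable stand-in; `FakeEventQueue` below is hypothetical, modeling only the `subscribe({ process })` / `publish(..., { maxRetries })` shape described above, not the real adapter-backed `app.eventQueue`:

```typescript
type Processor = (message: unknown) => Promise<void>;

// Hypothetical in-memory event queue: one consumer per channel, and a
// failed `process` call is retried up to `maxRetries` additional times.
class FakeEventQueue {
  private consumers = new Map<string, Processor>();

  subscribe(channel: string, opts: { process: Processor }) {
    this.consumers.set(channel, opts.process);
  }

  async publish(
    channel: string,
    message: unknown,
    opts: { maxRetries?: number } = {},
  ) {
    const process = this.consumers.get(channel);
    if (!process) return;
    const attempts = 1 + (opts.maxRetries ?? 0);
    for (let i = 0; i < attempts; i++) {
      try {
        await process(message);
        return; // success: stop retrying
      } catch {
        // swallow and retry until attempts are exhausted
      }
    }
  }
}
```

A task that fails transiently twice and succeeds on the third attempt completes when published with `{ maxRetries: 3 }`; this is the behavior that makes queues suitable for long-running, retryable work.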
Distributed Lock Manager
When you need to avoid race conditions, you can use a distributed lock to serialize access to a resource.
- A process-local `local` adapter is provided by default; distributed implementations such as Redis can be registered. Use `app.lockManager.runExclusive(key, fn, ttl)` or `acquire`/`tryAcquire` to control concurrency. The `ttl` acts as a safeguard that eventually releases the lock, preventing it from being held indefinitely in exceptional cases.
- Common scenarios include schema changes, preventing duplicate task execution, and rate limiting.
Example: plugin-data-source-main uses a distributed lock to protect the field deletion process
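The semantics of `runExclusive` can be illustrated with a runnable sketch of a process-local lock, in the spirit of the default `local` adapter; `FakeLockManager` is hypothetical and omits the `ttl` safeguard for brevity:

```typescript
// Hypothetical process-local lock manager: callers on the same key are
// chained so their callbacks run strictly one at a time.
class FakeLockManager {
  private chains = new Map<string, Promise<void>>();

  async runExclusive<T>(key: string, fn: () => Promise<T>, _ttl?: number): Promise<T> {
    const prev = this.chains.get(key) ?? Promise.resolve();
    let release!: () => void;
    this.chains.set(key, new Promise<void>((r) => (release = r)));
    await prev; // wait for the previous holder of this key
    try {
      return await fn();
    } finally {
      release(); // hand the lock to the next waiter
    }
  }
}

// Usage sketch: a read-modify-write that would lose updates if two
// instances interleaved is serialized under a shared key.
// await app.lockManager.runExclusive('schema:posts', applyFieldChange, 5000);
```

With the lock, two concurrent read-await-write increments of a counter both land (final value 2); without it, the interleaved reads would lose one update, which is exactly the race the schema-change path must avoid.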
Development Recommendations
- Avoid in-memory state: During development, avoid relying on in-process state; use caching or sync messages to keep state consistent across instances.
- Prioritize reusing built-in interfaces: Use unified capabilities such as `app.cache` and `app.syncMessageManager` instead of reimplementing cross-node communication in each plugin.
- Pay attention to transaction boundaries: Operations performed within a transaction should use `transaction.afterCommit` (`syncMessageManager.publish` does this internally) to keep data and messages consistent.
- Design a backoff strategy: For queue and broadcast tasks, set reasonable `timeout`, `maxRetries`, and `debounce` values to prevent new traffic spikes in exceptional situations.
- Use monitoring and logging: Record channel names, message payloads, lock keys, and similar details in the application logs to simplify troubleshooting of intermittent issues in a cluster.
With these capabilities, plugins can safely share state, synchronize configurations, and schedule tasks across different instances, meeting the stability and consistency requirements of cluster deployment scenarios.

