Zero Downtime Migration
Zero Downtime Migration allows database services to remain operational throughout the migration, providing uninterrupted access to the database. It consists of these stages:
- Initialization: Select the source cluster and create a new target cluster.
- Migration: Migrate existing data and sync incremental data.
- Finalization: Stop sync when lag is under 10 seconds and switch over to the target cluster.
Zero Downtime Migration is in Private Preview. If you encounter any issues or want to learn about associated costs, contact Zilliz Cloud support.
Migration capabilities
Cluster compatibility
The following table outlines the migration capabilities and constraints between clusters of different plans:
Source Cluster Plan | Target Cluster Plan | Migration Scope |
---|---|---|
Dedicated | New Dedicated cluster | Migrates every database from the source cluster. Partially migrating specific databases is not supported. |
Serverless / Free | New Dedicated cluster | Migrates the single database from the source cluster, as Serverless/Free clusters contain at most one database. |
You cannot migrate to a lower-tier cluster plan. In other words, the target cluster’s plan must be the same as or higher than the source cluster’s plan.
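Because a Dedicated source migrates all of its databases, it can help to enumerate them before you start. The following is a minimal sketch with a recent pymilvus client; the endpoint and API key are placeholders for your own values:

```python
from pymilvus import MilvusClient

# Placeholder endpoint and credentials for the source cluster.
client = MilvusClient(uri="https://SOURCE_PUBLIC_ENDPOINT", token="SOURCE_API_KEY")

# Every database listed here is included when migrating from a Dedicated cluster;
# a Serverless/Free cluster exposes a single database.
print(client.list_databases())
```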
Migration scope options
Migration Type | Description | Use Cases |
---|---|---|
Within same project | Migrate between existing clusters in the same Zilliz Cloud project | Cluster upgrades, performance optimization, data consolidation |
Cross-project or organization | Migrate between existing clusters in different Zilliz Cloud projects or organizations | Company mergers, department transfers, multi-tenant scenarios |
Direct data transfer
Zero Downtime Migration performs direct data replication between Zilliz Cloud clusters with the following characteristics:
- Schema preservation: Source schema transferred unchanged to target cluster
- No field modifications: Cannot rename fields, change data types, or modify field attributes during migration
- Automatic indexing: AUTOINDEX automatically created for vector fields in target cluster (you can spot-check the result as shown below)
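As a post-migration spot check, you can compare schemas and inspect the automatically created indexes from a client. This is a minimal sketch using pymilvus; the endpoints, credentials, and the `docs` collection name are placeholders, and the exact shape of the returned metadata may vary with the client version:

```python
from pymilvus import MilvusClient

# Placeholder endpoints and credentials -- substitute your own cluster values.
source = MilvusClient(uri="https://SOURCE_PUBLIC_ENDPOINT", token="SOURCE_API_KEY")
target = MilvusClient(uri="https://TARGET_PUBLIC_ENDPOINT", token="TARGET_API_KEY")

collection = "docs"  # hypothetical collection name

# The schema is transferred unchanged, so the field definitions should match.
print("source schema:", source.describe_collection(collection)["fields"])
print("target schema:", target.describe_collection(collection)["fields"])

# Vector fields on the target should carry an automatically created AUTOINDEX.
for index_name in target.list_indexes(collection):
    print(index_name, target.describe_index(collection, index_name))
```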
Limits
- During migration, you cannot perform any of the following operations on the source cluster: AlterCollection, AlterCollectionField, CreateAlias, DropAlias, AlterAlias, RenameCollection, AlterDatabase, Import (see the sketch below).
- Canceling an ongoing Zero Downtime Migration job is not supported. This functionality will be available in future releases.
- Zero Downtime Migration requires approximately 10 seconds of downtime for data sync to stop and the cluster transition to complete.
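The first limit applies only to schema, alias, database, and import operations; regular inserts and searches keep working on the source cluster throughout the migration, which is what the incremental sync stage replicates. Below is a minimal sketch of what stays safe to run, assuming pymilvus and a hypothetical `docs` collection with a 768-dimensional vector field:

```python
from pymilvus import MilvusClient

# Placeholder endpoint and credentials for the source cluster.
client = MilvusClient(uri="https://SOURCE_PUBLIC_ENDPOINT", token="SOURCE_API_KEY")
collection = "docs"  # hypothetical collection with a 768-dim "vector" field

# Regular data traffic stays available on the source cluster during migration.
client.insert(collection_name=collection, data=[{"id": 1, "vector": [0.1] * 768}])
hits = client.search(collection_name=collection, data=[[0.1] * 768], limit=3)
print(hits)

# Schema, alias, database, and import operations are blocked until the migration
# finishes, so hold off on calls such as:
# client.create_alias(collection_name=collection, alias="docs_latest")
# client.rename_collection(old_name=collection, new_name="docs_v2")
```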
Prerequisites
Before starting your zero downtime migration, ensure you meet these requirements:
General requirements
Requirement | Details |
---|---|
User permissions | Organization Owner or Project Admin role |
Source cluster access | Source cluster must be accessible from the public internet |
Target cluster capacity | Sufficient CU size to accommodate source data (use the CU calculator) |
Cross-project or organization migration requirements
Requirement | Details |
---|---|
Connection credentials | Public endpoint, API key, or cluster username and password for source cluster |
Network access | Ability to connect to source cluster from target organization |
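Before starting a cross-project or cross-organization migration, it is worth confirming that the source cluster is reachable with the credentials you plan to provide. A minimal connectivity check with pymilvus; the endpoint and token values are placeholders:

```python
from pymilvus import MilvusClient

# Use either an API key or "username:password" as the token -- both placeholders here.
client = MilvusClient(
    uri="https://SOURCE_PUBLIC_ENDPOINT",
    token="SOURCE_API_KEY",  # or "db_admin:YourPassword"
)

# A successful listing confirms that the endpoint is reachable from your
# environment and that the credentials are valid.
print(client.list_collections())
```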
Getting started
The zero downtime migration process consists of three main phases that require your attention and action:
Phase 1: Initialize
The following demo shows how to set up and start your zero downtime migration:
Once you click Migrate, the source cluster will enter Locked state and cannot be deleted during migration.
Phase 2: Monitor
After starting the migration, you'll be taken to the target cluster details page where you need to actively monitor the migration progress.
Stage 1: Prepare the target cluster and migrate the existing data
This stage migrates the existing data from the source cluster to the target cluster. The duration depends on the volume of data being transferred and may take several hours for large datasets.
If the process is taking a while, feel free to leave this page and work on other tasks. You can return at any time to continue monitoring the incremental data syncing progress.
Stage 2: Sync incremental data
During this stage, the system continuously syncs new data inserted into the source cluster to the target cluster. The target cluster will display a Syncing state, indicating that it does not accept external data writes. At this stage, follow these steps:
- Monitor sync lag
  - Track the Lag Behind Source (in seconds) to monitor sync progress. This indicator shows the time difference between the most recent data in the source and target clusters.
  - When the Lag Behind Source drops below 10s, you'll receive an email notification indicating you can proceed with stopping data sync.
  - Important: If the sync lag never drops below 10s after a reasonable waiting period, contact Zilliz Cloud support.
- Stop data sync
  - Before proceeding, stop all writes to the source cluster (see the sketch after this list) and plan for a maintenance window of approximately 10 seconds for the sync to stop and the cluster switchover to complete.
  - When the Lag Behind Source reaches an acceptable threshold, select the checkbox I confirm that I have stopped writes to the source cluster, then click Stop Data Sync.

  📘 Notes: If you do not manually stop data sync, Zilliz Cloud will continue syncing for up to 7 days. After this period, the system will automatically stop the sync to prevent wasted resources, and the migration job will fail.
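How you stop writes depends on your own ingestion pipeline; Zilliz Cloud does not pause them for you. One minimal application-side pattern, assuming a hypothetical write-guard flag in your ingestion code and pymilvus as the client:

```python
from pymilvus import MilvusClient

# Placeholder endpoint and credentials for the source cluster.
client = MilvusClient(uri="https://SOURCE_PUBLIC_ENDPOINT", token="SOURCE_API_KEY")

WRITES_FROZEN = False  # flip to True right before clicking Stop Data Sync


def ingest(collection_name: str, rows: list[dict]) -> None:
    """Gate every write behind the freeze flag so the switchover window stays clean."""
    if WRITES_FROZEN:
        raise RuntimeError("Writes are frozen for the migration switchover.")
    client.insert(collection_name=collection_name, data=rows)
```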
Phase 3: Switch
When you receive the email notification that the sync lag has dropped below 10 seconds and have stopped data sync, you're ready for the final switchover: connect your applications to the new target cluster instead of the source. For instructions on connecting to a cluster, refer to Connect to Cluster.
After migration, the source cluster will not be deleted automatically. We recommend keeping it for a period to verify data consistency before manual deletion.
The migrated collections are not immediately available for search or query operations. You must manually load the collections in Zilliz Cloud to enable search and query functionalities. For details, refer to Load & Release.
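Once connected to the target cluster, you can load the migrated collections and run a quick row-count comparison against the source before deleting it. A minimal sketch with pymilvus; endpoints, credentials, and collection layout are placeholders:

```python
from pymilvus import MilvusClient

# Placeholder endpoints and credentials -- substitute your own cluster values.
source = MilvusClient(uri="https://SOURCE_PUBLIC_ENDPOINT", token="SOURCE_API_KEY")
target = MilvusClient(uri="https://TARGET_PUBLIC_ENDPOINT", token="TARGET_API_KEY")

for name in target.list_collections():
    # Migrated collections must be loaded before they can serve search or query traffic.
    target.load_collection(collection_name=name)
    print(name, target.get_load_state(collection_name=name))

    # Spot-check consistency: row counts should match once data sync has stopped.
    src_rows = source.get_collection_stats(collection_name=name)["row_count"]
    dst_rows = target.get_collection_stats(collection_name=name)["row_count"]
    print(f"{name}: source={src_rows} rows, target={dst_rows} rows")
```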