Set Collection TTL
Zilliz Cloud can automatically expire entities through a Time-to-Live (TTL) policy. Expired entities stop appearing in query and search results immediately, and are physically removed from storage on the next compaction cycle — typically within 24 hours.
There are two TTL modes:

- Collection-level TTL — one retention window shared by every entity, set through the `collection.ttl.seconds` property.
- Entity-level TTL — each entity carries its own absolute expiration time in a dedicated `TIMESTAMPTZ` field, marked as the TTL field through the `ttl_field` property.
This feature applies only to managed collections.
Limits

- The two TTL modes are mutually exclusive. A collection cannot have both `collection.ttl.seconds` and `ttl_field` set at the same time. To switch, see Migrate between the two modes.
- Collection-level TTL applies one window to the whole collection. If a single row needs a different lifetime, use entity-level TTL.
- The field for entity-level TTL must be `TIMESTAMPTZ`. Other types are rejected.
- One TTL field per collection. The schema may contain multiple `TIMESTAMPTZ` fields, but only one can be named in `ttl_field`.
- Dropping `ttl_field` does not resurface expired entities. To restore an expired entity, upsert it with a `NULL` or future expiration timestamp.
Overview
When to use TTL
TTL is the right tool when retention is a policy — you know ahead of time that certain entities should eventually go away, and you want the cluster to enforce it without you writing a cron job.
Typical scenarios:
- Time-windowed datasets. Keep only the last N days of logs, metrics, events, or short-lived feature caches.
- Multi-tenant collections. Different tenants have different retention windows in the same collection.
- Per-record retention policies. Per-document lifetime in IoT pipelines, document stores, or MLOps feature stores.
- Hot / cold data mix. Short-lived entities coexist with long-term ones in the same collection.
- Compliance-driven expiration. GDPR-style data minimization where each record carries its own "delete by" date.
- Business-time expiration. An entity represents a record that is only valid until some absolute moment (a campaign ending, a session expiring).
Expired entities do not appear in any search or query results. However, they may remain in storage until the next data compaction, which typically runs within 24 hours.
TTL modes
The two modes answer different retention questions:
- Collection-level TTL applies a single retention duration to every entity. Each entity expires at `insert_ts + ttl_seconds` (see the arithmetic sketch below).
- Entity-level TTL lets every entity store its own absolute expiration time in a `TIMESTAMPTZ` field. A `NULL` in that field means the entity never expires.
A collection uses one mode at a time — the two are mutually exclusive. Switching between them is a multi-step operation; see Migrate between the two modes.
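To make the collection-level rule concrete, here is a minimal arithmetic sketch with illustrative values (the server performs this computation internally; nothing here calls the cluster):

```python
from datetime import datetime, timedelta, timezone

# Collection-level TTL: each entity expires at insert_ts + ttl_seconds.
insert_ts = datetime(2026, 1, 1, tzinfo=timezone.utc)  # hypothetical insert time
ttl_seconds = 1209600                                  # collection.ttl.seconds (14 days)
expires_at = insert_ts + timedelta(seconds=ttl_seconds)
print(expires_at)  # 2026-01-15 00:00:00+00:00
```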
Use this table to pick a mode:
| If your situation is… | Use |
|---|---|
| Every entity in the collection should follow the same retention window | Collection-level TTL |
| Retention is "from the moment of insert, keep N seconds" | Collection-level TTL |
| Different entities need different lifetimes in the same collection (per-tenant, hot/cold, per-document) | Entity-level TTL |
| Retention is an absolute wall-clock time (for example, 2027-01-01T00:00:00Z) | Entity-level TTL |
| Retention is driven by a business timestamp, not the insert timestamp | Entity-level TTL |
| You want to refresh or extend an entity's lifetime after insert | Entity-level TTL |
| Some entities should never expire while others should | Entity-level TTL (use NULL for the immortal ones) |
Set collection-level TTL
Use collection-level TTL when every entity in the collection should follow the same retention window.
Enable on a new collection
Pass collection.ttl.seconds (integer, in seconds) through the properties map at creation time.
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient, DataType
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
schema = client.create_schema(auto_id=False, enable_dynamic_field=False)
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)
schema.add_field("vector", DataType.FLOAT_VECTOR, dim=128)
index_params = client.prepare_index_params()
index_params.add_index(
field_name="vector", index_type="AUTOINDEX", metric_type="COSINE"
)
client.create_collection(
collection_name="my_collection",
schema=schema,
index_params=index_params,
properties={
"collection.ttl.seconds": 1209600 # 14 days
},
)
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.common.DataType;
import io.milvus.v2.common.IndexParam;
import io.milvus.v2.service.collection.request.AddFieldReq;
import io.milvus.v2.service.collection.request.CreateCollectionReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
CreateCollectionReq.CollectionSchema schema = CreateCollectionReq.CollectionSchema.builder().build();
schema.addField(AddFieldReq.builder().fieldName("id").dataType(DataType.Int64)
.isPrimaryKey(true).autoID(false).build());
schema.addField(AddFieldReq.builder().fieldName("vector").dataType(DataType.FloatVector)
.dimension(128).build());
IndexParam indexParam = IndexParam.builder().fieldName("vector")
.indexType(IndexParam.IndexType.AUTOINDEX)
.metricType(IndexParam.MetricType.COSINE).build();
Map<String, String> properties = new HashMap<>();
properties.put("collection.ttl.seconds", "1209600"); // 14 days
client.createCollection(CreateCollectionReq.builder()
.collectionName("my_collection")
.collectionSchema(schema)
.indexParams(Collections.singletonList(indexParam))
.properties(properties)
.build());
const { MilvusClient, DataType } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
await client.createCollection({
collection_name: "my_collection",
fields: [
{ name: "id", data_type: DataType.Int64, is_primary_key: true, autoID: false },
{ name: "vector", data_type: DataType.FloatVector, dim: 128 },
],
index_params: [
{ field_name: "vector", index_type: "AUTOINDEX", metric_type: "COSINE" },
],
properties: {
"collection.ttl.seconds": 1209600, // 14 days
},
});
// Assumes ctx, client, and schema were initialized earlier, as in the other language tabs.
err = client.CreateCollection(ctx, milvusclient.NewCreateCollectionOption("my_collection", schema).
    WithProperty(common.CollectionTTLConfigKey, 1209600)) // 14 days, in seconds
if err != nil {
fmt.Println(err.Error())
// handle error
}
# The $schema payload below mirrors the fields used in the SDK tabs (id + 128-dim vector);
# adjust it to match your actual schema.
export schema='{
    "autoID": false,
    "fields": [
        {"fieldName": "id", "dataType": "Int64", "isPrimary": true},
        {"fieldName": "vector", "dataType": "FloatVector", "elementTypeParams": {"dim": "128"}}
    ]
}'

export params='{
    "ttlSeconds": 1209600
}'
export CLUSTER_ENDPOINT="YOUR_CLUSTER_ENDPOINT"
export TOKEN="YOUR_CLUSTER_TOKEN"
curl --request POST \
--url "${CLUSTER_ENDPOINT}/v2/vectordb/collections/create" \
--header "Authorization: Bearer ${TOKEN}" \
--header "Content-Type: application/json" \
-d "{
\"collectionName\": \"my_collection\",
\"schema\": $schema,
\"params\": $params
}"
Enable on an existing collection
Call alter_collection_properties with collection.ttl.seconds in the properties map to apply TTL to a collection that is already in use.
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient, DataType
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
# Assumes "my_collection" was created earlier without TTL
schema = client.create_schema(auto_id=False, enable_dynamic_field=False)
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)
schema.add_field("vector", DataType.FLOAT_VECTOR, dim=128)
index_params = client.prepare_index_params()
index_params.add_index(
field_name="vector", index_type="AUTOINDEX", metric_type="COSINE"
)
if not client.has_collection("my_collection"):
client.create_collection(
collection_name="my_collection",
schema=schema,
index_params=index_params,
)
client.alter_collection_properties(
collection_name="my_collection",
properties={"collection.ttl.seconds": 1209600},
)
import java.util.HashMap;
import java.util.Map;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.collection.request.AlterCollectionPropertiesReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
// Assumes "my_collection" was created earlier without TTL.
Map<String, String> properties = new HashMap<>();
properties.put("collection.ttl.seconds", "1209600");
client.alterCollectionProperties(AlterCollectionPropertiesReq.builder()
.collectionName("my_collection")
.properties(properties)
.build());
const { MilvusClient } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
// Assumes "my_collection" was created earlier without TTL.
await client.alterCollectionProperties({
collection_name: "my_collection",
properties: { "collection.ttl.seconds": 1209600 },
});
// Assumes ctx and client were initialized earlier, as in the other language tabs.
err = client.AlterCollectionProperties(ctx, milvusclient.NewAlterCollectionPropertiesOption("my_collection").
    WithProperty(common.CollectionTTLConfigKey, 1209600)) // 14 days, matching the other tabs
if err != nil {
fmt.Println(err.Error())
// handle error
}
curl --request POST \
--url "${CLUSTER_ENDPOINT}/v2/vectordb/collections/alter_properties" \
--header "Authorization: Bearer ${TOKEN}" \
--header "Content-Type: application/json" \
-d "{
\"collectionName\": \"my_collection\",
\"properties\": {
\"collection.ttl.seconds\": 1209600
}
}"
Drop the TTL setting
If you decide to keep the data in a collection indefinitely, you can simply drop the TTL setting from that collection.
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
client.drop_collection_properties(
collection_name="my_collection",
property_keys=["collection.ttl.seconds"],
)
import java.util.Collections;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.collection.request.DropCollectionPropertiesReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
client.dropCollectionProperties(DropCollectionPropertiesReq.builder()
.collectionName("my_collection")
.propertyKeys(Collections.singletonList("collection.ttl.seconds"))
.build());
const { MilvusClient } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
await client.dropCollectionProperties({
collection_name: "my_collection",
properties: ["collection.ttl.seconds"],
});
// Assumes ctx and client were initialized earlier, as in the other language tabs.
err = client.DropCollectionProperties(ctx, milvusclient.NewDropCollectionPropertiesOption("my_collection", common.CollectionTTLConfigKey))
if err != nil {
fmt.Println(err.Error())
// handle error
}
curl --request POST \
--url "${CLUSTER_ENDPOINT}/v2/vectordb/collections/drop_properties" \
--header "Authorization: Bearer ${TOKEN}" \
--header "Content-Type: application/json" \
-d "{
\"collectionName\": \"my_collection\",
\"propertyKeys\": [
\"collection.ttl.seconds\"
]
}"
Set entity-level TTL (Private Preview)
Entity-level TTL lets each entity carry its own absolute expiration time. The time is stored in a dedicated TIMESTAMPTZ column that you declare in the schema, and you mark that column as the TTL field through the ttl_field collection property.
Enable on a new collection
Enabling entity-level TTL at creation time takes two additions in the same create_collection call: a TIMESTAMPTZ field in the schema, and the ttl_field property pointing to that field.
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient, DataType
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
schema = client.create_schema(enable_dynamic_field=False)
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)
schema.add_field("expire_at", DataType.TIMESTAMPTZ, nullable=True)
schema.add_field("vector", DataType.FLOAT_VECTOR, dim=128)
index_params = client.prepare_index_params()
index_params.add_index(field_name="vector", index_type="AUTOINDEX",
metric_type="COSINE")
client.create_collection(
collection_name="my_collection",
schema=schema,
index_params=index_params,
properties={"ttl_field": "expire_at"},
)
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.common.DataType;
import io.milvus.v2.common.IndexParam;
import io.milvus.v2.service.collection.request.AddFieldReq;
import io.milvus.v2.service.collection.request.CreateCollectionReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
CreateCollectionReq.CollectionSchema schema = CreateCollectionReq.CollectionSchema.builder().build();
schema.addField(AddFieldReq.builder().fieldName("id").dataType(DataType.Int64)
.isPrimaryKey(true).autoID(false).build());
schema.addField(AddFieldReq.builder().fieldName("expire_at").dataType(DataType.Timestamptz)
.isNullable(true).build());
schema.addField(AddFieldReq.builder().fieldName("vector").dataType(DataType.FloatVector)
.dimension(128).build());
IndexParam indexParam = IndexParam.builder().fieldName("vector")
.indexType(IndexParam.IndexType.AUTOINDEX)
.metricType(IndexParam.MetricType.COSINE).build();
Map<String, String> properties = new HashMap<>();
properties.put("ttl_field", "expire_at");
client.createCollection(CreateCollectionReq.builder()
.collectionName("my_collection")
.collectionSchema(schema)
.indexParams(Collections.singletonList(indexParam))
.properties(properties)
.build());
const { MilvusClient, DataType } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
await client.createCollection({
collection_name: "my_collection",
fields: [
{ name: "id", data_type: DataType.Int64, is_primary_key: true, autoID: false },
{ name: "expire_at", data_type: DataType.Timestamptz, nullable: true },
{ name: "vector", data_type: DataType.FloatVector, dim: 128 },
],
index_params: [
{ field_name: "vector", index_type: "AUTOINDEX", metric_type: "COSINE" },
],
properties: { ttl_field: "expire_at" },
});
// go
# restful
Once the collection exists, insert entities with ISO 8601 timestamp strings.
- Python
- Java
- NodeJS
- Go
- cURL
import random
from pymilvus import MilvusClient
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
# Assumes "my_collection" was created earlier with \`ttl_field\`: "expire_at"
rows = [
# Never expires
{"id": 1, "expire_at": None,
"vector": [random.random() for _ in range(128)]},
# Expires at 2026-12-31 UTC midnight
{"id": 2, "expire_at": "2026-12-31T00:00:00Z",
"vector": [random.random() for _ in range(128)]},
# Shanghai local time — normalized to UTC internally
{"id": 3, "expire_at": "2027-01-01T00:00:00+08:00",
"vector": [random.random() for _ in range(128)]},
]
client.insert("my_collection", rows)
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import com.google.gson.Gson;
import com.google.gson.JsonNull;
import com.google.gson.JsonObject;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.vector.request.InsertReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
// Assumes "my_collection" was created earlier with \`ttl_field\`: "expire_at".
Gson gson = new Gson();
Random rng = new Random();
List<Float> vector = new ArrayList<>();
for (int i = 0; i < 128; i++) vector.add(rng.nextFloat());
List<JsonObject> rows = new ArrayList<>();
// Never expires
JsonObject r1 = new JsonObject();
r1.addProperty("id", 1);
r1.add("expire_at", JsonNull.INSTANCE);
r1.add("vector", gson.toJsonTree(vector));
rows.add(r1);
// Expires at 2026-12-31 UTC midnight
JsonObject r2 = new JsonObject();
r2.addProperty("id", 2);
r2.addProperty("expire_at", "2026-12-31T00:00:00Z");
r2.add("vector", gson.toJsonTree(vector));
rows.add(r2);
// Shanghai local time — normalized to UTC internally
JsonObject r3 = new JsonObject();
r3.addProperty("id", 3);
r3.addProperty("expire_at", "2027-01-01T00:00:00+08:00");
r3.add("vector", gson.toJsonTree(vector));
rows.add(r3);
client.insert(InsertReq.builder()
.collectionName("my_collection")
.data(rows)
.build());
const { MilvusClient } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
const vector = Array.from({ length: 128 }, () => Math.random());
// Assumes "my_collection" was created earlier with \`ttl_field\`: "expire_at".
await client.insert({
collection_name: "my_collection",
data: [
// Never expires
{ id: 1, expire_at: null, vector },
// Expires at 2026-12-31 UTC midnight
{ id: 2, expire_at: "2026-12-31T00:00:00Z", vector },
// Shanghai local time — normalized to UTC internally
{ id: 3, expire_at: "2027-01-01T00:00:00+08:00", vector },
],
});
// go
# restful
On every query and vector search, the server auto-injects the TTL filter — you never write one yourself, and expired entities never appear in the results:
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
client.load_collection("my_collection")
# Expired rows are filtered out automatically
results = client.query(
collection_name="my_collection",
filter="id >= 0",
output_fields=["id", "expire_at"],
limit=10,
)
print(results)
import java.util.Arrays;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.collection.request.LoadCollectionReq;
import io.milvus.v2.service.vector.request.QueryReq;
import io.milvus.v2.service.vector.response.QueryResp;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
client.loadCollection(LoadCollectionReq.builder()
.collectionName("my_collection")
.build());
// Expired rows are filtered out automatically
QueryResp results = client.query(QueryReq.builder()
.collectionName("my_collection")
.filter("id >= 0")
.outputFields(Arrays.asList("id", "expire_at"))
.limit(10L)
.build());
System.out.println(results.getQueryResults());
const { MilvusClient } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
await client.loadCollection({ collection_name: "my_collection" });
// Expired rows are filtered out automatically
const results = await client.query({
collection_name: "my_collection",
filter: "id >= 0",
output_fields: ["id", "expire_at"],
limit: 10,
});
console.log(results.data);
// go
# restful
The same auto-filter applies to client.search().
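For example, a minimal Python search sketch against the same collection; expired entities are dropped from the hits without any extra filter:

```python
import random
from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

# Expired entities are excluded from the search hits automatically.
results = client.search(
    collection_name="my_collection",
    data=[[random.random() for _ in range(128)]],
    anns_field="vector",
    limit=5,
    output_fields=["id", "expire_at"],
)
print(results)
```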
To extend an entity's lifetime before compaction physically removes it, upsert it with a later expiration timestamp (or NULL) to return the entity to the queryable set.
- Python
- Java
- NodeJS
- Go
- cURL
import random
from pymilvus import MilvusClient
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
client.upsert("my_collection", [
{"id": 2,
"vector": [random.random() for _ in range(128)],
"expire_at": "2028-01-01T00:00:00Z"},
])
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.vector.request.UpsertReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
Gson gson = new Gson();
Random rng = new Random();
List<Float> vector = new ArrayList<>();
for (int i = 0; i < 128; i++) vector.add(rng.nextFloat());
JsonObject row = new JsonObject();
row.addProperty("id", 2);
row.add("vector", gson.toJsonTree(vector));
row.addProperty("expire_at", "2028-01-01T00:00:00Z");
client.upsert(UpsertReq.builder()
.collectionName("my_collection")
.data(Collections.singletonList(row))
.build());
const { MilvusClient } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
const vector = Array.from({ length: 128 }, () => Math.random());
await client.upsert({
collection_name: "my_collection",
data: [
{ id: 2, vector, expire_at: "2028-01-01T00:00:00Z" },
],
});
// go
# restful
Enable on an existing collection
If the collection already exists and does not have collection.ttl.seconds set, add a TIMESTAMPTZ column with add_collection_field, then mark it as the TTL field with alter_collection_properties. Optionally upsert historical rows to backfill their expiration timestamps — rows you do not backfill keep NULL and never expire.
- Python
- Java
- NodeJS
- Go
- cURL
import random
from pymilvus import MilvusClient, DataType
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
# Step 1 — add a TIMESTAMPTZ column to the schema
client.add_collection_field(
collection_name="my_collection",
field_name="expire_at",
data_type=DataType.TIMESTAMPTZ,
nullable=True,
)
# Step 2 — mark the new column as the TTL field
client.alter_collection_properties(
collection_name="my_collection",
properties={"ttl_field": "expire_at"},
)
# Step 3 (optional) — backfill expiration timestamps for historical rows
client.upsert("my_collection", [
{"id": 1,
"vector": [random.random() for _ in range(128)],
"expire_at": "2026-12-31T00:00:00Z"},
])
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.common.DataType;
import io.milvus.v2.service.collection.request.AddCollectionFieldReq;
import io.milvus.v2.service.collection.request.AlterCollectionPropertiesReq;
import io.milvus.v2.service.vector.request.UpsertReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
// Step 1 — add a TIMESTAMPTZ column to the schema
client.addCollectionField(AddCollectionFieldReq.builder()
.collectionName("my_collection")
.fieldName("expire_at")
.dataType(DataType.Timestamptz)
.isNullable(true)
.build());
// Step 2 — mark the new column as the TTL field
Map<String, String> properties = new HashMap<>();
properties.put("ttl_field", "expire_at");
client.alterCollectionProperties(AlterCollectionPropertiesReq.builder()
.collectionName("my_collection")
.properties(properties)
.build());
// Step 3 (optional) — backfill expiration timestamps for historical rows
Gson gson = new Gson();
Random rng = new Random();
List<Float> vector = new ArrayList<>();
for (int i = 0; i < 128; i++) vector.add(rng.nextFloat());
JsonObject row = new JsonObject();
row.addProperty("id", 1);
row.add("vector", gson.toJsonTree(vector));
row.addProperty("expire_at", "2026-12-31T00:00:00Z");
client.upsert(UpsertReq.builder()
.collectionName("my_collection")
.data(Collections.singletonList(row))
.build());
const { MilvusClient, DataType } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
const vector = Array.from({ length: 128 }, () => Math.random());
// Step 1 — add a TIMESTAMPTZ column to the schema
await client.addCollectionField({
collection_name: "my_collection",
field: { name: "expire_at", data_type: DataType.Timestamptz, nullable: true },
});
// Step 2 — mark the new column as the TTL field
await client.alterCollectionProperties({
collection_name: "my_collection",
properties: { ttl_field: "expire_at" },
});
// Step 3 (optional) — backfill expiration timestamps for historical rows
await client.upsert({
collection_name: "my_collection",
data: [
{ id: 1, vector, expire_at: "2026-12-31T00:00:00Z" },
],
});
// go
# restful
Drop the TTL setting
Call drop_collection_properties with ttl_field in property_keys to stop per-entity expiration. The TIMESTAMPTZ column itself remains on the schema — you can still query on it as a regular field.
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
client.drop_collection_properties(
collection_name="my_collection",
property_keys=["ttl_field"],
)
import java.util.Collections;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.collection.request.DropCollectionPropertiesReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
client.dropCollectionProperties(DropCollectionPropertiesReq.builder()
.collectionName("my_collection")
.propertyKeys(Collections.singletonList("ttl_field"))
.build());
const { MilvusClient } = require("@zilliz/milvus2-sdk-node");
const client = new MilvusClient({ address: "YOUR_CLUSTER_ENDPOINT" });
await client.dropCollectionProperties({
collection_name: "my_collection",
properties: ["ttl_field"],
});
// go
# restful
Dropping ttl_field disables the automatic filter for future queries, but entities that had already expired are not automatically surfaced again. To make a previously expired entity visible, upsert it with a NULL or future expiration timestamp; this is the only way to restore access to expired rows within the same load session.
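As a sketch, here is how you might restore the entity with id 2 from the earlier examples by clearing its expiration (the upsert re-supplies the vector, since upsert replaces the whole row):

```python
import random
from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

# Upsert the expired entity with expire_at = None (NULL), so it never expires again.
client.upsert("my_collection", [
    {"id": 2,
     "vector": [random.random() for _ in range(128)],
     "expire_at": None},
])
```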
Migrate between the two modes (Private Preview)
The two TTL modes are mutually exclusive, so switching between them is a multi-step operation.
Switch from collection-level to entity-level TTL
If your collection was created with `collection.ttl.seconds` and you want to switch to per-entity expiration, follow these four steps. Skipping Step 1 causes Step 3 to fail with the error `collection TTL is already set, cannot be set ttl field`.
- Python
- Java
- NodeJS
- Go
- cURL
import random
from pymilvus import MilvusClient, DataType
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
# Assumes "my_collection" already exists with \`collection.ttl.seconds\` set.
# Step 1 — disable collection-level TTL (mandatory; the two modes are mutually exclusive)
client.drop_collection_properties(
collection_name="my_collection",
property_keys=["collection.ttl.seconds"],
)
# Step 2 — add a TIMESTAMPTZ column to the schema
client.add_collection_field(
collection_name="my_collection",
field_name="expire_at",
data_type=DataType.TIMESTAMPTZ,
nullable=True,
)
# Step 3 — set the ttl_field property on the column you just added
client.alter_collection_properties(
collection_name="my_collection",
properties={"ttl_field": "expire_at"},
)
# Step 4 (optional) — backfill expiration timestamps for historical entities
client.upsert("my_collection", [
{"id": 1,
"vector": [random.random() for _ in range(128)],
"expire_at": "2026-12-31T00:00:00Z"},
])
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.common.DataType;
import io.milvus.v2.service.collection.request.AddCollectionFieldReq;
import io.milvus.v2.service.collection.request.AlterCollectionPropertiesReq;
import io.milvus.v2.service.collection.request.DropCollectionPropertiesReq;
import io.milvus.v2.service.vector.request.UpsertReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
// Assumes "my_collection" already exists with \`collection.ttl.seconds\` set.
// Step 1 — disable collection-level TTL (mandatory; the two modes are mutually exclusive)
client.dropCollectionProperties(DropCollectionPropertiesReq.builder()
.collectionName("my_collection")
.propertyKeys(Collections.singletonList("collection.ttl.seconds"))
.build());
// Step 2 — add a TIMESTAMPTZ column to the schema
client.addCollectionField(AddCollectionFieldReq.builder()
.collectionName("my_collection")
.fieldName("expire_at")
.dataType(DataType.Timestamptz)
.isNullable(true)
.build());
// Step 3 — set the ttl_field property on the column you just added
Map<String, String> ttlField = new HashMap<>();
ttlField.put("ttl_field", "expire_at");
client.alterCollectionProperties(AlterCollectionPropertiesReq.builder()
.collectionName("my_collection")
.properties(ttlField)
.build());
// Step 4 (optional) — backfill expiration timestamps for historical entities
Gson gson = new Gson();
Random rng = new Random();
List<Float> vector = new ArrayList<>();
for (int i = 0; i < 128; i++) vector.add(rng.nextFloat());
JsonObject row = new JsonObject();
row.addProperty("id", 1);
row.add("vector", gson.toJsonTree(vector));
row.addProperty("expire_at", "2026-12-31T00:00:00Z");
client.upsert(UpsertReq.builder()
.collectionName("my_collection")
.data(Collections.singletonList(row))
.build());
// nodejs
// go
# restful
Historical entities for which you do not backfill expire_at will have NULL in that column, meaning they never expire. Backfill only the rows that should have a finite lifetime.
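To decide which rows to backfill, you can list the rows whose TTL column is still empty. A minimal sketch, assuming the cluster supports IS NULL filters on nullable fields:

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

# List rows whose expire_at is still NULL; they never expire until backfilled.
rows = client.query(
    collection_name="my_collection",
    filter="expire_at IS NULL",
    output_fields=["id"],
    limit=100,
)
print([r["id"] for r in rows])
```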
Switch from entity-level to collection-level TTL
To move in the other direction, drop ttl_field and set collection.ttl.seconds:
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")
# Assumes "my_collection" already exists with \`ttl_field\` set.
client.drop_collection_properties(
collection_name="my_collection",
property_keys=["ttl_field"],
)
client.alter_collection_properties(
collection_name="my_collection",
properties={"collection.ttl.seconds": 1209600}, # 14 days
)
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.collection.request.AlterCollectionPropertiesReq;
import io.milvus.v2.service.collection.request.DropCollectionPropertiesReq;
MilvusClientV2 client = new MilvusClientV2(ConnectConfig.builder()
.uri("YOUR_CLUSTER_ENDPOINT")
.build());
// Assumes "my_collection" already exists with \`ttl_field\` set.
client.dropCollectionProperties(DropCollectionPropertiesReq.builder()
.collectionName("my_collection")
.propertyKeys(Collections.singletonList("ttl_field"))
.build());
Map<String, String> properties = new HashMap<>();
properties.put("collection.ttl.seconds", "1209600"); // 14 days
client.alterCollectionProperties(AlterCollectionPropertiesReq.builder()
.collectionName("my_collection")
.properties(properties)
.build());
// nodejs
// go
# restful
FAQs
When does data expire due to TTL settings?
With collection-level TTL, an entity expires relative to the time at which it was inserted or upserted. With entity-level TTL, an entity expires at the absolute timestamp stored in its TTL field. Expired data does not appear in search results. For details, refer to Examples.
When will the expired data be physically deleted?
Once the data expires, it is not included in any search results. However, it is physically deleted only after the next system compaction, according to your cluster's compaction policies.
If you need to delete the data shortly after it expires, contact us.
When will the CU capacity decrease?
The CU capacity used by a cluster is whichever is higher: its memory usage or its storage usage. If storage usage is the determining factor, you can see the CU capacity decrease on the Zilliz Cloud console after the expired data is physically deleted.