OpenAI
Use an OpenAI embedding model with Zilliz Cloud by choosing an embedding model and creating a collection with a text embedding function.
Model choices
Zilliz Cloud supports all embedding models provided by OpenAI. Below are the available OpenAI embedding models for quick reference:
| Model Name | Dimensions | Max Tokens | Description |
|---|---|---|---|
| text-embedding-3-small | Default: 1,536 (can be shortened to a dimension size below 1,536) | 8,191 | Ideal for cost-sensitive and scalable semantic search; offers strong performance at a lower price point. |
| text-embedding-3-large | Default: 3,072 (can be shortened to a dimension size below 3,072) | 8,191 | Best for applications demanding enhanced retrieval accuracy and richer semantic representations. |
| text-embedding-ada-002 | Fixed: 1,536 (cannot be shortened) | 8,191 | A previous-generation model suited for legacy pipelines or scenarios requiring backward compatibility. |
The third-generation embedding models (text-embedding-3) support shortening the embedding via the `dim` parameter. Larger embeddings typically cost more in compute, memory, and storage, so adjusting the number of dimensions gives you finer control over overall cost and performance. For more details about each model, refer to Embedding models and the OpenAI announcement blog post.
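For intuition, shortening works by truncating the vector to its first `dim` components and re-normalizing to unit length. The sketch below is illustrative only (the `shorten` helper is not part of any SDK; with the `dim` parameter, OpenAI performs this server-side):

```python
import math

def shorten(embedding, dim):
    """Keep the first `dim` components, then rescale to unit length."""
    v = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

full = [0.6, 0.8, 0.0, 0.0]   # toy 4-dimensional "embedding"
short = shorten(full, 2)      # truncated to 2 dimensions, still unit length
print(short)                  # -> [0.6, 0.8]
```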
Before you start
Before using a text embedding function, make sure the following prerequisites are met:
- **Choose an embedding model**

  Decide which embedding model to use, as this choice determines the embedding behavior and output format. See Choose an embedding model for details.

- **Integrate with OpenAI and get your integration ID**

  You must create a model provider integration with OpenAI and obtain your integration ID before using any of its embedding models. See Integrate with Model Providers for details.

- **Design a compatible collection schema**

  Plan your collection schema to include:

  - A text field (`VARCHAR`) for raw input text
  - A dense vector field whose data type and dimension match the selected embedding model

- **Prepare to work with raw text at insert and search time**

  With a text embedding function enabled, you insert and query raw text directly. Embeddings are generated automatically by the system.
Step 1: Create a collection with a text embedding function
Define schema fields
To use an embedding function, create a collection with a specific schema. At minimum, the schema must include these three fields:

- The primary field that uniquely identifies each entity in the collection.
- A `VARCHAR` field that stores the raw data to be embedded.
- A vector field reserved for the dense vector embeddings that the text embedding function will generate from the `VARCHAR` field.
The following example defines a schema with one `VARCHAR` field `document` for storing text data and one vector field `dense` for storing the dense embeddings generated by the text embedding function. Remember to set the vector dimension (`dim`) to match the output of your chosen embedding model.
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient, DataType, Function, FunctionType
# Initialize Milvus client
client = MilvusClient(
uri="YOUR_CLUSTER_ENDPOINT",
token="YOUR_CLUSTER_TOKEN"
)
# Create a new schema for the collection
schema = client.create_schema()
# Add primary field "id"
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)
# Add scalar field "document" for storing textual data
schema.add_field("document", DataType.VARCHAR, max_length=9000)
# Add vector field "dense" for storing embeddings.
# IMPORTANT: Set dim to match the exact output dimension of the embedding model.
# For instance, OpenAI's text-embedding-3-small model outputs 1536-dimensional vectors.
# For dense vector, data type can be FLOAT_VECTOR or INT8_VECTOR
schema.add_field("dense", DataType.FLOAT_VECTOR, dim=1536)
import io.milvus.v2.common.DataType;
import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.collection.request.AddFieldReq;
import io.milvus.v2.service.collection.request.CreateCollectionReq;
String CLUSTER_ENDPOINT = "YOUR_CLUSTER_ENDPOINT";
String TOKEN = "YOUR_CLUSTER_TOKEN";
ConnectConfig connectConfig = ConnectConfig.builder()
.uri(CLUSTER_ENDPOINT)
.token(TOKEN)
.build();
MilvusClientV2 client = new MilvusClientV2(connectConfig);
CreateCollectionReq.CollectionSchema schema = client.createSchema();
schema.addField(AddFieldReq.builder()
.fieldName("id")
.dataType(DataType.Int64)
.isPrimaryKey(true)
.autoID(false)
.build());
schema.addField(AddFieldReq.builder()
.fieldName("document")
.dataType(DataType.VarChar)
.maxLength(9000)
.build());
schema.addField(AddFieldReq.builder()
.fieldName("dense")
.dataType(DataType.FloatVector)
.dimension(1536)
.build());
// nodejs
// go
# restful
Define the text embedding function
The text embedding function automatically converts raw data stored in a `VARCHAR` field into embeddings and stores them in the explicitly defined vector field.
The example below adds a Function module (`openai_embedding`) that converts the scalar field `document` into embeddings, storing the resulting vectors in the `dense` vector field defined earlier.
- Python
- Java
- NodeJS
- Go
- cURL
# Define embedding function (example: OpenAI provider)
text_embedding_function = Function(
name="openai_embedding", # Unique identifier for this embedding function
function_type=FunctionType.TEXTEMBEDDING, # Type of embedding function
input_field_names=["document"], # Scalar field to embed
output_field_names=["dense"], # Vector field to store embeddings
params={ # Provider-specific configuration (highest priority)
"provider": "openai", # Embedding model provider
"model_name": "text-embedding-3-small", # Embedding model
"integration_id": "YOUR_INTEGRATION_ID", # Integration ID generated in the Zilliz Cloud console for the selected model provider
# "dim": "1536", # Optional: shorten the vector dimension
# "user": "user123" # Optional: identifier for API tracking
}
)
# Add the embedding function to your schema
schema.add_function(text_embedding_function)
import io.milvus.common.clientenum.FunctionType;
import io.milvus.v2.service.collection.request.CreateCollectionReq.Function;
import java.util.Collections;
Function function = Function.builder()
.functionType(FunctionType.TEXTEMBEDDING)
.name("openai_embedding")
.inputFieldNames(Collections.singletonList("document"))
.outputFieldNames(Collections.singletonList("dense"))
.param("provider", "openai")
.param("model_name", "text-embedding-3-small")
.param("integration_id", "YOUR_INTEGRATION_ID")
.build();
schema.addFunction(function);
// nodejs
// go
# restful
Configure the index
After defining the schema with necessary fields and the built-in function, set up the index for your collection. To simplify this process, use AUTOINDEX as the index_type, an option that allows Zilliz Cloud to choose and configure the most suitable index type based on the structure of your data.
- Python
- Java
- NodeJS
- Go
- cURL
# Prepare index parameters
index_params = client.prepare_index_params()
# Add AUTOINDEX to automatically select optimal indexing method
index_params.add_index(
field_name="dense",
index_type="AUTOINDEX",
metric_type="COSINE"
)
import io.milvus.v2.common.IndexParam;
import java.util.ArrayList;
import java.util.List;

List<IndexParam> indexes = new ArrayList<>();
indexes.add(IndexParam.builder()
.fieldName("dense")
.indexType(IndexParam.IndexType.AUTOINDEX)
.metricType(IndexParam.MetricType.COSINE)
.build());
// nodejs
// go
# restful
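For intuition about the COSINE metric configured above: it scores two vectors by the cosine of the angle between them, so 1.0 means identical direction and 0.0 means orthogonal (unrelated). A minimal pure-Python sketch, for illustration only; Zilliz Cloud computes this server-side during search:

```python
import math

def cosine_similarity(a, b):
    """Dot product divided by the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```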
Create the collection
Now create the collection using the schema and index parameters defined.
- Python
- Java
- NodeJS
- Go
- cURL
# Create collection named "demo"
client.create_collection(
collection_name='demo',
schema=schema,
index_params=index_params
)
import io.milvus.v2.service.collection.request.CreateCollectionReq;
CreateCollectionReq requestCreate = CreateCollectionReq.builder()
.collectionName("demo")
.collectionSchema(schema)
.indexParams(indexes)
.build();
client.createCollection(requestCreate);
// nodejs
// go
# restful
Step 2: Insert data
After setting up your collection and index, you're ready to insert your raw data. In this process, you only need to provide the raw text; the Function module defined earlier automatically generates the corresponding dense vector for each text entry.
- Python
- Java
- NodeJS
- Go
- cURL
# Insert sample documents
client.insert('demo', [
{'id': 1, 'document': 'Milvus simplifies semantic search through embeddings.'},
{'id': 2, 'document': 'Vector embeddings convert text into searchable numeric data.'},
{'id': 3, 'document': 'Semantic search helps users find relevant information quickly.'},
])
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import io.milvus.v2.service.vector.request.InsertReq;
import java.util.Arrays;
import java.util.List;

Gson gson = new Gson();
List<JsonObject> rows = Arrays.asList(
        gson.fromJson("{\"id\": 1, \"document\": \"Milvus simplifies semantic search through embeddings.\"}", JsonObject.class),
        gson.fromJson("{\"id\": 2, \"document\": \"Vector embeddings convert text into searchable numeric data.\"}", JsonObject.class),
        gson.fromJson("{\"id\": 3, \"document\": \"Semantic search helps users find relevant information quickly.\"}", JsonObject.class)
);
client.insert(InsertReq.builder()
.collectionName("demo")
.data(rows)
.build());
// nodejs
// go
# restful
Step 3: Search with text
After data insertion, perform a semantic search using raw query text. Milvus automatically converts your query into an embedding vector, retrieves relevant documents based on similarity, and returns the top-matching results.
- Python
- Java
- NodeJS
- Go
- cURL
# Perform semantic search
results = client.search(
collection_name='demo',
data=['How does Milvus handle semantic search?'], # Use text query rather than query vector
anns_field='dense', # Use the vector field that stores embeddings
limit=1,
output_fields=['document'],
)
print(results)
import io.milvus.v2.service.vector.request.SearchReq;
import io.milvus.v2.service.vector.request.data.EmbeddedText;
import io.milvus.v2.service.vector.response.SearchResp;
import java.util.Collections;
import java.util.List;
SearchResp searchResp = client.search(SearchReq.builder()
        .collectionName("demo")
        .annsField("dense")
        .data(Collections.singletonList(new EmbeddedText("How does Milvus handle semantic search?")))
        .limit(1)
        .outputFields(Collections.singletonList("document"))
        .build());
List<List<SearchResp.SearchResult>> searchResults = searchResp.getSearchResults();
for (List<SearchResp.SearchResult> results : searchResults) {
for (SearchResp.SearchResult result : results) {
System.out.println(result);
}
}
// nodejs
// go
# restful
For more information about search and query operations, refer to Basic Vector Search and Query.