Version: User Guides (Cloud)

Voyage AI

This topic describes how to configure and use Voyage AI embedding functions in Milvus.

Model choices

Milvus supports embedding models provided by Voyage AI. Below are the currently available embedding models for quick reference:

| Model Name | Dimensions | Max Tokens | Description |
|------------|------------|------------|-------------|
| voyage-3-large | 1,024 (default), 256, 512, 2,048 | 32,000 | The best general-purpose and multilingual retrieval quality. |
| voyage-3 | 1,024 | 32,000 | Optimized for general-purpose and multilingual retrieval quality. Refer to blog post for details. |
| voyage-3-lite | 512 | 32,000 | Optimized for latency and cost. Refer to blog post for details. |
| voyage-code-3 | 1,024 (default), 256, 512, 2,048 | 32,000 | Optimized for code retrieval. Refer to blog post for details. |
| voyage-finance-2 | 1,024 | 32,000 | Optimized for finance retrieval and RAG. Refer to blog post for details. |
| voyage-law-2 | 1,024 | 16,000 | Optimized for legal retrieval and RAG. Also improved performance across all domains. Refer to blog post for details. |
| voyage-code-2 | 1,536 | 16,000 | Optimized for code retrieval (17% better than alternatives); previous generation of code embeddings. Refer to blog post for details. |

For details, refer to Text embedding models.
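As a quick sanity check when setting the vector field's `dim` later on, the default dimensions from the table above can be captured in a small lookup. This is only a convenience sketch; `VOYAGE_MODEL_DIMS` and `dim_for` are illustrative names, not part of pymilvus or the Voyage AI SDK:

```python
# Default output dimensions per Voyage AI model, taken from the table above.
VOYAGE_MODEL_DIMS = {
    "voyage-3-large": 1024,   # also supports 256, 512, 2048
    "voyage-3": 1024,
    "voyage-3-lite": 512,
    "voyage-code-3": 1024,    # also supports 256, 512, 2048
    "voyage-finance-2": 1024,
    "voyage-law-2": 1024,
    "voyage-code-2": 1536,
}

def dim_for(model_name: str) -> int:
    """Return the default embedding dimension for a Voyage AI model name."""
    return VOYAGE_MODEL_DIMS[model_name]

print(dim_for("voyage-3-large"))  # 1024
```

Keeping this lookup next to your schema code helps prevent a mismatch between `model_name` and the `dim` of the vector field.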

Before you start

Before using a text embedding function, make sure the following prerequisites are met:

  • Choose an embedding model

    Decide which embedding model to use, as this choice determines the embedding behavior and output format. See Choose an embedding model for details.

  • Integrate with Voyage AI and get your integration ID

    You must create a model provider integration with Voyage AI and obtain your integration ID before using any of its embedding models. See Integrate with Model Providers for details.

  • Design a compatible collection schema

    Plan your collection schema to include:

    • A text field (VARCHAR) for raw input text

    • A dense vector field whose data type and dimension match the selected embedding model

  • Prepare to work with raw text at insert and search time

    With a text embedding function enabled, you insert and query raw text directly. Embeddings are generated automatically by the system.

Step 1: Create a collection with a text embedding function

Define schema fields

To use an embedding function, create a collection with a specific schema. This schema must include at least the following three fields:

  • The primary field that uniquely identifies each entity in a collection.

  • A VARCHAR field that stores raw data to be embedded.

  • A vector field reserved to store dense vector embeddings that the text embedding function will generate for the VARCHAR field.

The following example defines a schema with one VARCHAR field "document" for storing text data and one vector field "dense" for storing dense embeddings to be generated by the text embedding function. Remember to set the vector dimension (dim) to match the output of your chosen embedding model.

```python
from pymilvus import MilvusClient, DataType, Function, FunctionType

# Initialize the Milvus client
client = MilvusClient(
    uri="YOUR_CLUSTER_ENDPOINT",
    token="YOUR_CLUSTER_TOKEN"
)

# Create a new schema for the collection
schema = client.create_schema()

# Add primary field "id"
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)

# Add scalar field "document" for storing textual data
schema.add_field("document", DataType.VARCHAR, max_length=9000)

# Add vector field "dense" for storing embeddings.
# IMPORTANT: Set dim to match the exact output dimension of the embedding model.
schema.add_field("dense", DataType.FLOAT_VECTOR, dim=1024)
```

Define the text embedding function

The text embedding function automatically converts raw data stored in a VARCHAR field into embeddings and stores them into the explicitly defined vector field.

The example below adds a Function module named voya that converts the scalar field "document" into embeddings, storing the resulting vectors in the "dense" vector field defined earlier.

Once you have defined your embedding function, add it to your collection schema. This instructs Milvus to use the specified embedding function to process and store embeddings from your text data.

```python
# Define the embedding function for the chosen model provider
text_embedding_function = Function(
    name="voya",                                # Unique identifier for this embedding function
    function_type=FunctionType.TEXTEMBEDDING,   # Indicates a text embedding function
    input_field_names=["document"],             # Scalar field(s) containing text data to embed
    output_field_names=["dense"],               # Vector field(s) for storing embeddings
    params={                                    # Provider-specific embedding parameters (function-level)
        "provider": "voyageai",                 # Must be set to "voyageai"
        "model_name": "voyage-3-large",         # Specifies the embedding model to use
        "integration_id": "YOUR_INTEGRATION_ID",  # Integration ID generated in the Zilliz Cloud console for the selected model provider
        # "url": "https://api.voyageai.com/v1/embeddings",  # Defaults to the official endpoint if omitted
        # "dim": "1024",          # Output dimension of the vector embeddings after truncation
        # "truncation": "true",   # Whether to truncate input texts to fit within the context length. Defaults to true.
    }
)

# Add the configured embedding function to your existing collection schema
schema.add_function(text_embedding_function)
```

Configure the index

After defining the schema with necessary fields and the built-in function, set up the index for your collection. To simplify this process, use AUTOINDEX as the index_type, an option that allows Zilliz Cloud to choose and configure the most suitable index type based on the structure of your data.

```python
# Prepare index parameters
index_params = client.prepare_index_params()

# Add AUTOINDEX to automatically select an optimal indexing method
index_params.add_index(
    field_name="dense",
    index_type="AUTOINDEX",
    metric_type="COSINE"
)
```

Create the collection

Now create the collection using the schema and index parameters defined above.

```python
# Create a collection named "demo"
client.create_collection(
    collection_name='demo',
    schema=schema,
    index_params=index_params
)
```

Step 2: Insert data

After setting up your collection and index, you're ready to insert your raw data. In this process, you only need to provide the raw text. The Function module defined earlier automatically generates the corresponding dense vector embedding for each text entry.

```python
# Insert sample documents
client.insert('demo', [
    {'id': 1, 'document': 'Milvus simplifies semantic search through embeddings.'},
    {'id': 2, 'document': 'Vector embeddings convert text into searchable numeric data.'},
    {'id': 3, 'document': 'Semantic search helps users find relevant information quickly.'},
])
```
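In pymilvus, `MilvusClient.insert()` returns a summary of the operation, which you can use as a quick sanity check before searching. The snippet below stands in for that return value with hard-coded data mirroring the three-row insert above (the actual count and ids come from the server); the shape shown is an assumption to illustrate the check, not a guaranteed contract:

```python
# Illustrative stand-in for the summary dict returned by client.insert()
res = {"insert_count": 3, "ids": [1, 2, 3]}

# Confirm every row made it in before moving on to search
assert res["insert_count"] == len(res["ids"]), "not all rows were inserted"
print(f"Inserted {res['insert_count']} entities with ids {res['ids']}")
```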

Step 3: Search with text

After data insertion, perform a semantic search using raw query text. Milvus automatically converts your query into an embedding vector, retrieves relevant documents based on similarity, and returns the top-matching results.

```python
# Perform a semantic search using raw query text
results = client.search(
    collection_name='demo',
    data=['How does Milvus handle semantic search?'],  # Text query rather than a query vector
    anns_field='dense',        # Vector field that stores the generated embeddings
    limit=1,
    output_fields=['document'],
)

print(results)
```
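To work with the response, it helps to know its general shape: the client returns one group of hits per query string, and each hit carries the entity fields requested via output_fields. The structure below is an illustrative stand-in (the distance value is made up, and the exact shape may vary by pymilvus version), showing how the matched text could be pulled out:

```python
# Illustrative stand-in for a search response:
# one list of hits per query string, each hit a dict
results = [[
    {
        "id": 1,
        "distance": 0.72,  # made-up similarity score for illustration
        "entity": {"document": "Milvus simplifies semantic search through embeddings."},
    },
]]

for hits in results:       # one group of hits per query string
    for hit in hits:       # each hit is a single matched entity
        print(hit["id"], hit["entity"]["document"])
```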