Cohere

This topic describes how to configure and use Cohere embedding functions in Milvus.

Model choices

Milvus supports embedding models provided by Cohere. Below are the currently available embedding models for quick reference:

Model Name | Dimensions | Max Tokens | Description
embed-english-v3.0 | 1,024 | 512 | A model that allows for text to be classified or turned into embeddings. English only.
embed-multilingual-v3.0 | 1,024 | 512 | Provides multilingual classification and embedding support. See Cohere's documentation for the list of supported languages.
embed-english-light-v3.0 | 384 | 512 | A smaller, faster version of embed-english-v3.0. Almost as capable, but a lot faster. English only.
embed-multilingual-light-v3.0 | 384 | 512 | A smaller, faster version of embed-multilingual-v3.0. Almost as capable, but a lot faster. Supports multiple languages.
embed-english-v2.0 | 4,096 | 512 | An older embedding model that allows for text to be classified or turned into embeddings. English only.
embed-english-light-v2.0 | 1,024 | 512 | A smaller, faster version of embed-english-v2.0. Almost as capable, but a lot faster. English only.
embed-multilingual-v2.0 | 768 | 256 | Provides multilingual classification and embedding support. See Cohere's documentation for the list of supported languages.

For details, refer to Cohere's Embed Models.
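When you define your collection schema in Step 1, the dense vector field's dim must equal the dimension listed above for the model you select. The snippet below is an illustrative sketch; the MODEL_DIMENSIONS mapping is defined here only for this guide and is not part of pymilvus or the Cohere API. It simply keeps the model name and its dimension in one place so the two cannot drift apart.

# Illustrative helper: Cohere embedding models and their output dimensions,
# taken from the table above. Not part of any SDK; defined only for this guide.
MODEL_DIMENSIONS = {
    "embed-english-v3.0": 1024,
    "embed-multilingual-v3.0": 1024,
    "embed-english-light-v3.0": 384,
    "embed-multilingual-light-v3.0": 384,
    "embed-english-v2.0": 4096,
    "embed-english-light-v2.0": 1024,
    "embed-multilingual-v2.0": 768,
}

model_name = "embed-english-v3.0"
dim = MODEL_DIMENSIONS[model_name]  # Pass this value as dim when adding the vector field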

Before you start

Before using a text embedding function, make sure the following prerequisites are met:

  • Choose an embedding model

    Decide which embedding model to use, as this choice determines the embedding behavior and output format. See Choose an embedding model for details.

  • Integrate with Cohere and get your integration ID

    You must create a model provider integration with Cohere and obtain your integration ID before using any of its embedding models. See Integrate with Model Providers for details.

  • Design a compatible collection schema

    Plan your collection schema to include:

    • A text field (VARCHAR) for raw input text

    • A dense vector field whose data type and dimension match the selected embedding model

  • Prepare to work with raw text at insert and search time

    With a text embedding function enabled, you insert and query raw text directly. Embeddings are generated automatically by the system.

Step 1: Create a collection with a text embedding function

Define schema fields

To use an embedding function, create a collection with a specific schema. This schema must include at least three fields:

  • A primary key field that uniquely identifies each entity in the collection.

  • A VARCHAR field that stores the raw text to be embedded.

  • A vector field reserved for the dense vector embeddings that the text embedding function generates from the VARCHAR field.

The following example defines a schema with one scalar field "document" for storing textual data and one vector field "dense" for storing the embeddings generated by the Function module. Remember to set the vector dimension (dim) to match the output dimension of your chosen embedding model.

from pymilvus import MilvusClient, DataType, Function, FunctionType

# Initialize Milvus client
client = MilvusClient(
    uri="YOUR_CLUSTER_ENDPOINT",
    token="YOUR_CLUSTER_TOKEN"
)

# Create a new schema for the collection
schema = client.create_schema()

# Add primary field "id"
schema.add_field("id", DataType.INT64, is_primary=True, auto_id=False)

# Add scalar field "document" for storing textual data
schema.add_field("document", DataType.VARCHAR, max_length=9000)

# Add vector field "dense" for storing embeddings.
# IMPORTANT: Set dim to match the exact output dimension of the embedding model.
schema.add_field("dense", DataType.FLOAT_VECTOR, dim=1024)

Define the text embedding function

The Function module in Milvus automatically converts raw data stored in a scalar field into embeddings and stores them into the explicitly defined vector field.

The example below adds a Function module (cohere_func) that converts the scalar field "document" into embeddings, storing the resulting vectors in the "dense" vector field defined earlier.

Once you have defined your embedding function, add it to your collection schema. This instructs Milvus to use the specified embedding function to process and store embeddings from your text data.

# Define the embedding function for the Cohere model provider
text_embedding_function = Function(
    name="cohere_func",                           # Unique identifier for this embedding function
    function_type=FunctionType.TEXTEMBEDDING,     # Indicates a text embedding function
    input_field_names=["document"],               # Scalar field(s) containing text data to embed
    output_field_names=["dense"],                 # Vector field(s) for storing embeddings
    params={                                      # Provider-specific embedding parameters (function-level)
        "provider": "cohere",                     # Must be set to "cohere"
        "model_name": "embed-english-v3.0",       # Specifies the embedding model to use
        "integration_id": "YOUR_INTEGRATION_ID",  # Integration ID generated in the Zilliz Cloud console for the selected model provider
        # "url": "https://api.cohere.com/v2/embed",  # Defaults to the official endpoint if omitted
        # "truncate": "NONE",                     # Specifies how the API handles inputs longer than the maximum token length
    }
)

# Add the configured embedding function to your existing collection schema
schema.add_function(text_embedding_function)

Configure the index

After defining the schema with necessary fields and the built-in function, set up the index for your collection. To simplify this process, use AUTOINDEX as the index_type, an option that allows Zilliz Cloud to choose and configure the most suitable index type based on the structure of your data.

# Prepare index parameters
index_params = client.prepare_index_params()

# Add AUTOINDEX to automatically select optimal indexing method
index_params.add_index(
    field_name="dense",
    index_type="AUTOINDEX",
    metric_type="COSINE"
)

Create the collection

Now create the collection using the schema and index parameters defined.

# Create collection named "demo"
client.create_collection(
    collection_name='demo',
    schema=schema,
    index_params=index_params
)
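
Optionally, verify that the collection was created with the expected fields and the attached embedding function. The following is a minimal check using the client defined above; the exact contents of the returned description can vary by pymilvus version.

# Optional: inspect the newly created collection
desc = client.describe_collection(collection_name='demo')
print(desc)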

Step 2: Insert data

After setting up your collection and index, you're ready to insert your raw data. In this process, you only need to provide the raw text. The Function module defined earlier automatically generates the corresponding dense vector embedding for each text entry.

# Insert sample documents
client.insert('demo', [
    {'id': 1, 'document': 'Milvus simplifies semantic search through embeddings.'},
    {'id': 2, 'document': 'Vector embeddings convert text into searchable numeric data.'},
    {'id': 3, 'document': 'Semantic search helps users find relevant information quickly.'},
])
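
To double-check that the rows were stored, you can run a simple scalar query against the collection. This is an optional sketch; it retrieves only the raw text, since the generated vectors are managed for you, and depending on your consistency settings the rows may take a moment to become visible.

# Optional: confirm the documents were inserted
rows = client.query(
    collection_name='demo',
    filter='id > 0',
    output_fields=['document'],
)
print(rows)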

Step 3: Search with text

After data insertion, perform a semantic search using raw query text. Milvus automatically converts your query into an embedding vector, retrieves relevant documents based on similarity, and returns the top-matching results.

# Perform semantic search
results = client.search(
    collection_name='demo',
    data=['How does Milvus handle semantic search?'],  # Use text query rather than query vector
    anns_field='dense',  # Use the vector field that stores embeddings
    limit=1,
    output_fields=['document'],
)

print(results)
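
The search result contains one entry per query text, each holding the top-matching hits. The following sketch assumes the MilvusClient result layout, where each hit exposes an id, a distance, and the requested output fields under entity.

# Print the top hit for the first (and only) query text
for hit in results[0]:
    print(hit['id'], hit['distance'], hit['entity']['document'])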