ICU
Public Preview

The icu tokenizer is built on the International Components for Unicode (ICU) open‑source project, which provides key tools for software internationalization. By using ICU's word‑break algorithm, the tokenizer can accurately split text into words across the majority of the world's languages.

📘Notes

The icu tokenizer preserves punctuation marks and spaces as separate tokens in the output. For example, "Привет! Как дела?" becomes ["Привет", "!", " ", "Как", " ", "дела", "?"]. To remove these standalone punctuation tokens, use the removepunct filter.

Configuration

To configure an analyzer using the icu tokenizer, set tokenizer to icu in analyzer_params.

analyzer_params = {
    "tokenizer": "icu",
}

The icu tokenizer can work in conjunction with one or more filters. For example, the following code defines an analyzer that uses the icu tokenizer and the removepunct filter:

analyzer_params = {
    "tokenizer": "icu",
    "filter": ["removepunct"]
}

After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
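
A minimal sketch of what this might look like, assuming a connected MilvusClient; the field names (doc_id, text) and the max_length value are illustrative choices, not requirements:

from pymilvus import MilvusClient, DataType

client = MilvusClient(
    uri="YOUR_CLUSTER_ENDPOINT",
    token="YOUR_CLUSTER_TOKEN"
)

analyzer_params = {
    "tokenizer": "icu",
    "filter": ["removepunct"]
}

# Define a schema with a VARCHAR field that uses the icu analyzer
schema = client.create_schema()
schema.add_field(field_name="doc_id", datatype=DataType.INT64, is_primary=True)
schema.add_field(
    field_name="text",               # illustrative field name
    datatype=DataType.VARCHAR,
    max_length=65535,                # illustrative length limit
    enable_analyzer=True,            # analyze this field's text at insert time
    analyzer_params=analyzer_params
)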

Examples

Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.

Analyzer configuration

analyzer_params = {
    "tokenizer": "icu",
}

Verification using run_analyzer

from pymilvus import MilvusClient

client = MilvusClient(
    uri="YOUR_CLUSTER_ENDPOINT",
    token="YOUR_CLUSTER_TOKEN"
)

# Sample text to analyze
sample_text = "Привет! Как дела?"

# Run the icu analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print("ICU analyzer output:", result)

Expected output

ICU analyzer output: ['Привет', '!', ' ', 'Как', ' ', 'дела', '?']
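
To see the effect of the removepunct filter described in the notes above, you can rerun the same text with the filter added; a short sketch reusing the client and sample_text from the snippet above:

analyzer_params = {
    "tokenizer": "icu",
    "filter": ["removepunct"]
}

result = client.run_analyzer(sample_text, analyzer_params)
print("ICU analyzer with removepunct:", result)
# Standalone punctuation and whitespace tokens should now be gone,
# leaving something like: ['Привет', 'Как', 'дела']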