
Jieba

The jieba tokenizer processes Chinese text by breaking it down into its component words.

📘 Notes

The jieba tokenizer preserves punctuation marks as separate tokens in the output. For example, "你好!世界。" becomes ["你好", "!", "世界", "。"]. To remove these standalone punctuation tokens, use the removepunct filter.
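
For instance, a minimal sketch of an analyzer that pairs the jieba tokenizer with the removepunct filter (the configuration shape follows the examples under Configuration below):

# Sketch: jieba tokenizer plus the removepunct filter
analyzer_params = {
    "tokenizer": "jieba",
    "filter": ["removepunct"]  # drop standalone punctuation tokens such as "!" and "。"
}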

Configuration

Milvus supports two configuration approaches for the jieba tokenizer: a simple configuration and a custom configuration.

Simple configuration

With the simple configuration, you only need to set the tokenizer to "jieba". For example:

# Simple configuration: only specifying the tokenizer name
analyzer_params = {
    "tokenizer": "jieba",  # Use the default settings: dict=["_default_"], mode="search", hmm=True
}

This simple configuration is equivalent to the following custom configuration:

# Custom configuration equivalent to the simple configuration above
analyzer_params = {
    "tokenizer": {
        "type": "jieba",        # Tokenizer type, fixed as "jieba"
        "dict": ["_default_"],  # Use the default dictionary
        "mode": "search",       # Use search mode for improved recall (see mode details below)
        "hmm": True             # Enable HMM for probabilistic segmentation
    }
}

For details on parameters, refer to Custom configuration.

Custom configuration

For more control, you can provide a custom configuration that allows you to specify a custom dictionary, select the segmentation mode, and enable or disable the Hidden Markov Model (HMM). For example:

# Custom configuration with user-defined settings
analyzer_params = {
    "tokenizer": {
        "type": "jieba",              # Fixed tokenizer type
        "dict": ["customDictionary"], # Custom dictionary list; replace with your own terms
        "mode": "exact",              # Use exact mode (non-overlapping tokens)
        "hmm": False                  # Disable HMM; unmatched text is split into individual characters
    }
}

| Parameter | Description | Default Value |
|-----------|-------------|---------------|
| type | The type of tokenizer. This is fixed to "jieba". | "jieba" |
| dict | A list of dictionaries that the analyzer will load as its vocabulary source. Built-in options: "_default_" loads the engine's built-in Simplified-Chinese dictionary (refer to dict.txt); "_extend_default_" loads everything in "_default_" plus an additional Traditional-Chinese supplement (refer to dict.txt.big). You can also mix the built-in dictionary with any number of custom terms, for example ["_default_", "结巴分词器"]. | ["_default_"] |
| mode | The segmentation mode. "exact" segments the sentence in the most precise manner, making it ideal for text analysis; "search" builds on exact mode by further breaking down long words to improve recall, making it suitable for search-engine tokenization. For more information, refer to the Jieba GitHub Project. | "search" |
| hmm | A boolean flag indicating whether to enable the Hidden Markov Model (HMM) for probabilistic segmentation of words not found in the dictionary. | true |
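
To see how the two modes differ, you can compare their output on the same text with run_analyzer. The sketch below assumes placeholder connection values and lets the omitted parameters fall back to the defaults listed above; the exact token lists depend on the dictionary in use:

from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT", token="YOUR_CLUSTER_TOKEN")

sample = "中华人民共和国国歌"

for mode in ("exact", "search"):
    params = {"tokenizer": {"type": "jieba", "mode": mode}}
    print(mode, client.run_analyzer(sample, params))

# exact mode yields non-overlapping tokens such as 中华人民共和国 / 国歌, while
# search mode typically also emits shorter sub-words (e.g. 中华, 人民, 共和国)
# to improve recall.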

To load a large custom vocabulary from an external file instead of inlining it via dict, see Custom configuration with a dictionary file below.

After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
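
For instance, a minimal sketch of such a schema (the endpoint and token are placeholders; the field names, dimension, and max_length are illustrative):

from pymilvus import MilvusClient, DataType

client = MilvusClient(
    uri="YOUR_CLUSTER_ENDPOINT",
    token="YOUR_CLUSTER_TOKEN"
)

schema = client.create_schema(auto_id=True)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(field_name="embedding", datatype=DataType.FLOAT_VECTOR, dim=768)
schema.add_field(
    field_name="text",
    datatype=DataType.VARCHAR,
    max_length=65535,
    enable_analyzer=True,             # analyze text stored in this field
    analyzer_params=analyzer_params,  # the jieba configuration defined above
)

client.create_collection(collection_name="docs_zh", schema=schema)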

Custom configuration with a dictionary file
Private Preview

For large custom vocabularies — domain glossaries, product terminology, or proper-noun lists — store the words in a file and register the file as a remote file resource, then reference it from the tokenizer via the extra_dict_file parameter. The analyzer loads these words into its vocabulary on top of the built-in dictionary.

The file is plain UTF‑8 text with one term per line. For example:

结巴分词器
向量数据库
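
As a sketch of the upload step, assuming an S3-compatible object store (the bucket name, credentials, and rootPath prefix are placeholders; your cluster's storage settings may differ):

import boto3

# Write the dictionary file: plain UTF-8, one term per line.
with open("zh_terms.txt", "w", encoding="utf-8") as f:
    f.write("结巴分词器\n向量数据库\n")

# Upload it under the key that will be registered below (placeholder bucket).
s3 = boto3.client("s3")
s3.upload_file("zh_terms.txt", "YOUR_BUCKET", "file/zh_terms.txt")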

After uploading the file to the object store that your Milvus cluster is configured to use, register it:

from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

# Register the uploaded file under a name you'll reference from analyzer configs.
client.add_file_resource(
    name="zh_terms",
    path="file/zh_terms.txt",  # full S3 object key, including the rootPath prefix
)

Reference the registered resource in the tokenizer via extra_dict_file:

analyzer_params = {
    "tokenizer": {
        "type": "jieba",
        "dict": ["_default_"],  # keep the built-in dictionary
        "mode": "exact",
        "hmm": False,
        "extra_dict_file": {
            "type": "remote",
            "resource_name": "zh_terms",
            "file_name": "zh_terms.txt",
        },
    },
}

client.run_analyzer(["milvus结巴分词器中文测试"], analyzer_params)
# → [['milvus', '结巴', '分词器', '中文', '测试']]
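
Note that 结巴分词器 is still split into 结巴 and 分词器 even though it appears in zh_terms.txt; the note after the field table below explains why.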

The extra_dict_file parameter accepts an object with the following fields:

| Field | Description |
|-------|-------------|
| type | The resource type. Use "remote" for a file registered via add_file_resource. For the "local" variant used in self-hosted deployments, refer to Manage File Resources. |
| resource_name | The name used when the file was registered with add_file_resource. |
| file_name | The filename portion of the registered resource's object-store path (for example, "zh_terms.txt" if the resource was registered with path="file/zh_terms.txt"). |

Words added via extra_dict_file are merged with the built-in dictionary, so jieba's segmentation algorithm sees them alongside existing entries. Whether any specific term surfaces as a standalone token depends on jieba's probability-weighted DAG selection — a long custom term such as 向量数据库 may still be split into 向量 + 数据库 if those shorter entries have higher frequencies in the built-in dictionary.
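
A quick way to check whether a particular term survives as a single token is to analyze text containing it and inspect the output, as in this sketch (reusing the client and analyzer_params from above):

# Sketch: check whether a custom term is emitted as one token.
term = "向量数据库"
result = client.run_analyzer(["这是一个" + term + "测试"], analyzer_params)
print(result)  # if 向量数据库 is absent as a single token, jieba chose shorter entries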

Examples

Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.

Analyzer configuration

analyzer_params = {
    "tokenizer": {
        "type": "jieba",
        "dict": ["结巴分词器"],
        "mode": "exact",
        "hmm": False
    }
}

Verification using run_analyzer

from pymilvus import MilvusClient

client = MilvusClient(
    uri="YOUR_CLUSTER_ENDPOINT",
    token="YOUR_CLUSTER_TOKEN"
)

# Sample text to analyze
sample_text = "milvus结巴分词器中文测试"

# Run the jieba analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print("Jieba analyzer output:", result)

Expected output

['milvus', '结巴分词器', '中', '文', '测', '试']
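
The trailing characters are split one by one because the custom dictionary contains only 结巴分词器 and HMM is disabled. With "hmm": True, jieba would instead segment the out-of-vocabulary span probabilistically, typically producing whole words such as 中文 and 测试.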