Jieba
The jieba tokenizer processes Chinese text by breaking it down into its component words.
Configuration
To configure an analyzer using the jieba tokenizer, set tokenizer to jieba in analyzer_params.
- Python
- Java
- NodeJS
- Go
- cURL
analyzer_params = {
"tokenizer": "jieba",
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "jieba");
const analyzer_params = {
"tokenizer": "jieba",
};
analyzerParams := map[string]any{"tokenizer": "jieba"}
# restful
analyzerParams='{
"tokenizer": "jieba"
}'
After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
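For example, a minimal Python sketch of attaching the configuration to a VARCHAR field (the endpoint, token, and field names below are placeholders, not values from this page):
from pymilvus import MilvusClient, DataType

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT", token="YOUR_TOKEN")

# Build a schema whose text field uses the jieba analyzer
schema = client.create_schema(auto_id=True)
schema.add_field("id", DataType.INT64, is_primary=True)
schema.add_field(
    "text",
    DataType.VARCHAR,
    max_length=65535,
    enable_analyzer=True,            # enable text analysis on this field
    analyzer_params=analyzer_params, # the jieba configuration defined above
)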
Examples
Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.
Analyzer configuration:
- Python
- Java
- NodeJS
- Go
- cURL
analyzer_params = {
"tokenizer": "jieba",
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "jieba");
const analyzer_params = {
"tokenizer": "jieba",
};
analyzerParams := map[string]any{"tokenizer": "jieba"}
analyzerParams='{
"tokenizer": "jieba"
}'
Verification using run_analyzer:
- Python
- Java
- NodeJS
- Go
- cURL
from pymilvus import MilvusClient

# Connect to your cluster (endpoint and token are placeholders)
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT", token="YOUR_TOKEN")

# Sample text to analyze
sample_text = "Milvus 是一个高性能、可扩展的向量数据库!"

# Run the analyzer with the jieba configuration defined above
result = client.run_analyzer(sample_text, analyzer_params)
print(result)
// java
// javascript
// go
# restful
Expected output:
["Milvus", " ", "是", "一个", "高性", "性能", "高性能", "、", "可", "扩展", "的", "向量", "数据", "据库", "数据库", "!"]