Standard Tokenizer (Public Preview)
The `standard` tokenizer in Zilliz Cloud splits text on spaces and punctuation marks, making it suitable for most languages.
Configuration
To configure an analyzer using the `standard` tokenizer, set `tokenizer` to `standard` in `analyzer_params`.
- Python

```python
analyzer_params = {
    "tokenizer": "standard",
}
```

- Java

```java
import java.util.HashMap;
import java.util.Map;

Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
```
The `standard` tokenizer can work in conjunction with one or more filters. For example, the following code defines an analyzer that uses the `standard` tokenizer and the `lowercase` filter:
- Python

```python
analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"]
}
```

- Java

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter", Collections.singletonList("lowercase"));
```
After defining `analyzer_params`, you can apply them to a `VARCHAR` field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
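As a sketch of that step, assuming the `pymilvus` client SDK (field names and `max_length` below are illustrative choices, not values from this page), the analyzer parameters can be attached to a `VARCHAR` field like this:

```python
from pymilvus import MilvusClient, DataType

analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"],
}

# Hypothetical schema for illustration: one primary key and one text field.
schema = MilvusClient.create_schema(auto_id=True)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(
    field_name="text",
    datatype=DataType.VARCHAR,
    max_length=1024,
    enable_analyzer=True,            # enable text analysis on this field
    analyzer_params=analyzer_params, # use the standard tokenizer + lowercase filter
)
```

The schema can then be passed to `create_collection` as usual; only the `enable_analyzer` and `analyzer_params` arguments are specific to analyzer configuration.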
Example output
Here’s an example of how the `standard` tokenizer processes text:
Original text:
"The Milvus vector database is built for scale!"
Expected output:
["The", "Milvus", "vector", "database", "is", "built", "for", "scale"]