Standard Analyzer (Public Preview)
The standard analyzer is the default analyzer in Zilliz Cloud and is automatically applied to text fields when no analyzer is specified. It uses grammar-based tokenization, making it effective for most languages.
Definition
The standard analyzer consists of:
- Tokenizer: Uses the standard tokenizer to split text into discrete word units based on grammar rules. For more information, refer to Standard.
- Filter: Uses the lowercase filter to convert all tokens to lowercase, enabling case-insensitive searches. For more information, refer to Lowercase.
The functionality of the standard analyzer is equivalent to the following custom analyzer configuration:
- Python
- Java
analyzer_params = {
"tokenizer": "standard",
"filter": ["lowercase"]
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter", Collections.singletonList("lowercase"));
Configuration
To apply the standard analyzer to a field, set type to standard in analyzer_params, and include optional parameters as needed.
- Python
- Java
analyzer_params = {
"type": "standard", # Specifies the standard analyzer type
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("type", "standard");
The standard analyzer accepts the following optional parameters:
Parameter | Description |
---|---|
stop_words | An array containing a list of stop words, which are removed during tokenization. Defaults to _english_, the built-in list of common English stop words. |
Example configuration of custom stop words:
- Python
- Java
analyzer_params = {
"type": "standard", # Specifies the standard analyzer type
    "stop_words": ["of"] # Optional: list of words to exclude from tokenization
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("type", "standard");
analyzerParams.put("stop_words", Collections.singletonList("of"));
After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Zilliz Cloud to process the text in that field with the specified analyzer for efficient tokenization and filtering. For more information, refer to Example use.
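As a sketch of that step, a schema definition with the pymilvus client might look like the following. The field names and max_length value are illustrative placeholders, not part of this document:

```python
from pymilvus import MilvusClient, DataType

# Use the standard analyzer with its defaults
analyzer_params = {"type": "standard"}

# Illustrative schema: field names and max_length are placeholders
schema = MilvusClient.create_schema(auto_id=True)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(
    field_name="text",
    datatype=DataType.VARCHAR,
    max_length=1000,
    enable_analyzer=True,             # enable text analysis on this field
    analyzer_params=analyzer_params,  # apply the standard analyzer
)
```

The schema can then be passed to create_collection as usual; the analyzer settings take effect at collection creation time.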
Example output
Here’s how the standard analyzer processes text.
Original text:
"The Milvus vector database is built for scale!"
Expected output:
["the", "milvus", "vector", "database", "is", "built", "for", "scale"]
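To make the tokenize-lowercase-filter pipeline concrete, here is a rough plain-Python approximation of the behavior shown above. It is not Milvus's actual implementation (the real standard tokenizer follows Unicode grammar rules); splitting on word characters is an assumption used only to illustrate the pipeline:

```python
import re

def standard_analyze(text, stop_words=None):
    # Tokenizer step (approximation): split text into word units
    tokens = re.findall(r"\w+", text)
    # Lowercase filter: makes downstream matching case-insensitive
    tokens = [t.lower() for t in tokens]
    # Stop-word filter: drop any configured stop words (default: none)
    stop = set(stop_words or [])
    return [t for t in tokens if t not in stop]

print(standard_analyze("The Milvus vector database is built for scale!"))
# ['the', 'milvus', 'vector', 'database', 'is', 'built', 'for', 'scale']
```

Passing stop_words=["of"], as in the configuration example earlier, would additionally remove every "of" token from the output.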