Length (Public Preview)
The `length` filter removes tokens that do not meet specified length requirements, allowing you to control the length of tokens retained during text processing.
Configuration
The `length` filter is a custom filter in Zilliz Cloud, specified by setting `"type": "length"` in the filter configuration. You can configure it as a dictionary within `analyzer_params` to define length limits.
Python:

```python
analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "length",  # Specifies the filter type as length
        "max": 10,  # Sets the maximum token length to 10 characters
    }],
}
```
Java:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(new HashMap<String, Object>() {{
            put("type", "length");
            put("max", 10);
        }}));
```
The `length` filter accepts the following configurable parameters.

| Parameter | Description |
| --- | --- |
| `max` | Sets the maximum token length. Tokens longer than this length are removed. |
The `length` filter operates on the terms generated by the tokenizer, so it must be used in combination with a tokenizer. For a list of tokenizers available in Zilliz Cloud, refer to Tokenizer Reference.
After defining `analyzer_params`, you can apply them to a `VARCHAR` field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
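As a minimal sketch of that step, the snippet below attaches the analyzer configuration to a `VARCHAR` field with the pymilvus client. It assumes Milvus 2.5-style schema options (`enable_analyzer` and `analyzer_params` on `add_field`); the endpoint, token, and field names are placeholders to replace with your own.

```python
from pymilvus import MilvusClient, DataType

# Placeholder connection details for a Zilliz Cloud cluster
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT", token="YOUR_API_KEY")

analyzer_params = {
    "tokenizer": "standard",
    "filter": [{"type": "length", "max": 10}],
}

schema = client.create_schema(auto_id=True, enable_dynamic_field=False)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
# enable_analyzer turns on text analysis for this VARCHAR field,
# and analyzer_params applies the length filter defined above
schema.add_field(
    field_name="text",
    datatype=DataType.VARCHAR,
    max_length=1000,
    enable_analyzer=True,
    analyzer_params=analyzer_params,
)
```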
Example output
Here’s an example of how the `length` filter processes text:
Example text:
"The length filter allows control over token length requirements for text processing."
Expected output (with `max: 10`):

```
["The", "length", "filter", "allows", "control", "over", "token", "length", "for", "text", "processing"]
```

The standard tokenizer preserves case, so `"The"` (3 characters) is retained, while `"requirements"` (12 characters) exceeds the maximum length and is removed. `"processing"` is exactly 10 characters, so it is kept.