
Remove Punct
Public Preview

The removepunct filter removes standalone punctuation tokens from the token stream. Use it when you want cleaner text processing that focuses on meaningful content words rather than punctuation marks.

📘 Notes

This filter is most effective with the jieba, lindera, and icu tokenizers, which preserve punctuation as separate tokens (for example, "Hello!" → ["Hello", "!"]). Other tokenizers, such as standard and whitespace, discard punctuation during tokenization, so removepunct has no effect on their output.
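
The difference is easy to see by running the same text through two tokenizers. The following is a minimal sketch, assuming a reachable cluster at YOUR_CLUSTER_ENDPOINT; the token outputs in the comments are illustrative.

from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

sample_text = "Hello! How are you?"

# icu keeps punctuation as separate tokens, so removepunct can strip them
icu_params = {"tokenizer": "icu", "filter": ["removepunct"]}
print(client.run_analyzer(sample_text, icu_params))

# standard discards punctuation during tokenization itself,
# so adding removepunct here changes nothing
standard_params = {"tokenizer": "standard", "filter": ["removepunct"]}
print(client.run_analyzer(sample_text, standard_params))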

Configuration

The removepunct filter is built into Zilliz Cloud. To use it, simply specify its name in the filter section within analyzer_params.

analyzer_params = {
    "tokenizer": "jieba",
    "filter": ["removepunct"]
}

The removepunct filter operates on the terms generated by the tokenizer, so it must be used in combination with a tokenizer.

After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
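
As a minimal sketch of that step (the collection name and field names below are placeholders, not part of the filter's API):

from pymilvus import MilvusClient, DataType

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

analyzer_params = {
    "tokenizer": "jieba",
    "filter": ["removepunct"]
}

schema = client.create_schema()
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(
    field_name="text",
    datatype=DataType.VARCHAR,
    max_length=65535,
    enable_analyzer=True,            # run the analyzer on this field
    analyzer_params=analyzer_params,
)

client.create_collection(collection_name="my_collection", schema=schema)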

Examples

Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.

Analyzer configuration

analyzer_params = {
    "tokenizer": "icu",
    "filter": ["removepunct"]
}

Verification using run_analyzer

from pymilvus import MilvusClient

client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT")

# Sample text to analyze
sample_text = "Привет! Как дела?"

# Run the analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print("Analyzer output:", result)

Expected output

['Привет', 'Как', 'дела']