
Whitespace

The whitespace tokenizer divides text into terms whenever there is a space between words.

Configuration

To configure an analyzer using the whitespace tokenizer, set tokenizer to whitespace in analyzer_params.

analyzer_params = {
    "tokenizer": "whitespace",
}

The whitespace tokenizer can work in conjunction with one or more filters. For example, the following code defines an analyzer that uses the whitespace tokenizer and the lowercase filter:

analyzer_params = {
    "tokenizer": "whitespace",
    "filter": ["lowercase"]
}

After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Zilliz Cloud to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
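The sketch below shows one way to attach these analyzer_params to a VARCHAR field using the pymilvus MilvusClient schema API. It is a minimal illustration, not part of this guide: the field names, max_length value, and the connection placeholders (YOUR_CLUSTER_ENDPOINT, YOUR_CLUSTER_TOKEN) are assumptions you should replace with your own values.

from pymilvus import MilvusClient, DataType

# Connect to your cluster (placeholders are illustrative)
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT", token="YOUR_CLUSTER_TOKEN")

# Define a schema with a VARCHAR field that uses the whitespace analyzer
schema = client.create_schema(auto_id=True, enable_dynamic_field=False)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(
    field_name="text",                   # illustrative field name
    datatype=DataType.VARCHAR,
    max_length=1000,                     # illustrative length limit
    enable_analyzer=True,                # enable text analysis on this field
    analyzer_params=analyzer_params,     # whitespace tokenizer + lowercase filter
)

With enable_analyzer set to True, text written to this field is tokenized and filtered according to analyzer_params when the collection is created with this schema.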

Examples

Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.

Analyzer configuration:

analyzer_params = {
    "tokenizer": "whitespace",
    "filter": ["lowercase"]
}

Verification using run_analyzer:

from pymilvus import MilvusClient

# Connect to your cluster (replace the placeholders with your cluster endpoint and token)
client = MilvusClient(uri="YOUR_CLUSTER_ENDPOINT", token="YOUR_CLUSTER_TOKEN")

# Sample text to analyze
sample_text = "The Milvus vector database is built for scale!"

# Run the whitespace analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print(result)

Expected output:

['the', 'milvus', 'vector', 'database', 'is', 'built', 'for', 'scale!']