This section contains the configuration options for a Searcher.

| Property | Description | Default value |
| --- | --- | --- |
| `aggregation_memory_limit` | Maximum amount of memory that aggregations may use before being aborted. The limit is per searcher node and is shared by concurrent queries; the first query to hit the limit is aborted and its memory is freed. It prevents excessive memory usage during the aggregation phase, which can lead to performance degradation or crashes. | `500M` |
| `aggregation_bucket_limit` | Determines the maximum number of buckets returned to the client. | `65000` |
| `fast_field_cache_capacity` | In-memory fast field cache capacity on a Searcher. If you filter by dates, run aggregations or range queries, or use tracing, it may be worth increasing this parameter. The [metrics](../reference/metrics.md) starting with `quickwit_cache_fastfields_cache` can help you make an informed choice when setting this value. | `1G` |
| `split_footer_cache_capacity` | In-memory split footer cache capacity on a Searcher (essentially the hotcache). | `500M` |
| `partial_request_cache_capacity` | In-memory partial request cache capacity on a Searcher. Caches the intermediate state of a request, possibly making subsequent requests faster. It can be disabled by setting the size to `0`. | `64M` |
| `max_num_concurrent_split_searches` | Maximum number of concurrent split search requests running on a Searcher. | `100` |
| `max_num_concurrent_split_streams` | Maximum number of concurrent split stream requests running on a Searcher. | `100` |
| `split_cache` | Searcher split cache configuration options defined in the section below. Cache disabled if unspecified. | |
| `request_timeout_secs` | The time before a search request is cancelled. This should match the timeout of the stack calling into Quickwit, if one is set. | `30` |
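For reference, a hypothetical excerpt of a node configuration file setting some of the options above; it assumes the option names from the table nest directly under a `searcher` section:

```yaml
# Hypothetical node config excerpt; key names are taken from the
# table above and assumed to nest under the `searcher` section.
searcher:
  fast_field_cache_capacity: 1G
  split_footer_cache_capacity: 500M
  partial_request_cache_capacity: 64M
  max_num_concurrent_split_searches: 100
  aggregation_memory_limit: 500M
  aggregation_bucket_limit: 65000
  request_timeout_secs: 30
```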
[ClickHouse RowBinary](https://clickhouse.tech/docs/en/interfaces/formats/#rowbinary). If `partition_by_field` is set, Quickwit returns chunks of data for each partition field value. Each chunk starts with 16 bytes encoding the partition value and the content length, followed by the `fast_field` values in `RowBinary` format.
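To make the chunk layout concrete, here is a minimal sketch of a client-side parser for the partitioned stream. The 8-byte/8-byte split of the 16-byte chunk header, the little-endian byte order, and the use of unsigned 64-bit values are assumptions, not guarantees from this document:

```python
import struct

def parse_partitioned_chunks(payload: bytes) -> dict:
    """Parse a partitioned RowBinary stream into {partition: [values]}.

    Assumed layout per chunk: 8-byte partition value, 8-byte content
    length, then `content length` bytes of little-endian u64 values.
    """
    offset = 0
    chunks = {}
    while offset < len(payload):
        partition, length = struct.unpack_from("<QQ", payload, offset)
        offset += 16
        count = length // 8  # each u64 fast field value is 8 bytes
        values = list(struct.unpack_from(f"<{count}Q", payload, offset))
        offset += length
        chunks[partition] = values
    return chunks
```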
`fast_field` and `partition_by_field` must be fast fields of type `i64` or `u64`.
This endpoint is available as long as you have at least one node running a searcher service in the cluster.
:::note
The endpoint will return 10 million values if 10 million documents match the query. This is expected: this endpoint is designed to support queries matching millions of documents and to return the field values in a reasonable response time.
:::

| Variable | Type | Description | Default value |
| --- | --- | --- | --- |
|`query`|`String`| Query text. See the [query language doc](query-language.md). |_required_|
|`fast_field`|`String`| Name of a field to retrieve from documents. This field must be a fast field of type `i64` or `u64`. |_required_|
|`search_field`|`[String]`| Fields to search on. Comma-separated list, e.g. `"field1,field2"`. | index_config.search_settings.default_search_fields |
|`start_timestamp`|`i64`| If set, restricts the search to documents with `timestamp >= start_timestamp`. The value must be in seconds. ||
|`end_timestamp`|`i64`| If set, restricts the search to documents with `timestamp < end_timestamp`. The value must be in seconds. ||
|`partition_by_field`|`String`| If set, the endpoint returns chunks of data for each partition field value. This field must be a fast field of type `i64` or `u64`. ||
|`output_format`|`String`| Response output format: `csv` or `clickHouseRowBinary`. |`csv`|

:::info
The `start_timestamp` and `end_timestamp` should be specified in seconds regardless of the timestamp field precision.
:::
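As a sketch of building the query string, the snippet below converts UTC dates to the whole-second timestamps the endpoint expects; the field names `severity` and `ts` are hypothetical, chosen only for illustration:

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# start_timestamp / end_timestamp must be whole seconds, even if the
# index's timestamp field stores a finer precision.
start_ts = int(datetime(2024, 1, 1, tzinfo=timezone.utc).timestamp())
end_ts = int(datetime(2024, 1, 2, tzinfo=timezone.utc).timestamp())

# `severity` and `ts` are hypothetical field names for this example.
params = urlencode({
    "query": "severity:ERROR",
    "fast_field": "ts",
    "start_timestamp": start_ts,
    "end_timestamp": end_ts,
    "output_format": "csv",
})
```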
#### Response
The response is an HTTP stream. Depending on the client's capability, it is an HTTP1.1 [chunked transfer encoded stream](https://en.wikipedia.org/wiki/Chunked_transfer_encoding) or an HTTP2 stream.
It returns a list of all the field values from documents matching the query. The field must be marked as "fast" in the index config for this to work.
The formatting is based on the specified output format.
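For the `csv` output of a `u64` fast field, a hedged sketch of a client-side consumer is shown below; it assumes one value per newline-delimited record and accounts for records split across transport chunks, which requires buffering:

```python
def iter_csv_values(chunks):
    """Yield one integer per newline-delimited CSV record,
    re-assembling records that are split across HTTP chunks."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        *records, buf = buf.split(b"\n")  # keep the partial tail in buf
        for record in records:
            if record:
                yield int(record)
    if buf:  # trailing record without a final newline
        yield int(buf)
```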
On error, an `X-Stream-Error` header is sent via the trailers channel with information about the error, and the stream is closed via [`sender.abort()`](https://docs.rs/hyper/0.14.16/hyper/body/struct.Sender.html#method.abort).
Depending on the client, the trailer header with the error details may not be shown. The error is also logged in Quickwit ("Error when streaming search results").