Commit 9fba250

maelle authored and krlmlr committed
docs: further vignette tweaks
1 parent 88cb100 commit 9fba250

File tree

1 file changed: +59, -61 lines changed

vignettes/prudence.Rmd

Lines changed: 59 additions & 61 deletions
@@ -30,22 +30,19 @@ knitr::opts_chunk$set(
 Sys.setenv(DUCKPLYR_FALLBACK_COLLECT = 0)
 ```

-Unlike traditional data frames, duckplyr defers computation until absolutely necessary, allowing DuckDB to optimize execution.
-This article explains how to control the materialization of data to maintain a seamless dplyr-like experience while remaining cautious of memory usage.
-
-
+This article explains how to control the materialization of data to maintain a seamless dplyr-like experience as well as to protect memory.

 ```{r attach}
 library(conflicted)
 library(dplyr)
 conflict_prefer("filter", "dplyr")
 ```

-## Introduction
+## dplyr drop-in replacement: eager data frames

-From a user's perspective, data frames backed by duckplyr, with class `"duckplyr_df"`, behave as regular data frames in almost all respects.
+Data frames backed by duckplyr, with class `"duckplyr_df"`, behave as regular data frames in almost all respects from a user's perspective.
 In particular, direct column access like `df$x`, or retrieving the number of rows with `nrow()`, works identically.
-Conceptually, duckplyr frames are "eager":
+Therefore, conceptually, duckplyr frames are "eager".

 ```{r}
 df <-
@@ -60,14 +57,14 @@ df$y
 nrow(df)
 ```

-Under the hood, two key differences provide improved performance and usability:
+Under the hood, though, two key differences provide improved performance and usability:

 - **lazy materialization**: Unlike traditional data frames, duckplyr defers computation until absolutely necessary, i.e. lazily, allowing DuckDB to optimize execution.
 - **prudence**: Automatic materialization is controllable, as automatic materialization of large data might otherwise inadvertently lead to memory problems.

 The term "prudence" is introduced here to set a clear distinction from the concept of "laziness", and because "control of automatic materialization" is a mouthful.

-## Eager and lazy computation
+## DuckDB optimization: lazy evaluation

 For a duckplyr frame that is the result of a dplyr operation, accessing column data or retrieving the number of rows will trigger a computation that is carried out by DuckDB, not dplyr.
 In this sense, duckplyr frames are also "lazy": the computation is deferred until the last possible moment, allowing DuckDB to optimize the whole pipeline.
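As a minimal sketch of this deferred execution: the data source (`nycflights13::flights`) and the exact verbs below are assumptions, while the final access, `mean_arr_delay_ewr$mean_arr_delay[[1]]`, matches the chunk shown in the next hunk.

```r
library(dplyr)

# A duckplyr frame with default (lavish) prudence: dplyr verbs applied to it
# are only recorded, not executed.
flights_duck <- duckplyr::as_duckdb_tibble(nycflights13::flights)

# Building the pipeline returns almost immediately; nothing is computed yet.
system.time(
  mean_arr_delay_ewr <- flights_duck |>
    filter(origin == "EWR") |>
    summarize(mean_arr_delay = mean(arr_delay, na.rm = TRUE))
)

# Accessing the result triggers the deferred DuckDB computation.
system.time(mean_arr_delay_ewr$mean_arr_delay[[1]])
```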
@@ -112,10 +109,11 @@ The result becomes available when accessed:
 system.time(mean_arr_delay_ewr$mean_arr_delay[[1]])
 ```

-### Comparison
+### Comparison with similar tools

 The functionality is similar to lazy tables in [dbplyr](https://dbplyr.tidyverse.org/) and lazy frames in [dtplyr](https://dtplyr.tidyverse.org/).
 However, the behavior is different: at the time of writing, the internal structure of a lazy table or frame is different from a data frame, and columns cannot be accessed directly.
+Users need to explicitly `collect()` the data; the data frame is not "eager" at all.

 | | **Eager** 😃 | **Lazy** 😴 |
 |-------------|:------------:|:-----------:|
@@ -142,31 +140,65 @@ system.time(

 See also the [duckplyr: dplyr Powered by DuckDB](https://duckdb.org/2024/04/02/duckplyr.html) blog post for more information.

-## Prudence
+## Memory protection: control of automatic materialization with `prudence`

 Being both "eager" and "lazy" at the same time introduces a challenge:
-it is too easy to accidentally trigger computation,
+**it is too easy to accidentally trigger computation**,
 which is prohibitive if an intermediate result is too large to fit into memory.
-Prudence is a setting for duckplyr frames that limits the size of the data that is materialized automatically.

-### Concept
+Fortunately, duckplyr frames have a setting called `prudence` that limits the size of the data that is materialized automatically,
+and that the user can choose based on the data size.
+
+### When to automatically materialize?

 Three levels of prudence are available:

-- _lavish_: always automatically materialize, as in the first example.
-- _frugal_: never automatically materialize, throw an error when attempting to access the data.
-- _thrifty_: only automaticaly materialize the data if it is small, otherwise throw an error.
+- __lavish__: _always_ automatically materialize, as in the first example.
+- __frugal__: _never_ automatically materialize, throw an error when attempting to access the data.
+- __thrifty__: automatically materialize the data _if it is small_, otherwise throw an error.

 For lavish duckplyr frames, as in the two previous examples, the underlying DuckDB computation is carried out upon the first request.
 Once the results are computed, they are cached and subsequent requests are fast.
 This is a good choice for small to medium-sized data, where DuckDB can provide a nice speedup but materializing the data is affordable at any stage.
 This is the default for `duckdb_tibble()` and `as_duckdb_tibble()`.

 For frugal duckplyr frames, accessing a column or requesting the number of rows triggers an error.
-This is a good choice for large data sets where the cost of materializing the data may be prohibitive due to size or computation time, and the user wants to control when the computation is carried out and where the results are stored.
+This is a good choice for large data sets where the cost of materializing the data may be prohibitive due to size or computation time, and the user wants to control when the computation is carried out and how (to memory, or to a file).
 Results can be materialized explicitly with `collect()` and other functions.

-Thrifty duckplyr frames are a compromise between lavish and frugal, discussed further below.
+Thrifty duckplyr frames are a compromise between lavish and frugal, discussed below.
+
+### Thrift
+
+Thrifty is a compromise between frugal and lavish.
+Materialization is allowed for data up to a certain size, measured in cells (values) and rows in the resulting data frame.
+
+```{r}
+nrow(flights)
+flights_partial <-
+  flights |>
+  duckplyr::as_duckdb_tibble(prudence = "thrifty")
+```
+
+With this setting, the data is materialized only if the result has fewer than 1,000,000 cells (rows multiplied by columns).
+
+```{r error = TRUE}
+flights_partial |>
+  select(origin, dest, dep_delay, arr_delay) |>
+  nrow()
+```
+
+The original input is too large to be materialized, so the operation fails.
+On the other hand, the result after aggregation is small enough to be materialized:
+
+```{r}
+flights_partial |>
+  count(origin) |>
+  nrow()
+```
+
+Thrifty is a good choice for data sets where the cost of materializing the data is prohibitive only for large results.
+This is the default for the ingestion functions like `read_parquet_duckdb()`.


 ### Example
@@ -201,7 +233,7 @@ flights_frugal[[1]]
 ```


-### Enforcing DuckDB operation
+### Side effect: Enforcing DuckDB operation

 For operations not supported by duckplyr, the original dplyr implementation is used as a fallback.
 As the original dplyr implementation accesses columns directly, the data must be materialized before a fallback can be executed.
@@ -227,7 +259,7 @@ flights_frugal |>
 By using operations supported by duckplyr and avoiding fallbacks as much as possible, your pipelines will be executed by DuckDB in an optimized way.


-### From frugal to lavish
+### Conversion between prudence levels

 A frugal duckplyr frame can be converted to a lavish one with `as_duckdb_tibble(prudence = "lavish")`.
 The `collect.duckplyr_df()` method triggers computation and converts to a plain tibble.
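A minimal sketch of these conversions, assuming `flights_frugal` is the frugal duckplyr frame used in the example above:

```r
# Re-enable automatic materialization by converting to a lavish frame.
flights_lavish <- duckplyr::as_duckdb_tibble(flights_frugal, prudence = "lavish")
nrow(flights_lavish)

# Or materialize explicitly: collect() computes and returns a plain tibble.
flights_plain <- dplyr::collect(flights_frugal)
class(flights_plain)
```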
@@ -255,54 +287,20 @@ flights_frugal |>
   class()
 ```

-### Comparison
+### Comparison with similar tools

-Frugal duckplyr frames behave like lazy tables in dbplyr and lazy frames in dtplyr: the computation only starts when you _explicitly_ request it with `collect.duckplyr_df()` or through other means.
+Frugal duckplyr frames behave like lazy tables in dbplyr and lazy frames in dtplyr: the computation only starts when you *explicitly* request it with `collect.duckplyr_df()` or through other means.
 However, frugal duckplyr frames can be converted to lavish ones at any time, and vice versa.
 In dtplyr and dbplyr, there are no lavish frames: collection always needs to be explicit.

-
-## Thrift
-
-Thrifty is a compromise between frugal and lavish.
-Materialization is allowed for data up to a certain size, measured in cells (values) and rows in the resulting data frame.
-
-```{r}
-nrow(flights)
-flights_partial <-
-  flights |>
-  duckplyr::as_duckdb_tibble(prudence = "thrifty")
-```
-
-With this setting, the data is materialized only if the result has fewer than 1,000,000 cells (rows multiplied by columns).
-
-```{r error = TRUE}
-flights_partial |>
-  select(origin, dest, dep_delay, arr_delay) |>
-  nrow()
-```
-
-The original input is too large to be materialized, so the operation fails.
-On the other hand, the result after aggregation is small enough to be materialized:
-
-```{r}
-flights_partial |>
-  count(origin) |>
-  nrow()
-```
-
-Thrifty is a good choice for data sets where the cost of materializing the data is prohibitive only for large results.
-This is the default for the ingestion functions like `read_parquet_duckdb()`.
-
-
 ## Conclusion

-The duckplyr package provides 
+The duckplyr package provides

-- a drop-in replacement for duckplyr, which necessitates "eager" data frames that automatically materialize like in dplyr,
-- optimization by DuckDB, which means "lazy" evaluation where the data is materialized at the latest possible stage.
+- a drop-in replacement for dplyr, which necessitates "eager" data frames that automatically materialize like in dplyr,
+- optimization by DuckDB, which means lazy evaluation where the data is materialized at the latest possible stage.

-Automatic materialization can be dangerous for memory with large data, so duckplyr provides a setting called `prudence` that controls automatic materialization: 
+Automatic materialization can be dangerous for memory with large data, so duckplyr provides a setting called `prudence` that controls automatic materialization:
 is the data automatically materialized _always_ ("lavish" frames), _never_ ("frugal" frames) or _up to a certain size_ ("thrifty" frames).

 See `vignette("large")` for more details on working with large data sets, `vignette("fallback")` for fallbacks to dplyr, and `vignette("limits")` for the operations supported by duckplyr.
