# mlr3torch 0.3.0

## Breaking Changes:

* The output dimension of neural networks for binary classification tasks is now
  expected to be 1 and not 2 as before. The behavior of `nn("head")` was also changed to match this.
  This means that for binary classification tasks, `t_loss("cross_entropy")` now generates
  `nn_bce_with_logits_loss` instead of `nn_cross_entropy_loss`.
  This also came with a reparametrization of the `t_loss("cross_entropy")` loss (thanks to @tdhock, #374).
  See the sketch below for a minimal example.
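
  A rough sketch of what this means in practice, using an arbitrary binary task and MLP
  configuration (the concrete task and hyperparameters are illustrative, not part of the change):

  ```r
  library(mlr3torch)

  # Binary task: the network now ends in a single output unit and
  # t_loss("cross_entropy") is translated to nn_bce_with_logits_loss.
  task = tsk("sonar")
  learner = lrn("classif.mlp",
    loss = t_loss("cross_entropy"),
    neurons = 16, epochs = 1, batch_size = 32
  )
  learner$train(task)
  ```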

## New Features:

### PipeOps & Learners:

* Added `po("nn_identity")`.
* Added `po("nn_fn")` for calling custom functions in a network (see the sketch after this list).
* Added `po("nn_ft_cls")` for concatenating a CLS token to a tokenized input.
* Added `LearnerTorchModule` for easily creating torch learners from torch modules.
* Added the FT Transformer model for tabular data.
* Added encoders for numericals and categoricals.
* `nn("block")` (which allows repeating the same network segment multiple
  times) now has an extra argument `trafo`, which allows modifying the
  parameter values per layer.
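
A rough sketch of slotting a custom function into a network with `po("nn_fn")`; the argument
name `fn` and the surrounding graph are assumptions for illustration, not taken from the entry itself:

```r
library(mlr3torch)
library(mlr3pipelines)

# Apply a custom function between two standard layers.
# Assumption: the function is passed via an argument named `fn`.
graph = po("torch_ingress_num") %>>%
  nn("linear", out_features = 16) %>>%
  po("nn_fn", fn = function(x) torch::torch_relu(x) * 2)
```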

### Callbacks:

* The context for callbacks now includes the network prediction (`y_hat`).
* The `lr_one_cycle` callback now infers the total number of steps.
* The progress callback got the argument `digits` for controlling the precision
  with which validation/training scores are logged (see the sketch after this list).
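
A minimal sketch of setting `digits`; it assumes that `t_clbk()` forwards the argument to the
callback's parameter set, and the learner configuration is arbitrary:

```r
library(mlr3torch)

# Log training/validation scores with 4 decimal places.
# Assumption: t_clbk() forwards `digits` to the progress callback's parameters.
learner = lrn("classif.mlp",
  callbacks = t_clbk("progress", digits = 4),
  epochs = 5, batch_size = 32
)
```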

### Other:

* `TorchIngressToken` can now also take a `Selector` as the `features` argument (see the sketch after this list).
* Added the function `lazy_shape()` to get the shape of a lazy tensor.
* Better error messages for the MLP and TabResNet learners.
* The TabResNet learner now supports lazy tensors.
* The `LearnerTorch` base class now supports the private method `$.ingress_tokens(task, param_vals)`
  for generating the `torch::dataset`.
* Shapes can now have multiple `NA`s, so not only the batch dimension can be missing. However, most `nn()` operators
  still expect only one missing value and will throw an error if multiple dimensions are unknown.
* Training no longer fails when encountering a missing value
  during validation but uses `NA` instead.
* It is now possible to specify parameter groups for optimizers via the
  `param_groups` parameter.
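
Two small sketches for the ingress-token and lazy-tensor additions; the concrete shape, the
batchgetter, and the expected output are illustrative assumptions rather than documented values:

```r
library(mlr3torch)
library(mlr3pipelines)

# An ingress token whose features are picked by a Selector rather than by name.
# Assumptions: batchgetter_num and the shape c(NA, 4) are just for illustration.
token = TorchIngressToken(
  features = selector_type("numeric"),
  batchgetter = batchgetter_num,
  shape = c(NA, 4)
)

# lazy_shape() returns the shape of a lazy tensor, with NA for unknown dimensions.
lt = as_lazy_tensor(torch::torch_randn(10, 3))
lazy_shape(lt)  # expected: c(NA, 3)
```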

## Bug Fixes:

* Lazy tensors of length 0 can now be materialized.
* `NA` is now a valid shape for lazy tensors.
* The `lr_reduce_on_plateau` callback now works.

# mlr3torch 0.2.1