docs/guides/data_input_grain.md (+1 -1)
@@ -24,7 +24,7 @@ Grain ensures determinism in data input pipelines by saving the pipeline's state
* **Debug training anomalies**: When troubleshooting training spikes or anomalies, the ability to replay the exact data sequence helps distinguish between bad data batches and underlying hardware or software issues.
## Data shuffling
- * **Global shuffle**: This feature is only available when using Grain with the [ArrayRecord](https://github.com/google/array_record) (random-access) format. It is achieved by shuffling indices globally at the beginning of each epoch and then reading the elements in that random order. This is usually fast enough, even when using hard drives and distributed file systems.
+ * **Global shuffle**: This feature is only available when using Grain with the [ArrayRecord](https://github.com/google/array_record) (random-access) format. It is achieved by shuffling indices globally at the beginning of each epoch and then reading the elements in that random order. This shuffle method effectively prevents local overfitting, leading to better training results.
* **Hierarchical shuffle**: For the sequential-access [Parquet](https://arrow.apache.org/docs/python/parquet.html) format, shuffling is performed in these steps: file shuffling, interleaving records from files, and window shuffling using a fixed-size buffer.
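To make the two strategies concrete, here is a minimal, self-contained Python sketch of the ideas described above. It uses only the standard library; it is not the Grain or ArrayRecord API, and the record ids, file lists, and window size are assumed purely for illustration.

```python
import random

# --- Global shuffle (random-access formats such as ArrayRecord) ---
# Shuffle all record indices once per epoch, then read records in that order.
def global_shuffle_order(num_records: int, epoch: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed + epoch)  # a new, reproducible permutation each epoch
    order = list(range(num_records))
    rng.shuffle(order)
    return order

# --- Hierarchical shuffle (sequential formats such as Parquet) ---
# 1) shuffle the file order, 2) interleave records from the files,
# 3) apply a window shuffle with a fixed-size buffer.
def hierarchical_shuffle(files: list[list[int]], window_size: int, seed: int = 0):
    rng = random.Random(seed)
    files = list(files)
    rng.shuffle(files)                      # step 1: file shuffling

    def interleave(fs):                     # step 2: round-robin interleave
        iters = [iter(f) for f in fs]
        while iters:
            for it in list(iters):
                try:
                    yield next(it)
                except StopIteration:
                    iters.remove(it)

    buffer = []
    for record in interleave(files):        # step 3: window (buffer) shuffle
        buffer.append(record)
        if len(buffer) >= window_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

# Example: a global order over 10 records, and 3 "files" of record ids shuffled hierarchically.
print(global_shuffle_order(num_records=10, epoch=0))
print(list(hierarchical_shuffle([[0, 1, 2], [3, 4, 5], [6, 7, 8]], window_size=4)))
```

The hierarchical variant never needs random access into a file, which is why it suits sequential formats; the trade-off is that its randomness is bounded by the window size rather than being a true global permutation.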
docs/guides/data_input_pipeline.md (+2 -2)
@@ -37,11 +37,11 @@ The approaches to solve these challenges depend on whether your dataset supports
Random-access formats are highly recommended for multi-host training because they allow any part of the file to be read directly by its index.<br>
In MaxText, this is best supported by the ArrayRecord format using the Grain input pipeline. This approach gracefully handles the key challenges:
* **Concurrent access and uniqueness**: Grain assigns a unique set of indices to each host. ArrayRecord allows different hosts to read from different indices in the same file.
- * **Uneven completion**: Data indices are distributed evenly among hosts. Without packing, the data imbalance between hosts will be at most one batch. To handle the final steps where some hosts run out of data, you can enable the `generate_padding_example` flag. This directs hosts to generate empty "padding" batches until the target number of training or evaluation steps is reached. **Note**: When sequence packing is enabled, the difference in the number of packed examples per host can be larger. The `generate_padding_example` flag still solves this. However, as more hosts begin generating padding, you will observe a decrease in `total_weights` and a slower change in the training loss. If all hosts exhaust their data before the target step count is reached, both `total_weights` and the loss will drop to 0.
+ * **Uneven completion**: Data indices are distributed evenly among hosts. Without packing, the data imbalance between hosts will be at most one batch. To handle the final steps where some hosts run out of data, you can enable the `generate_padding_batch_train`/`generate_padding_batch_eval` flags. These direct hosts to generate empty "padding" batches until the target number of training or evaluation steps is reached. **Note**: When sequence packing is enabled, the difference in the number of packed examples per host can be larger. The `generate_padding_batch_train`/`generate_padding_batch_eval` flags still solve this. However, as more hosts begin generating padding, you will observe a decrease in `total_weights` and a slower change in the training loss. If all hosts exhaust their data before the target step count is reached, both `total_weights` and the loss will drop to 0.
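As a rough sketch of the idea behind per-host index assignment and padding batches: the snippet below uses plain Python and NumPy rather than the actual MaxText/Grain implementation, and the helper names, batch layout, and zero-weighted padding batches are illustrative assumptions only.

```python
import numpy as np

def host_indices(num_records: int, host_id: int, num_hosts: int) -> list[int]:
    """Each host gets a unique, disjoint set of record indices."""
    return list(range(host_id, num_records, num_hosts))

def host_batches(indices: list[int], batch_size: int, target_steps: int):
    """Yield real batches from this host's shard, then padding batches once exhausted."""
    for step in range(target_steps):
        shard = indices[step * batch_size:(step + 1) * batch_size]
        if shard:
            # Real batch: in practice these indices would be used to read and tokenize examples.
            yield {"inputs": np.array(shard), "weights": np.ones(len(shard))}
        else:
            # Padding batch: zero weights, so it contributes nothing to the loss.
            yield {"inputs": np.zeros(batch_size, dtype=int), "weights": np.zeros(batch_size)}

# Host 3 of 4 over 100 records gets 25 indices: real batches first, then padding.
for batch in host_batches(host_indices(100, host_id=3, num_hosts=4),
                          batch_size=8, target_steps=5):
    print(batch["weights"].sum())  # 8, 8, 8, 1, 0 -- total weight falls as padding begins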
### Sequential access dataset
* **Concurrent access and uniqueness**: Sequential-access datasets (e.g., Parquet, JSON, TFRecord) cannot be accessed by index, so they require a different strategy: file-based sharding, where each host is given exclusive access to a specific subset of data files. **Key requirement**: `(Number of data files) % (Number of data-loading hosts) == 0`. If the file count isn't a multiple of the host count, the files will be distributed unevenly. For example, with 10 files and 8 hosts, some hosts will get two files while others get one, significantly worsening the "uneven completion" problem. If you have fewer files than hosts, performance will be severely degraded because all hosts must concurrently access all the files.
- * **Uneven completion**: Similar to random-access datasets, you can use the `generate_padding_example` flag to handle hosts that finish their file shards early (currently only supported in the Hugging Face pipeline, not available in the TFDS pipeline).
+ * **Uneven completion**: Similar to random-access datasets, you can use the `generate_padding_batch_train`/`generate_padding_batch_eval` flags to handle hosts that finish their file shards early.
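A similarly hedged sketch of file-based sharding with the divisibility requirement above; the file names and the helper function are placeholders, not MaxText code.

```python
def shard_files(data_files: list[str], host_id: int, num_hosts: int) -> list[str]:
    """Give each host exclusive access to a subset of the data files."""
    if len(data_files) % num_hosts != 0:
        # An uneven split worsens the "uneven completion" problem; fewer files
        # than hosts would force hosts to concurrently read shared files.
        raise ValueError(
            f"{len(data_files)} files cannot be evenly split across {num_hosts} hosts."
        )
    return data_files[host_id::num_hosts]

# 16 files across 8 hosts -> 2 files per host, each read sequentially by its owner.
files = [f"shard-{i:05d}.parquet" for i in range(16)]
print(shard_files(files, host_id=0, num_hosts=8))  # ['shard-00000.parquet', 'shard-00008.parquet']
```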