[Feature] Compressed storage gpu #3062

Draft · AdrianOrenstein wants to merge 7 commits into main from compressed-storage-gpu

Conversation

@AdrianOrenstein AdrianOrenstein commented Jul 12, 2025

Description

Replay buffers store large amounts of data and feed neural networks with batched samples to learn from, so ideally this data should live as close as possible to where the network is being updated. These buffers often hold raw sensory observations such as images, audio, or text, which consume many gigabytes of precious memory. CPU memory and accelerator VRAM may be limited, and memory transfers between these devices can be costly. This PR therefore aims to streamline data compression to enable efficient storage and memory transfer.

Mainly, creating a compressed storage object will aid in training state-of-the-art RL methods on benchmarks such as the Arcade Learning Environment. The `torchrl.data.replay_buffers.storages.CompressedStorage` class provides the memory savings through compression.
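
A minimal usage sketch of the intended workflow (the `CompressedStorage` constructor arguments and defaults shown here are assumptions for illustration, not necessarily the final API in this PR's diff):

```python
import torch
from torchrl.data import ReplayBuffer
# Assumed import path from this PR; the final location/signature may differ.
from torchrl.data.replay_buffers.storages import CompressedStorage

# Hypothetical: a storage that compresses items on write and decompresses on sample.
storage = CompressedStorage(max_size=100_000)
rb = ReplayBuffer(storage=storage, batch_size=32)

# Store raw image observations (e.g. Atari-like frame stacks) without paying
# the full uncompressed memory cost.
frames = torch.randint(0, 256, (256, 4, 84, 84), dtype=torch.uint8)
rb.extend(frames)

batch = rb.sample()  # decompressed on the fly
print(batch.shape)   # expected: torch.Size([32, 4, 84, 84])
```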

closes #3058
closes #2983

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

pytorch-bot bot commented Jul 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3062

Note: Links to docs will display an error until the docs builds have been completed.

❌ 12 New Failures, 6 Unrelated Failures

As of commit 6f97290 with merge base db0e30d:

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed, but the failures were already present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jul 12, 2025
@vmoens vmoens added the enhancement New feature or request label Jul 14, 2025
@vmoens vmoens changed the title Compressed storage gpu [Feature] Compressed storage gpu Jul 14, 2025
@AdrianOrenstein AdrianOrenstein (Author) commented Jul 14, 2025

When the tensor is on the CPU, NumPy is the fastest way to convert it to a bytestream.

---------------------------- benchmark 'tensor_to_bytestream_speed': 5 tests ----------------------------
Name (time in us)                                                  Mean                     OPS          
---------------------------------------------------------------------------------------------------------
test_tensor_to_bytestream_speed[numpy]                           1.1852 (1.0)      843,727.6370 (1.0)    
test_tensor_to_bytestream_speed[safetensors]                    11.7078 (9.88)      85,413.0849 (0.10)   
test_tensor_to_bytestream_speed[pickle]                         17.5312 (14.79)     57,041.2807 (0.07)   
test_tensor_to_bytestream_speed[torch.save]                     29.3144 (24.73)     34,112.9736 (0.04)   
test_tensor_to_bytestream_speed[tensor.untyped_storage]     37,213.3849 (>1000.0)       26.8721 (0.00)   
---------------------------------------------------------------------------------------------------------
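
For reference, a rough sketch of how such a comparison can be reproduced (the table above appears to come from a pytest-benchmark run; the tensor shape, timing loop, and method list below are assumptions, and the slow `tensor.untyped_storage` variant is omitted):

```python
import io
import pickle
import time

import torch

try:
    # Optional dependency; safetensors.torch.save returns the serialized bytes.
    from safetensors.torch import save as safetensors_save
except ImportError:
    safetensors_save = None


def numpy_bytes(t: torch.Tensor) -> bytes:
    # Zero-copy view into the tensor's memory, then a single memcpy to bytes.
    return t.numpy().tobytes()


def pickle_bytes(t: torch.Tensor) -> bytes:
    return pickle.dumps(t)


def torch_save_bytes(t: torch.Tensor) -> bytes:
    buf = io.BytesIO()
    torch.save(t, buf)
    return buf.getvalue()


def time_fn(fn, t, iters=1000):
    start = time.perf_counter()
    for _ in range(iters):
        fn(t)
    return (time.perf_counter() - start) / iters


t = torch.randint(0, 256, (4, 84, 84), dtype=torch.uint8)  # assumed Atari-like frame stack

methods = [("numpy", numpy_bytes), ("pickle", pickle_bytes), ("torch.save", torch_save_bytes)]
if safetensors_save is not None:
    methods.append(("safetensors", lambda x: safetensors_save({"t": x})))

for name, fn in methods:
    print(f"{name}: {time_fn(fn, t) * 1e6:.2f} us per conversion")
```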

@vmoens vmoens force-pushed the compressed-storage-gpu branch 2 times, most recently from 74d85fa to 95f532e, on July 16, 2025 at 22:22
@vmoens vmoens force-pushed the compressed-storage-gpu branch from 95f532e to 5581cf6 on July 16, 2025 at 22:45
Development

Successfully merging this pull request may close these issues.

[Feature Request] Compressing data stored in the Replay Buffer