feat: improving usage memory #281
🛠️ What’s inside this PR
**Memory-friendly streaming in `ActiveJob::JobsRelation#each`**

- `each` no longer delegates to `to_a`; it now streams jobs page by page, keeping only the current batch in memory.
- `to_a` was re-implemented to materialise and cache the collection only when explicitly requested, preserving backwards compatibility.
- Array-based helpers (`last`, `[]`, `reverse`) were moved to rely on the new `to_a` implementation.

**Compatibility kept intact**
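A minimal sketch of the streaming pattern described above, using a hypothetical `StreamingRelation` with an injected page fetcher; the real `ActiveJob::JobsRelation` internals differ:

```ruby
# Hypothetical simplification of the page-by-page streaming idea.
# The fetcher stands in for the queue adapter: it returns at most
# +page_size+ jobs starting at +offset+.
class StreamingRelation
  include Enumerable

  def initialize(page_size: 1000, &fetcher)
    @page_size = page_size
    @fetcher = fetcher # ->(offset, limit) { [...] }
  end

  # Streams jobs page by page; only the current batch lives in memory.
  def each
    offset = 0
    loop do
      page = @fetcher.call(offset, @page_size) || []
      page.each { |job| yield job }
      break if page.size < @page_size # a short page terminates the loop
      offset += @page_size
    end
  end

  # Materialises and caches the collection only when explicitly requested.
  def to_a
    @loaded_jobs ||= super # Enumerable#to_a drives +each+ once
  end
end

data = (1..2_500).to_a
queries = 0
relation = StreamingRelation.new(page_size: 1_000) do |offset, limit|
  queries += 1
  data[offset, limit] || []
end

relation.to_a.size # => 2500, after 3 backend calls (1000 + 1000 + 500)
relation.to_a      # cached: no further backend calls
```

Note the termination condition: a short (or empty) page ends the loop, which is why a page-aligned collection costs one extra "termination" query.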
- When the relation is already loaded (`@loaded_jobs` present), `each` still uses the cached array.
- Callers that need a stable snapshot can call `jobs.to_a` before iterating.

**Test coverage**
- Added `jobs_relation_memory_test` to ensure that `each` no longer caches jobs and that the adapter is called exactly twice (data + termination).

📊 Expected gains (default `page_size` = 1 000)

Assumes an average job payload of ~0.8 kB.
- **Backend calls** — example: 100 k jobs → from 2 to 101 queries.
- **Wall-clock time**
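The 101-query figure follows directly from the paging loop: one query per started page, plus a final empty page whenever the last page comes back full. A tiny illustrative helper (not part of the PR):

```ruby
# Backend calls needed to stream +total_jobs+ at a given +page_size+:
# one per started page, plus one empty "termination" query when the
# final page is full (integer division folds both cases together).
def expected_queries(total_jobs, page_size)
  total_jobs / page_size + 1
end

expected_queries(100_000, 1_000) # => 101 (matches the example above)
expected_queries(1_000, 1_000)   # => 2   (data + termination)
```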
⚖️ Trade-offs & notes
- Results may differ between `each` passes if the queue mutates in the meantime (this was already the case when refetching, but now it happens by default).
- Call `jobs = relation.to_a` first to restore the cached, snapshot behaviour.

🚀 TL;DR
This PR slashes peak RAM usage by up to two orders of magnitude when iterating over large job sets, while keeping the original API intact and offering an opt-in cache when needed.
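Given the trade-offs above, restoring snapshot semantics is a one-liner. A self-contained illustration with a stand-in relation (names are illustrative only, not the gem's API):

```ruby
# A stand-in relation whose +each+ re-reads a mutable backing store,
# mimicking the new default streaming behaviour.
class FakeRelation
  include Enumerable

  def initialize(store)
    @store = store
  end

  def each(&block)
    @store.each(&block) # re-reads the store on every pass
  end
end

store = [1, 2, 3]
relation = FakeRelation.new(store)

snapshot = relation.to_a # opt-in cache: materialise once
store << 4               # the "queue" mutates afterwards

snapshot.size  # => 3, stable snapshot
relation.count # => 4, the streaming view sees the mutation
```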