Description
I jumped on testing the recent PR that adds the :discard option to concurrency controls and noticed different behaviour than I expected / hoped for.
I have an Alert job which I enqueue with a delay of 2 minutes when a mention comes in. Since multiple mentions may come in at once (my app does keyword scanning when new content is created), I enqueue the Alert job via an after_save_commit callback with a 2-minute wait, then collect and group all mentions from the last 2 minutes and send them in a single notification, so that people do not get spammed for every single mention:
```ruby
Alert::NotificationJob.set(wait: 2.minutes).perform_later(alert, self)
```
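For context, the enqueueing side looks roughly like this (model and association names are simplified for illustration, not my exact code):

```ruby
# Illustrative sketch of the model that enqueues the notification job.
# "EpisodeAlert" and the association names are placeholders.
class EpisodeAlert < ApplicationRecord
  belongs_to :alert

  # Wait 2 minutes so that mentions created in that window can be
  # grouped into a single notification instead of one email per mention.
  after_save_commit :enqueue_notification

  private

  def enqueue_notification
    Alert::NotificationJob.set(wait: 2.minutes).perform_later(alert, self)
  end
end
```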
I have configured the Alert job to discard any jobs being enqueued for the same Alert.
```ruby
class Alert::NotificationJob < ApplicationJob
  queue_as :default

  limits_concurrency to: 1, key: ->(alert, _) { alert }, duration: 3.minutes, on_conflict: :discard

  def perform(alert, episode_alert)
    # ...
  end
end
```
This is where I seem to misunderstand how :discard works. I was expecting the above setup to discard any new Alert jobs being enqueued for a period of up to 3 minutes (based on duration), or earlier if the job finishes before that (since it is scheduled to run in 2 minutes).
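To make that expectation concrete, here is an annotated sketch of what I thought would happen (hypothetical console session, not actual output):

```ruby
alert = Alert.find(1)

# First job for this alert: enqueued normally, scheduled to run in 2 minutes.
Alert::NotificationJob.set(wait: 2.minutes).perform_later(alert, episode_alert_1)

# Second job for the same alert, enqueued seconds later: I expected this one
# to be discarded immediately at enqueue time, because a job with the same
# concurrency key already exists and we are within the 3-minute duration.
Alert::NotificationJob.set(wait: 2.minutes).perform_later(alert, episode_alert_2)
```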
But what actually happens, with my after_save_commit firing 20 times, is:

- 20 Alert::NotificationJob's are enqueued at the same time (every time my after_save_commit callback is triggered)
- :discard is not preventing new Alert::NotificationJob's from being enqueued, even though a job with the same Alert ID is already enqueued
- Instead, the jobs are only discarded / deleted when they are due to run (in this case after 2 minutes)

In my case, out of 20 enqueued jobs, 2 actually run, which results in the notification email going out twice, even though the 2 jobs were enqueued at exactly the same time:
Job 1 Enqueued: July 12, 2025 14:21
Job 2 Enqueued: July 12, 2025 14:21
I was expecting :discard to prevent such a scenario and not let multiple jobs with the same concurrency key be enqueued. I migrated from GoodJob and expected it to work similarly to the total_limit option there.
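For comparison, this is roughly the GoodJob setup I was coming from (simplified sketch, not my exact configuration; the key format is just illustrative):

```ruby
class Alert::NotificationJob < ApplicationJob
  include GoodJob::ActiveJobExtensions::Concurrency

  # As I understand it, total_limit caps the total number of jobs
  # (enqueued or running) with the same key, so a second enqueue for the
  # same alert is rejected while one is still pending.
  good_job_control_concurrency_with(
    total_limit: 1,
    key: -> { "alert-notification-#{arguments.first.id}" }
  )
end
```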
I would like to understand what causes, in my case, 2 jobs to run (instead of only 1) when they are enqueued at the same time (and therefore within the 3-minute duration limit), and whether I can prevent this using SolidQueue's concurrency controls.