
Conversation

Collaborator

@vanessavmac vanessavmac commented Nov 8, 2025

Summary

As part of moving to processing service v2, this PR sets up RabbitMQ as the Celery broker and antenna as the default Celery task queue.

These are the same RabbitMQ/Celery changes as in #910.

As in #910, this PR moves CI testing to a separate CI stack so tests run against an isolated environment and database. This makes future batch-processing tests possible.

Related Issues

Related to #515

How to Test the Changes

Tested locally with a single batch job of 6 images. Periodic tasks like ami.ml.tasks.check_processing_services_online work as expected.

Checklist

  • I have tested these changes appropriately.
  • I have added and/or modified relevant tests.
  • I updated relevant documentation or comments.
  • I have verified that this PR follows the project's coding standards.
  • Any dependent changes have already been merged to main.

Summary by CodeRabbit

  • Chores
    • Integrated RabbitMQ message broker for background task processing
    • Configured Celery task queue system with antenna queue specification
    • Updated Docker Compose configurations for local and CI environments
    • Enhanced environment variable configuration for database and message broker connectivity

@netlify

netlify bot commented Nov 8, 2025

Deploy Preview for antenna-preview canceled.

🔨 Latest commit: 82fdfd8
🔍 Latest deploy log: https://app.netlify.com/projects/antenna-preview/deploys/69169c9a8e1dcc000881b0df

@coderabbitai
Contributor

coderabbitai bot commented Nov 8, 2025

Walkthrough

This PR integrates RabbitMQ as the Celery message broker across local, CI, and production environments. It adds RabbitMQ services to Docker Compose configurations, provisions broker credentials in environment files, configures Celery to explicitly use the "antenna" queue, and extends settings with Redis URL and Celery task queue defaults.

Changes

Cohort / File(s) Summary
Environment Configuration
.envs/.ci/.django, .envs/.local/.django, .envs/.ci/.postgres, .envs/.local/.postgres
Added RabbitMQ broker URL and credentials (CELERY_BROKER_URL, RABBITMQ_DEFAULT_USER, RABBITMQ_DEFAULT_PASS) to Django environments; added DATABASE_URL to PostgreSQL environments for optional tool usage.
Celery Worker Startup
compose/local/django/celery/worker/start, compose/production/django/celery/worker/start
Added explicit --queues=antenna flag to Celery worker invocations in both debug and normal execution paths.
Django Settings
config/settings/base.py
Introduced REDIS_URL global setting (environment-derived), added CELERY_TASK_DEFAULT_QUEUE="antenna", and expanded Celery configuration with broker transport options, health checks, and Redis-backed result backend fallback.
Docker Compose Services
docker-compose.yml, docker-compose.ci.yml, docker-compose.production.yml
Added rabbitmq service (image: rabbitmq:3-management) to local and CI stacks; updated django and celeryworker services to depend on rabbitmq; added RabbitMQ hostname aliasing in production extra_hosts; introduced CI-scoped volumes for postgres and minio.
Documentation
.agents/AGENTS.md
Added Performance Optimization and Git Commit Guidelines sections; updated Celery broker documentation from Redis to RabbitMQ.
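Putting the settings changes above together, here is a minimal sketch of the resulting broker/queue/result-backend wiring. The values are hypothetical stand-ins; the real settings in config/settings/base.py read them from the environment via django-environ.

```python
# Hypothetical values for illustration; the real ones live in the .envs/ files.
env = {
    "CELERY_BROKER_URL": "amqp://rabbituser:rabbitpass@rabbitmq:5672/",
    "REDIS_URL": "redis://redis:6379/0",
    # "CELERY_RESULT_BACKEND" deliberately unset to exercise the fallback.
}

CELERY_BROKER_URL = env["CELERY_BROKER_URL"]  # RabbitMQ carries task messages
CELERY_TASK_DEFAULT_QUEUE = "antenna"         # tasks route here unless overridden
REDIS_URL = env.get("REDIS_URL")
# Results stay in Redis when no explicit backend is configured.
CELERY_RESULT_BACKEND = env.get("CELERY_RESULT_BACKEND", REDIS_URL)

print(CELERY_RESULT_BACKEND)
```

The worker-side counterpart is the `--queues=antenna` flag in the start scripts, which pins each worker to the same queue name the settings declare as the default.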

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify RabbitMQ broker configuration consistency across all environments (local, CI, production)
  • Confirm Celery queue name "antenna" is correctly referenced in worker scripts and settings
  • Validate docker-compose service dependencies and health-check wiring, especially in CI stack
  • Review Redis URL fallback logic in settings for result backend configuration

Possibly related PRs

Poem

🐇 Through RabbitMQ warrens, messages hop and bounce,
Celery workers tuned, ounce by fuzzy ounce,
The antenna queue sings clear and true,
Docker compiles our warren anew,
Where Rabbits and workers dance as one,
Configuration done! 🐰

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Description check (⚠️ Warning): The description covers the summary, related issues, and testing performed, but lacks key template sections: a structured list of changes, a detailed description of how the changes solve the issue, deployment notes, and a completed checklist. Resolution: add a structured 'List of Changes' section, expand the detailed description with deployment considerations, include any deployment notes needed, and complete the checklist items with checkmarks or explanations.
✅ Passed checks (2 passed)
Title check (✅ Passed): The title clearly and accurately summarizes the main change: setting up RabbitMQ as the Celery broker, which is the primary objective of this PR.
Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
config/settings/base.py (1)

350-357: Critical: Redis-specific broker transport options incompatible with RabbitMQ.

CELERY_BROKER_TRANSPORT_OPTIONS contains Redis-specific settings (socket_timeout, socket_keepalive, retry_on_timeout) but the broker is now RabbitMQ (AMQP). These options are specific to the Redis transport and won't apply to RabbitMQ.

For RabbitMQ, the relevant transport options differ. If you need connection management for RabbitMQ, consider:

CELERY_BROKER_TRANSPORT_OPTIONS = {
    "visibility_timeout": 43200,  # Relevant for AMQP
    "max_retries": 3,
    "interval_start": 0,
    "interval_step": 0.2,
    "interval_max": 0.5,
}

Or remove the Redis-specific options entirely if not needed:

-# Connection settings to match Redis timeout and keepalive
-CELERY_BROKER_TRANSPORT_OPTIONS = {
-    "visibility_timeout": 43200,  # 12 hours - default celery value
-    "socket_timeout": 120,  # Matches Redis timeout setting
-    "socket_connect_timeout": 30,  # Max time to establish connection
-    "socket_keepalive": True,  # Enable TCP keepalive
-    "retry_on_timeout": True,  # Retry operations if Redis times out
-    "max_connections": 20,  # Per process connection pool limit
-}
+# Connection settings for RabbitMQ/AMQP broker
+CELERY_BROKER_TRANSPORT_OPTIONS = {
+    "visibility_timeout": 43200,  # 12 hours - default celery value
+}

The Redis-specific settings at lines 338-341 (CELERY_REDIS_*) are still appropriate as they apply to the result backend, which remains Redis.

🧹 Nitpick comments (1)
config/settings/base.py (1)

263-263: Consider removing redundant REDIS_URL assignment.

REDIS_URL is already accessed via env("REDIS_URL", default=None) in the CACHES configuration at line 254. Extracting it to a module-level variable is fine for reuse (line 308 uses it), but this creates a minor duplication.

This is acceptable as-is for clarity, but if you prefer to avoid duplication, you could reference CACHES["default"]["LOCATION"] instead.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 28bc270 and c7b0380.

📒 Files selected for processing (6)
  • .envs/.ci/.django (1 hunks)
  • .envs/.local/.django (1 hunks)
  • compose/local/django/celery/worker/start (1 hunks)
  • config/settings/base.py (2 hunks)
  • config/settings/local.py (1 hunks)
  • docker-compose.yml (3 hunks)
🔇 Additional comments (6)
docker-compose.yml (3)

117-126: LGTM: RabbitMQ service configured correctly.

The RabbitMQ service is properly configured with management UI, appropriate port mappings, and credentials matching the environment configuration.


98-100: LGTM: Celery worker correctly configured for antenna queue.

The worker command properly restricts task consumption to the antenna queue and includes the necessary rabbitmq dependency.


102-106: celerybeat already has the rabbitmq dependency through YAML anchor inheritance.

The celerybeat service uses <<: *django, which merges all properties from the django anchor. The parent anchor includes depends_on: [postgres, redis, minio-init, ml_backend, rabbitmq], so celerybeat inherits the rabbitmq dependency. The environment confirms RabbitMQ is the broker: CELERY_BROKER_URL=amqp://rabbituser:rabbitpass@rabbitmq:5672//. No changes are needed.

compose/local/django/celery/worker/start (1)

7-7: LGTM: Worker start script correctly updated.

The worker is properly configured to consume from the antenna queue while preserving the development auto-reload functionality.

config/settings/base.py (2)

304-304: LGTM: Default Celery queue configured.

Setting CELERY_TASK_DEFAULT_QUEUE = "antenna" correctly establishes the default queue for task routing, consistent with the worker configuration.

Note: This setting is duplicated in config/settings/local.py (line 89), which should be removed.


308-308: LGTM: Result backend fallback configured correctly.

The result backend properly falls back to REDIS_URL when CELERY_RESULT_BACKEND is not explicitly provided, maintaining Redis for result storage while using RabbitMQ for message brokering.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.envs/.local/.django (1)

19-22: Consider specifying an explicit vhost in CELERY_BROKER_URL.

The AMQP URL currently lacks a trailing slash or vhost specification. While RabbitMQ will default to its standard vhost, being explicit improves clarity and prevents potential misconfiguration.

Apply this diff to use the default vhost explicitly:

-CELERY_BROKER_URL=amqp://rabbituser:rabbitpass@rabbitmq:5672
+CELERY_BROKER_URL=amqp://rabbituser:rabbitpass@rabbitmq:5672/

Or use the default vhost "/" encoded as %2F:

-CELERY_BROKER_URL=amqp://rabbituser:rabbitpass@rabbitmq:5672
+CELERY_BROKER_URL=amqp://rabbituser:rabbitpass@rabbitmq:5672/%2F
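To make the vhost distinction concrete, a quick parse with the standard library (example credentials from this file; kombu applies its own vhost defaulting on top of this, so this only illustrates what each URL form actually carries):

```python
from urllib.parse import urlparse, unquote

# The three URL variants discussed above.
no_slash = urlparse("amqp://rabbituser:rabbitpass@rabbitmq:5672")
bare_slash = urlparse("amqp://rabbituser:rabbitpass@rabbitmq:5672/")
encoded = urlparse("amqp://rabbituser:rabbitpass@rabbitmq:5672/%2F")

print(repr(no_slash.path))    # '' : no vhost component at all
print(repr(bare_slash.path))  # '/' : empty path segment, broker default applies
print(repr(unquote(encoded.path.lstrip("/"))))  # '/' : explicit default vhost
```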
docker-compose.ci.yml (1)

46-49: Add a healthcheck for the RabbitMQ service.

Multiple services (django, celeryworker, ml_backend) depend on RabbitMQ, but there's no healthcheck to ensure it's ready before dependent services start. This can lead to connection failures during CI startup.

Apply this diff to add a healthcheck:

 rabbitmq:
   image: rabbitmq:3-management
   env_file:
     - ./.envs/.ci/.django
+  healthcheck:
+    test: ["CMD", "rabbitmq-diagnostics", "ping"]
+    interval: 10s
+    timeout: 5s
+    retries: 5
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c7b0380 and 249c058.

📒 Files selected for processing (3)
  • .envs/.ci/.django (1 hunks)
  • .envs/.local/.django (1 hunks)
  • docker-compose.ci.yml (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .envs/.ci/.django
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (1)
docker-compose.ci.yml (1)

39-44: LGTM! Celeryworker correctly configured for antenna queue.

The celeryworker correctly specifies --queues=antenna, which aligns with the PR objectives and the CELERY_TASK_DEFAULT_QUEUE setting in the codebase.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
.envs/.ci/.postgres (1)

6-6: Consider using consistent DATABASE_URL scheme across environments.

The DATABASE_URL is correctly constructed from the individual PostgreSQL variables. However, this file uses the postgresql:// scheme while .envs/.local/.postgres uses postgres://. Both are valid aliases, but maintaining consistency across local and CI environments reduces potential confusion.

Apply this diff to align with the local environment (or vice versa):

-DATABASE_URL=postgresql://4JXkOnTAeDmDyIapSRrGEE:d4xojpnJU3OzPQ0apSCLP1oHR1TYvyMzAlF5KpE9HFL6MPlnbDibwI@postgres:5432/ami-ci
+DATABASE_URL=postgres://4JXkOnTAeDmDyIapSRrGEE:d4xojpnJU3OzPQ0apSCLP1oHR1TYvyMzAlF5KpE9HFL6MPlnbDibwI@postgres:5432/ami-ci
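To illustrate the alias point, a quick parse with the standard library (hypothetical credentials in place of the real ones). Both forms carry identical connection details; tools such as dj-database-url accept either scheme, though some libraries (recent SQLAlchemy, for example) accept only postgresql://, which is another argument for picking one and sticking with it:

```python
from urllib.parse import urlparse

# Hypothetical credentials stand in for the real ones in the env files.
ci = urlparse("postgresql://user:secret@postgres:5432/ami-ci")
local = urlparse("postgres://user:secret@postgres:5432/ami-ci")

# Only the scheme string differs; host, port, and database are identical.
assert ci.scheme == "postgresql" and local.scheme == "postgres"
assert (ci.hostname, ci.port, ci.path) == (local.hostname, local.port, local.path)
print(ci.hostname, ci.port, ci.path)
```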
docker-compose.ci.yml (1)

46-49: Consider adding a healthcheck for RabbitMQ.

The RabbitMQ service lacks a healthcheck, which could lead to connection failures if dependent services (django, celeryworker) attempt to connect before RabbitMQ is fully ready.

Apply this diff to add a healthcheck:

 rabbitmq:
   image: rabbitmq:3-management
   env_file:
     - ./.envs/.ci/.django
+  healthcheck:
+    test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
+    interval: 10s
+    timeout: 5s
+    retries: 5
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 249c058 and 1c337cd.

📒 Files selected for processing (5)
  • .envs/.ci/.django (1 hunks)
  • .envs/.ci/.postgres (1 hunks)
  • .envs/.local/.django (1 hunks)
  • .envs/.local/.postgres (1 hunks)
  • docker-compose.ci.yml (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .envs/.local/.django
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (7)
.envs/.local/.postgres (1)

6-6: DATABASE_URL correctly constructed.

The connection string properly matches all individual PostgreSQL variables. Note the scheme inconsistency with the CI environment file (see .envs/.ci/.postgres review).

docker-compose.ci.yml (5)

4-8: LGTM! Good CI isolation with named volumes.

The named volumes with local driver provide proper data persistence for CI testing while avoiding conflicts with local development setup.


16-21: LGTM! Proper RabbitMQ dependency for Django/Celery.

The django service correctly depends on rabbitmq since it uses Celery for task management.


42-44: LGTM! Celeryworker properly configured with antenna queue.

The celeryworker correctly depends on rabbitmq and the command properly specifies the antenna queue, aligning with the PR objectives.


31-32: LGTM! Postgres data properly persisted with named volume.

The postgres service correctly uses the named volume for data persistence in CI.


54-55: LGTM! MinIO data properly persisted with named volume.

The minio service correctly uses the named volume for data persistence in CI.

.envs/.ci/.django (1)

22-27: LGTM! RabbitMQ configuration is correct.

The RabbitMQ configuration is properly set with consistent credentials and the correct AMQP URL format (single trailing slash for default vhost). While the credentials are simple, they're acceptable for the CI environment.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
docker-compose.yml (1)

119-128: Consider adding a volume for RabbitMQ data persistence.

The RabbitMQ service is properly configured with the management plugin and correct port mappings. However, without a defined volume, RabbitMQ data (queues, messages, exchanges) will be lost when the container is removed.

For local development, this might be acceptable if you're okay with losing queued tasks on restart. If persistence is desired, consider adding a volume:

  rabbitmq:
    image: rabbitmq:3-management
+   volumes:
+     - rabbitmq_data:/var/lib/rabbitmq
    ports:
      - "5672:5672"

And add the volume definition at the top:

volumes:
  rabbitmq_data: {}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53d13eb and efec9f3.

📒 Files selected for processing (7)
  • .envs/.ci/.postgres (1 hunks)
  • .envs/.local/.postgres (1 hunks)
  • compose/local/django/celery/worker/start (1 hunks)
  • compose/production/django/celery/worker/start (1 hunks)
  • docker-compose.ci.yml (4 hunks)
  • docker-compose.production.yml (1 hunks)
  • docker-compose.yml (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • .envs/.local/.postgres
  • compose/local/django/celery/worker/start
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (8)
.envs/.ci/.postgres (1)

6-7: LGTM! Helpful convenience variable.

The DATABASE_URL addition is well-documented and provides a useful shortcut for database tools while keeping the existing POSTGRES_ variables as the source of truth for Django.

compose/production/django/celery/worker/start (1)

7-7: LGTM! Queue targeting is explicit and consistent.

The --queues=antenna flag correctly constrains the production Celery worker to the antenna queue, aligning with the default queue configuration in settings.

docker-compose.yml (3)

27-27: LGTM! Correct dependency added.

The django service correctly depends on RabbitMQ since it will need to enqueue Celery tasks through the RabbitMQ broker.


98-102: LGTM! Appropriate local development configuration.

The celeryworker service correctly adds:

  • CELERY_DEBUG=1 for enhanced debugging in local development
  • RabbitMQ dependency for proper startup ordering
  • Updated command to use the /start-celeryworker script

125-125: No action required — RabbitMQ credentials are properly configured.

Verification confirms that .envs/.local/.django contains all required RabbitMQ configuration variables (RABBITMQ_DEFAULT_USER, RABBITMQ_DEFAULT_PASS, and CELERY_BROKER_URL), with credentials correctly aligned across both environments (local and CI). The configuration is already consistent and properly set up.

docker-compose.ci.yml (3)

1-8: LGTM! Excellent CI isolation.

The explicit stack name antenna-ci and dedicated volumes (ami_ci_postgres_data, minio_ci_data) ensure complete isolation from the local development environment, preventing conflicts when running tests.


24-24: LGTM! RabbitMQ integration is consistent.

The RabbitMQ service and dependencies are properly configured for the CI environment:

  • Django and celeryworker services correctly depend on RabbitMQ
  • RabbitMQ service uses the same image and configuration pattern as local development
  • Environment variables are loaded from .envs/.ci/.django

Also applies to: 44-45, 48-52


71-72: LGTM! Proper health check dependency.

The service_healthy condition ensures that minio-init only runs after MinIO is fully ready, preventing initialization failures.

Comment on lines +19 to 22
extra_hosts: # hostname aliases to the IPs of persistent services running outside docker network
- "db:${DATABASE_IP}"
- "rabbitmq:${RABBITMQ_IP}"
- "redis:${REDIS_IP}"
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify RABBITMQ_IP is set in production environment.

The extra_hosts configuration correctly follows the existing pattern for external service resolution. Ensure that the ${RABBITMQ_IP} environment variable is properly defined in your production deployment configuration.


🏁 Script executed:

#!/bin/bash
# Description: Verify RABBITMQ_IP is referenced in production environment files

# Search for RABBITMQ_IP definition in production environment files
rg -n "RABBITMQ_IP" .envs/.production/

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Description: Investigate environment variable configuration structure

# 1. Check if .envs directory exists and its structure
echo "=== Directory structure of .envs/ ==="
fd . .envs/ -type f -name "*" 2>/dev/null | head -20

# 2. Search entire repo for RABBITMQ_IP (not just .envs/.production/)
echo -e "\n=== All occurrences of RABBITMQ_IP in repo ==="
rg -n "RABBITMQ_IP" . 2>/dev/null

# 3. Search for DATABASE_IP and REDIS_IP to understand the pattern
echo -e "\n=== How DATABASE_IP is referenced ==="
rg -n "DATABASE_IP" . 2>/dev/null | head -10

echo -e "\n=== How REDIS_IP is referenced ==="
rg -n "REDIS_IP" . 2>/dev/null | head -10

Length of output: 473


🏁 Script executed:

#!/bin/bash
# Description: Search for environment configuration and documentation

# 1. Look for .env files in repo root
echo "=== .env files in repo root ==="
fd -t f "^\.env" . -max-depth 1 2>/dev/null

# 2. Search for documentation about environment setup
echo -e "\n=== README or documentation files ==="
fd -t f "(README|INSTALL|SETUP|DEPLOY)" . -iname "*" 2>/dev/null | head -10

# 3. Check docker-compose.worker.yml for rabbitmq references
echo -e "\n=== Full extra_hosts section in docker-compose.worker.yml ==="
rg -A 5 "extra_hosts" ./docker-compose.worker.yml 2>/dev/null

# 4. Check for any comments in docker-compose.production.yml about these variables
echo -e "\n=== Comments in docker-compose.production.yml around extra_hosts ==="
rg -B 5 -A 5 "extra_hosts" ./docker-compose.production.yml 2>/dev/null

Length of output: 821


🏁 Script executed:

#!/bin/bash
# Description: Check if Celery worker should connect to RabbitMQ

# 1. Search for celery/rabbitmq configuration in codebase
echo "=== Celery/RabbitMQ references ==="
rg -n -i "celery|rabbitmq|broker" . --type=py --type=yml 2>/dev/null | head -20

# 2. Check docker-compose.worker.yml full configuration
echo -e "\n=== Full docker-compose.worker.yml ==="
cat ./docker-compose.worker.yml 2>/dev/null

Length of output: 887


Add RABBITMQ_IP mapping to docker-compose.worker.yml extra_hosts

The celeryworker service requires RABBITMQ_IP hostname mapping to connect to the external RabbitMQ broker, but this is missing from docker-compose.worker.yml. The celeryworker inherits from the django service which currently only includes DATABASE_IP and REDIS_IP. Add the rabbitmq hostname mapping to lines 16-17 to match the production configuration:

    extra_hosts:
      - "db:${DATABASE_IP}"
      - "rabbitmq:${RABBITMQ_IP}"
      - "redis:${REDIS_IP}"

Without this, the Celery worker will fail to resolve the rabbitmq hostname and cannot connect to the broker.

🤖 Prompt for AI Agents
In docker-compose.production.yml around lines 19 to 22, the celeryworker service
(via the django service) is missing the rabbitmq hostname mapping in
extra_hosts; add the "rabbitmq:${RABBITMQ_IP}" mapping alongside the existing db
and redis entries so extra_hosts contains db, rabbitmq, and redis mappings to
their respective environment IP variables, ensuring the worker can resolve and
connect to the external RabbitMQ broker.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between efec9f3 and e8d10ab.

📒 Files selected for processing (7)
  • .agents/AGENTS.md (2 hunks)
  • .gitignore (1 hunks)
  • .vscode/launch.json (2 hunks)
  • README.md (1 hunks)
  • compose/local/django/celery/worker/start (1 hunks)
  • compose/local/django/start (1 hunks)
  • docker-compose.yml (3 hunks)
✅ Files skipped from review due to trivial changes (2)
  • .gitignore
  • README.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • docker-compose.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (6)
compose/local/django/start (1)

9-14: Debug mode implementation looks good.

The conditional debug support is properly implemented with appropriate defaults and identical uvicorn configuration in both branches.

compose/local/django/celery/worker/start (1)

8-10: Queue configuration correctly enforced.

The explicit --queues=antenna constraint is properly applied in both debug and non-debug branches, aligning with the PR objective to constrain workers to the "antenna" queue.

.vscode/launch.json (2)

20-35: Configuration is correct; worker script needs adjustment.

This configuration correctly expects the Celery worker on port 5679. However, compose/local/django/celery/worker/start:8 currently binds to port 5678, creating a mismatch. Once the worker script is updated to use port 5679 (as flagged in that file), this configuration will work correctly.


52-58: Compound configuration enhances debugging workflow.

The combined "Attach: Django + Celery" configuration with stopAll: true provides a convenient way to debug both services simultaneously, which is valuable for testing Celery task integration.

.agents/AGENTS.md (2)

16-24: Good additions, but tangential to PR scope.

The Performance Optimization and Git Commit Guidelines sections are well-written and useful. However, these additions appear unrelated to the stated PR objective of configuring RabbitMQ as the Celery broker. Consider whether these belong in this PR or should be separated into a documentation-only commit.


1-106: ⚠️ Scope mismatch: This PR should update infrastructure files, not just documentation.

The PR objectives state that this PR "configures RabbitMQ as the Celery broker," "sets 'antenna' as the default Celery task queue," and "updates local and CI Docker Compose, env files, and worker start scripts." However, the only file provided for review is developer documentation (CLAUDE.md), which makes no mention of:

  • RabbitMQ broker configuration
  • CELERY_BROKER_URL environment variables
  • RABBITMQ_HOST, RABBITMQ_PORT credentials
  • Queue constraints (the "antenna" queue mentioned in objectives)
  • Changes to docker-compose or .env files
  • Updates to worker start scripts for RabbitMQ

This suggests the actual infrastructure files are missing from the code review. Please verify that all changed files are included:

  • docker-compose.yml (local and CI stacks)
  • .env files
  • compose/local/django/start
  • compose/local/django/celery/worker/start
  • config/celery_app.py or settings configuration files

Resolved conflicts by:
- Keeping main's debugger improvements (correct ports and -Xfrozen_modules=off flag)
- Preserving this branch's RabbitMQ queue name (--queues=antenna)
- Merging celeryworker config (ports: [] + depends_on: rabbitmq)
- Using consistent override file naming (docker-compose.override-example.yml)
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
compose/local/django/celery/worker/start (1)

13-13: Prefer running the Celery CLI under watchfiles for reliability.

Invoking celery.__main__.main with --args is brittle. Run the Celery CLI directly so argument parsing and exit codes behave as expected.

Apply this change:

-    exec watchfiles --filter python celery.__main__.main --args '-A config.celery_app worker --queues=antenna -l INFO'
+    exec watchfiles --filter python 'celery -A config.celery_app worker --queues=antenna -l INFO'
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e8d10ab and 82fdfd8.

📒 Files selected for processing (3)
  • .agents/AGENTS.md (2 hunks)
  • compose/local/django/celery/worker/start (1 hunks)
  • docker-compose.yml (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • docker-compose.yml
  • .agents/AGENTS.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (1)
compose/local/django/celery/worker/start (1)

11-11: Debugger path LGTM; confirm Python supports -Xfrozen_modules.

Port 5679 resolves the prior conflict and the queue pin is correct. Please confirm the container’s Python version supports -Xfrozen_modules=off to avoid startup failures on older images.

Collaborator

@mihow mihow left a comment


Thanks for breaking this out Vanessa! I just updated how we run the start command.

@mihow mihow merged commit 1045513 into main Nov 14, 2025
7 checks passed
@mihow mihow deleted the feat/rabbitmq-broker branch November 14, 2025 03:17

3 participants