✨(mta) add prototype integrated MTA out #285
Conversation
Force-pushed from effaa74 to 59f0786
- adds some variables for the dante configuration
- uses heredoc format for Dockerfile RUNs
- removes unused stuff in the Dockerfile
- adds the GitHub workflow to build the image
Force-pushed from 4abcf35 to 3780c67
for better grepability
Selfcheck prometheus
Wrapping the proxied socket in TLS worked but was incompatible with later doing STARTTLS in SMTP connections.
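The incompatibility noted above follows from how STARTTLS works: the SMTP session must begin in plaintext and only upgrade to TLS after the server advertises the extension, so a SOCKS-tunneled socket has to be handed to the client unencrypted. A minimal sketch of that approach, assuming PySocks; `ProxiedSMTP` and its constructor arguments are illustrative names, not the PR's actual code:

```python
import smtplib


class ProxiedSMTP(smtplib.SMTP):
    """SMTP client whose TCP connection is tunneled through a SOCKS5 proxy.

    The tunnel itself stays plaintext; TLS is only negotiated afterwards via
    STARTTLS, which is why wrapping the proxied socket in TLS up front breaks
    the SMTP handshake.
    """

    def __init__(self, proxy_host, proxy_port, *args,
                 proxy_user=None, proxy_password=None, **kwargs):
        self._proxy = (proxy_host, proxy_port, proxy_user, proxy_password)
        # No host argument is forwarded here, so no connection is made yet.
        super().__init__(*args, **kwargs)

    def _get_socket(self, host, port, timeout):
        # Lazily import PySocks so the class stays importable without it.
        import socks

        proxy_host, proxy_port, user, password = self._proxy
        sock = socks.socksocket()
        if isinstance(timeout, (int, float)):
            sock.settimeout(timeout)
        sock.set_proxy(socks.PROXY_TYPE_SOCKS5, proxy_host, proxy_port,
                       rdns=False, username=user, password=password)
        sock.connect((host, port))
        return sock
```

With such a class, `connect()` would open the plaintext tunnel and `starttls()` would then negotiate TLS inside it.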
@coderabbitai deep review please
Walkthrough

Adds a SOCKS proxy service and tests; reworks outbound delivery to support mode-based direct MX and SMTP relay (with optional SOCKS proxy); centralizes draft logic into new helpers; implements a self-check (management command, Celery task/beat, optional Prometheus metrics); updates settings, docs, tests, CI, and removes a legacy S3 auth helper.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant API as API/Django
    participant MDA as MDA Outbound
    participant DNS as DNS Resolver
    participant SMTP as SMTP Relay/MX
    participant Proxy as SOCKS Proxy
    User->>API: create/send message
    API->>MDA: send_outbound_message(message)
    alt MTA_OUT_MODE == "relay"
        MDA->>SMTP: send_smtp_mail(smtp_host, port, creds, envelope, content, proxy?)
        SMTP-->>MDA: per-recipient statuses
    else MTA_OUT_MODE == "direct"
        MDA->>DNS: resolve MX for recipient domains
        DNS-->>MDA: MX records (fallback to A)
        loop per MX host
            MDA->>SMTP: send_smtp_mail(mx_ip, port, envelope, content, proxy?)
            SMTP-->>MDA: per-recipient statuses
        end
    else invalid
        MDA-->>API: raise ValueError
    end
    MDA-->>API: update MessageRecipient statuses
    API-->>User: return result
```
```mermaid
sequenceDiagram
    autonumber
    actor Beat
    participant Worker as Celery Worker
    participant SC as SelfCheck Orchestrator
    participant PG as Prometheus Pushgateway
    Note over Beat: scheduled at MESSAGES_SELFCHECK_INTERVAL
    Beat->>Worker: enqueue core.mda.tasks.selfcheck_task
    Worker->>SC: run_selfcheck()
    SC->>SC: create draft, send, poll receipt, verify, cleanup
    alt metrics enabled
        SC->>PG: push metrics (times, success)
    end
    SC-->>Worker: result {success, times}
```
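The direct branch's "MX records (fallback to A)" step follows RFC 5321's implicit-MX rule: when a domain publishes no MX record, delivery goes to the domain's own A/AAAA record. A minimal sketch of that selection logic; `mx_targets` and the injected `lookup` callable are illustrative, not the PR's API (a real `lookup` could wrap dnspython's `resolver.resolve(domain, "MX")`):

```python
def mx_targets(domain, lookup):
    """Return delivery hostnames for `domain`, highest priority first.

    `lookup(domain)` should return a list of (preference, hostname) tuples,
    or raise LookupError / return [] when the domain publishes no MX.
    """
    try:
        records = lookup(domain)
    except LookupError:
        records = []
    if not records:
        # Implicit MX (RFC 5321 section 5.1): fall back to the A record.
        return [domain]
    # A lower preference value means a higher priority.
    return [host for _, host in sorted(records)]
```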
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Assessment against linked issues: Out-of-scope changes
🧩 Analysis chain

I'll conduct a deep review of your MTA integration changes. Let me examine the code changes and implementation details.

@sylvinus I've conducted a comprehensive deep review of your MTA implementation. This is an impressive piece of work with solid architecture and excellent test coverage. Here's my detailed analysis:

🏆 Strengths

Security & Authentication
Architecture & Design
Testing & Reliability
Actionable comments posted: 39
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
src/backend/core/services/identity/keycloak.py (4)
63-80: Bug: underscore-prefixed custom attributes still sent to Keycloak unnormalized

You skip normalization for keys starting with "_", but they are still injected via the dict expansion at Line 67, so they'll be sent as raw (often non-list) values and may leak private metadata. Either exclude them entirely or normalize to lists-of-strings like the others. Suggest excluding and normalizing only allowed keys:

```diff
-        # Prepare group attributes
-        group_attributes = {
-            "maildomain_id": [str(maildomain.id)],
-            "maildomain_name": [maildomain.name],
-            **(maildomain.custom_attributes or {}),
-        }
+        # Prepare group attributes (exclude private keys and normalize values)
+        group_attributes = {
+            "maildomain_id": [str(maildomain.id)],
+            "maildomain_name": [maildomain.name],
+        }
         # Add custom attributes
         if maildomain.custom_attributes:
             for key, value in maildomain.custom_attributes.items():
-                if key.startswith("_"):
-                    continue
-                # Ensure values are lists (Keycloak requirement)
-                if isinstance(value, list):
-                    group_attributes[key] = value
-                else:
-                    group_attributes[key] = [str(value)]
+                if key.startswith("_"):
+                    continue  # do not send private keys to Keycloak
+                # Ensure values are lists of strings (Keycloak requirement)
+                if value is None:
+                    group_attributes[key] = []
+                elif isinstance(value, list):
+                    group_attributes[key] = [str(v) for v in value]
+                else:
+                    group_attributes[key] = [str(value)]
```
153-160: Normalize user custom attributes and drop private keys before sending to Keycloak

User attributes are passed through as-is (including underscores and non-lists). Normalize and filter like groups to avoid API type errors and metadata leaks:

```diff
-        # Prepare user attributes
-        user_attributes = {
-            "mailbox_id": [str(mailbox.id)],
-            "maildomain_id": [str(mailbox.domain.id)],
-            "local_part": [mailbox.local_part],
-            "domain_name": [mailbox.domain.name],
-            **user_custom_attributes,
-        }
+        # Prepare user attributes (exclude private keys and normalize values)
+        normalized_user_attrs = {}
+        if user_custom_attributes:
+            for key, value in user_custom_attributes.items():
+                if key.startswith("_"):
+                    continue
+                if value is None:
+                    normalized_user_attrs[key] = []
+                elif isinstance(value, list):
+                    normalized_user_attrs[key] = [str(v) for v in value]
+                else:
+                    normalized_user_attrs[key] = [str(value)]
+
+        user_attributes = {
+            "mailbox_id": [str(mailbox.id)],
+            "maildomain_id": [str(mailbox.domain.id)],
+            "local_part": [mailbox.local_part],
+            "domain_name": [mailbox.domain.name],
+            **normalized_user_attrs,
+        }
```
108-111: Harden `group_id` retrieval in `create_group()`

`KeycloakAdmin.create_group()` returns the new group's ID (string) or `None` if the group already exists or the Location header is missing. Add a fallback lookup via `get_group_by_path()` to ensure `group_id` is always set:

```diff
@@ src/backend/core/services/identity/keycloak.py:108-111
-            group_id = keycloak_admin.create_group(
-                payload=group_payload, parent=parent_id
-            )
+            group_id = keycloak_admin.create_group(
+                payload=group_payload, parent=parent_id
+            )
+            if not group_id:
+                created = keycloak_admin.get_group_by_path(group_path)
+                if created and "error" not in created:
+                    group_id = created.get("id")
```
199-206: Align membership lookup with creation logic

In src/backend/core/services/identity/keycloak.py (around line 199), replace:

```diff
-        group_name = f"maildomain-{mailbox.domain.name}"
+        group_path = f"{settings.KEYCLOAK_GROUP_PATH_PREFIX}{mailbox.domain.name}"
+        group_name = group_path.rsplit("/", maxsplit=1)[-1]
```

This ensures the name used for `get_groups` matches how groups are created in `sync_maildomain_to_keycloak_group`.

src/backend/pyproject.toml (1)
1-1: Install missing dependencies in CI configuration

The smoke test on Python 3.13 failed to import dkimpy (`dkim`), dnspython (`dns`), PySocks (`socks`), cryptography, and django; add these packages to the CI install step to prevent runtime import errors.

src/backend/core/api/utils.py (1)
66-87: Guard `generate_upload_policy` for non-S3 backends and missing boto3

The current implementation assumes S3 storage and boto3 presence; with `django-storages[s3]` removed this will raise at runtime. Add a duck-typing guard with a clear error, or restore the dependency (see pyproject comment). Apply:

```diff
 def generate_upload_policy(item):
     """
     Generate a S3 upload policy for a given item.
     """
     # Generate a unique key for the item
     key = f"{item.key_base}/{item.filename}"

-    # Generate the policy
-    s3_client = default_storage.connection.meta.client
-    policy = s3_client.generate_presigned_post(
-        default_storage.bucket_name,
-        key,
-        Fields={"acl": "private"},
-        Conditions=[
-            {"acl": "private"},
-            ["content-length-range", 0, settings.ITEM_FILE_MAX_SIZE],
-        ],
-        ExpiresIn=settings.AWS_S3_UPLOAD_POLICY_EXPIRATION,
-    )
-
-    return policy
+    # Generate the policy only if the storage exposes an S3-like client
+    storage = default_storage
+    # Duck-typing: expect `.connection.meta.client.generate_presigned_post`
+    s3_client = getattr(getattr(getattr(storage, "connection", None), "meta", None), "client", None)
+    bucket_name = getattr(storage, "bucket_name", None)
+    if not (s3_client and bucket_name and hasattr(s3_client, "generate_presigned_post")):
+        raise NotImplementedError(
+            "generate_upload_policy requires an S3-compatible storage backend. "
+            "Either configure django-storages S3 (and ensure boto3 is installed) "
+            "or replace this code path for your storage."
+        )
+
+    policy = s3_client.generate_presigned_post(
+        bucket_name,
+        key,
+        Fields={"acl": "private"},
+        Conditions=[
+            {"acl": "private"},
+            ["content-length-range", 0, settings.ITEM_FILE_MAX_SIZE],
+        ],
+        ExpiresIn=settings.AWS_S3_UPLOAD_POLICY_EXPIRATION,
+    )
+
+    return policy
```

docs/architecture.md (1)
24-27: Update "Mail Transfer Layer" to reflect Celery-based outbound

This still references a Postfix MTA-Out, which contradicts the new mode-based outbound implemented in this PR:

```diff
-**MTA-Out (Outbound)**: Postfix server for email delivery and relay
+**MTA-Out (Outbound)**: Celery‑based SMTP delivery (direct to MX or via relay), with optional SOCKS proxying
```

src/backend/core/api/viewsets/draft.py (1)
47-49: Schema requires "subject" but tests allow a missing/empty subject; align API docs with behavior

```diff
-    "subject": drf_serializers.CharField(
-        required=True, help_text="Subject of the message"
-    ),
+    "subject": drf_serializers.CharField(
+        required=False,
+        allow_blank=True,
+        allow_null=True,
+        help_text="Subject of the message (optional; empty or null allowed)",
+    ),
```

src/socks-proxy/tests/conftest.py (1)
1-264: Remove remaining lint errors: unused imports and bare `except` blocks

- Remove unused imports: `time` (line 3), `socket` (line 6), `ssl` (line 8), `struct` (line 9).
- Replace bare `except:` in `get_container_ip` (line 64) and in fixture teardowns (lines 216–219, 259–262) with either a specific exception or `contextlib.suppress(BaseException)`.

Re-run `ruff check src/socks-proxy/tests/conftest.py` to confirm all issues are resolved.
🧹 Nitpick comments (59)
src/backend/core/signals.py (1)
169-171: Good: delete_by_query made resilient to 404/409 with conflicts='proceed'

Optional: add `refresh='wait_for'` if tests or UI need immediate consistency after delete. If needed, confirm opensearch-py supports `refresh` in your pinned version.

src/socks-proxy/tests/__init__.py (1)
1-1: Drop `__init__.py` to avoid packaging the tests directory

With pytest's `--import-mode=importlib` already enabled (see backend pyproject), keeping `__init__.py` can change import semantics and occasionally hinder discovery. Recommend removing this file:

```diff
-# Tests package for SOCKS proxy
```

Or delete the file entirely.
src/socks-proxy/tests/requirements.txt (1)
1-5: Align dependency pins and add a trailing newline

- requests here is 2.32.3, backend is 2.32.5. Prefer aligning to 2.32.5 to avoid drift.
- Ensure the file ends with a newline for tooling compatibility.

Apply:

```diff
 pytest==8.3.5
 pytest-cov==6.0.0
 aiosmtpd==1.4.4
-requests==2.32.3
+requests==2.32.5
 PySocks==1.7.1
+
```

src/backend/pyproject.toml (3)
86-86: Unify pytest versions across the repo

Backend dev deps pin pytest==8.3.4 while socks-proxy tests pin 8.3.5. Align to 8.3.5 to reduce cross-env surprises:

```diff
-"pytest==8.3.4",
+"pytest==8.3.5",
```
49-49: Direct VCS dependency for flanker

Pinning by commit hash is fine; consider vendoring a fork or publishing a tagged release to improve reproducibility and SBOM clarity.
75-92: Dev-only tools are fine; add `pipdeptree` usage to docs/Makefile target comments

Since you added it, a brief mention in the README/Makefile helps future debuggability.
env.d/development/socks-proxy.defaults (1)
1-4: Clarify non-production defaults and consider a toggle for auth

- Add a comment warning not to reuse these creds in prod.
- Optionally support `PROXY_AUTH_REQUIRED=true|false` for quicker local testing.

Apply:

```diff
 PROXY_USERS="user1:pwd1,user2:pwd2"
-# This should fit most local development environments with Docker.
+# This should fit most local development environments with Docker.
+# Do NOT reuse these credentials in production.
 PROXY_SOURCE_IP_WHITELIST="127.0.0.1/32,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
+
+# Optional: set to "false" to disable auth locally (if supported by entrypoint)
+# PROXY_AUTH_REQUIRED="true"
```

src/backend/core/api/utils.py (5)
1-1: Update the module docstring; it no longer "generates S3 authorization headers"

Reflect current responsibilities to avoid confusion:

```diff
-"""Util to generate S3 authorization headers for object storage access control"""
+"""Utilities for tree transformation and (optionally) S3 upload policy generation."""
```
6-6: Remove the dead commented import

The stale `# import botocore` should be dropped:

```diff
-# import botocore
```
38-64: Delete legacy commented-out S3 auth code

Commented blocks rot and confuse readers. History is in Git:

```diff
-# def generate_s3_authorization_headers(key):
-#     """
-#     Generate authorization headers for an s3 object.
-#     These headers can be used as an alternative to signed urls with many benefits:
-#     - the urls of our files never expire and can be stored in our items' content
-#     - we don't leak authorized urls that could be shared (file access can only be done
-#     with cookies)
-#     - access control is truly realtime
-#     - the object storage service does not need to be exposed on internet
-#     """
-#     url = default_storage.unsigned_connection.meta.client.generate_presigned_url(
-#         "get_object",
-#         ExpiresIn=0,
-#         Params={"Bucket": default_storage.bucket_name, "Key": key},
-#     )
-#     request = botocore.awsrequest.AWSRequest(method="get", url=url)
-#
-#     s3_client = default_storage.connection.meta.client
-#     # pylint: disable=protected-access
-#     credentials = s3_client._request_signer._credentials
-#     frozen_credentials = credentials.get_frozen_credentials()
-#     region = s3_client.meta.region_name
-#     auth = botocore.auth.S3SigV4Auth(frozen_credentials, "s3", region)
-#     auth.add_auth(request)
-#
-#     return request
```
17-35: flat_to_nested mutates its input; document or avoid the in-place sort

It sorts and augments `items` in-place. If callers reuse the list, that's surprising. Either copy before sorting or document the mutation explicitly:

```diff
-    # Sort the flat list by path to ensure parent nodes are processed first
-    items.sort(key=lambda x: x["path"])
+    # Sort a copy to avoid mutating the caller's list
+    items = sorted(items, key=lambda x: x["path"])
```
32-35: The multiple-roots error message could include the offending paths

Helps debugging content issues:

```diff
-    if len(roots) > 1:
-        raise ValueError("More than one root element detected")
+    if len(roots) > 1:
+        raise ValueError(f"More than one root element detected: {[r['path'] for r in roots]}")
```

src/socks-proxy/README.md (1)
29-31: Document credential delimiter constraints to avoid misconfiguration

Passwords/usernames cannot contain "," or ":" due to parsing in entrypoint.sh. Apply this diff to add a note below the table:

```diff
+Note:
+- Usernames and passwords must not contain "," or ":" characters (they are used as delimiters in PROXY_USERS).
+- Avoid using production credentials; these accounts exist only inside the proxy container.
```

src/socks-proxy/entrypoint.sh (2)
54-61: Reduce attack surface: drop the unnecessary SOCKS BIND

Unless you need active-mode/port-binding, allow only CONNECT:

```diff
-    command: bind connect
+    command: connect
```
66-68: Set explicit permissions on the generated config

Not strictly sensitive, but deterministic perms help debugging and avoid noisy warnings in some images:

```diff
 echo "$DANTE_CONFIG" > /etc/sockd.conf
+chmod 0644 /etc/sockd.conf
```
docs/selfcheck.md (2)
49-49: Clarify the scheduling gate

Document that beat scheduling can be disabled by configuration to avoid surprises in prod-like environments:

```diff
-The selfcheck runs automatically every 10 minutes via Celery Beat. The interval can be configured using the `MESSAGES_SELFCHECK_INTERVAL` setting.
+The selfcheck runs automatically every 10 minutes via Celery Beat (configurable via `MESSAGES_SELFCHECK_INTERVAL`). It is disabled when `DISABLE_CELERY_BEAT_SCHEDULE=True`.
```
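For illustration, a gated beat entry along these lines would express the behavior described above; `build_beat_schedule` and the exact wiring are assumptions based on names mentioned in this review, not the PR's actual code:

```python
from datetime import timedelta


def build_beat_schedule(interval_seconds, beat_disabled=False):
    """Return the selfcheck beat schedule, or nothing when beat is disabled.

    `beat_disabled` would mirror a DISABLE_CELERY_BEAT_SCHEDULE-style setting,
    and `interval_seconds` a MESSAGES_SELFCHECK_INTERVAL-style setting.
    """
    if beat_disabled:
        # An empty schedule means Celery Beat enqueues nothing.
        return {}
    return {
        "selfcheck": {
            "task": "core.mda.tasks.selfcheck_task",
            "schedule": timedelta(seconds=interval_seconds),
        }
    }
```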
11-13: Avoid leaking implementation details that may drift

If `send_message(force_mta_out=True)` is an internal detail, consider referencing the public API ("outbound delivery path") to prevent docs rot:

```diff
-3. **Sends the message** via the outbound system using `prepare_outbound_message` and `send_message(force_mta_out=True)`
+3. **Sends the message** via the outbound delivery path (internally prepares and enqueues the message for MTA‑out)
```

src/backend/core/mda/smtp.py (1)
161-168: Add login error handling and debug visibility

Consider catching SMTPAuthenticationError distinctly to signal auth vs. transport failures:

```diff
-    if smtp_username and smtp_password:
-        client.login(smtp_username, smtp_password)
+    if smtp_username and smtp_password:
+        try:
+            client.login(smtp_username, smtp_password)
+        except smtplib.SMTPAuthenticationError as auth_err:
+            logger.error("SMTP auth failed for user '%s': %s", smtp_username, auth_err, exc_info=True)
+            for email in recipient_emails:
+                statuses[email] = {"error": f"auth_failed: {auth_err}", "delivered": False}
+            return statuses
```

src/socks-proxy/tests/Dockerfile (3)
4-6: Trim the apt footprint; prefer no-recommends

Reduces image size and speeds builds:

```diff
-RUN apt-get update && apt-get install -y \
-    netcat-openbsd \
-    && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    netcat-openbsd \
+    && rm -rf /var/lib/apt/lists/*
```
1-3: Harden Python defaults for CI logs and determinism

Small env toggles improve signal/noise and avoid bytecode writes:

```diff
 FROM python:3.13-slim-bookworm
+
+# Python/pip sane defaults
+ENV PYTHONDONTWRITEBYTECODE=1 \
+    PYTHONUNBUFFERED=1 \
+    PIP_NO_CACHE_DIR=1 \
+    PIP_DISABLE_PIP_VERSION_CHECK=1
```
11-16: Optional: run tests as non-root

Safer by default; only if your tests don't need privileged ports/files:

```diff
 # Copy requirements and install Python dependencies
 COPY requirements.txt .
 RUN pip install --no-cache-dir -r requirements.txt
+
+# Drop privileges (optional)
+RUN adduser --system --group app && chown -R app:app /app
+USER app
```

src/socks-proxy/Dockerfile (4)
7-10: Slimmer build-deps install

Avoids pulling recommended packages; cleans the apt cache:

```diff
-RUN <<EOR
-apt-get update
-apt-get install -y build-essential curl
-EOR
+RUN <<'EOR'
+set -eux
+apt-get update
+apt-get install -y --no-install-recommends build-essential curl
+rm -rf /var/lib/apt/lists/*
+EOR
```
12-21: Faster build; fail fast on errors

Use parallel make; keep strict shell flags:

```diff
-RUN <<EOR
-set -eu
+RUN <<'EOR'
+set -eux
 curl -fsSL -o dante.tar.gz $DANTE_URL
 echo "$DANTE_SHA256  dante.tar.gz" | sha256sum -c -
 tar -xzf dante.tar.gz
 cd dante-$DANTE_VER
 ./configure
-make
+make -j"$(nproc)"
 make install
 EOR
```
23-30: Expose the default port to aid discoverability

Docs-only effect; doesn't publish ports automatically:

```diff
 FROM debian:trixie-slim AS runtime
 COPY --from=base /usr/local/sbin/sockd /usr/local/sbin/sockd
 COPY --chmod=0755 entrypoint.sh /entrypoint.sh
+EXPOSE 1080
 ENTRYPOINT ["/entrypoint.sh"]
 CMD ["/usr/local/sbin/sockd"]
```
27-30: Consider a non-root runtime with a writable config path

If the entrypoint writes /etc/sockd.conf, root is required. Consider writing the config to /run/sockd.conf and exec'ing sockd -f /run/sockd.conf, then dropping to a dedicated user. Improves container hardening.

Would you like a follow-up patch to adjust entrypoint.sh and drop privileges?
src/mta-out/tests/conftest.py (1)
150-156: TLS/auth should be conditional

Some dev relays (e.g., MailCatcher on 1025) don't support STARTTLS/auth. Make both conditional to reduce flaky CI. Example (outside selected lines):

```python
client.ehlo()
if client.has_extn("starttls"):
    client.starttls()
    client.ehlo()
else:
    logger.info("STARTTLS not supported; proceeding without TLS")
if MTA_OUT_SMTP_USERNAME and MTA_OUT_SMTP_PASSWORD:
    client.login(MTA_OUT_SMTP_USERNAME, MTA_OUT_SMTP_PASSWORD)
```

Confirm whether these tests are meant to target the relay itself or the MTA-out service. If it's the latter, use a separate test var (e.g., MTA_OUT_TEST_SMTP_HOST) to avoid conflating app config with the test target.
src/backend/core/mda/tasks.py (1)
65-104: Improve task lifecycle reporting and metadata

Surface "STARTED" and propagate the error field even on success for observability:

```diff
 @celery_app.task(bind=True)
 def selfcheck_task(self):
@@
-    try:
+    try:
+        # Report task start for monitors/UIs
+        self.update_state(state="STARTED", meta={"status": "running"})
         result = run_selfcheck()
@@
         self.update_state(
             state="SUCCESS",
             meta={
                 "status": "completed",
                 "success": result["success"],
                 "send_time": result["send_time"],
                 "reception_time": result["reception_time"],
+                "error": result.get("error"),
             },
         )
```

Optional:
- Set time limits (e.g., `@celery_app.task(bind=True, soft_time_limit=..., time_limit=...)`) aligned with `MESSAGES_SELFCHECK_TIMEOUT`.
- Consider `rate_limit="1/m"` if scheduled frequently.

Would you like a patch to wire time limits to settings?
README.md (1)
186-194: Tighten test instructions: explicitly mention no TLS/auth for 8917

Minor clarity so readers don't try TLS or auth against mailcatcher:

```diff
-# Send a test message to the mailcatcher, then read it on http://localhost:8904/
-swaks [email protected] --server localhost:8917
+# Send a test message to the mailcatcher (no TLS/auth), then read it on http://localhost:8904/
+swaks [email protected] --server localhost:8917 --header "Subject: Mailcatcher test"
```

.github/workflows/messages.yml (1)
123-133: Align the Docker build target with the `runtime` stage

Optional: to make the intended stage explicit, add a `target: runtime` flag (there is no `runtime-prod` alias in this Dockerfile):

```diff
 with:
   image_name: "socks-proxy"
   context: "src/socks-proxy"
+  target: runtime
```
src/socks-proxy/tests/test_smtp.py (3)
1-6: Remove unused imports flagged by Ruff

```diff
-import pytest
 import logging
-import socket
 import time
-import smtplib
 from email.mime.text import MIMEText
```
28-31: Reduce flakiness: replace the fixed sleep with short polling for message receipt

```diff
-    # Wait for message and get connection info
-    time.sleep(1)
-    messages = mock_smtp_server.get_messages()
+    # Wait for message and get connection info (poll up to ~5s)
+    deadline = time.time() + 5
+    messages = []
+    while time.time() < deadline:
+        messages = mock_smtp_server.get_messages()
+        if messages:
+            break
+        time.sleep(0.05)
```

Apply the same change in the proxy test block (lines 57-60).
34-39: Assert the presence of connection_info fields before value checks

```diff
-    connection_info = messages[0].get("connection_info", {})
+    connection_info = messages[0].get("connection_info", {})
+    assert "peer_host" in connection_info, "Mock server must expose peer_host"
     logger.info(f"Direct SMTP connection info: {connection_info}")
```

Do the same in the proxy test (lines 63-67).
src/backend/core/api/viewsets/draft.py (1)
96-116: Fix docs: confusing "At least one of draftBody must be provided" and outdated attachment wording

```diff
-    At least one of draftBody must be provided.
+    Optional fields: you may provide any subset of the fields. Subject may be empty or omitted.
@@
-    To add attachments, upload them first using the /api/v1.0/blob/upload/{mailbox_id}/ endpoint
-    and include the returned blobIds in the attachmentIds field.
+    To add attachments, upload them first using the /api/v1.0/blob/upload/{mailbox_id}/ endpoint
+    and include the returned blobIds in the "attachments" list as objects: { "blobId", "partId", "name" }.
```

Makefile (1)
223-229: Include socks-proxy tests in the aggregate test target

```diff
 test:  ## run all tests
 test: \
   back-test \
   front-test \
   mta-in-test \
-  mta-out-test
+  mta-out-test \
+  socks-proxy-test
```

src/backend/core/mda/outbound_mta.py (2)
128-133: Avoid logging PII (recipient emails)

Logging full recipient lists exposes PII. Log counts or hashed/partially-masked addresses:

```diff
-    logger.info(
-        "Sending to MX %s (%s) for recipients: %s",
-        mx_hostname,
-        mx_ip,
-        recipient_emails,
-    )
+    logger.info(
+        "Sending to MX %s (%s) for %d recipient(s)",
+        mx_hostname,
+        mx_ip,
+        len(recipient_emails),
+    )
```
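If partially-masked addresses are preferred over plain counts, a small helper along these lines could be used; `mask_email` is a hypothetical name for illustration, not something in the PR:

```python
def mask_email(address: str) -> str:
    """Mask the local part of an email address for safe log output."""
    local, sep, domain = address.partition("@")
    if not sep or len(local) <= 2:
        # Too short (or malformed) to mask meaningfully; hide it entirely.
        return "***@" + (domain or "***")
    # Keep the first and last characters so operators can still correlate.
    return f"{local[0]}***{local[-1]}@{domain}"
```

Usage would then look like `logger.info("Sending to MX %s for %s", mx_hostname, [mask_email(e) for e in recipient_emails])`.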
102-116: Avoid duplicating recipient-selection logic here

`send_message_via_mx()` recomputes deliverable recipients, which can drift from the `outbound.send_message()` logic. Prefer passing the external recipient list into this function.

I can refactor the signature to `send_message_via_mx(message, recipient_emails)` and thread it through `send_outbound_message`.

src/socks-proxy/tests/test_socks_proxy.py (3)
6-6: Remove the unused import

`.conftest.PROXY1_CONFIG` is unused (Ruff F401). Drop it:

```diff
-from .conftest import SOCKSClient, PROXY1_CONFIG
+from .conftest import SOCKSClient
```
2-5: Drop the unused logger

`logging` and `logger` aren't used:

```diff
-import logging
-
-logger = logging.getLogger(__name__)
+# logging not required in this module
```
73-80: Simplify the assertion; avoid try/except that masks failures

Directly assert the boolean from `test_connection()`:

```diff
 def test_socks_proxy_connection_establishment(socks_client):
     """Test that SOCKS proxy connection can be established"""
-    try:
-        socks_client.test_connection("8.8.8.8", 53)
-        assert True, "SOCKS connection should be established successfully"
-    except Exception as e:
-        pytest.fail(f"Failed to establish SOCKS connection: {e}")
+    assert socks_client.test_connection("8.8.8.8", 53), \
+        "SOCKS connection should be established successfully"
```

src/backend/core/tests/mda/test_outbound_e2e.py (2)
86-101: Stabilize mailcatcher IP resolution in CI

`socket.gethostbyname("mailcatcher")` assumes Docker DNS is present in the test environment. Fall back to `localhost` and the published port if resolution fails, to reduce CI flakes.

I can add a small helper to try `mailcatcher`, then `localhost`, with a warning, and use `@override_settings(MTA_OUT_DIRECT_PORT=<mapped-port>)` accordingly.
205-224: Tighten the polling loop with jitter and early-exit logging

Minor: add small jitter and a debug log on each empty poll to aid diagnosing timeouts.
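A jittered polling helper could look like the following sketch; `poll_until` and its parameters are illustrative names, not part of the PR:

```python
import random
import time


def poll_until(predicate, timeout=5.0, base_delay=0.05, jitter=0.02):
    """Poll `predicate` until it returns truthy or `timeout` elapses.

    A small random jitter is added to every delay so concurrent pollers
    do not hit the server in lockstep.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(base_delay + random.uniform(0.0, jitter))
    return False
```

A debug log could be added inside the loop body to record each empty poll.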
compose.yaml (2)
52-52: Restrict the mailcatcher SMTP port to localhost in development

Avoid exposing SMTP to the entire host by binding to 127.0.0.1:

```diff
-      - "8917:1025"
+      - "127.0.0.1:8917:1025"
```
321-342: Add a healthcheck for socks-proxy and wait on health instead of start

Ensure tests don't race the proxy startup:

```diff
   socks-proxy:
     build:
       context: src/socks-proxy
       target: runtime
     env_file:
       - env.d/development/socks-proxy.defaults
       - env.d/development/socks-proxy.local
     ports:
       - "8916:1080"
+    healthcheck:
+      test: ["CMD-SHELL", "nc -z localhost 1080"]
+      interval: 1s
+      timeout: 2s
+      retries: 60
@@
   socks-proxy-test:
@@
-    depends_on:
-      socks-proxy:
-        condition: service_started
+    depends_on:
+      socks-proxy:
+        condition: service_healthy
```

src/backend/core/tests/mda/test_outbound.py (2)
94-103: Assert all expected kwargs; include timeout when relevant

The current assert is solid. Consider also asserting the absence of proxy kwargs in relay mode to prevent accidental leakage from settings.
188-191: Avoid in-place sorting of mock_calls

Sorting `mock_calls` mutates the list and can obscure debugging. Sort into a local list instead:

```diff
-    mock_smtp_send.mock_calls.sort(
-        key=lambda x: " ".join(x.kwargs["recipient_emails"])
-    )
+    ordered_calls = sorted(
+        mock_smtp_send.mock_calls,
+        key=lambda x: " ".join(x.kwargs["recipient_emails"]),
+    )
```

And update the subsequent assertions to use `ordered_calls[0]`/`ordered_calls[1]`.

src/backend/core/mda/draft.py (2)
96-101: Minor: avoid noisy info logs on every draft update

Downgrade to debug to keep production logs high-signal:

```diff
-    logger.info("Saving message %s with fields %s", message.id, updated_fields)
+    logger.debug("Saving message %s with fields %s", message.id, updated_fields)
```

Also applies to: 288-293
103-112: Naming consistency for the update payload

You accept the keys "to/cc/bcc" and later expect "draftBody" (camelCase). Consider aligning keys (snake_case) across API/internal to reduce confusion.

I can help produce a minimal adapter that normalizes payload keys in one place before processing.
src/backend/core/mda/selfcheck.py (1)
241-246: Optional: exit early when TO is empty to avoid needless setup

You already skip on an empty FROM; mirror that for TO. I can send a tiny patch if you want symmetry here.
src/backend/core/management/commands/selfcheck.py (2)
67-69: Typo in the metric help text

"Receptions" → "Reception":

```diff
-    "Receptions duration of the self check",
+    "Reception duration of the self check",
```
90-96: Don't fail the command if the metrics push gateway is unavailable

Make the metrics push best-effort:

```diff
-    def send_metrics(self):
-        return push_to_gateway(
-            settings.MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PUSHGATEWAY_URL,
-            job="selfcheck",
-            registry=self.registry,
-        )
+    def send_metrics(self):
+        try:
+            return push_to_gateway(
+                settings.MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PUSHGATEWAY_URL,
+                job="selfcheck",
+                registry=self.registry,
+            )
+        except Exception:  # best-effort; don't break selfcheck
+            return None
```

src/backend/messages/settings.py (2)
162-166: Clarify the allowed values for MTA_OUT_MODE

Comments are inconsistent with usage ("external" is supported elsewhere):

```diff
-    # "direct" or "relay"
+    # One of: "direct", "relay", or "external"
     MTA_OUT_MODE = values.Value(
         "direct", environ_name="MTA_OUT_MODE", environ_prefix=None
     )
```
173-176: Optional: validate MTA_OUT_MODE at startup

Prevent misconfiguration early. Add in Base.post_setup():

```python
# settings.py (inside Base.post_setup)
valid_modes = {"direct", "relay", "external"}
if cls.MTA_OUT_MODE not in valid_modes:
    raise ValueError(f"MTA_OUT_MODE must be one of {valid_modes}, got {cls.MTA_OUT_MODE!r}")
```

src/socks-proxy/tests/conftest.py (7)
3-3: Remove unused imports

time, socket, ssl, and struct are unused:

```diff
-import time
+import contextlib
@@
-import socket
@@
-import ssl
-import struct
```

Also applies to: 6-6, 8-9
64-64: Avoid a bare except

Catch Exception explicitly:

```diff
-    except:
+    except Exception:
         return "127.0.0.1"  # fallback
```
216-219: Use contextlib.suppress for teardown

Cleaner than a bare try/except:

```diff
-    try:
-        client.quit()
-    except:
-        pass
+    with contextlib.suppress(Exception):
+        client.quit()
```
259-262: Same teardown improvement for the proxied client:

```diff
-    try:
-        client.quit()
-    except:
-        pass
+    with contextlib.suppress(Exception):
+        client.quit()
```
146-155: Deduplicate the proxy-socket logic with production code

This helper duplicates core.mda.smtp.create_proxied_socket. Importing the shared helper reduces drift. Example:

```diff
-from aiosmtpd.controller import Controller
+from aiosmtpd.controller import Controller
+from backend.core.mda.smtp import create_proxied_socket  # adjust import path as needed
@@
-def create_proxied_socket(proxy_host, proxy_port, target_host, target_port, username=None, password=None, timeout=5):
-    """Create a socket connected through a SOCKS proxy"""
-    proxy = socks.socksocket()
-    if type(timeout) in {int, float}:
-        proxy.settimeout(timeout)
-    proxy.set_proxy(socks.PROXY_TYPE_SOCKS5, proxy_host, proxy_port, rdns=False, username=username, password=password)
-    proxy.connect((target_host, target_port))
-
-    return proxy
+# Use shared helper from backend
```
15-18
: Global logging config in conftest may be intrusive. Consider scoping logging in tests to avoid altering suite-wide logging.
If needed, switch to a fixture that configures logging per-module.
251-256
: Debug level 2 is noisy in CI. Consider lowering or gating by env var.
E.g., set from os.getenv("SMTP_DEBUG", "0").
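A minimal sketch of that env-var gating idea (the `SMTP_DEBUG` variable name is an assumption, not an existing setting):

```python
import os


def smtp_debug_level(default: int = 0) -> int:
    """Resolve the smtplib debug level from the SMTP_DEBUG env var.

    Defaults to 0 so CI logs stay quiet; developers can export SMTP_DEBUG=2
    locally to get full protocol traces via smtplib's set_debuglevel().
    """
    try:
        return int(os.getenv("SMTP_DEBUG", str(default)))
    except ValueError:
        # Ignore malformed values instead of breaking the test run.
        return default
```

A client fixture would then call `client.set_debuglevel(smtp_debug_level())` instead of hard-coding 2.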
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
src/backend/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (38)
.github/workflows/messages.yml (1 hunks)
Makefile (4 hunks)
README.md (2 hunks)
compose.yaml (3 hunks)
docs/architecture.md (1 hunks)
docs/env.md (1 hunks)
docs/selfcheck.md (1 hunks)
env.d/development/backend.defaults (1 hunks)
env.d/development/socks-proxy.defaults (1 hunks)
src/backend/core/api/utils.py (2 hunks)
src/backend/core/api/viewsets/draft.py (4 hunks)
src/backend/core/management/commands/selfcheck.py (1 hunks)
src/backend/core/mda/draft.py (1 hunks)
src/backend/core/mda/outbound.py (4 hunks)
src/backend/core/mda/outbound_mta.py (1 hunks)
src/backend/core/mda/selfcheck.py (1 hunks)
src/backend/core/mda/smtp.py (1 hunks)
src/backend/core/mda/tasks.py (2 hunks)
src/backend/core/services/identity/keycloak.py (1 hunks)
src/backend/core/signals.py (3 hunks)
src/backend/core/tests/api/test_messages_create.py (4 hunks)
src/backend/core/tests/api/test_messages_delete.py (2 hunks)
src/backend/core/tests/mda/test_outbound.py (5 hunks)
src/backend/core/tests/mda/test_outbound_e2e.py (2 hunks)
src/backend/messages/celery_app.py (2 hunks)
src/backend/messages/settings.py (2 hunks)
src/backend/pyproject.toml (3 hunks)
src/mta-out/tests/conftest.py (2 hunks)
src/mta-out/tests/test_smtp_auth.py (1 hunks)
src/socks-proxy/Dockerfile (1 hunks)
src/socks-proxy/README.md (1 hunks)
src/socks-proxy/entrypoint.sh (1 hunks)
src/socks-proxy/tests/Dockerfile (1 hunks)
src/socks-proxy/tests/__init__.py (1 hunks)
src/socks-proxy/tests/conftest.py (1 hunks)
src/socks-proxy/tests/requirements.txt (1 hunks)
src/socks-proxy/tests/test_smtp.py (1 hunks)
src/socks-proxy/tests/test_socks_proxy.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
src/backend/core/mda/outbound_mta.py (3)
  src/backend/core/enums.py (1)
    MessageDeliveryStatusChoices (39-45)
  src/backend/core/mda/smtp.py (1)
    send_smtp_mail (81-184)
  src/backend/core/models.py (1)
    get_content (1235-1249)
src/socks-proxy/tests/test_smtp.py (1)
  src/socks-proxy/tests/conftest.py (6)
    smtp_client_direct (210-219)
    mock_smtp_server (181-185)
    clear_messages (139-140)
    get_messages (79-80)
    get_messages (142-143)
    smtp_client_via_proxy (248-262)
src/backend/core/tests/api/test_messages_delete.py (2)
  src/backend/core/models.py (2)
    create_blob (464-528)
    Blob (1183-1249)
  src/backend/core/tests/api/test_attachments.py (1)
    blob (160-166)
src/backend/core/signals.py (2)
  src/backend/core/models.py (2)
    Message (1044-1180)
    delete (934-954)
  src/backend/core/tests/api/test_attachments.py (1)
    blob (160-166)
src/backend/core/mda/outbound.py (3)
  src/backend/core/mda/outbound_mta.py (1)
    send_message_via_mx (102-147)
  src/backend/core/mda/smtp.py (1)
    send_smtp_mail (81-184)
  src/backend/core/models.py (2)
    Message (1044-1180)
    get_content (1235-1249)
src/backend/core/mda/selfcheck.py (4)
  src/backend/core/mda/draft.py (1)
    create_draft (16-116)
  src/backend/core/mda/outbound.py (2)
    prepare_outbound_message (41-193)
    send_message (196-289)
  src/backend/core/enums.py (1)
    MessageDeliveryStatusChoices (39-45)
  src/backend/core/models.py (2)
    get_parsed_data (1102-1111)
    delete (934-954)
src/backend/core/management/commands/selfcheck.py (1)
  src/backend/core/mda/selfcheck.py (1)
    run_selfcheck (208-328)
src/backend/core/mda/draft.py (2)
  src/backend/core/models.py (6)
    create_blob (464-528)
    content_type (1291-1293)
    update_stats (640-763)
    delete (934-954)
    MessageRecipient (1008-1041)
    Blob (1183-1249)
  src/backend/core/enums.py (1)
    MessageRecipientTypeChoices (31-36)
src/socks-proxy/tests/test_socks_proxy.py (1)
  src/socks-proxy/tests/conftest.py (3)
    SOCKSClient (156-177)
    socks_client (189-195)
    test_connection (164-177)
src/backend/core/api/viewsets/draft.py (1)
  src/backend/core/mda/draft.py (2)
    create_draft (16-116)
    update_draft (119-294)
src/backend/core/mda/smtp.py (1)
  src/socks-proxy/tests/conftest.py (3)
    create_proxied_socket (146-154)
    ProxySMTP (222-244)
    _get_socket (228-244)
src/backend/core/mda/tasks.py (1)
  src/backend/core/mda/selfcheck.py (1)
    run_selfcheck (208-328)
src/backend/core/tests/api/test_messages_create.py (3)
  src/backend/core/factories.py (3)
    UserFactory (18-29)
    MailboxFactory (58-94)
    MailboxAccessFactory (97-107)
  src/backend/core/tests/mda/test_outbound_e2e.py (1)
    draft_detail_url (26-33)
  src/backend/core/enums.py (1)
    MailboxRoleChoices (15-21)
src/backend/core/tests/mda/test_outbound.py (4)
  src/backend/core/factories.py (1)
    ContactFactory (146-155)
  src/backend/core/mda/outbound.py (1)
    send_message (196-289)
  src/backend/core/models.py (1)
    get_content (1235-1249)
  src/backend/core/enums.py (1)
    MessageDeliveryStatusChoices (39-45)
src/backend/core/tests/mda/test_outbound_e2e.py (2)
  src/backend/core/tests/mda/test_inbound_e2e.py (1)
    mailbox (38-47)
  src/backend/core/tests/mda/test_outbound.py (1)
    resolve_return_value (157-179)
src/socks-proxy/tests/conftest.py (3)
  src/mta-out/tests/conftest.py (11)
    MessageStore (21-33)
    add_message (27-29)
    clear (31-33)
    get_messages (117-119)
    MockSMTPHandler (50-81)
    handle_DATA (57-60)
    handle_message (62-81)
    MockSMTPServer (84-127)
    start (103-106)
    stop (108-111)
    clear_messages (113-115)
  src/backend/core/mda/smtp.py (3)
    create_proxied_socket (12-35)
    ProxySMTP (38-77)
    _get_socket (49-77)
  src/mta-in/src/delivery_milter.py (1)
    connect (38-47)
🪛 LanguageTool
src/socks-proxy/README.md
[grammar] ~7-~7: Ensure spelling is correct
Context: ...used for routing SMTP traffic through a specifi IP address. ## Architecture ### Compo...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~13-~13: There might be a mistake here.
Context: ...er** - Custom-built from source (v1.4.4) - User Authentication - Username/passwor...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...- Username/password based authentication - Docker Containerization - Multi-stage ...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ... Multi-stage build for optimized runtime - Comprehensive Testing - Full test suit...
(QB_NEW_EN)
[grammar] ~22-~22: There might be a mistake here.
Context: ... |Variable|Description|Required|Default| |---|---|---|---| | PROXY_USERS | List o...
(QB_NEW_EN)
[grammar] ~23-~23: There might be a mistake here.
Context: ...tion|Required|Default| |---|---|---|---| | PROXY_USERS | List of username:passwor...
(QB_NEW_EN)
[grammar] ~25-~25: There might be a mistake here.
Context: ...IP address or interface name. | true | | | PROXY_INTERNAL | The inbound connectio...
(QB_NEW_EN)
[grammar] ~26-~26: There might be a mistake here.
Context: ...interfaces names. | false | "0.0.0.0" | | PROXY_INTERNAL_PORT | The inbound conn...
(QB_NEW_EN)
[grammar] ~35-~35: There might be a mistake here.
Context: ... Valid/invalid credentials, missing auth - Connection Testing - Establishment, ti...
(QB_NEW_EN)
[grammar] ~36-~36: There might be a mistake here.
Context: ...ablishment, timeouts, connection refused - SMTP via Proxy - Email delivery throug...
(QB_NEW_EN)
[grammar] ~37-~37: There might be a mistake here.
Context: ...y** - Email delivery through SOCKS proxy - Connection Info Capture - IP address l...
(QB_NEW_EN)
docs/selfcheck.md
[grammar] ~9-~9: There might be a mistake here.
Context: ...ROM and TO addresses if they don't exist 2. Creates a test message with a unique s...
(QB_NEW_EN)
[grammar] ~10-~10: There might be a mistake here.
Context: ...ssage** with a unique secret in the body 3. Sends the message via the outbound sys...
(QB_NEW_EN)
[grammar] ~11-~11: There might be a mistake here.
Context: ...em using prepare_outbound_message
and send_message(force_mta_out=True)
4. Waits for message reception by polling...
(QB_NEW_EN)
[grammar] ~12-~12: There might be a mistake here.
Context: ...lbox for a message containing the secret 5. Verifies message integrity by checking...
(QB_NEW_EN)
[grammar] ~13-~13: There might be a mistake here.
Context: ...ains the secret and has proper structure 6. Cleans up test data by deleting the te...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...e and thread (but keeping the mailboxes) 7. Times all operations and provides deta...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...for instance: [email protected]
) - MESSAGES_SELFCHECK_TO
: Email address to send to (for instance...
(QB_NEW_EN)
[grammar] ~22-~22: There might be a mistake here.
Context: ...nce: [email protected]
) - MESSAGES_SELFCHECK_SECRET
: Secret string to include in the messag...
(QB_NEW_EN)
[grammar] ~23-~23: There might be a mistake here.
Context: ...y (for instance: selfcheck-secret-xyz
) - MESSAGES_SELFCHECK_INTERVAL
: Interval in seconds between self-check...
(QB_NEW_EN)
[grammar] ~24-~24: There might be a mistake here.
Context: ...hecks (for instance: 600
- 10 minutes) - MESSAGES_SELFCHECK_TIMEOUT
: Timeout in seconds for message recepti...
(QB_NEW_EN)
[grammar] ~29-~29: There might be a mistake here.
Context: ...eus metrics reporting (default: False
) - MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PUSHGATEWAY_URL
: URL of the Prometheus Pushgateway to w...
(QB_NEW_EN)
[grammar] ~30-~30: There might be a mistake here.
Context: ...which metrics are sent (default: None
) - MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PREFIX
: Prefix for all Prometheus metrics name...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ...ls: - INFO
: Normal operation progress - WARNING
: Non-critical issues (e.g., parsing err...
(QB_NEW_EN)
[grammar] ~78-~78: There might be a mistake here.
Context: ... parsing errors for individual messages) - ERROR
: Critical failures that cause the self-...
(QB_NEW_EN)
[grammar] ~83-~83: There might be a mistake here.
Context: ...e success status** of the selfcheck task 2. Monitoring timing metrics to detect pe...
(QB_NEW_EN)
[grammar] ~84-~84: There might be a mistake here.
Context: ...rics** to detect performance degradation 3. Alerting on failures to quickly identi...
(QB_NEW_EN)
[grammar] ~85-~85: There might be a mistake here.
Context: ...uickly identify delivery pipeline issues 4. Tracking trends in reception times to ...
(QB_NEW_EN)
[grammar] ~92-~92: There might be a mistake here.
Context: ...time: Start timestamp of the self check -
selfcheck_end_time: End timestamp of the self check -
sel...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ...d_time: End timestamp of the self check -
selfcheck_success`: 1 if the self check succeeded, 0 if it...
(QB_NEW_EN)
[grammar] ~94-~94: There might be a mistake here.
Context: ...the self check succeeded, 0 if it failed - selfcheck_send_duration_seconds
: Time taken to send the test message (s...
(QB_NEW_EN)
[grammar] ~95-~95: There might be a mistake here.
Context: ...essage (seconds), only on succesful send - selfcheck_reception_duration_seconds
: Time taken to receive the test message...
(QB_NEW_EN)
docs/architecture.md
[grammar] ~55-~55: There might be a mistake here.
Context: ...DE=relay, or directly to MX servers if
MTA_OUT_MODE=direct` 5. In development, MailCatcher intercepts m...
(QB_NEW_EN)
docs/env.md
[grammar] ~82-~82: There might be a mistake here.
Context: ... mode ('direct' or 'relay') | Required | | MTA_OUT_SMTP_HOST
| mta-out:587
| ...
(QB_NEW_EN)
[grammar] ~83-~83: There might be a mistake here.
Context: ... server host for relay mode | Required | | MTA_OUT_SMTP_USERNAME
| user
| Out...
(QB_NEW_EN)
[grammar] ~84-~84: There might be a mistake here.
Context: ...MTP username for relay mode | Optional | | MTA_OUT_SMTP_PASSWORD
| pass
| Out...
(QB_NEW_EN)
[grammar] ~85-~85: There might be a mistake here.
Context: ...MTP password for relay mode | Optional | | MDA_API_SECRET
| `my-shared-secret-m...
(QB_NEW_EN)
🪛 Ruff (0.12.2)
src/socks-proxy/tests/test_smtp.py
1-1: pytest
imported but unused
Remove unused import: pytest
(F401)
3-3: socket
imported but unused
Remove unused import: socket
(F401)
5-5: smtplib
imported but unused
Remove unused import: smtplib
(F401)
src/socks-proxy/tests/test_socks_proxy.py
6-6: .conftest.PROXY1_CONFIG
imported but unused
Remove unused import: .conftest.PROXY1_CONFIG
(F401)
src/socks-proxy/tests/conftest.py
3-3: time
imported but unused
Remove unused import: time
(F401)
6-6: socket
imported but unused
Remove unused import: socket
(F401)
8-8: ssl
imported but unused
Remove unused import: ssl
(F401)
9-9: struct
imported but unused
Remove unused import: struct
(F401)
64-64: Do not use bare except
(E722)
216-219: Use contextlib.suppress(BaseException)
instead of try
-except
-pass
Replace with contextlib.suppress(BaseException)
(SIM105)
218-218: Do not use bare except
(E722)
259-262: Use contextlib.suppress(BaseException)
instead of try
-except
-pass
Replace with contextlib.suppress(BaseException)
(SIM105)
261-261: Do not use bare except
(E722)
🔇 Additional comments (21)
src/backend/core/signals.py (1)
135-135
: LGTM: tolerate missing docs when deleting a message from the index. Ignoring 404s here is appropriate and reduces noisy errors.
src/backend/core/tests/api/test_messages_delete.py (1)
172-176
: LGTM: explicit setup of message and draft blobs. This asserts the intended blob cleanup path.
src/socks-proxy/tests/requirements.txt (1)
3-3
: Confirm Python 3.13 compatibility of aiosmtpd pin. Backend targets Python 3.13; aiosmtpd==1.4.4 has had occasional compatibility issues on newest interpreters. Please verify in CI for the socks-proxy test image, or bump if needed.
Would you like a quick pin-audit script to check import/runtime compatibility across Python versions?
src/backend/pyproject.toml (6)
62-62
: requests pin differs from socks-proxy tests. Backend: 2.32.5; socks-proxy tests: 2.32.3. Prefer a single pin (2.32.5) unless the socks-proxy image has constraints.
Would you like me to align both and open a follow-up PR if CI is green?
25-27
: Reminder: lockfile refresh needed. After changing deps, re-run the documented target (
make back-poetry-lock
) so CI uses consistent resolved versions.
5-24
: PEP 621 compliance sanity check. Given
package-mode = false
, ensure your build backend reads project.dependencies
entries as PEP 508 strings. Most entries look correct; only prometheus-client
needed fixing. I can provide a quick parser script to validate the TOML for specifier compliance if useful.
69-74
: Project URLs still point to suitenumerique/st-messages.
Confirm repository/issue paths are correct for this project. If this is intentional, ignore.
I can scan the repo for other hardcoded project links on request.
128-135
: tests per-file ignores are reasonable. No concerns; matches the existing lint strategy.
136-147
: pytest config uses --import-mode=importlib
— good. This supports removing
__init__.py
from test dirs safely.
src/backend/core/api/utils.py (1)
1-87
: No remaining references to generate_s3_authorization_headers.
src/backend/core/mda/smtp.py (1)
3-9
: Declare dependency on PySocks explicitly in requirements (if not already).
Module imports socks
; ensure the backend image installs PySocks to prevent runtime ImportError. Would you like me to open a follow-up to add
PySocks
to the backend dependencies and pin a version?
docs/architecture.md (1)
55-56
: Minor wording polish.
[ suggest_nitpick ]
Apply this diff:
-4. **Celery** processes sending via **MTA-Out** if `MTA_OUT_MODE=relay`, or directly to MX servers if `MTA_OUT_MODE=direct`
+4. **Celery** sends via **relay** when `MTA_OUT_MODE=relay`, or directly to recipient MX servers when `MTA_OUT_MODE=direct`
src/backend/core/mda/tasks.py (1)
13-13
: LGTM: imports selfcheck entry point. Import location is appropriate and keeps task thin.
README.md (1)
142-144
: Docs update LGTM. Service table additions for SOCKS Proxy (8916) and Mailcatcher (SMTP 8917) are clear and consistent with the dev stack.
src/backend/core/tests/api/test_messages_create.py (3)
37-41
: Good: second authenticated user fixture to enable cross-mailbox authorization tests.
52-59
: Good: second mailbox fixture aligned with user2’s domain for realistic scenarios.
478-562
: Authorization coverage improved for cross-user draft updates. Scenario is well-structured: 401 unauthenticated, 403 without mailbox access, then 403 when another user with access on a different mailbox attempts to update.
compose.yaml (1)
289-301
: Confirm whether standalone mta-out service is still needed. With MTA-out integrated into Celery, this service may be obsolete in dev. If still used for specific tests, add a short note in docs.
src/backend/core/tests/mda/test_outbound.py (2)
44-46
: Good: BCC domain switched to force distinct MX grouping. This makes the direct path exercise multiple MX targets. LGTM.
124-131
: Patch target consistency. Direct path patches
core.mda.outbound_mta.send_smtp_mail
(good). For completeness, also ensure outbound.send_outbound_message
doesn’t import send_smtp_mail
under a different name that bypasses the patch. I can scan the repo to confirm the import path used in
send_outbound_message
matches your patches. Do you want a quick verification script?
src/backend/core/mda/draft.py (1)
151-159
: Thread subject sync can race with concurrent writers. Counting messages each time can be racy; checking if this message is first is better done by comparing ids or relying on thread.created_at vs first message.
Would you like a small migration to add thread.first_message_id and update it transactionally?
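The id-comparison idea can be sketched outside the ORM (the model and field names below are illustrative, not the project's actual models):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Msg:
    id: int
    created_at: datetime


def is_first_message(msg: Msg, thread_messages: list[Msg]) -> bool:
    """True when msg is the earliest message of its thread.

    Comparing against the earliest message avoids the count()-based race:
    a concurrent insert changes the row count, but never changes which
    message was created first.
    """
    earliest = min(thread_messages, key=lambda m: (m.created_at, m.id))
    return earliest.id == msg.id
```

In the ORM this would translate to ordering by creation time and comparing primary keys, rather than calling count().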
@@ -309,14 +310,36 @@ services:
       - env.d/development/mta-out.local
     environment:
       - EXEC_CMD=true
-      - MTA_OUT_HOST=localhost:587
+      - MTA_OUT_SMTP_HOST=localhost:587
🛠️ Refactor suggestion
Align test relay host with mailcatcher service.
localhost:587 inside the mta-out-test container points to itself, not mailcatcher. Use the service name for consistency.
+ - MTA_OUT_SMTP_HOST=mailcatcher:1025
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-      - MTA_OUT_SMTP_HOST=localhost:587
+      - MTA_OUT_SMTP_HOST=mailcatcher:1025
🤖 Prompt for AI Agents
In compose.yaml around line 313, the MTA_OUT_SMTP_HOST is set to "localhost:587"
which inside the mta-out-test container resolves to itself instead of the
mailcatcher service; update the value to the mailcatcher service host and port
(for example "mailcatcher:1025") so the container targets the mailcatcher
service correctly.
| `MTA_OUT_MODE` | `direct` | Outbound MTA mode ('direct' or 'relay') | Required |
| `MTA_OUT_SMTP_HOST` | `mta-out:587` | Outbound SMTP server host for relay mode | Required |
| `MTA_OUT_SMTP_USERNAME` | `user` | Outbound SMTP username for relay mode | Optional |
| `MTA_OUT_SMTP_PASSWORD` | `pass` | Outbound SMTP password for relay mode | Optional |
💡 Verification agent
🧩 Analysis chain
Fix requiredness and clarify relay-only variables; address grammar.
As written, MTA_OUT_SMTP_* look “Required” even when mode=direct. This misleads deploys.
-| `MTA_OUT_MODE` | `direct` | Outbound MTA mode ('direct' or 'relay') | Required |
-| `MTA_OUT_SMTP_HOST` | `mta-out:587` | Outbound SMTP server host for relay mode | Required |
-| `MTA_OUT_SMTP_USERNAME` | `user` | Outbound SMTP username for relay mode | Optional |
-| `MTA_OUT_SMTP_PASSWORD` | `pass` | Outbound SMTP password for relay mode | Optional |
+| `MTA_OUT_MODE` | `direct` | Outbound MTA mode: `direct` or `relay` | Required |
+| `MTA_OUT_SMTP_HOST` | `None` | SMTP relay host:port (used only when `MTA_OUT_MODE=relay`) | Conditional |
+| `MTA_OUT_SMTP_USERNAME` | `None` | SMTP username (used only when `MTA_OUT_MODE=relay`) | Optional |
+| `MTA_OUT_SMTP_PASSWORD` | `None` | SMTP password (used only when `MTA_OUT_MODE=relay`) | Optional |
Additionally:
- If the code supports SOCKS, document the variable(s) (e.g., MTA_OUT_PROXIES) and any format expected.
- The “MTA-Out Specific” section (lines 92–95) duplicates/contradicts these keys; mark those as deprecated or align them.
🏁 Script executed:
#!/bin/bash
# Verify env keys used in code vs docs
rg -nP --type=py '(MTA_OUT_(SMTP_HOST|MODE|PROXIES|DIRECT_PORT|SMTP_USERNAME|SMTP_PASSWORD))' -g '!**/site-packages/**'
Length of output: 2947
🏁 Script executed:
#!/bin/bash
# Verify presence of SMTP_HOST, proxies, direct port, and comment consistency
grep -R "MTA_OUT_SMTP_HOST" -n src
grep -R "# SMTP settings for external" -n src/backend/messages/settings.py
grep -R "MTA_OUT_PROXIES" -n src/backend/messages/settings.py
grep -R "MTA_OUT_DIRECT_PORT" -n src/backend/messages/settings.py
Length of output: 1365
docs/env.md – align MTA_OUT docs with code
- In the table at lines 82–85, change the MTA_OUT_MODE description to “Outbound MTA mode: direct or external” (Required); set the MTA_OUT_SMTP_HOST, MTA_OUT_SMTP_USERNAME, MTA_OUT_SMTP_PASSWORD examples to None, change their requiredness to “Conditional”, and note “used only when MTA_OUT_MODE=external”
- Add rows for:
  - MTA_OUT_PROXIES | [] | List of SOCKS proxy URLs (randomly chosen when non-empty; used in direct mode) | Optional
  - MTA_OUT_DIRECT_PORT | 25 | TCP port for direct mode | Optional
- Remove or deprecate the duplicate “MTA-Out Specific” section (lines 92–95) to avoid contradiction
🪛 LanguageTool
[grammar] ~82-~82: There might be a mistake here.
Context: ... mode ('direct' or 'relay') | Required | | MTA_OUT_SMTP_HOST
| mta-out:587
| ...
(QB_NEW_EN)
[grammar] ~83-~83: There might be a mistake here.
Context: ... server host for relay mode | Required | | MTA_OUT_SMTP_USERNAME
| user
| Out...
(QB_NEW_EN)
[grammar] ~84-~84: There might be a mistake here.
Context: ...MTP username for relay mode | Optional | | MTA_OUT_SMTP_PASSWORD
| pass
| Out...
(QB_NEW_EN)
[grammar] ~85-~85: There might be a mistake here.
Context: ...MTP password for relay mode | Optional | | MDA_API_SECRET
| `my-shared-secret-m...
(QB_NEW_EN)
🤖 Prompt for AI Agents
In docs/env.md around lines 82–85, update the MTA_OUT table: change the
MTA_OUT_MODE description to "Outbound MTA mode: `direct` or `external`" and mark
it Required; set MTA_OUT_SMTP_HOST, MTA_OUT_SMTP_USERNAME, and
MTA_OUT_SMTP_PASSWORD example values to `None`, change their requiredness to
"Conditional" and add a note "used only when `MTA_OUT_MODE=external`"; add new
rows for MTA_OUT_PROXIES with example `[]` and description "List of SOCKS proxy
URLs (randomly chosen when non-empty; used in direct mode)" marked Optional, and
MTA_OUT_DIRECT_PORT with example `25`, description "TCP port for direct mode"
marked Optional; finally remove or mark deprecated the duplicate "MTA-Out
Specific" section at lines 92–95 to avoid contradiction and ensure the table is
the single source of truth.
- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on succesful send
- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on succesful reception
Fix typos (“succesful” ➜ “successful”).
Apply this diff:
-- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on succesful send
-- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on succesful reception
+- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on successful send
+- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on successful reception
📝 Committable suggestion
-- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on succesful send
-- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on succesful reception
+- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on successful send
+- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on successful reception
🧰 Tools
🪛 LanguageTool
[grammar] ~95-~95: There might be a mistake here.
Context: ...essage (seconds), only on succesful send - selfcheck_reception_duration_seconds
: Time taken to receive the test message...
(QB_NEW_EN)
🤖 Prompt for AI Agents
In docs/selfcheck.md around lines 95 to 97, fix the typo "succesful" to
"successful" in both metric descriptions: change "only on succesful send" to
"only on successful send" and "only on succesful reception" to "only on
successful reception" so both lines use the correct spelling.
MTA_OUT_HOSTNAME = os.getenv("MTA_OUT_SMTP_HOST").split(":")[0]
MTA_OUT_PORT = int(os.getenv("MTA_OUT_SMTP_HOST").split(":")[1])
🛠️ Refactor suggestion
Fragile env parsing (None/IPv6/portless) can crash tests; make parsing robust with defaults.
os.getenv(...).split(":")[1] will blow up if the var is unset or lacks a colon; also breaks on IPv6 without brackets.
Apply:
-MTA_OUT_HOSTNAME = os.getenv("MTA_OUT_SMTP_HOST").split(":")[0]
-MTA_OUT_PORT = int(os.getenv("MTA_OUT_SMTP_HOST").split(":")[1])
+_raw = (os.getenv("MTA_OUT_SMTP_HOST", "localhost:8911") or "localhost:8911").strip()
+# Support "host:port" (prefers last colon), default port 25 if missing. Strip [] for IPv6-in-brackets.
+_host, _port = (_raw.rsplit(":", 1) if ":" in _raw else (_raw, "25"))
+MTA_OUT_HOSTNAME = _host.strip("[]")
+MTA_OUT_PORT = int(_port)
Optionally, consider using a dedicated test variable (e.g., MTA_OUT_TEST_SMTP_HOST) to avoid conflating “relay target” with “server under test”.
📝 Committable suggestion
-MTA_OUT_HOSTNAME = os.getenv("MTA_OUT_SMTP_HOST").split(":")[0]
-MTA_OUT_PORT = int(os.getenv("MTA_OUT_SMTP_HOST").split(":")[1])
+_raw = (os.getenv("MTA_OUT_SMTP_HOST", "localhost:8911") or "localhost:8911").strip()
+# Support "host:port" (prefers last colon), default port 25 if missing.
+# Strip [] for IPv6-in-brackets.
+_host, _port = (_raw.rsplit(":", 1) if ":" in _raw else (_raw, "25"))
+MTA_OUT_HOSTNAME = _host.strip("[]")
+MTA_OUT_PORT = int(_port)
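The parsing rule suggested above can also be isolated into a small helper and unit-tested directly; the `localhost:8911` default mirrors the suggestion and is otherwise arbitrary:

```python
import os
from typing import Optional, Tuple


def parse_smtp_host(raw: Optional[str], default: str = "localhost:8911") -> Tuple[str, int]:
    """Split "host:port" robustly.

    Falls back to the default when the env var is unset, uses port 25 when
    no colon is present, prefers the last colon, and strips [] so that
    bracketed IPv6 literals like "[::1]:25" parse correctly.
    """
    value = (raw or default).strip()
    host, port = value.rsplit(":", 1) if ":" in value else (value, "25")
    return host.strip("[]"), int(port)


MTA_OUT_HOSTNAME, MTA_OUT_PORT = parse_smtp_host(os.getenv("MTA_OUT_SMTP_HOST"))
```

Note that unbracketed IPv6 addresses still need brackets to parse, the same limitation as the suggested inline version.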
IFS=',' read -ra USERS <<< "$PROXY_USERS"
for entry in "${USERS[@]}"; do
  IFS=':' read -r user pass <<< "$entry"
  if id "$user" &>/dev/null; then
    echo "User $user already exists, skipping"
  else
    useradd -M -s /usr/sbin/nologin "$user"
  fi
  echo "$user:$pass" | chpasswd
done
🛠️ Refactor suggestion
Harden PROXY_USERS parsing and validation.
Current parsing accepts empty user/pass and extra colons; also allows “,”/“:” inside creds, which breaks parsing.
Apply this diff:
IFS=',' read -ra USERS <<< "$PROXY_USERS"
for entry in "${USERS[@]}"; do
- IFS=':' read -r user pass <<< "$entry"
+ IFS=':' read -r user pass extra <<< "$entry"
+ if [[ -z "${user:-}" || -z "${pass:-}" || -n "${extra:-}" ]]; then
+ echo "Error: invalid PROXY_USERS entry '$entry'. Expected 'user:pass' with non-empty user and pass, no extra ':'." >&2
+ exit 1
+ fi
+ if [[ "$user" == *:* || "$user" == *,* || "$pass" == *:* || "$pass" == *,* ]]; then
+ echo "Error: username/password must not contain ':' or ',' characters." >&2
+ exit 1
+ fi
if id "$user" &>/dev/null; then
echo "User $user already exists, skipping"
else
useradd -M -s /usr/sbin/nologin "$user"
fi
echo "$user:$pass" | chpasswd
done
📝 Committable suggestion
IFS=',' read -ra USERS <<< "$PROXY_USERS"
for entry in "${USERS[@]}"; do
  IFS=':' read -r user pass extra <<< "$entry"
  if [[ -z "${user:-}" || -z "${pass:-}" || -n "${extra:-}" ]]; then
    echo "Error: invalid PROXY_USERS entry '$entry'. Expected 'user:pass' with non-empty user and pass, no extra ':'." >&2
    exit 1
  fi
  if [[ "$user" == *:* || "$user" == *,* || "$pass" == *:* || "$pass" == *,* ]]; then
    echo "Error: username/password must not contain ':' or ',' characters." >&2
    exit 1
  fi
  if id "$user" &>/dev/null; then
    echo "User $user already exists, skipping"
  else
    useradd -M -s /usr/sbin/nologin "$user"
  fi
  echo "$user:$pass" | chpasswd
done
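The same format rule can be checked ahead of time in Python, which is handy for validating a CI config before the container boots (a sketch, not part of the image):

```python
from typing import List, Tuple


def parse_proxy_users(raw: str) -> List[Tuple[str, str]]:
    """Parse a 'user:pass,user2:pass2' string.

    Rejects empty usernames/passwords and entries with stray ':' separators,
    mirroring the validation the entrypoint performs at startup.
    """
    users = []
    for entry in raw.split(","):
        parts = entry.split(":")
        if len(parts) != 2 or not parts[0] or not parts[1]:
            raise ValueError(f"invalid PROXY_USERS entry: {entry!r}")
        users.append((parts[0], parts[1]))
    return users
```

Keeping the rule in one place on both sides (shell and tests) reduces the chance the proxy accepts a credential string the tests would reject.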
DANTE_CONFIG="
logoutput: stdout
errorlog: stderr
debug: ${PROXY_DEBUG_LEVEL:-0}

internal: ${PROXY_INTERNAL:-"0.0.0.0"} port = ${PROXY_INTERNAL_PORT:-"1080"}
external: ${PROXY_EXTERNAL}

# Use password-file method
socksmethod: username
user.privileged: root
user.notprivileged: nobody
"
Broken Dante config due to quoted defaults in parameter expansion.
When PROXY_INTERNAL/PROXY_INTERNAL_PORT are unset, the resulting config writes quoted values (e.g., "0.0.0.0"), which is invalid for Dante and will prevent startup.
Apply this diff:
DANTE_CONFIG="
logoutput: stdout
errorlog: stderr
debug: ${PROXY_DEBUG_LEVEL:-0}
-internal: ${PROXY_INTERNAL:-"0.0.0.0"} port = ${PROXY_INTERNAL_PORT:-"1080"}
+internal: ${PROXY_INTERNAL:-0.0.0.0} port = ${PROXY_INTERNAL_PORT:-1080}
external: ${PROXY_EXTERNAL}
# Use password-file method
socksmethod: username
user.privileged: root
user.notprivileged: nobody
"
📝 Committable suggestion
-DANTE_CONFIG="
-logoutput: stdout
-errorlog: stderr
-debug: ${PROXY_DEBUG_LEVEL:-0}
-internal: ${PROXY_INTERNAL:-"0.0.0.0"} port = ${PROXY_INTERNAL_PORT:-"1080"}
-external: ${PROXY_EXTERNAL}
-# Use password-file method
-socksmethod: username
-user.privileged: root
-user.notprivileged: nobody
-"
+DANTE_CONFIG="
+logoutput: stdout
+errorlog: stderr
+debug: ${PROXY_DEBUG_LEVEL:-0}
+internal: ${PROXY_INTERNAL:-0.0.0.0} port = ${PROXY_INTERNAL_PORT:-1080}
+external: ${PROXY_EXTERNAL}
+# Use password-file method
+socksmethod: username
+user.privileged: root
+user.notprivileged: nobody
+"
🤖 Prompt for AI Agents
```
In src/socks-proxy/entrypoint.sh around lines 24 to 36, the Dante config is
being generated with quoted defaults in parameter expansion which yields invalid
quoted values (e.g., "0.0.0.0") when PROXY_INTERNAL or PROXY_INTERNAL_PORT are
unset; remove the embedded quotes from the default expressions so the config
writes raw values (change ${PROXY_INTERNAL:-"0.0.0.0"} to
${PROXY_INTERNAL:-0.0.0.0} and ${PROXY_INTERNAL_PORT:-"1080"} to
${PROXY_INTERNAL_PORT:-1080}), keeping the rest of the config intact so Dante
receives unquoted address and port values.
```
## Overview

This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specifi IP address.
Fix typo ("specifi" ➜ "specific").
Apply this diff:
```diff
-This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specifi IP address.
+This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specific IP address.
```
📝 Committable suggestion
```diff
-This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specifi IP address.
+This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specific IP address.
```
🧰 Tools
🪛 LanguageTool
[grammar] ~7-~7: Ensure spelling is correct
Context: ...used for routing SMTP traffic through a specifi IP address. ## Architecture ### Compo...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
```
In src/socks-proxy/README.md around line 7, there's a typo: "specifi" should be
"specific"; update the sentence to replace "specifi" with "specific" so it reads
"...through a specific IP address."
```
| PROXY_USERS | List of username:password pairs allowed to connect in the format "user1:pass1,user2:pass2". | true | |
| PROXY_EXTERNAL | The outbound connections IP address or interface name. | true | |
| PROXY_INTERNAL | The inbound connections IP addresses or interfaces names. | false | "0.0.0.0" |
| PROXY_INTERNAL_PORT | The inbound connections TCP port to listen to. | false | "1080" |
| PROXY_DEBUG_LEVEL | The debug level. | false | "0" |
| PROXY_SOURCE_IP_WHITELIST | The source IPs allowed to connect to the proxy. Be aware you have to use `network_mode: host` for this feature to work. | false | "0.0.0.0/0" |
Align env var table with actual defaults and improve wording.
PROXY_EXTERNAL is not “Required: true” because entrypoint sets a default of eth0. Also tighten phrasing.
Apply this diff:
```diff
-| PROXY_USERS | List of username:password pairs allowed to connect in the format "user1:pass1,user2:pass2". | true | |
-| PROXY_EXTERNAL | The outbound connections IP address or interface name. | true | |
-| PROXY_INTERNAL | The inbound connections IP addresses or interfaces names. | false | "0.0.0.0" |
-| PROXY_INTERNAL_PORT | The inbound connections TCP port to listen to. | false | "1080" |
+| PROXY_USERS | Comma‑separated username:password pairs allowed to connect (e.g., "user1:pass1,user2:pass2"). | true | |
+| PROXY_EXTERNAL | Outbound interface name or IP address. | false | "eth0" |
+| PROXY_INTERNAL | Inbound bind interface name or IP address. | false | "0.0.0.0" |
+| PROXY_INTERNAL_PORT | Inbound TCP port to listen on. | false | "1080" |
 | PROXY_DEBUG_LEVEL | The debug level. | false | "0" |
 | PROXY_SOURCE_IP_WHITELIST | The source IPs allowed to connect to the proxy. Be aware you have to use `network_mode: host` for this feature to work. | false | "0.0.0.0/0" |
```
📝 Committable suggestion
```diff
-| PROXY_USERS | List of username:password pairs allowed to connect in the format "user1:pass1,user2:pass2". | true | |
-| PROXY_EXTERNAL | The outbound connections IP address or interface name. | true | |
-| PROXY_INTERNAL | The inbound connections IP addresses or interfaces names. | false | "0.0.0.0" |
-| PROXY_INTERNAL_PORT | The inbound connections TCP port to listen to. | false | "1080" |
-| PROXY_DEBUG_LEVEL | The debug level. | false | "0" |
-| PROXY_SOURCE_IP_WHITELIST | The source IPs allowed to connect to the proxy. Be aware you have to use `network_mode: host` for this feature to work. | false | "0.0.0.0/0" |
+| PROXY_USERS | Comma-separated username:password pairs allowed to connect (e.g., "user1:pass1,user2:pass2"). | true | |
+| PROXY_EXTERNAL | Outbound interface name or IP address. | false | "eth0" |
+| PROXY_INTERNAL | Inbound bind interface name or IP address. | false | "0.0.0.0" |
+| PROXY_INTERNAL_PORT | Inbound TCP port to listen on. | false | "1080" |
+| PROXY_DEBUG_LEVEL | The debug level. | false | "0" |
+| PROXY_SOURCE_IP_WHITELIST | The source IPs allowed to connect to the proxy. Be aware you have to use `network_mode: host` for this feature to work. | false | "0.0.0.0/0" |
```
🧰 Tools
🪛 LanguageTool
[grammar] ~25-~25: There might be a mistake here.
Context: ...IP address or interface name. | true | | | PROXY_INTERNAL | The inbound connectio...
(QB_NEW_EN)
[grammar] ~26-~26: There might be a mistake here.
Context: ...interfaces names. | false | "0.0.0.0" | | PROXY_INTERNAL_PORT | The inbound conn...
(QB_NEW_EN)
🤖 Prompt for AI Agents
```
In src/socks-proxy/README.md around lines 24 to 29, update the environment
variables table to match actual defaults and improve wording: mark
PROXY_EXTERNAL as not required (set Required: false) and show its default as
"eth0" (reflecting the entrypoint default), tighten the phrasing for each
description to be concise (e.g., "Outbound interface or IP for outgoing
connections" for PROXY_EXTERNAL, "Inbound interfaces or IPs" for
PROXY_INTERNAL), ensure default values are accurate (PROXY_INTERNAL default
"0.0.0.0", PROXY_INTERNAL_PORT "1080", PROXY_DEBUG_LEVEL "0",
PROXY_SOURCE_IP_WHITELIST "0.0.0.0/0"), and keep the note about network_mode:
host for the whitelist feature.
```
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
src/backend/core/services/identity/keycloak.py (2)
**64-80**: Underscore-prefixed attributes are still sent (unnormalized) due to early dict expansion.

You skip normalization for keys starting with "_", but they were already merged into group_attributes at Line 67. This can violate Keycloak's "list of strings" requirement and cause 400s or silent coercions. Either drop underscore-prefixed keys entirely or normalize them like the others. Suggested fix removes early merge and filters/normalizes explicitly.

```diff
-        group_attributes = {
-            "maildomain_id": [str(maildomain.id)],
-            "maildomain_name": [maildomain.name],
-            **(maildomain.custom_attributes or {}),
-        }
+        group_attributes = {
+            "maildomain_id": [str(maildomain.id)],
+            "maildomain_name": [maildomain.name],
+        }
         # Add custom attributes
         if maildomain.custom_attributes:
             for key, value in maildomain.custom_attributes.items():
-                if key.startswith("_"):
-                    continue
-                # Ensure values are lists (Keycloak requirement)
-                if isinstance(value, list):
-                    group_attributes[key] = value
-                else:
-                    group_attributes[key] = [str(value)]
+                # Skip private/internal attrs
+                if key.startswith("_"):
+                    continue
+                # Ensure list-of-strings as required by Keycloak
+                if isinstance(value, list):
+                    group_attributes[key] = [str(v) for v in value]
+                else:
+                    group_attributes[key] = [str(value)]
```
**152-159**: User custom attributes aren't normalized to list-of-strings.

Unlike group attributes, user attributes pass through unvalidated, which can break on non-list or non-string values. Apply the same normalization and "_" filtering.

```diff
-        user_attributes = {
-            "mailbox_id": [str(mailbox.id)],
-            "maildomain_id": [str(mailbox.domain.id)],
-            "local_part": [mailbox.local_part],
-            "domain_name": [mailbox.domain.name],
-            **user_custom_attributes,
-        }
+        user_attributes = {
+            "mailbox_id": [str(mailbox.id)],
+            "maildomain_id": [str(mailbox.domain.id)],
+            "local_part": [mailbox.local_part],
+            "domain_name": [mailbox.domain.name],
+        }
+        if user_custom_attributes:
+            for key, value in user_custom_attributes.items():
+                if key.startswith("_"):
+                    continue
+                if isinstance(value, list):
+                    user_attributes[key] = [str(v) for v in value]
+                else:
+                    user_attributes[key] = [str(value)]
```

src/mta-out/tests/conftest.py (1)
**147-153**: Robust SMTP connect + conditional STARTTLS + timeout.

Avoids IndexError, handles servers without STARTTLS, and sets a sensible socket timeout.

```diff
-    client = smtplib.SMTP(
-        MTA_OUT_SMTP_HOST.split(":")[0], int(MTA_OUT_SMTP_HOST.split(":")[1])
-    )
-    client.ehlo()
-    client.starttls()
-    client.ehlo()
+    client = smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=5)
+    client.ehlo()
+    if "starttls" in getattr(client, "esmtp_features", {}):
+        client.starttls()
+        client.ehlo()
+    else:
+        logger.warning("Server does not advertise STARTTLS; continuing without TLS for tests.")
```

Also consider failing fast if creds are unset:

```diff
-    client.login(MTA_OUT_SMTP_USERNAME, MTA_OUT_SMTP_PASSWORD)
+    if not (MTA_OUT_SMTP_USERNAME and MTA_OUT_SMTP_PASSWORD):
+        raise RuntimeError("MTA_OUT_SMTP_USERNAME/PASSWORD must be set")
+    client.login(MTA_OUT_SMTP_USERNAME, MTA_OUT_SMTP_PASSWORD)
```

And broaden connection exceptions:

```diff
-    except (ConnectionRefusedError, smtplib.SMTPException) as e:
+    except (OSError, smtplib.SMTPException) as e:
```

src/backend/core/api/utils.py (1)
**1-7**: Update module intent and remove dead dependency.

Docstring still claims S3 auth headers generation; botocore import is commented-out. Clean this up.

```diff
-"""Util to generate S3 authorization headers for object storage access control"""
+"""Core API utilities (tree helpers, S3 upload policies)."""
 from django.conf import settings
 from django.core.files.storage import default_storage
-# import botocore
```

docs/env.md (1)
**89-95**: Remove or deprecate duplicate "MTA-Out Specific" section to avoid contradictions.

```diff
-### MTA-Out Specific
-| Variable | Default | Description | Required |
-|----------|---------|-------------|----------|
-| `SMTP_RELAY_HOST` | `mailcatcher:1025` | SMTP relay server | Dev |
-| `SMTP_USERNAME` | `user` | SMTP authentication username | Dev |
-| `SMTP_PASSWORD` | `pass` | SMTP authentication password | Dev |
+<!-- Deprecated: covered by MTA Settings table above -->
```

src/backend/core/api/viewsets/draft.py (1)
**96-116**: Fix OpenAPI description/examples: attachments vs attachmentIds; include senderId.

Docs say "attachmentIds" but the API accepts "attachments". Examples also miss required senderId.

```diff
-    At least one of draftBody must be provided.
-
-    To add attachments, upload them first using the /api/v1.0/blob/upload/{mailbox_id}/ endpoint
-    and include the returned blobIds in the attachmentIds field.
+    To add attachments, upload them first using the /api/v1.0/blob/upload/{mailbox_id}/ endpoint
+    and include the returned blobIds in the `attachments` field as objects:
+    [{ "partId": "<client id>", "blobId": "<uploaded blob id>", "name": "filename" }].
```

```diff
         OpenApiExample(
             "New Draft Message",
             value={
+                "senderId": "123e4567-e89b-12d3-a456-426614174999",
                 "subject": "Hello",
                 "draftBody": json.dumps({"arbitrary": "json content"}),
                 "to": ["[email protected]"],
                 "cc": ["[email protected]"],
                 "bcc": ["[email protected]"],
             },
         ),
```

```diff
         OpenApiExample(
             "Draft Reply",
             value={
+                "senderId": "123e4567-e89b-12d3-a456-426614174999",
                 "parentId": "123e4567-e89b-12d3-a456-426614174000",
                 "subject": "Re: Hello",
                 "draftBody": json.dumps({"arbitrary": "json content"}),
                 "to": ["[email protected]"],
             },
         ),
```

```diff
         OpenApiExample(
             "Update Draft with Attachments",
             value={
                 "messageId": "123e4567-e89b-12d3-a456-426614174000",
+                "senderId": "123e4567-e89b-12d3-a456-426614174999",
                 "subject": "Updated subject",
                 "draftBody": json.dumps({"arbitrary": "new json content"}),
                 "to": ["[email protected]"],
                 "attachments": [
                     {
```

Also applies to: 118-137, 138-152
compose.yaml (1)
**140-143**: Invalid compose schema: restart under depends_on.

restart is not a valid key under depends_on; move/remove it to avoid compose errors.

```diff
       postgresql:
         condition: service_healthy
-        restart: true
```
♻️ Duplicate comments (34)
src/backend/core/tests/api/test_messages_delete.py (1)
**200-204**: Add attachment-blob lifecycle tests (unique vs shared) and remove the TODO.

Cover both cases to prevent regressions; mirrors production safety requirements.

Add these tests in this module (outside the current test), then drop the TODO:

```python
@pytest.mark.parametrize("role", [enums.MailboxRoleChoices.ADMIN, enums.MailboxRoleChoices.EDITOR, enums.MailboxRoleChoices.SENDER])
def test_delete_message_deletes_unique_attachment_blob(role):
    user = factories.UserFactory()
    mailbox = factories.MailboxFactory()
    factories.MailboxAccessFactory(mailbox=mailbox, user=user, role=role)
    thread = factories.ThreadFactory()
    factories.ThreadAccessFactory(mailbox=mailbox, thread=thread, role=enums.ThreadAccessRoleChoices.EDITOR)
    message = factories.MessageFactory(thread=thread)
    blob = mailbox.create_blob(b"att", "text/plain")
    factories.AttachmentFactory(message=message, blob=blob)
    client = APIClient()
    client.force_authenticate(user=user)
    resp = client.delete(reverse("messages-detail", kwargs={"id": message.id}))
    assert resp.status_code == status.HTTP_204_NO_CONTENT
    assert not models.Blob.objects.filter(id=blob.id).exists()


@pytest.mark.parametrize("role", [enums.MailboxRoleChoices.ADMIN, enums.MailboxRoleChoices.EDITOR, enums.MailboxRoleChoices.SENDER])
def test_delete_message_preserves_shared_attachment_blob(role):
    user = factories.UserFactory()
    mailbox = factories.MailboxFactory()
    factories.MailboxAccessFactory(mailbox=mailbox, user=user, role=role)
    thread = factories.ThreadFactory()
    factories.ThreadAccessFactory(mailbox=mailbox, thread=thread, role=enums.ThreadAccessRoleChoices.EDITOR)
    m1 = factories.MessageFactory(thread=thread)
    m2 = factories.MessageFactory(thread=thread)
    shared = mailbox.create_blob(b"shared", "text/plain")
    a1 = factories.AttachmentFactory(message=m1, blob=shared)
    a2 = factories.AttachmentFactory(message=m2, blob=shared)
    client = APIClient()
    client.force_authenticate(user=user)
    resp = client.delete(reverse("messages-detail", kwargs={"id": m1.id}))
    assert resp.status_code == status.HTTP_204_NO_CONTENT
    # Blob still referenced by a2
    assert models.Blob.objects.filter(id=shared.id).exists()
    # Now delete second message; blob should be gone
    resp2 = client.delete(reverse("messages-detail", kwargs={"id": m2.id}))
    assert resp2.status_code == status.HTTP_204_NO_CONTENT
    assert not models.Blob.objects.filter(id=shared.id).exists()
```

src/backend/core/signals.py (2)
**116-121**: Implement safe cleanup for attachment blobs on delete (shared-blob aware).

Enable the handler with reference checks; mirrors message cleanup.

```diff
-# @receiver(post_delete, sender=models.Attachment)
-# def delete_attachments_blobs(sender, instance, **kwargs):
-#     """Delete the blob associated with an attachment."""
-#     if instance.blob:
-#         instance.blob.delete()
+@receiver(post_delete, sender=models.Attachment)
+def delete_attachments_blobs(sender, instance, **kwargs):
+    """Delete the blob associated with an attachment if unreferenced."""
+    blob = getattr(instance, "blob", None)
+    if not blob:
+        return
+    still_referenced = (
+        models.Message.objects.filter(blob=blob).exists()
+        or models.Message.objects.filter(draft_blob=blob).exists()
+        or models.Attachment.objects.filter(blob=blob).exclude(id=instance.id).exists()
+    )
+    if still_referenced:
+        logger.debug("Skipping deletion of blob %s: still referenced.", getattr(blob, "id", None))
+        return
+    logger.info("Deleting unreferenced blob %s (after attachment %s deletion).", getattr(blob, "id", None), instance.id)
+    blob.delete()
```
**107-114**: Don't unconditionally delete blobs; risk of deleting shared blobs (messages/attachments).

Guard deletions with reference checks to prevent data loss.

```diff
 @receiver(post_delete, sender=models.Message)
 def delete_message_blobs(sender, instance, **kwargs):
-    """Delete the blobs associated with a message."""
-    if instance.blob:
-        instance.blob.delete()
-    if instance.draft_blob:
-        instance.draft_blob.delete()
+    """Delete message blobs only if no longer referenced elsewhere."""
+    def maybe_delete_blob(blob):
+        if not blob:
+            return
+        # Still referenced by other messages/drafts or attachments?
+        referenced = (
+            models.Message.objects.filter(blob=blob).exclude(id=instance.id).exists()
+            or models.Message.objects.filter(draft_blob=blob).exclude(id=instance.id).exists()
+            or models.Attachment.objects.filter(blob=blob).exists()
+        )
+        if referenced:
+            logger.debug("Skipping deletion of blob %s: still referenced.", getattr(blob, "id", None))
+            return
+        logger.info("Deleting unreferenced blob %s (from message %s).", getattr(blob, "id", None), instance.id)
+        blob.delete()
+
+    maybe_delete_blob(getattr(instance, "blob", None))
+    maybe_delete_blob(getattr(instance, "draft_blob", None))
```

src/backend/pyproject.toml (2)
**66-67**: Invalid PEP 508 specifier for prometheus-client.

Fix parsing errors in packaging tools.

```diff
-"prometheus-client (>=0.22.1,<0.23.0)",
+"prometheus-client>=0.22.1,<0.23.0",
```
**40-40**: Dropping S3 extra may break S3-dependent code; verify or restore.

If S3 upload policy generation still exists, this will fail at runtime. Either restore the extra or guard S3-only paths and document the drop.

Option A — keep S3 support:

```diff
-"django-storages==1.14.6",
+"django-storages[s3]==1.14.6",
```

Option B — drop S3 safely (code changes elsewhere required):

- Guard calls to `default_storage.connection.meta.client.generate_presigned_post(...)`.
- Update docs to state S3 uploads unsupported.

Run to verify residual S3 usage:

```sh
#!/bin/bash
rg -nP -C2 '\b(generate_presigned_post|default_storage\.connection\.meta\.client)\b'
```

src/mta-out/tests/conftest.py (1)
**16-16**: Validate and parse MTA_OUT_SMTP_HOST early.

Prevents ambiguous errors on missing or ill-formed host:port.

```diff
-MTA_OUT_SMTP_HOST = os.getenv("MTA_OUT_SMTP_HOST")
+MTA_OUT_SMTP_HOST = os.getenv("MTA_OUT_SMTP_HOST") or ""
+if ":" not in MTA_OUT_SMTP_HOST:
+    raise RuntimeError("MTA_OUT_SMTP_HOST must be set as 'host:port'")
+SMTP_HOST, SMTP_PORT = MTA_OUT_SMTP_HOST.rsplit(":", 1)
+try:
+    SMTP_PORT = int(SMTP_PORT)
+except ValueError as e:
+    raise RuntimeError(f"Invalid MTA_OUT_SMTP_HOST port: {SMTP_PORT!r}") from e
```

src/socks-proxy/README.md (2)
**7-7**: Fix typo ("specifi" → "specific").

```diff
-This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specifi IP address.
+This service provides a SOCKS5 proxy server that can be used for routing SMTP traffic through a specific IP address.
```

**22-29**: Align env var table with actual defaults/requiredness and tighten wording.

Mark PROXY_EXTERNAL as not required (default eth0 in entrypoint), clarify phrasing.

```diff
 |Variable|Description|Required|Default|
 |---|---|---|---|
-| PROXY_USERS | List of username:password pairs allowed to connect in the format "user1:pass1,user2:pass2". | true | |
-| PROXY_EXTERNAL | The outbound connections IP address or interface name. | true | |
-| PROXY_INTERNAL | The inbound connections IP addresses or interfaces names. | false | "0.0.0.0" |
-| PROXY_INTERNAL_PORT | The inbound connections TCP port to listen to. | false | "1080" |
+| PROXY_USERS | Comma‑separated username:password pairs (e.g., "user1:pass1,user2:pass2"). | true | |
+| PROXY_EXTERNAL | Outbound interface name or IP address. | false | "eth0" |
+| PROXY_INTERNAL | Inbound bind interface name or IP address. | false | "0.0.0.0" |
+| PROXY_INTERNAL_PORT | Inbound TCP port. | false | "1080" |
 | PROXY_DEBUG_LEVEL | The debug level. | false | "0" |
 | PROXY_SOURCE_IP_WHITELIST | Source IPs allowed to connect. Use `network_mode: host` for this to work. | false | "0.0.0.0/0" |
```

docs/env.md (1)
**82-85**: Docs: make SMTP relay vars conditional; add direct-mode knobs; keep terminology consistent with code.

These are shown as Required even when mode=direct. Also document proxies and direct port.

```diff
-| `MTA_OUT_MODE` | `direct` | Outbound MTA mode ('direct' or 'relay') | Required |
-| `MTA_OUT_SMTP_HOST` | `mta-out:587` | Outbound SMTP server host for relay mode | Required |
-| `MTA_OUT_SMTP_USERNAME` | `user` | Outbound SMTP username for relay mode | Optional |
-| `MTA_OUT_SMTP_PASSWORD` | `pass` | Outbound SMTP password for relay mode | Optional |
+| `MTA_OUT_MODE` | `direct` | Outbound MTA mode: `direct` or `relay` | Required |
+| `MTA_OUT_SMTP_HOST` | `None` | SMTP relay host:port (used only when `MTA_OUT_MODE=relay`) | Conditional |
+| `MTA_OUT_SMTP_USERNAME` | `None` | SMTP username (relay mode) | Optional |
+| `MTA_OUT_SMTP_PASSWORD` | `None` | SMTP password (relay mode) | Optional |
+| `MTA_OUT_PROXIES` | `[]` | List of SOCKS proxy URLs (used in direct mode; one chosen per send) | Optional |
+| `MTA_OUT_DIRECT_PORT` | `25` | TCP port for direct MX delivery | Optional |
```

src/mta-out/tests/test_smtp_auth.py (1)
**11-12**: Make env parsing robust (None/portless/IPv6-safe).

Current split() crashes if var unset or port missing; also breaks IPv6 without brackets.

```diff
-MTA_OUT_HOSTNAME = os.getenv("MTA_OUT_SMTP_HOST").split(":")[0]
-MTA_OUT_PORT = int(os.getenv("MTA_OUT_SMTP_HOST").split(":")[1])
+_raw = (os.getenv("MTA_OUT_SMTP_HOST", "localhost:8911") or "localhost:8911").strip()
+# Support "host:port" (prefer last colon), default port 25 if missing; strip [] for IPv6-in-brackets.
+_host, _port = (_raw.rsplit(":", 1) if ":" in _raw else (_raw, "25"))
+MTA_OUT_HOSTNAME = _host.strip("[]")
+MTA_OUT_PORT = int(_port)
```

src/socks-proxy/entrypoint.sh (2)
**12-21**: Harden PROXY_USERS parsing/validation (reject empty and extra colons).

```diff
 IFS=',' read -ra USERS <<< "$PROXY_USERS"
 for entry in "${USERS[@]}"; do
-  IFS=':' read -r user pass <<< "$entry"
+  IFS=':' read -r user pass extra <<< "$entry"
+  if [[ -z "${user:-}" || -z "${pass:-}" || -n "${extra:-}" ]]; then
+    echo "Error: invalid PROXY_USERS entry '$entry'. Expected 'user:pass' with non-empty parts." >&2
+    exit 1
+  fi
+  if [[ "$user" == *:* || "$user" == *,* || "$pass" == *:* || "$pass" == *,* ]]; then
+    echo "Error: username/password must not contain ':' or ','." >&2
+    exit 1
+  fi
   if id "$user" &>/dev/null; then
     echo "User $user already exists, skipping"
   else
     useradd -M -s /usr/sbin/nologin "$user"
   fi
   echo "$user:$pass" | chpasswd
 done
```
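The hardened loop can be dry-run outside a container. This sketch applies the same accept/reject rules without calling `useradd` or `chpasswd`; the `validate_users` function name and the sample credential lists are illustrative, not code from the PR:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Validate a comma-separated "user:pass" list the way the hardened
# entrypoint would, printing only the accepted usernames.
validate_users() {
  local list="$1" entry user pass extra
  IFS=',' read -ra entries <<< "$list"
  for entry in "${entries[@]}"; do
    IFS=':' read -r user pass extra <<< "$entry"
    if [[ -z "${user:-}" || -z "${pass:-}" || -n "${extra:-}" ]]; then
      echo "invalid entry: $entry" >&2
      return 1
    fi
    echo "$user"
  done
}

validate_users "alice:s3cret,bob:hunter2"  # prints alice and bob, one per line
! validate_users "alice:a:b" 2>/dev/null   # an extra ':' is rejected
echo "ok"
```

Entries with a missing user, a missing password, or a third colon-separated field all take the `return 1` path, which is what makes a malformed `PROXY_USERS` fail fast instead of creating a half-configured account.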
**29-29**: Remove quotes in Dante defaults (invalid quoted IP/port).

```diff
-internal: ${PROXY_INTERNAL:-"0.0.0.0"} port = ${PROXY_INTERNAL_PORT:-"1080"}
+internal: ${PROXY_INTERNAL:-0.0.0.0} port = ${PROXY_INTERNAL_PORT:-1080}
```

docs/selfcheck.md (1)
**95-97**: Fix typos: "succesful" → "successful".

```diff
-- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on succesful send
-- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on succesful reception
+- `selfcheck_send_duration_seconds`: Time taken to send the test message (seconds), only on successful send
+- `selfcheck_reception_duration_seconds`: Time taken to receive the test message (seconds), only on successful reception
```

Makefile (1)
**307-311**: .PHONY target mismatch (breaks make help/consistency).

```diff
 back-poetry-update-indirect: ## update indirect dependencies
 	rm src/backend/poetry.lock
 	make back-poetry-lock
-.PHONY: back-poetry-update
+.PHONY: back-poetry-update-indirect
```

src/backend/core/mda/draft.py (5)
**19-20**: Avoid empty blobs: use non-empty JSON default ("{}").

Mailbox.create_blob typically rejects empty content; defaulting to "{}" prevents ValueError and keeps type-consistency.

```diff
-    draft_body: str = "",
+    draft_body: str = "{}",
@@
-        draft_blob=mailbox.create_blob(
-            content=(draft_body or "").encode("utf-8"),
+        draft_blob=mailbox.create_blob(
+            content=(draft_body or "{}").encode("utf-8"),
             content_type="application/json",
         ),
@@
-        message.draft_blob = mailbox.create_blob(
-            content=(update_data.get("draftBody") or "").encode("utf-8"),
+        message.draft_blob = mailbox.create_blob(
+            content=(update_data.get("draftBody") or "{}").encode("utf-8"),
             content_type="application/json",
         )
```

Also applies to: 96-100, 201-205

**49-56**: get_or_create misuse with __iexact will raise FieldError on create.

Use filter(...__iexact).first() then create with concrete fields.

```diff
-    sender_contact, _ = models.Contact.objects.get_or_create(
-        email__iexact=mailbox_email,
-        mailbox=mailbox,
-        defaults={
-            "email": mailbox_email,
-            "name": mailbox.local_part,
-        },
-    )
+    sender_contact = models.Contact.objects.filter(
+        mailbox=mailbox, email__iexact=mailbox_email
+    ).first()
+    if not sender_contact:
+        sender_contact = models.Contact.objects.create(
+            mailbox=mailbox, email=mailbox_email, name=mailbox.local_part
+        )
```

**160-193**: Same __iexact get_or_create bug when creating recipients.

This will crash on create; switch to filter-first then create.

```diff
-    for email in emails:
-        contact, _ = models.Contact.objects.get_or_create(
-            email__iexact=email,
-            mailbox=mailbox,
-            defaults={
-                "email": email,
-                "name": email.split("@")[0],
-            },
-        )
+    for email in emails:
+        contact = models.Contact.objects.filter(
+            mailbox=mailbox, email__iexact=email
+        ).first()
+        if not contact:
+            contact = models.Contact.objects.create(
+                mailbox=mailbox,
+                email=email,
+                name=email.split("@")[0],
+            )
         # Only create MessageRecipient if message has been saved
         if message.pk:
             models.MessageRecipient.objects.get_or_create(
                 message=message,
                 contact=contact,
                 type=recipient_type_mapping[recipient_type],
             )
```

**239-246**: Enforce blob ownership to prevent cross-mailbox leakage.

```diff
-            blob = models.Blob.objects.get(id=blob_id)
+            blob = models.Blob.objects.get(id=blob_id)
+            if blob.mailbox_id != mailbox.id:
+                logger.warning(
+                    "Blob %s belongs to mailbox %s, not %s; skipping.",
+                    blob_id,
+                    blob.mailbox_id,
+                    mailbox.id,
+                )
+                continue
```

**137-149**: Wrap mutations in a transaction to avoid partial state.

Recipients, blobs, and thread updates should commit atomically.

```diff
@@
-    updated_fields = []
-    thread_updated_fields = []
+    from django.db import transaction
+    updated_fields = []
+    thread_updated_fields = []
@@
-    # Update subject if provided
-    if "subject" in update_data and update_data["subject"] != message.subject:
+    with transaction.atomic():
+        # Update subject if provided
+        if "subject" in update_data and update_data["subject"] != message.subject:
             message.subject = update_data["subject"]
             updated_fields.append("subject")
             # Also update thread subject if this is the first message
-            if message.pk and message.thread.messages.count() == 1:
-                message.thread.subject = update_data["subject"]
-                thread_updated_fields.append("subject")
+            if message.pk and message.thread.messages.count() == 1:
+                message.thread.subject = update_data["subject"]
+                thread_updated_fields.append("subject")
@@
-    # Save message and thread if changes were made
-    if len(updated_fields) > 0 and message.pk:  # Only save if message exists
-        logger.info("Saving message %s with fields %s", message.id, updated_fields)
-        message.save(update_fields=updated_fields + ["updated_at"])
-    if len(thread_updated_fields) > 0 and message.thread.pk:  # Check thread exists
-        message.thread.save(update_fields=thread_updated_fields + ["updated_at"])
+        # Save message and thread if changes were made
+        if len(updated_fields) > 0 and message.pk:  # Only save if message exists
+            logger.info("Saving message %s with fields %s", message.id, updated_fields)
+            message.save(update_fields=updated_fields + ["updated_at"])
+        if len(thread_updated_fields) > 0 and message.thread.pk:  # Check thread exists
+            message.thread.save(update_fields=thread_updated_fields + ["updated_at"])
```

src/backend/messages/celery_app.py (2)
**5-6**: Import settings after configurations.install().

Importing Django settings before configurations.install() can misconfigure django-configurations. Move the import below install().

Apply within this hunk:

```diff
-from django.conf import settings
-
```

Then add after the install() call (e.g., below Line 16):

```python
from django.conf import settings  # import after configurations.install()
```

**37-41**: Guard and coerce selfcheck interval to a numeric schedule.

Accessing a missing or non-numeric MESSAGES_SELFCHECK_INTERVAL can break beat scheduling. Use getattr + float() with a safe default.

```diff
-    "selfcheck": {
-        "task": "core.mda.tasks.selfcheck_task",
-        "schedule": settings.MESSAGES_SELFCHECK_INTERVAL,
-        "options": {"queue": "default"},
-    },
+    "selfcheck": {
+        "task": "core.mda.tasks.selfcheck_task",
+        "schedule": float(getattr(settings, "MESSAGES_SELFCHECK_INTERVAL", 300.0)),
+        "options": {"queue": "default"},
+    },
```

src/backend/core/mda/outbound.py (4)
**284-289**: Harden against task crash: catch exceptions from send_outbound_message and mark all as failed.

Uncaught exceptions will leave recipients unmarked and the task in error. Wrap call and mark statuses.

```diff
-    if len(external_recipients) > 0:
-        statuses = send_outbound_message(external_recipients, message)
-        for recipient_email, status in statuses.items():
-            _mark_delivered(
-                recipient_email, status["delivered"], False, status.get("error")
-            )
+    if len(external_recipients) > 0:
+        try:
+            statuses = send_outbound_message(external_recipients, message)
+        except Exception as e:
+            logger.error("send_outbound_message failed: %s", e, exc_info=True)
+            statuses = {
+                email: {"delivered": False, "error": str(e)}
+                for email in external_recipients
+            }
+        for recipient_email, status in statuses.items():
+            _mark_delivered(
+                recipient_email, status["delivered"], False, status.get("error")
+            )
```

**310-313**: Direct MX path ignores the provided recipient list; restrict to intended externals.

Returning all statuses from send_message_via_mx(message) can mark unintended recipients. Filter to recipient_emails.

```diff
-    if mta_out_mode == "direct":
-        # Use direct MX delivery
-        return send_message_via_mx(message)
+    if mta_out_mode == "direct":
+        # Use direct MX delivery; restrict to the provided recipients
+        statuses = send_message_via_mx(message)
+        return {
+            email: statuses.get(
+                email, {"delivered": False, "error": "not queued for MX"}
+            )
+            for email in recipient_emails
+        }
```

**342-342**: Don't raise on invalid mode; return per-recipient error for proper status marking.

Raising aborts the task and prevents delivery status updates. Return a structured error.

```diff
-    raise ValueError(f"Invalid MTA out mode: {mta_out_mode}")
+    return _global_error(f"Invalid MTA out mode: {mta_out_mode}")
```
**327-341**: None-guard bug in host/port parsing and IPv6-unfriendly split; add robust parsing (and optional proxy plumb).

`":" in mta_out_smtp_host` raises when host is None. Use a safe split and default port.

```diff
-    statuses = send_smtp_mail(
-        smtp_host=(mta_out_smtp_host or "").split(":")[0],
-        smtp_port=int(
-            (mta_out_smtp_host or "").split(":")[1]
-            if ":" in mta_out_smtp_host
-            else 587
-        ),
+    host_str = (mta_out_smtp_host or "").strip()
+    if not host_str:
+        return _global_error("SMTP relay host not configured")
+    parts = host_str.rsplit(":", 1)  # IPv6-safe-ish
+    relay_host = parts[0]
+    relay_port = int(parts[1]) if len(parts) == 2 and parts[1] else 587
+
+    statuses = send_smtp_mail(
+        smtp_host=relay_host,
+        smtp_port=relay_port,
         envelope_from=message.sender.email,
         recipient_emails=recipient_emails,
         message_content=message.blob.get_content(),
         smtp_username=mta_out_smtp_username,
         smtp_password=mta_out_smtp_password,
     )
```

Optional (if available in your stack): plumb SOCKS proxy settings through to send_smtp_mail.
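The safe split above can be checked in isolation. This standalone sketch uses `rsplit(":", 1)` so a missing port falls back to a default; the `split_host_port` name and the bracketed-IPv6 handling are illustrative additions, not code from the PR:

```python
from typing import Optional, Tuple


def split_host_port(raw: Optional[str], default_port: int = 587) -> Tuple[str, int]:
    """Split 'host:port' safely; tolerate None, a missing port, and [IPv6]:port."""
    host_str = (raw or "").strip()
    if not host_str:
        raise ValueError("SMTP relay host not configured")
    if host_str.startswith("["):  # e.g. "[::1]:1080" style IPv6 literal
        host, _, port = host_str.partition("]:")
        host = host.lstrip("[").rstrip("]")
        return host, int(port) if port else default_port
    parts = host_str.rsplit(":", 1)  # last colon only, so plain names survive
    if len(parts) == 2 and parts[1]:
        return parts[0], int(parts[1])
    return parts[0], default_port


print(split_host_port("mta-out:2525"))  # → ('mta-out', 2525)
print(split_host_port("mta-out"))       # → ('mta-out', 587)
print(split_host_port("[::1]:1080"))    # → ('::1', 1080)
```

Using `rsplit` on the last colon is what keeps bare IPv6 addresses from being chopped at their first colon, which is the failure mode the review flags.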
src/backend/core/mda/smtp.py (3)
12-35: Make RDNS behavior configurable to prevent DNS leakage.

Hard-coded `rdns=False` forces local DNS resolution; expose `rdns` in the API and plumb it through.

```diff
 def create_proxied_socket(
     proxy_host,
     proxy_port,
     target_host,
     target_port,
     username=None,
     password=None,
-    timeout=None,
+    timeout=None,
+    rdns=True,
 ):
     """Create a socket connected through a SOCKS proxy"""
     proxy = socks.socksocket()
     if type(timeout) in {int, float}:
         proxy.settimeout(timeout)
     proxy.set_proxy(
         socks.PROXY_TYPE_SOCKS5,
         proxy_host,
         proxy_port,
-        rdns=False,
+        rdns=bool(rdns),
         username=username,
         password=password,
     )
     proxy.connect((target_host, target_port))
```
41-47: Expose `proxy_rdns` in `ProxySMTP` and forward it to socket creation.

Allow callers to control where DNS resolution happens.

```diff
 def __init__(self, host, port, *args, **kwargs):
     self.proxy_host = kwargs.pop("proxy_host", None)
     self.proxy_port = kwargs.pop("proxy_port", None)
     self.proxy_username = kwargs.pop("proxy_username", None)
     self.proxy_password = kwargs.pop("proxy_password", None)
+    self.proxy_rdns = kwargs.pop("proxy_rdns", True)
     super().__init__(host, port, *args, **kwargs)
@@
     return create_proxied_socket(
         self.proxy_host,
         self.proxy_port,
         host,
         port,
         self.proxy_username,
         self.proxy_password,
-        timeout,
+        timeout,
+        self.proxy_rdns,
     )
```

Also applies to: 49-77
79-95: Support implicit TLS (SMTPS/465) and avoid STARTTLS under SSL.

Add `ProxySMTP_SSL` and extend `send_smtp_mail` with `use_ssl`/`ssl_context`/`proxy_rdns`. Only call `starttls()` in plain mode.

```diff
+class ProxySMTP_SSL(smtplib.SMTP_SSL):
+    """SMTP over implicit TLS with SOCKS5 proxy support."""
+
+    def __init__(self, host, port, *args, **kwargs):
+        self.proxy_host = kwargs.pop("proxy_host", None)
+        self.proxy_port = kwargs.pop("proxy_port", None)
+        self.proxy_username = kwargs.pop("proxy_username", None)
+        self.proxy_password = kwargs.pop("proxy_password", None)
+        self.proxy_rdns = kwargs.pop("proxy_rdns", True)
+        super().__init__(host, port, *args, **kwargs)
+
+    def _get_socket(self, host, port, timeout):
+        if self.proxy_host is None:
+            return super()._get_socket(host, port, timeout)
+        if timeout is not None and not timeout:
+            raise ValueError("Non-blocking socket (timeout=0) is not supported")
+        if self.debuglevel > 0:
+            self._print_debug("connect(SSL): to", (host, port), self.source_address)
+        raw = create_proxied_socket(
+            self.proxy_host,
+            self.proxy_port,
+            host,
+            port,
+            self.proxy_username,
+            self.proxy_password,
+            timeout,
+            self.proxy_rdns,
+        )
+        return self.context.wrap_socket(raw, server_hostname=self._host)
@@
 def send_smtp_mail(
     smtp_host: str,
     smtp_port: int,
     envelope_from: str,
     recipient_emails: List[str],
     message_content: bytes,
     smtp_username: Optional[str] = None,
     smtp_password: Optional[str] = None,
     timeout: int = 60,
     proxy_host: Optional[str] = None,
     proxy_port: Optional[int] = None,
     proxy_username: Optional[str] = None,
     proxy_password: Optional[str] = None,
-    sender_hostname: Optional[str] = None,
+    sender_hostname: Optional[str] = None,
+    use_ssl: Optional[bool] = None,
+    ssl_context: Optional["ssl.SSLContext"] = None,
+    proxy_rdns: bool = True,
 ) -> Dict[str, Any]:
@@
-    try:
-        with ProxySMTP(
+    try:
+        if use_ssl is None:
+            use_ssl = smtp_port == 465
+        Client = ProxySMTP_SSL if use_ssl else ProxySMTP
+        client_kwargs = dict(
             host=None,
             port=None,
             timeout=timeout,
             proxy_host=proxy_host,
             proxy_port=proxy_port,
             proxy_username=proxy_username,
             proxy_password=proxy_password,
-            local_hostname=sender_hostname,
-        ) as client:
+            proxy_rdns=proxy_rdns,
+            local_hostname=sender_hostname,
+        )
+        if use_ssl:
+            import ssl as _ssl
+            client_kwargs["context"] = ssl_context or _ssl.create_default_context()
+        with Client(**client_kwargs) as client:
             (code, msg) = client.connect(smtp_host, smtp_port)
@@
-            if client.has_extn("starttls"):
+            if not use_ssl and client.has_extn("starttls"):
                 (code, msg) = client.starttls()
                 logger.debug("SMTP: STARTTLS response: %s %s", code, msg)
                 if not 200 <= code <= 299:
                     client.close()
-                    raise smtplib.SMTPNotSupportedError(code, str(msg))
+                    raise smtplib.SMTPException(f"STARTTLS failed: {code} {msg}")
@@
             smtp_response = client.sendmail(
                 envelope_from, recipient_emails, message_content
             )
```

Also applies to: 117-184
src/backend/core/mda/outbound_mta.py (2)
85-99: Don't use the proxy hostname as EHLO/HELO; pick a deterministic sender hostname.

Use a configured HELO name or the sender's domain rather than the proxy host.

```diff
 if len(settings.MTA_OUT_PROXIES) > 0:
     proxy = random.choice(settings.MTA_OUT_PROXIES)  # noqa: S311
     parsed = urlparse(proxy)
     return {
         "proxy_host": parsed.hostname,
         "proxy_port": parsed.port,
         "proxy_username": parsed.username,
         "proxy_password": parsed.password,
-        "sender_hostname": parsed.hostname,
     }
 return {}
```
And in send_message_via_mx:
```diff
-    proxy_settings = select_smtp_proxy()
+    proxy_settings = select_smtp_proxy()
+    helo_host = getattr(settings, "MTA_OUT_HELO_HOSTNAME", None) or message.sender.email.split("@")[-1]
@@
-    **proxy_settings,
+    sender_hostname=helo_host,
+    **proxy_settings,
```
26-51: A-record fallback returns IPs; avoid re-resolving them as hostnames.

When MX is absent, you return IPs as "hostnames"; re-resolving them later fails. Detect IP literals and skip resolution.

```diff
@@
 import logging
 import random
+import ipaddress
@@
     # Fallback to A record
     try:
         answers = dns.resolver.resolve(domain, "A")
         return [(0, str(r)) for r in answers]
```
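The IP-literal guard suggested above can be a tiny helper around the standard-library `ipaddress` module; the function name here is illustrative:

```python
import ipaddress


def is_ip_literal(host):
    """Return True if host is already an IPv4 or IPv6 address literal,
    so callers can skip DNS resolution for it entirely."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False
```

A caller in the MX path would then do `if is_ip_literal(mx_hostname): use it as-is` instead of issuing another A/AAAA query.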
compose.yaml (1)
302-317: Fix the SMTP relay target inside the mta-out-test container.

`localhost` points to the container itself; target the mailcatcher service.

```diff
       - EXEC_CMD=true
-      - MTA_OUT_SMTP_HOST=localhost:587
+      - MTA_OUT_SMTP_HOST=mailcatcher:1025
       - MTA_OUT_SMTP_USERNAME=user
       - MTA_OUT_SMTP_PASSWORD=pass
       - SMTP_RELAY_HOST=localhost:2525
```

src/backend/core/management/commands/selfcheck.py (1)
116-121: Avoid printing secrets to stdout.

Do not log `MESSAGES_SELFCHECK_SECRET`.

```diff
         self.stdout.write("Starting selfcheck...")
         self.stdout.write(f"FROM: {settings.MESSAGES_SELFCHECK_FROM}")
         self.stdout.write(f"TO: {settings.MESSAGES_SELFCHECK_TO}")
-        self.stdout.write(f"SECRET: {settings.MESSAGES_SELFCHECK_SECRET}")
+        self.stdout.write("SECRET: [redacted]")
         self.stdout.write("")
```

src/backend/core/mda/selfcheck.py (2)
58-61: Pass the HTML fallback to both the composer and the preparer.

You send `htmlBody` in the draft but call `prepare_outbound_message` with a possibly-`None` html. Use the same fallback to ensure consistent MIME.

```diff
     draft_body = {
         "textBody": text_body,
         "htmlBody": html_body or text_body,
     }
@@
-    if not prepare_outbound_message(from_mailbox, message, text_body, html_body):
+    if not prepare_outbound_message(
+        from_mailbox, message, text_body, html_body or text_body
+    ):
         raise SelfCheckError("Failed to prepare outbound message")
```

Also applies to: 76-79
84-88: Check delivery for all recipients, not just the first.

Selfcheck should fail if any recipient wasn't SENT.

```diff
-    recipient_status = message.recipients.first().delivery_status  # pylint: disable=no-member
-
-    if recipient_status != models.MessageDeliveryStatusChoices.SENT:
-        raise SelfCheckError("Message not delivered")
+    statuses = list(
+        message.recipients.values_list("delivery_status", flat=True)
+    )
+    if not statuses or any(
+        s != models.MessageDeliveryStatusChoices.SENT for s in statuses
+    ):
+        raise SelfCheckError("Message not delivered")
```
🧹 Nitpick comments (34)
src/backend/core/tests/api/test_messages_delete.py (1)
172-176: Save only changed fields; minor test hygiene.

Use `update_fields` to avoid unintended signal side effects.

```diff
-    message.save()
+    message.save(update_fields=["blob", "draft_blob"])
```

docs/architecture.md (2)
55-56: Grammar/clarity tweak for the outbound-flow sentence.

Rephrase for readability and to satisfy the linter hint.

```diff
-4. **Celery** processes sending via **MTA-Out** if `MTA_OUT_MODE=relay`, or directly to MX servers if `MTA_OUT_MODE=direct`
-5. In development, MailCatcher intercepts messages and displays them in its web interface
+4. Celery sends via the Postfix relay when `MTA_OUT_MODE=relay`, or directly to MX servers when `MTA_OUT_MODE=direct`.
+5. In development, MailCatcher intercepts messages and displays them in its web interface.
```
24-27: Reflect the mode-based, optional nature of MTA-Out.

Avoids suggesting Postfix is always in the path.

```diff
-**MTA-Out (Outbound)**: Postfix server for email delivery and relay
+**MTA-Out (Outbound) [optional]**: Postfix relay used only when `MTA_OUT_MODE=relay`; in `direct` mode, Celery delivers straight to recipient MX without Postfix.
```

src/socks-proxy/tests/requirements.txt (1)
1-5: Align versions and add a trailing newline.

Keep `requests` aligned with the backend and ensure a POSIX newline at EOF for tooling.

```diff
 pytest==8.3.5
 pytest-cov==6.0.0
 aiosmtpd==1.4.4
-requests==2.32.3
+requests==2.32.5
 PySocks==1.7.1
+
```

src/backend/pyproject.toml (1)
86-86: Unify the pytest version with the tests suite.

Minor, but avoids confusion across environments.

```diff
-"pytest==8.3.4",
+"pytest==8.3.5",
```

env.d/development/socks-proxy.defaults (1)
1-4: Remove baked-in demo credentials from committed defaults; force a local override.

Shipping username:password pairs in a committed defaults file is risky even for dev. Make the default empty and document setting them in socks-proxy.local.

```diff
-PROXY_USERS="user1:pwd1,user2:pwd2"
+# Set in env.d/development/socks-proxy.local (gitignored)
+PROXY_USERS=""
 # This should fit most local development environments with Docker.
 PROXY_SOURCE_IP_WHITELIST="127.0.0.1/32,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
```

docs/selfcheck.md (1)
27-33: Minor wording/grammar nits (Prometheus-specific; metric names).

```diff
-As well as these prometheus specific environment variables:
+Additionally, these Prometheus-specific environment variables:
@@
-- `MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PREFIX`: Prefix for all Prometheus metrics names (default: empty string)
+- `MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PREFIX`: Prefix for all Prometheus metric names (default: empty string)
```

src/socks-proxy/tests/Dockerfile (2)
4-6: Use `--no-install-recommends` and clean apt caches in the same layer to reduce image size.

```diff
-RUN apt-get update && apt-get install -y \
-    netcat-openbsd \
-    && rm -rf /var/lib/apt/lists/*
+RUN apt-get update \
+    && apt-get install -y --no-install-recommends netcat-openbsd \
+    && rm -rf /var/lib/apt/lists/*
```
11-19: Run tests as non-root and set sane Python env defaults.

```diff
 # Copy requirements and install Python dependencies
 COPY requirements.txt .
-RUN pip install --no-cache-dir -r requirements.txt
+ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1 PIP_NO_CACHE_DIR=1
+RUN python -m pip install --no-cache-dir -r requirements.txt
@@
 # Copy test files
 COPY . .

-# Default command (can be overridden)
-CMD ["pytest", "-v"]
+# Create non-root user and adjust permissions
+RUN useradd -r -u 10001 -g users -d /app tester && chown -R tester:users /app
+USER 10001
+
+# Default command (can be overridden)
+CMD ["pytest", "-q"]
```

Please confirm tests don't require privileged sockets or ports.
src/socks-proxy/Dockerfile (3)
7-11: Trim builder dependency installs and clean caches.

```diff
-RUN <<EOR
-apt-get update
-apt-get install -y build-essential curl
-EOR
+RUN <<'EOR'
+set -eu
+apt-get update
+apt-get install -y --no-install-recommends build-essential curl ca-certificates
+rm -rf /var/lib/apt/lists/*
+EOR
```
12-21: Strengthen supply-chain verification (pipefail) and fail fast.

```diff
-RUN <<EOR
-set -eu
+RUN <<'EOR'
+set -euo pipefail
 curl -fsSL -o dante.tar.gz $DANTE_URL
 echo "$DANTE_SHA256  dante.tar.gz" | sha256sum -c -
 tar -xzf dante.tar.gz
 cd dante-$DANTE_VER
 ./configure
 make
 make install
 EOR
```
23-30: Drop root in runtime; add a basic healthcheck.

```diff
 FROM debian:trixie-slim AS runtime

 COPY --from=base /usr/local/sbin/sockd /usr/local/sbin/sockd
 COPY --chmod=0755 entrypoint.sh /entrypoint.sh
+RUN useradd -r -u 65532 -g nogroup sockd
+USER 65532:65532
+HEALTHCHECK --interval=30s --timeout=3s --retries=3 CMD nc -z 127.0.0.1 1080 || exit 1
+
 ENTRYPOINT ["/entrypoint.sh"]
 CMD ["/usr/local/sbin/sockd"]
```

Verify the entrypoint writes config to a path writable by non-root (or pass a writable `-f` path).
src/backend/core/mda/tasks.py (1)
65-104: Prevent overlapping self-check runs (singleton lock).

```diff
 @celery_app.task(bind=True)
 def selfcheck_task(self):
@@
-    try:
+    from django.core.cache import cache
+
+    lock_key = "core:selfcheck:lock"
+    lock_ttl = 300  # seconds
+    if not cache.add(lock_key, "1", timeout=lock_ttl):
+        logger.info("selfcheck_task skipped: another run is in progress")
+        return {"success": True, "skipped": True, "reason": "already_running"}
+    try:
         result = run_selfcheck()
@@
-        return result
+        return result
     except Exception as e:  # pylint: disable=broad-exception-caught
@@
-        raise
+        raise
+    finally:
+        cache.delete(lock_key)
```

Optional: add `rate_limit="1/m"` or `soft_time_limit=300` to the task decorator if appropriate.

README.md (2)
142-144: Clarify the SOCKS5 URI and warn that the credentials are dev-only.

Consider amending the row to show a ready-to-copy URI:

```diff
-| **SOCKS Proxy** | 8916 | SOCKS5 proxy | `user1` / `pwd1` |
+| **SOCKS Proxy** | 8916 | SOCKS5 proxy (example: `socks5h://user1:pwd1@localhost:8916`) | `user1` / `pwd1` |
```

Add a note "Do not reuse these credentials in production."
186-191: Nice additions; consider adding a SOCKS usage example for completeness.

Example (if your tools respect `ALL_PROXY`):

```shell
export ALL_PROXY="socks5h://user1:pwd1@localhost:8916"
swaks [email protected] --server localhost:8917
unset ALL_PROXY
```
.github/workflows/messages.yml (1)
123-133: Gate the docker-publish-socks-proxy job to pushes on main.

The `target` input is optional (default: empty string), so omitting it is fine. Add this under the `docker-publish-socks-proxy` job in .github/workflows/messages.yml to prevent it from running on pull-request events:

```yaml
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
```
src/socks-proxy/tests/test_smtp.py (3)
1-7: Remove unused imports flagged by Ruff.

`pytest`, `socket`, and `smtplib` are unused in this module. Keep logging, time, and MIME.

```diff
-import pytest
 import logging
-import socket
 import time
-import smtplib
 from email.mime.text import MIMEText
```
28-39: Avoid a brittle KeyError on `connection_info` and reduce sleep flakiness.

- Accessing `connection_info["peer_host"]` can raise KeyError if the server didn't attach it. Assert presence first.
- Replace the fixed sleep with a short poll loop to reduce flakes.

```diff
-    # Wait for message and get connection info
-    time.sleep(1)
+    # Wait for message and get connection info (short poll to reduce flakes)
+    for _ in range(20):
+        time.sleep(0.05)
+        messages = mock_smtp_server.get_messages()
+        if messages:
+            break

-    messages = mock_smtp_server.get_messages()
     assert len(messages) == 1, "Message should be received"
     assert messages[0]["subject"] == "Test Direct Connection"

     # Log connection info for debugging
     connection_info = messages[0].get("connection_info", {})
     logger.info(f"Direct SMTP connection info: {connection_info}")
-    assert connection_info["peer_host"] == "127.0.0.1"
+    assert "peer_host" in connection_info, "Mock server should expose peer_host"
+    assert connection_info["peer_host"] == "127.0.0.1"
```
63-68: Mirror the defensive check in the proxy test.

Ensure key existence and provide a clearer assertion.

```diff
-    assert connection_info["peer_host"] != "127.0.0.1", "Proxy SMTP connection should not be direct"
+    assert "peer_host" in connection_info, "Mock server should expose peer_host"
+    assert connection_info["peer_host"] != "127.0.0.1", "Proxy SMTP connection should not be direct"
```

src/backend/core/tests/mda/test_outbound_e2e.py (1)
80-101: Stabilize DNS mocking for the direct path; avoid reliance on runtime DNS.

Using `socket.gethostbyname("mailcatcher")` introduces an external dependency. Consider mocking it or returning an A-record-like object from the resolver to decouple from network/Docker DNS.
You can either:
- Patch gethostbyname:
with patch("socket.gethostbyname", return_value="127.0.0.1"): ...
- Or return a minimal object with .to_text()/address for the A query if the resolver consumer expects it.
src/backend/core/api/viewsets/draft.py (1)
214-218: Use the serializer context in POST, for parity with PUT.

The POST response omits `context={"request": request}`, which can break hyperlinks and fields relying on the request.

```diff
-    return Response(
-        serializers.MessageSerializer(message).data, status=status.HTTP_201_CREATED
-    )
+    return Response(
+        serializers.MessageSerializer(message, context={"request": request}).data,
+        status=status.HTTP_201_CREATED,
+    )
```
6-6
: Remove unused import.
.conftest.PROXY1_CONFIG
is imported but never used.-from .conftest import SOCKSClient, PROXY1_CONFIG +from .conftest import SOCKSClient
94-98: Bound the DNS error test with a short timeout.

Prevent slow CI by setting a low timeout.

```diff
-    assert not socks_client.test_connection("invalid.ip.address", 80)
+    assert not socks_client.test_connection("invalid.ip.address", 80, timeout=1)
```

src/backend/core/tests/mda/test_outbound.py (3)
19-33: Fixture name/docstring mismatch (`is_draft=False`).

Either set `is_draft=True` or rename the fixture to reflect that it's not a draft, to avoid confusion in future maintenance.

```diff
-    """Create a valid (not actually draft) message with sender and recipients."""
+    """Create a valid message with sender and recipients (not a draft)."""
```
157-181: Mock the `dns.resolver.resolve` return shape more realistically.

Returning plain strings for A records may diverge from dnspython objects. Prefer mocks with `.to_text()` to reduce test brittleness across resolver implementations.

```diff
-    "mx1.example.com A": [
-        "1.1.0.9",
-    ],
+    "mx1.example.com A": [
+        MagicMock(to_text=MagicMock(return_value="1.1.0.9")),
+    ],
-    "mx2.example.com A": [
-        "1.2.0.9",
-    ],
+    "mx2.example.com A": [
+        MagicMock(to_text=MagicMock(return_value="1.2.0.9")),
+    ],
-    "mx1.example2.com A": [
-        "2.1.0.9",
-    ],
+    "mx1.example2.com A": [
+        MagicMock(to_text=MagicMock(return_value="2.1.0.9")),
+    ],
-    "mx2.example2.com A": [
-        "2.2.0.9",
-    ],
+    "mx2.example2.com A": [
+        MagicMock(to_text=MagicMock(return_value="2.2.0.9")),
+    ],
```
193-206: Overly strict assertion on `sender_hostname`.

Hard-coding `sender_hostname="smtp.proxy"` encodes internal behavior and may break with legitimate changes (e.g., the EHLO name). Consider asserting presence/type or omitting it from strict equality.
Also applies to: 207-218
src/backend/core/management/commands/selfcheck.py (2)
66-71: Minor wording fix.

"Receptions duration" → "Reception duration".

```diff
-            "Receptions duration of the self check",
+            "Reception duration of the self check",
```
90-96: Harden the metrics push (don't fail the command on push errors).

Wrap `push_to_gateway` with try/except so a failing metrics backend doesn't impact selfcheck results.

```diff
 def send_metrics(self):
-    return push_to_gateway(
-        settings.MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PUSHGATEWAY_URL,
-        job="selfcheck",
-        registry=self.registry,
-    )
+    try:
+        return push_to_gateway(
+            settings.MESSAGES_SELFCHECK_PROMETHEUS_METRICS_PUSHGATEWAY_URL,
+            job="selfcheck",
+            registry=self.registry,
+        )
+    except Exception as exc:  # best-effort
+        # Use BaseCommand style only where we have access; here just print.
+        print(f"Warning: failed to push metrics: {exc}")
+        return None
```

Also applies to: 141-148
src/backend/core/mda/selfcheck.py (2)
241-244: Validate both FROM and TO before proceeding.

Fail fast with a clear message if either is missing.

```diff
-    # Don't do anything if FROM is empty
-    if not from_email:
-        logger.info("MESSAGES_SELFCHECK_FROM is empty, skipping selfcheck")
+    # Don't do anything if FROM/TO are empty
+    if not from_email or not to_email:
+        which = "FROM" if not from_email else "TO"
+        logger.info("MESSAGES_SELFCHECK_%s is empty, skipping selfcheck", which)
         return result
```
137-175: Consider exponential backoff to reduce DB load.

Polling every 100 ms for up to 60 s issues up to 600 queries. Backoff (e.g., 0.1 s, 0.2 s, … up to 2 s) would be gentler without losing responsiveness.
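A capped exponential-backoff polling helper could look like the sketch below; the function name, the defaults, and the `predicate` shape are illustrative, not the PR's API:

```python
import time


def wait_until(predicate, timeout=60.0, initial_delay=0.1, max_delay=2.0):
    """Poll predicate() with exponentially growing sleeps until it returns
    True or the timeout elapses. Returns the final predicate result."""
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        if predicate():
            return True
        # Never sleep past the deadline.
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay = min(delay * 2, max_delay)
    return predicate()
```

In the selfcheck loop, `predicate` would be a closure that re-reads the message's delivery status from the database.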
src/backend/messages/settings.py (1)
162-166: Document the accepted values for `MTA_OUT_MODE`.

Tests use "relay"; comments elsewhere mention "external". Clarify and keep it consistent.

```diff
-    # "direct" or "relay"
+    # Accepted values: "direct" (deliver to MX) or "relay" (use MTA_OUT_SMTP_HOST)
```

src/socks-proxy/tests/conftest.py (3)
1-14: Trim unused imports and add `contextlib`.

Clean up to satisfy Ruff and reduce noise.

```diff
-import pytest
-import smtplib
-import time
-import logging
-import os
-import socket
-import subprocess
-import ssl
-import struct
-import socks
+import pytest
+import smtplib
+import logging
+import os
+import subprocess
+import socks
+import contextlib
```
58-66: Avoid a bare `except` in `get_container_ip`.

Catch `Exception` and log; keep a safe fallback.

```diff
-    except:
-        return "127.0.0.1"  # fallback
+    except Exception:
+        logger.warning("Could not determine container IP; falling back to 127.0.0.1")
+        return "127.0.0.1"
```
214-219: Use `contextlib.suppress` for cleanup and avoid a bare `except`.

Cleaner teardown without masking unexpected errors elsewhere.

```diff
-    try:
-        client.quit()
-    except:
-        pass
+    with contextlib.suppress(Exception):
+        client.quit()
```

Also applies to: 259-262
```
MTA_OUT_MODE=relay
MTA_OUT_SMTP_HOST=mailcatcher:1025
```
💡 Verification agent
🧩 Analysis chain
Dev config OK; add guardrails in code for “host:port” parsing and verify no stale env keys.
Value includes a port (good). Ensure code validates the format before splitting; also confirm there are no remaining `MTA_OUT_HOST` users.
🏁 Script executed:
#!/bin/bash
# Find any stale references or unsafe splits
rg -nP '\bMTA_OUT_HOST\b' -g '!**/site-packages/**'
rg -nP '\bMTA_OUT_SMTP_HOST\b' -C3 -g '!**/site-packages/**'
rg -nP 'split\(\s*[\"\']:\s*[\"\']\s*\)' -C2 --type=py
Length of output: 5698
Validate and parse `MTA_OUT_SMTP_HOST` before splitting

No stale `MTA_OUT_HOST` references found. Tests (e.g. `src/mta-out/tests/test_smtp_auth.py`, `src/mta-out/tests/conftest.py`) currently call `.split(":")` on `MTA_OUT_SMTP_HOST` without safeguards; introduce a dedicated parser/validator in backend/messages/settings.py (or a shared helper) to enforce the `host:port` format and extract host/port with clear errors.
🤖 Prompt for AI Agents
In env.d/development/backend.defaults around lines 70-71, MTA_OUT_SMTP_HOST is
set to "mailcatcher:1025" but calling .split(":") directly elsewhere is unsafe;
add a parser/validator function in backend/messages/settings.py (or a shared
utils module) that validates the value matches host:port, returns {host,
port:int}, and raises a clear, test-friendly error if missing/invalid or port
not numeric. Replace direct .split(":") usages in mta-out tests and code with
this parser, and update tests to assert the parser's error messages for
malformed inputs.
```
# Validate required fields
sender_id = request.data.get("senderId")
self.mailbox = models.Mailbox.objects.get(id=sender_id)
subject = request.data.get("subject")

sender_mailbox = self.mailbox

# Then get the parent message if it's a reply
parent_id = request.data.get("parentId")
reply_to_message = None
if parent_id:
try:
# Reply to an existing message in a thread
reply_to_message = models.Message.objects.select_related("thread").get(
id=parent_id
)
# Ensure user has access to parent thread (already checked by permission, but safe)
if not models.ThreadAccess.objects.filter(
thread=reply_to_message.thread,
mailbox=sender_mailbox,
role=models.ThreadAccessRoleChoices.EDITOR,
).exists():
raise drf.exceptions.PermissionDenied(
"Access denied to the thread you are replying to."
)
thread = reply_to_message.thread
if not sender_id:
raise drf.exceptions.ValidationError("senderId is required")

except models.Message.DoesNotExist as exc:
raise drf.exceptions.NotFound("Parent message not found.") from exc
else:
# Create a new thread for the new draft
thread = models.Thread.objects.create(
subject=subject,
)
# Grant access to the creator via the sending mailbox context
# permission to create a draft message if already check with permission class
models.ThreadAccess.objects.create(
thread=thread,
mailbox=sender_mailbox,
role=enums.ThreadAccessRoleChoices.EDITOR,
)
subject = request.data.get("subject")

# --- Get Sender Contact --- #
# Find the contact associated with the sending mailbox
# Construct the email address from mailbox parts
mailbox_email = f"{self.mailbox.local_part}@{self.mailbox.domain.name}"
sender_contact, _ = models.Contact.objects.get_or_create(
email__iexact=mailbox_email,
mailbox=self.mailbox,  # Ensure contact is linked to this mailbox
defaults={  # Provide defaults for creation
"email": mailbox_email,
"name": self.mailbox.local_part,  # Basic default name
},
)
# Get mailbox (permission class validates access)
try:
sender_mailbox = models.Mailbox.objects.get(id=sender_id)
except models.Mailbox.DoesNotExist as exc:
raise drf.exceptions.NotFound(
f"Mailbox with senderId {sender_id} not found."
) from exc
```
Validate required subject in POST (avoid silent None).
Only senderId is validated; subject is required by the schema but not enforced here. Add a check and a clear error.
```diff
     # Validate required fields
     sender_id = request.data.get("senderId")
     if not sender_id:
         raise drf.exceptions.ValidationError("senderId is required")
-    subject = request.data.get("subject")
+    subject = request.data.get("subject")
+    if subject is None or (isinstance(subject, str) and subject.strip() == ""):
+        raise drf.exceptions.ValidationError("subject is required")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Validate required fields
sender_id = request.data.get("senderId")
if not sender_id:
    raise drf.exceptions.ValidationError("senderId is required")
subject = request.data.get("subject")
if subject is None or (isinstance(subject, str) and subject.strip() == ""):
    raise drf.exceptions.ValidationError("subject is required")

# Get mailbox (permission class validates access)
try:
    sender_mailbox = models.Mailbox.objects.get(id=sender_id)
except models.Mailbox.DoesNotExist as exc:
    raise drf.exceptions.NotFound(
        f"Mailbox with senderId {sender_id} not found."
    ) from exc
```
🤖 Prompt for AI Agents
In src/backend/core/api/viewsets/draft.py around lines 187 to 201, the code
validates senderId but does not enforce that subject is provided; add a
required-field check for subject after reading request.data.get("subject") and
raise drf.exceptions.ValidationError("subject is required") (or similar clear
message) when it's falsy, keeping the rest of the mailbox lookup logic
unchanged.
```python
def resolve_hostname_ip(hostname: str) -> Optional[str]:
    """
    Resolve a hostname to its first A record IP address, with a direct DNS query
    """
    try:
        answers = dns.resolver.resolve(hostname, "A")
        for r in answers:
            return str(r)
    except Exception as e:  # pylint: disable=broad-exception-caught
        logger.error("Error resolving IP for %s: %s", hostname, e)
    return None
```
🛠️ Refactor suggestion
Resolve both A and AAAA; reduce noisy logging.
Support IPv6-only MX targets and only error after both queries fail.
def resolve_hostname_ip(hostname: str) -> Optional[str]:
- """
- Resolve a hostname to its first A record IP address, with a direct DNS query
- """
- try:
- answers = dns.resolver.resolve(hostname, "A")
- for r in answers:
- return str(r)
- except Exception as e: # pylint: disable=broad-exception-caught
- logger.error("Error resolving IP for %s: %s", hostname, e)
- return None
+ """
+ Resolve a hostname to its first A or AAAA record IP. If hostname is an IP literal, return as-is.
+ """
+ try:
+ ipaddress.ip_address(hostname)
+ return hostname
+ except ValueError:
+ pass
+ for rtype in ("A", "AAAA"):
+ try:
+ answers = dns.resolver.resolve(hostname, rtype)
+ for r in answers:
+ return str(r)
+ except Exception as e: # pylint: disable=broad-exception-caught
+ logger.debug("No %s record for %s: %s", rtype, hostname, e)
+ logger.error("Error resolving IP for %s: no A/AAAA record", hostname)
+ return None
```python
def resolve_hostname_ip(hostname: str) -> Optional[str]:
    """
    Resolve a hostname to its first A or AAAA record IP. If hostname is an IP literal, return as-is.
    """
    try:
        # If the input is already an IP literal, use it directly
        ipaddress.ip_address(hostname)
        return hostname
    except ValueError:
        pass
    for rtype in ("A", "AAAA"):
        try:
            answers = dns.resolver.resolve(hostname, rtype)
            for r in answers:
                return str(r)
        except Exception as e:  # pylint: disable=broad-exception-caught
            logger.debug("No %s record for %s: %s", rtype, hostname, e)
    logger.error("Error resolving IP for %s: no A/AAAA record", hostname)
    return None
```
🤖 Prompt for AI Agents
In src/backend/core/mda/outbound_mta.py around lines 53 to 63, the resolver
currently only queries A records and logs an error on any exception; update it
to attempt an A query first and if that yields no addresses then attempt an AAAA
query, return the first successful result (preferring A over AAAA), and only log
an error once both queries fail. Catch and suppress non-fatal resolver
exceptions per-query (do not log each), but when both queries produced no
addresses or raised fatal errors, log a single concise message including the
hostname and combined error info or that no addresses were found; ensure the
function still returns None on failure.
```python
def group_recipients_by_mx(recipients: List[str]) -> Dict[str, List[str]]:
    """
    Group recipient emails by their MX hostname (not IP).
    Returns a dict: {mx_hostname: [recipient_emails]}
    """
    mx_map = defaultdict(list)
    for email in recipients:
        domain = email.split("@")[-1]
        mx_records = resolve_mx_records(domain)
        if mx_records:
            # Use the lowest-priority MX
            # TODO: In case of failure, we should try the next MX
            _, mx_hostname = mx_records[0]
            mx_map[mx_hostname].append(email)
        else:
            logger.error("No MX or A record found for domain %s", domain)
    return mx_map
```
Implement MX failover instead of always picking the first record.
Only lowest-preference MX is used; add retry-on-next-MX behavior to improve resiliency per RFC 5321.
I can provide a focused diff to refactor grouping to domains and iterate MX by preference if you confirm desired retry/backoff policy (attempts per MX, per-recipient).
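A minimal failover sketch under the same idea: try each MX host in preference order and stop at the first successful delivery. `send_via_mx` here is a hypothetical stand-in for the project's actual SMTP send helper, not its real API.

```python
import smtplib
from typing import Callable, List, Tuple


def deliver_with_mx_failover(
    mx_records: List[Tuple[int, str]],
    send_via_mx: Callable[[str], None],
) -> Tuple[bool, List[str]]:
    """Try each MX host in ascending preference order.

    Returns (delivered, errors), where errors collects one message per
    failed MX host so the caller can log or schedule a retry.
    """
    errors: List[str] = []
    for _pref, host in sorted(mx_records):
        try:
            send_via_mx(host)  # stand-in for the real SMTP send
            return True, errors
        except (smtplib.SMTPException, OSError) as exc:
            errors.append(f"{host}: {exc}")
    return False, errors
```

Whether a failed attempt should count against a per-recipient retry budget, and how long to back off between MX hosts, is exactly the policy question above.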
```python
@celery_app.task(bind=True)
def selfcheck_task(self):
    """Run a selfcheck of the mail delivery system.

    This task performs an end-to-end test of the mail delivery pipeline:
    1. Creates test mailboxes if they don't exist
    2. Creates a test message with a secret
    3. Sends the message via the outbound system
    4. Waits for the message to be received
    5. Verifies the integrity of the received message
    6. Cleans up test data
    7. Returns timing metrics

    Returns:
        dict: A dictionary with success status, timings, and metrics
    """
    try:
        result = run_selfcheck()

        # Update task state with progress information
        self.update_state(
            state="SUCCESS",
            meta={
                "status": "completed",
                "success": result["success"],
                "send_time": result["send_time"],
                "reception_time": result["reception_time"],
            },
        )

        return result
    # pylint: disable=broad-exception-caught
```
🛠️ Refactor suggestion

**Ensure Celery task meta is JSON-serializable (datetimes → ISO 8601).**

If `run_selfcheck()` returns datetimes, result backends that serialize with JSON will choke. Convert them with `isoformat()`:

```diff
 @celery_app.task(bind=True)
 def selfcheck_task(self):
@@
-        result = run_selfcheck()
+        result = run_selfcheck()
+        send_time = result.get("send_time")
+        recv_time = result.get("reception_time")
+        if hasattr(send_time, "isoformat"):
+            send_time = send_time.isoformat()
+        if hasattr(recv_time, "isoformat"):
+            recv_time = recv_time.isoformat()
@@
         self.update_state(
             state="SUCCESS",
             meta={
                 "status": "completed",
-                "success": result["success"],
-                "send_time": result["send_time"],
-                "reception_time": result["reception_time"],
+                "success": result["success"],
+                "send_time": send_time,
+                "reception_time": recv_time,
             },
         )
```
🤖 Prompt for AI Agents
In src/backend/core/mda/tasks.py around lines 65 to 96, the Celery task is
putting datetime objects into task meta and returning them directly which can
break JSON-only backends; before calling self.update_state and before returning
result, walk the result dict (at least keys like send_time and reception_time
and any other datetime values) and replace datetime objects with ISO 8601
strings via .isoformat(); ensure nested dicts/lists are handled or explicitly
convert known datetime fields so the meta and return value are
JSON-serializable.
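A recursive converter along the lines the prompt describes could look like the following sketch. It walks nested dicts and lists, which goes beyond what the task strictly needs today (only `send_time` and `reception_time` are known datetime fields); treat it as one possible generalization, not the project's implementation.

```python
from datetime import date, datetime
from typing import Any


def to_jsonable(value: Any) -> Any:
    """Recursively convert datetimes/dates to ISO 8601 strings.

    Leaves other values untouched so the result can safely pass through
    a JSON-only Celery result backend.
    """
    if isinstance(value, (datetime, date)):
        return value.isoformat()
    if isinstance(value, dict):
        return {k: to_jsonable(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_jsonable(v) for v in value]
    return value
```

The task would then call `self.update_state(..., meta=to_jsonable(meta))` and `return to_jsonable(result)`.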
```sh
if [ -z "${PROXY_USERS}" ]; then
    echo "Error: PROXY_USERS env var is not set (format: user1:pass1,user2:pass2)"
    exit 1
```
**`set -u` causes an unbound-variable error on the PROXY_USERS check.**

Use default expansion so an unset variable doesn't abort the script before the `-z` test runs.

```diff
-if [ -z "${PROXY_USERS}" ]; then
+if [ -z "${PROXY_USERS:-}" ]; then
     echo "Error: PROXY_USERS env var is not set (format: user1:pass1,user2:pass2)"
     exit 1
 fi
```
🤖 Prompt for AI Agents
In src/socks-proxy/entrypoint.sh around lines 4 to 6, the script's set -u will
cause an immediate exit when referencing PROXY_USERS in the -z test if the
variable is unset; change the test to use default expansion (e.g. use
"${PROXY_USERS:-}" instead of "${PROXY_USERS}") so the check safely treats an
unset variable as empty and does not trigger an unbound variable error, keeping
the rest of the echo and exit behavior the same.
```sh
# Replace the placeholder with the generated configuration
echo "$DANTE_CONFIG" > /etc/sockd.conf

exec "$@"
```
🛠️ Refactor suggestion

**Tighten permissions on the generated config (avoid leaking credentials).**

```diff
-echo "$DANTE_CONFIG" > /etc/sockd.conf
+umask 077
+printf "%s" "$DANTE_CONFIG" > /etc/sockd.conf
+chmod 600 /etc/sockd.conf
```
🤖 Prompt for AI Agents
In src/socks-proxy/entrypoint.sh around lines 65 to 68, the generated
/etc/sockd.conf is written with default permissions which can leak credentials;
write the file with restrictive permissions and ownership (e.g. set umask 077
before writing OR write to a temp file, chmod 600 and chown root:root, then
atomically move it into /etc/sockd.conf) and ensure the final file is not
world-readable or writable.
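The temp-file variant from the prompt could be sketched as below. `SOCKD_CONF` is a hypothetical override added here only so the sketch can run unprivileged; the real entrypoint writes `/etc/sockd.conf` directly.

```shell
# Sketch of a hardened config write: restrictive umask, temp file with
# mode 600, then an atomic rename so credentials are never exposed,
# even transiently.
DANTE_CONFIG="${DANTE_CONFIG:-user.privileged: root}"  # placeholder content
conf_path="${SOCKD_CONF:-./sockd.conf}"                # real target: /etc/sockd.conf

umask 077
tmp="$(mktemp "${conf_path}.XXXXXX")"
printf "%s" "$DANTE_CONFIG" > "$tmp"
chmod 600 "$tmp"
mv "$tmp" "$conf_path"
```

The `mv` on the same filesystem is atomic, so readers never observe a partially written or world-readable file.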
```python
def parse_proxy_env(proxy_env):
    """Parse SOCKS_PROXY1 or SOCKS_PROXY2 environment variable
    Format: username:password@host:port
    """
    if not proxy_env:
        return ProxyConfig()

    try:
        # Add scheme to make it a valid URL for urlparse
        if not proxy_env.startswith(('http://', 'https://', 'socks://')):
            proxy_env = f"socks://{proxy_env}"

        parsed = urlparse(proxy_env)

        return ProxyConfig(
            username=parsed.username,
            password=parsed.password,
            host=parsed.hostname or "localhost",
            port=parsed.port or 1080
        )
    except Exception:
        return ProxyConfig()
```
🛠️ Refactor suggestion

**Support socks5/socks4 schemes and already-schemed env values.**

The current prefixing misses `socks5://` and can double-prefix valid URLs.

```diff
-    """Parse SOCKS_PROXY1 or SOCKS_PROXY2 environment variable
-    Format: username:password@host:port
+    """Parse SOCKS_PROXY1 or SOCKS_PROXY2 environment variable.
+    Formats supported:
+    - username:password@host:port
+    - socks5://username:password@host:port
+    - socks4://username:password@host:port
     """
@@
-        # Add scheme to make it a valid URL for urlparse
-        if not proxy_env.startswith(('http://', 'https://', 'socks://')):
-            proxy_env = f"socks://{proxy_env}"
+        # Ensure a scheme so urlparse can extract components
+        if "://" not in proxy_env:
+            proxy_env = f"socks5://{proxy_env}"
@@
-        return ProxyConfig(
+        return ProxyConfig(
             username=parsed.username,
             password=parsed.password,
             host=parsed.hostname or "localhost",
             port=parsed.port or 1080
         )
```
🤖 Prompt for AI Agents
In src/socks-proxy/tests/conftest.py around lines 30 to 52, the env parsing
currently only checks for http://, https:// and socks:// and will miss socks5://
and socks4:// and can double-prefix values that already include a scheme; update
the prefix check to detect common SOCKS schemes (e.g., socks5://, socks4://,
socks://) or simply check for the presence of "://" (case-insensitive) before
prefixing with "socks://", so existing schemed values are left intact and
socks4/5 schemes are recognized by urlparse.
```python
@pytest.mark.parametrize("proxy_fixture", ["socks_client", "socks_client_proxy2"])
def test_socks_authentication_success(request, proxy_fixture):
    """Test successful SOCKS authentication with correct credentials on both proxies"""
    socks_client = request.getfixturevalue(proxy_fixture)
    result = socks_client.test_connection("8.8.8.8", 53)
    assert result, f"SOCKS connection should succeed with {proxy_fixture}"
```
🛠️ Refactor suggestion

**Avoid external egress in tests; use the local mailcatcher.**

Point tests at the mailcatcher service to keep runs hermetic and deterministic.

```diff
 @pytest.mark.parametrize("proxy_fixture", ["socks_client", "socks_client_proxy2"])
 def test_socks_authentication_success(request, proxy_fixture):
@@
-    result = socks_client.test_connection("8.8.8.8", 53)
+    result = socks_client.test_connection("mailcatcher", 1025)
@@
-    result = client.test_connection("8.8.8.8", 53)
+    result = client.test_connection("mailcatcher", 1025)
@@
-    result = client.test_connection("8.8.8.8", 53)
+    result = client.test_connection("mailcatcher", 1025)
@@
-    result = client.test_connection("8.8.8.8", 53)
+    result = client.test_connection("mailcatcher", 1025)
```

Note: socks-proxy-test doesn't `depends_on` mailcatcher; ensure mailcatcher is up when running this profile.
Also applies to: 18-30, 32-44, 46-56
🤖 Prompt for AI Agents
In src/socks-proxy/tests/test_socks_proxy.py around lines 10-16 (and also update
the other ranges 18-30, 32-44, 46-56), the tests currently perform external
egress to 8.8.8.8:53; change the target to the local mailcatcher service (use
hostname "mailcatcher" and SMTP port 1025 or 127.0.0.1:1025 if tests run
locally) in all test_connection calls so tests are hermetic, and ensure the
socks-proxy-test profile brings up the mailcatcher service (or document that
mailcatcher must be up) before running these tests.
```python
def test_socks_proxy_connection_establishment(socks_client):
    """Test that SOCKS proxy connection can be established"""
    try:
        socks_client.test_connection("8.8.8.8", 53)
        assert True, "SOCKS connection should be established successfully"
    except Exception as e:
        pytest.fail(f"Failed to establish SOCKS connection: {e}")
```
**Make the test assert the connection result.**

As written, the test passes even if `test_connection()` returns False (no exception). Assert the boolean.

```diff
 def test_socks_proxy_connection_establishment(socks_client):
     """Test that SOCKS proxy connection can be established"""
-    try:
-        socks_client.test_connection("8.8.8.8", 53)
-        assert True, "SOCKS connection should be established successfully"
+    try:
+        ok = socks_client.test_connection("mailcatcher", 1025)
+        assert ok, "SOCKS connection should be established successfully"
     except Exception as e:
         pytest.fail(f"Failed to establish SOCKS connection: {e}")
```
🤖 Prompt for AI Agents
In src/socks-proxy/tests/test_socks_proxy.py around lines 73 to 80, the test
currently calls socks_client.test_connection(...) but doesn't assert its boolean
return value, so a False result will still let the test pass; change it to store
the return (e.g. result = socks_client.test_connection("8.8.8.8", 53)) and
assert that result is True with a clear failure message (keep the existing
exception catch to fail on unexpected exceptions).
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
♻️ Duplicate comments (1)

**Makefile (1)**

Lines 307-311: Make lockfile removal idempotent; the previous .PHONY mismatch is now fixed. Use `rm -f` so the target works even if poetry.lock is absent. Good job adding the correct `.PHONY` to address the earlier mismatch.

```diff
 back-poetry-update-indirect: ## update indirect dependencies
-	rm src/backend/poetry.lock
+	rm -f src/backend/poetry.lock
 	make back-poetry-lock
 .PHONY: back-poetry-update-indirect
```
🧹 Nitpick comments (3)

**Makefile (3)**

Lines 76-78: Add env files for mta-out and socks-proxy; consider keeping the dependency list alphabetized to make diffs smaller and future edits clearer. Apply:

```diff
 create-env-files: ## Create empty .local env files for local development
 create-env-files: \
-	env.d/development/crowdin.local \
-	env.d/development/postgresql.local \
-	env.d/development/keycloak.local \
-	env.d/development/backend.local \
-	env.d/development/frontend.local \
-	env.d/development/mta-in.local \
-	env.d/development/mta-out.local \
-	env.d/development/socks-proxy.local
+	env.d/development/backend.local \
+	env.d/development/crowdin.local \
+	env.d/development/frontend.local \
+	env.d/development/keycloak.local \
+	env.d/development/mta-in.local \
+	env.d/development/mta-out.local \
+	env.d/development/postgresql.local \
+	env.d/development/socks-proxy.local
```

Lines 257-260: Add socks-proxy-test to the aggregate test target. This makes `make test` comprehensive and CI-friendly. Proposed change (outside this hunk):

```diff
 test: ## run all tests
 test: \
 	back-test \
 	front-test \
 	mta-in-test \
-	mta-out-test
+	mta-out-test \
+	socks-proxy-test
 .PHONY: test
```

Also verify the compose service name is exactly "socks-proxy-test" to match this rule.

Lines 319-322: Run pipdeptree in the Poetry environment for accuracy. Using the poetry image avoids divergence between pip and poetry envs.

```diff
-back-poetry-tree: ## show dependencies as a tree
-	@$(COMPOSE) run --rm --build backend-dev pipdeptree
+back-poetry-tree: ## show dependencies as a tree
+	@$(COMPOSE) run --rm --build backend-poetry poetry run pipdeptree
 .PHONY: back-poetry-tree
```

Alternatively, ensure pipdeptree is installed in backend-dev if you prefer keeping it there.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- Makefile (4 hunks)
Fixes #195
Summary by CodeRabbit
New Features
Configuration
Documentation
Tests
Chores