Conversation

MarceloRobert
Collaborator

Description

Because the integration tests and the linting, formatting, and unit tests ran in a single job, we couldn't tell which of them had failed when only one did. This was especially a problem when we expected the integration tests to fail on PRs from forks, and then didn't notice that linting and formatting had also failed.

Changes

  • Adds a pytest marker to the tests so we can tell which ones are unit tests (we can then run the tests that are simply "not unit", i.e. the integration tests)
  • Adds a composite action to set up the backend
  • Splits the build-django job into lint-and-unit-tests-django and integration-tests-django, both using the composite action
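Given the job names above, the split might look roughly like this in .github/workflows/ci.yaml. This is a sketch only; the step details and the unit-test invocation are assumptions, not the exact diff from this PR:

```yaml
jobs:
  lint-and-unit-tests-django:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/setup-backend  # composite action from this PR
      - name: Run lint and unit tests
        run: poetry run pytest -m unit  # assumed invocation

  integration-tests-django:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/setup-backend
      - name: Run integration tests
        run: poetry run pytest -m "not unit" --run-all
```

Because both jobs reuse the composite setup action, a failure in one surfaces independently in the checks list instead of being masked by the other.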

How to test

Check the CI on this PR and see that it passes.

Closes #1402

- name: Run tests
  run: poetry run pytest --run-all
- name: Run integration tests
  run: poetry run pytest -m "not unit" --run-all
Collaborator

Wouldn't it be better if we created a tag for integrations as well?

Collaborator

Tag every test to follow a pattern, I mean

Collaborator Author

Got it. I was able to make a "dynamic" marker: if the tests are in an "integration" or "unit" folder, they are automatically marked as such, and if they end up with no marker at all, an error is raised.
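The folder-based auto-marking described in this comment could be sketched as a conftest.py hook like the following. The function names and exact structure are assumptions, not the code from this PR:

```python
# Hypothetical sketch of folder-based auto-marking in conftest.py.
TEST_MARKERS = ["unit", "integration"]


def infer_markers(path_parts):
    """Return the marker names implied by the folders in a test's path."""
    return [marker for marker in TEST_MARKERS if marker in path_parts]


def pytest_collection_modifyitems(items):
    """Mark each collected test from its folder, failing if ambiguous or unmarked."""
    for item in items:
        found = infer_markers(item.path.parts)
        if len(found) > 1:
            raise Exception("Test %s matches more than one type" % item.name)
        if not found:
            raise Exception("Test %s must have a type" % item.name)
        item.add_marker(found[0])
```

With this in place, tests under a `unit/` folder can be selected with `pytest -m unit` and everything else with `pytest -m "not unit"`.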

lint-and-unit-test-django:
  if: github.event.pull_request.draft != true
  runs-on: ubuntu-latest
  timeout-minutes: 10
Collaborator

nit: 10 minutes for unit tests and lint seems excessive to me. What do you think?

Collaborator Author

Yeah, makes sense, I'll lower it to 3 minutes. Sounds good?

Collaborator

3/5 minutes sounds better. Nice!

@MarceloRobert MarceloRobert force-pushed the feat/split-ci-tests branch 2 times, most recently from d374535 to 3271e56 Compare August 19, 2025 15:11

@Copilot Copilot AI left a comment


Pull Request Overview

This PR splits the CI pipeline to separate lint/format/unit tests from integration tests, enabling better identification of which test category failed. This addresses the issue where integration test failures (especially from forks) would mask linting and formatting failures.

  • Adds pytest markers to distinguish unit tests from integration tests
  • Creates a composite GitHub action for backend setup to reduce duplication
  • Splits the single build-django job into two separate jobs with appropriate timeouts

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

backend/pyproject.toml: Adds pytest markers for unit and integration test categorization
backend/conftest.py: Implements automatic test marking based on folder structure
.github/workflows/ci.yaml: Splits the Django job into separate lint/unit and integration test jobs
.github/actions/setup-backend/action.yaml: Creates a reusable composite action for backend setup


for marker in TEST_MARKERS:
    if marker in parent_folder:
        if len(added_markers) >= 1:
            raise Exception(

Copilot AI Aug 19, 2025


Consider using a more specific exception type like ValueError or RuntimeError instead of the generic Exception class.

Suggested change:
-            raise Exception(
+            raise ValueError(


Collaborator Author

Not sure if ValueError would be the correct type of exception here

        added_markers.append(marker)

if not added_markers:
    raise Exception("Test %s must have a type" % item.name)

This comment was marked as duplicate.

@@ -1,4 +1,36 @@
from pytest import Item

TEST_MARKERS = ["unit", "integration"]
Collaborator

Could we get the markers from pyproject.toml? It could be a separate script that reads them.

Collaborator Author

Done. It makes sense to get them from there, since I actually had to add the markers to pyproject.toml anyway; otherwise I got warnings in the tests.


@LucasSantos27 LucasSantos27 left a comment


LGTM

@MarceloRobert MarceloRobert merged commit 763578e into main Aug 21, 2025
6 checks passed
@MarceloRobert MarceloRobert deleted the feat/split-ci-tests branch August 28, 2025 18:01
Successfully merging this pull request may close these issues.

Split GH CI integration and unit tests