Making testing requirements clearer #673
base: develop
Conversation
@jamesturner246 @Aurashk @miruuna this is the PR to clarify what tests should be performed for PR merges. It will update the PR template so that the testing requirements are clearer; this is something I would value feedback on. The tests that should be performed are:
Looking forward to feedback from you.
Thanks Pawel, that's very helpful.
There are no hardware restrictions, but this code has been written to run on AlmaLinux9. If I remember correctly, @jamesturner246 got CVMFS set up on his laptop, and should be able to run these tests locally.
This looks good and it'll help standardize the workflow. However, running ...
Thanks Miruna. Yes, it is expected that this test takes a very long time, but it should only have to be run when major changes are made to the core codebase, prior to merging a PR. I will make this clearer in the list of testing requirements.
In case it is useful, I will contribute a couple of notes regarding where we can run the ...
Thanks Kurt, that's very helpful; it's possible point 2 explains some test failures I was seeing locally last time I tried. Without these checks for computational resources, is it completely unpredictable what will happen in the tests, or is there a common point of failure? One other general thing that would be useful to know is what makes the tests run so long: is it a timed simulation of everything working meaningfully together, or is it doing a lot of computational work?
Another thing came to mind: what's the situation with this MSQT test in the CI, https://github.com/DUNE-DAQ/drunc/actions/workflows/run_mqst.yml? Judging by the Actions runs, it seems to have been abandoned some time earlier in the year. Is this something we want to get working again?
When a session is running on a host with insufficient resources, it will likely throw errors about missing/empty data products. The testing takes time because we run a variety of configurations with many runs: there are 9 tests, and some of them have multiple configurations. Supposing each test takes 3 minutes, that gives you the approximately half hour of running time.
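As an illustration of the kind of pre-flight resource check mentioned above, here is a minimal Python sketch; the thresholds and the use of psutil are assumptions for illustration, not drunc's actual checks:

```python
# Minimal sketch of a pre-flight host resource check (illustrative only;
# not drunc's real implementation). Thresholds are assumed values.
import shutil

import psutil  # third-party package, assumed available on the test host

MIN_FREE_MEMORY_GB = 16   # assumed minimum free memory for a full session
MIN_PHYSICAL_CORES = 8    # assumed minimum core count
MIN_FREE_DISK_GB = 50     # assumed space needed for raw data output


def host_has_enough_resources(output_dir: str = "/tmp") -> bool:
    """Return True if the host looks capable of running a full test session."""
    free_mem_gb = psutil.virtual_memory().available / 1e9
    cores = psutil.cpu_count(logical=False) or 0
    free_disk_gb = shutil.disk_usage(output_dir).free / 1e9
    return (
        free_mem_gb >= MIN_FREE_MEMORY_GB
        and cores >= MIN_PHYSICAL_CORES
        and free_disk_gb >= MIN_FREE_DISK_GB
    )


if __name__ == "__main__":
    if not host_has_enough_resources():
        raise SystemExit(
            "Host below assumed resource thresholds; a session here would "
            "likely produce missing/empty data products."
        )
```

Running a check like this before launching the integration tests would turn hard-to-diagnose missing-data errors into an explicit, early failure.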
This is currently held back by the development of the Subprocess process manager, in this PR.
In principle, the time each regression test takes to run is dominated by the time spent waiting in each FSM state (e.g. trigger-enabled): as much time as the writer of the test chose. Of course, if lots of failures happen and/or a process stalls or crashes, run control transitions can take longer than usual (e.g. some of the "stop-run" transitions), and those can add a noticeable amount of extra time.
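To make the point about wall-clock time concrete, here is a hypothetical skeleton of one test configuration; the session object and its methods are placeholders, not drunc's real API, and the wait duration is an assumed example value:

```python
# Hypothetical timing skeleton for one regression-test configuration.
# The `session` object and its transition methods are placeholders, not
# drunc's actual interface; the wait time is an assumed example.
import time

TRIGGER_ENABLED_WAIT_S = 120  # chosen by the test author; dominates run time


def run_one_configuration(session) -> None:
    session.boot()        # run-control transitions: typically seconds each,
    session.conf()        # but they can stretch if a process stalls or crashes
    session.start_run()
    time.sleep(TRIGGER_ENABLED_WAIT_S)  # deliberate wait in trigger-enabled
    session.stop_run()    # e.g. stop-run transitions may run long on failures
    session.shutdown()
```

With roughly nine tests, some with multiple configurations, and each taking a few minutes of mostly waiting, the half-hour estimate given above follows directly.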
Hi all. As discussed in the last meeting, I think a special cluster account just for testing PRs would be invaluable for this workflow, something we could perhaps hook into CI -- e.g. manually (or even automatically, though that may be too noisy) trigger the full integration test suite on the cluster once the PR is marked ready for review.
Description
Changes the structure of the PR template to prioritize testing, and introduces the requirements from other repos as a field.
No tests or further checks have been run, as this PR only changes the template and does not affect the core code.
Type of change
Key checklist
- All tests pass (python -m pytest)
- Pre-commit checks pass (pre-commit run --all-files)
Further checks
(Indicate issue here: # (issue))