Conversation

@alexbelgium (Author)

Allow a single, unified logic for deleting old files.

Old logic: "delete all files older than 90 days that are not in exclude_files until reaching the right amount of free space; if that is not enough, purge the PROCESSED folder (which was already removed from this fork)".
New logic: "delete files to free an additional 10% of space, with the following priorities: protect files less than 7 days old, protect files listed in exclude_files, and start deleting the files with the lowest confidence, beginning with the species that have the most files".

This should standardize disk clearance in a way that safeguards the most important files: species with fewer recordings, and recordings with higher confidence.
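
For reference, a minimal bash sketch of that priority order could look like the following. The recordings path, the exclude-list location, and the confidence-in-filename convention are all assumptions for illustration, not this fork's exact code:

```bash
#!/usr/bin/env bash
# Sketch only: priority-based purge as described above.
RECS_DIR="${RECS_DIR:-/home/pi/BirdSongs}"   # assumed recordings path
TARGET_FREE_PCT=10    # free an additional 10% of disk space
PROTECT_DAYS=7        # never touch files younger than 7 days
EXCLUDE_LIST="exclude_files.txt"             # assumed exclude-list file

used_pct() { df --output=pcent "$RECS_DIR" | tail -1 | tr -dc '0-9'; }
target=$(( $(used_pct) - TARGET_FREE_PCT ))  # stop once 10% is freed

# Candidates: older than PROTECT_DAYS and not excluded, lowest confidence
# first (assuming confidence is the 2nd '-'-separated filename field).
# The real priority also prefers species with the most recordings; that
# ordering is omitted here for brevity.
find "$RECS_DIR" -type f -mtime +"$PROTECT_DAYS" \
  | grep -vFf "$EXCLUDE_LIST" \
  | sort -t'-' -k2,2n \
  | while read -r f; do
      (( $(used_pct) > target )) || break
      rm -f -- "$f"
    done
```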

@Nachtzuster (Owner)

So I finally got around testing this... Seems your arch nemesis, the executable bit, is here again 😄
In this case scripts/disk_check.sh lost its executable bit.
[screenshot]

@alexbelgium (Author)

alexbelgium commented May 18, 2025

For the last time: I've set up an Ubuntu VS Code environment, which will sort that out. Should be fine from now on.

@alexbelgium (Author)

Closed to clean up the PR list.

@alexbelgium alexbelgium deleted the patch-2 branch August 29, 2025 12:24
@alexbelgium alexbelgium restored the patch-2 branch August 29, 2025 12:24
@alexbelgium alexbelgium deleted the patch-2 branch August 29, 2025 12:24
@alexbelgium alexbelgium restored the patch-2 branch September 8, 2025 09:52
@alexbelgium alexbelgium reopened this Sep 8, 2025
@alexbelgium (Author)

> So I finally got around testing this... Seems your arch nemesis, the executable bit, is here again 😄 In this case scripts/disk_check.sh lost its executable bit.

Hi, I have reactivated this following user feedback that "good" files were being deleted while "worse" ones (in terms of confidence) remained. I think this should fix it. I took the opportunity to fix a couple of typos as well.

@Nachtzuster (Owner)

Nachtzuster commented Sep 13, 2025

I'm getting this (filesystem is 76% full):
[screenshot]

with:
[screenshot]

edit: while running ./scripts/disk_check.sh

@alexbelgium (Author)

Thanks, I see. My "safe threshold", designed to avoid deleting too many files, was set at disk size * 0.9, which led to the 85% value. I'll fix this.
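
For context, a sketch of the arithmetic behind that (the path and variable names are illustrative, not the script's actual code):

```bash
# Illustrative only: the flawed "safe threshold" described above.
total_kb=$(df --output=size /home/pi/BirdSongs | tail -1 | tr -dc '0-9')
safe_kb=$(( total_kb * 90 / 100 ))   # "disk size * 0.9" safety cap
# Purging only down to safe_kb leaves usage near 85-90%, as observed.
# Deriving the cap from the configured purge threshold fixes this:
threshold_pct=70
safe_kb=$(( total_kb * threshold_pct / 100 ))
```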

@alexbelgium (Author)

This should be better. I reviewed the logic, improved it a bit where I saw potential failure points (symlinks, decimals, ...), and added comments explaining the script logic.
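
For instance, those two failure points could be handled along these lines (a sketch under an assumed path, not the script's actual code):

```bash
RECS_DIR="/home/pi/BirdSongs"   # assumed recordings path

# 1) Symlinks: with find's default -P behaviour, -type f matches only
#    regular files, so linked recordings are never followed or deleted.
find "$RECS_DIR" -type f -print0

# 2) Decimals: df/awk output can carry fractions or stray characters,
#    so sanitize values to plain integers before shell arithmetic.
to_int() { local v="${1%%.*}"; printf '%s\n' "${v//[!0-9]/}"; }
used=$(to_int "$(df --output=pcent "$RECS_DIR" | tail -1)")
```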

@Nachtzuster (Owner)

I'm now getting:
[screenshot]
Purge threshold is at 70%.

To be honest, I'm not entirely sure what the expectation is?

@alexbelgium (Author)

Well, the logic of the script is:

  • If disk usage is above the threshold (70%),
  • it executes the species-cleaning script, repeatedly reducing the number of files kept per species until disk usage drops below the threshold.
  • However, to avoid deleting all observations entirely, it stops once only 30 files remain per species; that is the safeguard trigger you see in your log above. It then stops the core services to avoid continuing to fill the drive (see the sketch below).
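
A minimal sketch of that loop, where purge_species_to, the starting quota, and the service name are hypothetical placeholders rather than the fork's exact code:

```bash
#!/usr/bin/env bash
RECS_DIR="/home/pi/BirdSongs"   # assumed recordings path
THRESHOLD_PCT=70                # purge trigger
MIN_FILES_PER_SPECIES=30        # safeguard floor

used_pct() { df --output=pcent "$RECS_DIR" | tail -1 | tr -dc '0-9'; }

keep=200   # hypothetical starting per-species quota
while (( $(used_pct) > THRESHOLD_PCT )); do
  if (( keep <= MIN_FILES_PER_SPECIES )); then
    echo "Safeguard: ${MIN_FILES_PER_SPECIES} files/species floor reached; stopping core services"
    sudo systemctl stop birdnet_analysis.service   # assumed service name
    break
  fi
  purge_species_to "$keep"   # hypothetical species-cleaning helper
  keep=$(( keep - 10 ))      # tighten the quota and re-check disk usage
done
```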

If that isn't what happens, then my code is flawed ;-) I'll move it back to draft, do some tests, and put it in review again once it's working.

Or I can just close it, as you prefer (honestly, I don't use this feature myself, since I use the "max number of files per species" limiter, which keeps disk usage under control).

@alexbelgium alexbelgium marked this pull request as draft September 24, 2025 07:06
Refactor disk space management script for improved error handling and integer sanitization.
@alexbelgium alexbelgium marked this pull request as ready for review January 7, 2026 19:59