testcases/containers: use 'fio' utility #106
base: main
Conversation
lgtm!
ACK
Edit: Removing the ACK as we have seen issues when running.
Built and pushed the container image to quay. Will retry the tests to confirm that they run as expected.
/retest all
@synarete, can you rebase and push? We should be able to see the output of the tests in the checks now that the fio containers have been set up.
Force-pushed from ed2e014 to be19a8e
Done
We have a problem w.r.t. disk space.
Our disks are configured with 10G. So either we increase that, limit the number of jobs, or do both?
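The trade-off above can be made explicit in the test harness. A minimal sketch, assuming the wrapper sizes its fio jobs up front (the helper names and MiB units are illustrative, not from this PR):

```shell
#!/bin/sh
# Hypothetical sizing helpers: keep the fio working set below the free
# space on a small (~10G) CI disk.

# Total MiB fio will write: one file of $2 MiB per each of $1 jobs.
fio_workset_mb() {
    echo $(( $1 * $2 ))
}

# Largest job count whose working set still fits in $1 MiB of free space.
max_numjobs() {
    echo $(( $1 / $2 ))
}
```

For example, with 10240 MiB free and 1024 MiB per job, at most 10 jobs fit; in practice the wrapper would want to leave headroom for the filesystem itself.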
I see
Force-pushed from be19a8e to 4f1e8c7
Thanks, I hope we can use
This is not explicitly set; rather, it comes from the host system within the fio container.
@spuiuk Can you rebuild and push the fio container using the latest version?
Pushed the new version of the container.
/retest all
@synarete Do verify workloads consume more space than normal? Now it fails as below:
It looks like it. I will reduce the
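For context, a fio verify job writes its full working set and then reads it back checking per-block checksums, so its on-disk footprint is the entire configured size. A minimal sketch of a space-reduced verify pass (the function name, directory, and sizes are illustrative assumptions, not the PR's actual script):

```shell
#!/bin/sh
# Hypothetical verify pass with a small working set, sized for a ~10G disk.
# --verify=crc32c stores a CRC in each written block; --do_verify=1 re-reads
# and checks the data; --verify_fatal=1 aborts on the first mismatch.
verify_pass() {
    fio --name=randverify \
        --directory="${1:-/mnt/fiotest}" \
        --rw=randwrite --bs=4k \
        --size=512M --numjobs=2 \
        --verify=crc32c --do_verify=1 --verify_fatal=1
}
```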
Force-pushed from 4f1e8c7 to 7616b48
New version of the container pushed.
/retest all
XFS (and GPFS) tests are looking good, but CephFS tests show different errors:
To me it looks like the new ceph module doesn't complete the fio workload; the old vfs-module-based share could get past the fio test.
Interesting -- I did not encounter those failures on my setup (without proxy). Will dig into it.
/retest centos-ci/cephfs |
Use a wrapper shell script over 'fio' plus a container to perform a set of I/O tests. Execute both an I/O throughput workload and random I/O with data verification. Signed-off-by: Shachar Sharon <[email protected]>
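The wrapper the commit message describes might look roughly like the following sketch; the function names, directory, and default sizes are assumptions for illustration, not the PR's actual script:

```shell
#!/bin/sh
# Hypothetical wrapper over fio: one sequential throughput pass, then one
# random-I/O pass with data verification, each size-capped for small disks.

run_fio_pass() {
    # $1: job name; remaining args: workload-specific fio options.
    name="$1"; shift
    fio --name="$name" \
        --directory="${FIO_DIR:-/mnt/fiotest}" \
        --size="${FIO_SIZE:-1G}" \
        --numjobs="${FIO_NUMJOBS:-2}" \
        "$@"
}

run_all_passes() {
    # Sequential-write throughput workload.
    run_fio_pass seqwrite --rw=write --bs=1M --ioengine=psync
    # Random writes verified with per-block checksums.
    run_fio_pass randverify --rw=randwrite --bs=4k --verify=crc32c --do_verify=1
}
```

Keeping the size and job count as environment-variable overrides lets CI shrink the working set (as discussed above for the 10G disks) without editing the script.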
Force-pushed from 7616b48 to 5a3f2c1
I tried to reproduce on my local ceph cluster (without proxy) but failed to do so. Indeed, I had a crash because I used non-standard ceph images, but as soon as I switched back to normal ceph (image:
Whatever is the latest build available from the ceph main branch.