Call start time performance tests #1278
base: main
Conversation
This is very interesting!
Unfortunately, I don't think we can have this as an e2e test on the SDK CI pipeline.
This would be a great tool for monitoring the whole platform's performance, and we should keep it for that.
The problem is: if the goal is to verify performance issues in the SDK implementation and make sure we don't introduce code that delays the call setup, then we need a test that excludes external variables like the network, the server, etc., and tests only the SDK implementation's performance. Otherwise, we risk introducing an even flakier test on the SDK CI.
You can verify this in the last run: this test passed on production but failed on staging.
Since it's the same SDK implementation, you can see that other variables (server, runner, network) influence this more than the SDK implementation does.
We either need to isolate only the SDK implementation's performance, or we should keep this test disabled in the CI pipeline and only run it in production health checks.
Well, I agree that ideally the test should exclude external variables like network, region, etc. This was just an initial performance test so we can actually determine whether a PR introduces any delays. The test fails in staging because staging is slow. Tomorrow, if someone creates a PR and the performance test in production consistently fails, we will know there's something wrong with the PR.
Unfortunately, that's not true. I think it's a good test and we should keep it, just not in the CI pipeline. Just remove it from the pipeline setup.
Could we run this without failing the build? Like, just add it as a report, in a separate GHA workflow?
```js
const call = await client.dial({
  to: address,
  rootElement: document.getElementById('rootElement'),
})
```
Do you want to measure the `dial()` as well?
I am not sure; `dial()` is a sync method which just initiates the JS class and some Saga workers. Maybe we can write a different test for that? Or maybe one more test including both `dial` and `start`? 👀
```js
const call = await client.dial({
  to: address,
})
```
Same as the comment above: just wondering if we also want separate measurements that include `dial()`?
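If timing `dial()` separately turns out to be useful, one way to structure it is to measure each phase independently and sum them against the 5-second budget. A minimal sketch is below; the `fakeDial` object and its delay are hypothetical stand-ins for the SDK calls in the snippets above, not the real implementation:

```typescript
// Sketch: time the dial phase and the start phase as separate measurements.
// `fakeDial` is a hypothetical stand-in for client.dial(); the real SDK
// call would be awaited in its place.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function measure<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const t0 = performance.now();
  const result = await fn();
  return { result, ms: performance.now() - t0 };
}

async function main() {
  // Stand-in: resolves to an object with a start() method, like the call object.
  const fakeDial = async () => ({ start: () => sleep(100) });

  const dialPhase = await measure(fakeDial);
  const startPhase = await measure(() => dialPhase.result.start());
  const totalMs = dialPhase.ms + startPhase.ms;

  console.log(`dial: ${dialPhase.ms.toFixed(0)}ms, start: ${startPhase.ms.toFixed(0)}ms`);
  // The PR's budget: whole call setup under 5 seconds.
  if (totalMs > 5000) throw new Error(`call setup took ${totalMs.toFixed(0)}ms (> 5s)`);
}

main();
```

Reporting the two numbers separately would also show whether a regression comes from the class/worker setup in `dial` or from the signaling in `start`.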
You can report it using a "soft assertion". You might want to check what it looks like when it fails in CI; otherwise it might go unnoticed if it's just a ✅ in CI anyway.
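For reference, Playwright supports this directly via `expect.soft(...)`, which records the failure and lets the test continue. The general pattern is sketched below; the `SoftAssert` class and the sample numbers are illustrative, not the project's code:

```typescript
// Sketch of the "soft assertion" idea: record failures instead of throwing
// immediately, then surface them all at the end of the test.
class SoftAssert {
  failures: string[] = [];

  // Record a failure if `actual` is not below `limit`; never throws.
  lessThan(actual: number, limit: number, label: string) {
    if (actual >= limit) this.failures.push(`${label}: ${actual} >= ${limit}`);
  }

  report() {
    if (this.failures.length > 0) {
      console.warn('soft assertion failures:\n' + this.failures.join('\n'));
    }
  }
}

const soft = new SoftAssert();
soft.lessThan(6200, 5000, 'call start time (ms)'); // illustrative value: recorded, not thrown
soft.report();
```

As noted above, the catch is visibility: a soft failure that only shows up in logs next to a green check is easy to miss, so the report step would need to be made loud (e.g. a workflow annotation).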
LGTM 👍, but I'll defer to @jpsantosbh and the feedback provided.
This is interesting to explore and might also be useful in other e2e tests, too.
Description
An E2E test to make sure the call start time does not exceed 5 seconds.
Type of change
Code snippets
In case of a new feature or breaking changes, please include code snippets.