exercises/02.context/01.solution.custom-fixtures/README.mdx (2 additions, 2 deletions)
@@ -99,7 +99,7 @@ export const test = testBase.extend<Fixtures>({
})
```

-Here, I am maping over the given `items` and making sure that each cart item has complete values. I am using the `faker` object to generate random values so I don't have to describe the entire cart item if my test case is interested only in some of its properties, like `price` and `quantity`, for example.
+Here, I am mapping over the given `items` and making sure that each cart item has complete values. I am using the `faker` object to generate random values so I don't have to describe the entire cart item if my test case is interested only in some of its properties, like `price` and `quantity`, for example.

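For context, a minimal sketch of what such a fixture could look like (the `CartItem` shape and the specific `faker` calls are assumptions, not necessarily the workshop's exact code):

```ts
import { test as testBase } from 'vitest'
import { faker } from '@faker-js/faker'

interface CartItem {
  id: string
  name: string
  price: number
  quantity: number
}

interface Fixtures {
  cart: (items: Array<Partial<CartItem>>) => Array<CartItem>
}

export const test = testBase.extend<Fixtures>({
  cart: async ({}, use) => {
    await use((items) =>
      items.map((item) => ({
        // Fill in whatever the test case didn't care to specify.
        id: faker.string.uuid(),
        name: faker.commerce.productName(),
        price: Number(faker.commerce.price()),
        quantity: faker.number.int({ min: 1, max: 5 }),
        ...item,
      })),
    )
  },
})
```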
Finally, to use this custom test context and my fixture, I'll go to the `src/cart-utils.test.ts` test file and import the custom `test` function I've created:
@@ -154,7 +154,7 @@ The `cart` fixture effectively becomes a _shared state_. If you change its value
<callout-success>Use fixtures to _help create values_ but always make the values _known in the context of the test_. Everything the test needs has to be known and controlled within that test.</callout-success>

-Once exception to this rule is _resused value that is never going to change_ within the same test run. For example, if you're testing against different locales of your application, you might want to set the `locale` before the test run and expose its value as a fixture:
+One exception to this rule is a _reused value that is never going to change_ within the same test run. For example, if you're testing against different locales of your application, you might want to set the `locale` before the test run and expose its value as a fixture:
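As an illustration, a fixture exposing such a never-changing `locale` value might look like the sketch below (the `TEST_LOCALE` environment variable is an assumption):

```ts
import { test as testBase } from 'vitest'

interface Fixtures {
  locale: string
}

export const test = testBase.extend<Fixtures>({
  locale: async ({}, use) => {
    // Decided once before the test run; never mutated by individual tests.
    await use(process.env.TEST_LOCALE ?? 'en-US')
  },
})
```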
exercises/02.context/02.problem.automatic-fixtures/README.mdx (1 addition, 1 deletion)
@@ -52,7 +52,7 @@ Once you access that fixture from the test context object, Vitest will know that
But what about the tests that _don't_ use that fixture?

-Since they never reference it, _Vitest will skip its initalization_. That makes sense. If you don't need a temporary file for this test, there's no need to create and delete the temporary directory. Nothing is going to use it.
+Since they never reference it, _Vitest will skip its initialization_. That makes sense. If you don't need a temporary file for this test, there's no need to create and delete the temporary directory. Nothing is going to use it.

<callout-info>In other words, all fixtures are _lazy_ by default. Their implementation won't be called unless you reference that fixture in your test.</callout-info>
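To illustrate that laziness, consider a hypothetical `tempDirectory` fixture: only the test that destructures it from the context pays for its setup and teardown.

```ts
import { test } from './test-extend' // hypothetical module exporting the extended `test`

// References `tempDirectory`, so Vitest runs its setup (and teardown) for this test.
test('writes the report to disk', async ({ tempDirectory, expect }) => {
  expect(tempDirectory).toBeDefined()
})

// Never references `tempDirectory`, so its initialization is skipped entirely.
test('formats the report in memory', async ({ expect }) => {
  expect(true).toBe(true)
})
```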
exercises/03.assertions/01.problem.custom-matchers/README.mdx (1 addition, 1 deletion)
@@ -57,7 +57,7 @@ While this testing strategy works, there are two issues with it:
1. **It's quite verbose**. Imagine employing this strategy to verify dozens of scenarios. You are paying 3 LOC for what is, conceptually, a single assertion;
1. **It's distracting**. Parsing the object and validating the parsed result are _technical details_ exclusive to the intention. It's not the intention itself. It has nothing to do with the `fetchUser()` behaviors you're testing.

-Luckily, there are ways to redesign this approach to be more declartive and expressive by using a _custom matcher_.
+Luckily, there are ways to redesign this approach to be more declarative and expressive by using a _custom matcher_.
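For reference, a custom matcher is registered through `expect.extend()`. Here is a minimal, hypothetical sketch (the matcher name and the JSON-parsing scenario are assumptions, not the exercise's solution):

```ts
import { expect } from 'vitest'

expect.extend({
  // A hypothetical matcher that parses a JSON string and compares the result.
  toEqualParsedJson(received: string, expected: unknown) {
    let parsed: unknown
    try {
      parsed = JSON.parse(received)
    } catch {
      return {
        pass: false,
        message: () => `expected ${received} to be a valid JSON string`,
      }
    }
    const pass = this.equals(parsed, expected)
    return {
      pass,
      message: () =>
        `expected parsed JSON ${pass ? 'not ' : ''}to equal ${this.utils.printExpected(expected)}`,
    }
  },
})
```

A test could then assert `expect(responseBody).toEqualParsedJson({ id: 'abc' })` in a single, declarative line (in TypeScript, a small module augmentation is also needed so the new matcher is known to the type checker).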
-_Assymetric matcher_ is the one where the `actual` value is literal while the `expected` value is an _expression_.
+_Asymmetric matcher_ is the one where the `actual` value is literal while the `expected` value is an _expression_.

```ts nonumber
// 👇 Literal string
@@ -48,7 +48,7 @@ expect(user).toEqual({
})
```

-> Here, the `user` object is expected to literally match the object with the `id` and `posts` properties. While the expectation toward the `id` property is literal, the `posts` proprety is described as an abstract `Array<{ id: string }>` object.
+> Here, the `user` object is expected to literally match the object with the `id` and `posts` properties. While the expectation toward the `id` property is literal, the `posts` property is described as an abstract `Array<{ id: string }>` object.
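As a concrete (assumed) example of mixing literal and asymmetric expectations in one assertion:

```ts
import { expect, test } from 'vitest'

test('matches the user shape', () => {
  const user = {
    id: 'abc-123',
    posts: [{ id: 'post-1' }, { id: 'post-2' }],
  }

  expect(user).toEqual({
    // Literal expectation: the value must be exactly this string.
    id: 'abc-123',
    // Asymmetric expectation: an array whose items have a string `id`.
    posts: expect.arrayContaining([
      expect.objectContaining({ id: expect.any(String) }),
    ]),
  })
})
```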
-Semantically, these two measurements _are_ equal. 1 inch is, indeed, 2.54 centimeters. But syntanctically these two measurements produce different class instances that cannot be compared literally:
+Semantically, these two measurements _are_ equal. 1 inch is, indeed, 2.54 centimeters. But syntactically these two measurements produce different class instances that cannot be compared literally:

```ts nonumber
// If you unwrap measurements, you can imagine them as plain objects.
| Automatically applied recursively (e.g. if your measurement is nested in an object). | Always applied explicitly. Nested usage is enabled through asymmetric matchers (`{ value: expect.myMatcher() }`). |
-| Must always be _synchronous_. | Can be both synchronous and asynchronous, utilizing the `.resolves.` and `.rejects.` chaning. |
+| Must always be _synchronous_. | Can be both synchronous and asynchronous, utilizing the `.resolves.` and `.rejects.` chaining. |

Custom equality testers, as the name implies, are your go-to choice to help Vitest compare values that cannot otherwise be compared by sheer serialization (like our `Measurement`, or, for example, `Response` instances).
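A hedged sketch of such a tester, assuming a `Measurement` class similar to the one discussed above, could be registered with `expect.addEqualityTesters()`:

```ts
import { expect } from 'vitest'

// An assumed Measurement class, mirroring the idea discussed above.
class Measurement {
  constructor(
    public value: number,
    public unit: 'in' | 'cm',
  ) {}

  toCentimeters(): number {
    return this.unit === 'cm' ? this.value : this.value * 2.54
  }
}

expect.addEqualityTesters([
  function areMeasurementsEqual(a, b) {
    const aIsMeasurement = a instanceof Measurement
    const bIsMeasurement = b instanceof Measurement

    if (aIsMeasurement && bIsMeasurement) {
      // Compare measurements semantically, not by their class internals.
      return a.toCentimeters() === b.toCentimeters()
    }
    if (aIsMeasurement !== bIsMeasurement) {
      return false
    }
    // Neither value is a Measurement: defer to the default equality logic.
    return undefined
  },
])
```

With this tester registered, `expect(new Measurement(1, 'in')).toEqual(new Measurement(2.54, 'cm'))` passes, and the comparison is applied recursively when measurements are nested inside other objects.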
-> While fetching the list of songs takes time, that eventuality is represented as a Promise that you can `await`. This guaratees that your test will not continue until the data is fetched. Quite the same applies to reading the response body stream.
+> While fetching the list of songs takes time, that eventuality is represented as a Promise that you can `await`. This guarantees that your test will not continue until the data is fetched. Quite the same applies to reading the response body stream.

But not all systems are designed like that. And even the systems that _are_ designed like that may not expose you the right Promises to await.
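For the systems that do expose the right Promises, the quoted paragraph translates into a test like this (the endpoint URL is made up):

```ts
import { expect, test } from 'vitest'

test('lists the songs', async () => {
  // The test will not continue until the response arrives...
  const response = await fetch('https://example.com/api/songs') // hypothetical endpoint
  // ...or until the response body stream has been fully read.
  const songs = await response.json()

  expect(response.status).toBe(200)
  expect(Array.isArray(songs)).toBe(true)
})
```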
exercises/04.performance/01.solution.profiling-slow-tests/README.mdx (7 additions, 7 deletions)
@@ -61,7 +61,7 @@ This information is available each time you run your tests and is not exclusive
- `environment`, the time it took to set up and tear down your test environment;
- `prepare`, the time it took for Vitest to prepare the test runner.

-This overview is a fantastic starting point in indentifying which areas of your test run are the slowest. For instance, I can see that Vitest spent most time _running the tests_:
+This overview is a fantastic starting point in identifying which areas of your test run are the slowest. For instance, I can see that Vitest spent most time _running the tests_:

```txt nonumber highlight=4
transform 18ms,
@@ -72,7 +72,7 @@ environment 0ms,
prepare 32ms
```

-> Your test duration summary will likely be different. See which phases took the most time to know where you should direct your further investigation. For example, if the `setup` phase is too slow, it may be because your test setup is too heavy and should be refactored. If `collect` is lagging behind, it may mean that Vitest has trouble scrapping your large monorepo and you should help it locate the test files by providing explicit `include` and `exclude` options in your Vitest configuration.
+> Your test duration summary will likely be different. See which phases took the most time to know where you should direct your further investigation. For example, if the `setup` phase is too slow, it may be because your test setup is too heavy and should be refactored. If `collect` is lagging behind, it may mean that Vitest has trouble scraping your large monorepo and you should help it locate the test files by providing explicit `include` and `exclude` options in your Vitest configuration.

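As a rough illustration of that last suggestion, a configuration along these lines narrows down where Vitest looks for tests (the glob patterns are assumptions and depend on your project layout):

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Point Vitest straight at the test files so the `collect` phase
    // doesn't have to crawl the whole monorepo.
    include: ['src/**/*.test.ts'],
    exclude: ['**/node_modules/**', '**/dist/**'],
  },
})
```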
With this covered, let's move on to the `vitest-profiler` report.
@@ -83,18 +83,18 @@ With this covered, let's move on to the `vitest-profiler` report.
- **Main thread**, which is a Node.js process that spawned Vitest. This roughly corresponds to the `prepare`, `collect`, `transform`, and `environment` phases from the Vitest's time metrics;
- **Tests**, which is individual threads/forks that ran your test files. This roughly corresponds to the `tests` time metric.

-These separate profiles allows you to take a peek behind the curtain of your test runtime. You can get an idea of what your testing framework is doing and what your tests are doing, and, hopefully, spot the root cause for that nasty parformance degradation.
+These separate profiles allow you to take a peek behind the curtain of your test runtime. You can get an idea of what your testing framework is doing and what your tests are doing, and, hopefully, spot the root cause for that nasty performance degradation.
CPU and memory profiles reflect different aspects of your test run:
- **CPU profile** shows you the CPU consumption. This will generally point you toward code that takes too much time to run;
-- **Memory (or heap) profile** shows you the memory consumption. This is handy to watch for memory leaks and heaps that can also negavtively impact your test performance.
+- **Memory (or heap) profile** shows you the memory consumption. This is handy to watch for memory leaks and heaps that can also negatively impact your test performance.

Next, I will explore each individual profile in more detail.
### Main thread profiles

-One of the firts things the profiler reports is a CPU profile for the main thread:
+One of the first things the profiler reports is a CPU profile for the main thread:
```txt nonumber highlight=4
Test profiling complete! Generated the following profiles:
@@ -115,7 +115,7 @@ Here's how the CPU profile for the main thread looks like:
> Now, if this looks intimidating, don't worry. Profiles will often contain a big chunk of pointers and stack traces you don't know or understand because they reflect the state of the _entire process_.

-In these profiles, I am interested in spotting _abnormally long execution times_. Luckily, this report is sortred by "Total Time" automatically for me! That being said, I see nothing suspicious in the main thread so I proceed to the other profiles.
+In these profiles, I am interested in spotting _abnormally long execution times_. Luckily, this report is sorted by "Total Time" automatically for me! That being said, I see nothing suspicious in the main thread so I proceed to the other profiles.
### Test profiles
@@ -144,5 +144,5 @@ What I can also do is give you a rough idea about approaching issues based on th
| Analyze your expensive logic and refactor it where appropriate. | Analyze the problematic logic to see why it leaks memory. |
-| Take advantage of asynchronicity and parallelization. | Fix improper memory management (e.g. rougue child processes, unterminated loops, forgotten timers and intervals, etc). |
+| Take advantage of asynchronicity and parallelization. | Fix improper memory management (e.g. rogue child processes, unterminated loops, forgotten timers and intervals, etc). |
| Use caching where appropriate. | In your test setup, be cautious about closing test servers or databases. Prefer scoping mocks to individual tests and deleting them completely once the test is done. |
By making our test cases concurrent, we can switch from a test waterfall to a flat test execution:

-
+

-Now that our test run at the same time, it is absolutely crucial we provision proper _test isolation_. We don't want tests to be stepping on each other's toes and becoming flaky as a result. This often comes down to eliminating or replacing any shared (or global) state in test cases.
+Now that our tests run at the same time, it is absolutely crucial we provision proper _test isolation_. We don't want tests to be stepping on each other's toes and becoming flaky as a result. This often comes down to eliminating or replacing any shared (or global) state in test cases.

-For example, the `expect()` function that you use to make assertions _might contain state_. It is essential we scoped that function to individual test case by accessing it from the test context object:
+For example, the `expect()` function that you use to make assertions _might contain state_. It is essential we scope that function to the individual test case by accessing it from the test context object:
```ts diff remove=1 add=3
test.concurrent(`${i}`, async () => {
@@ -55,7 +55,7 @@ export default defineConfig({
> 🦉 Bigger doesn't necessarily mean better with concurrency. There is a physical limit to any concurrency dictated by your hardware. If you set a `maxConcurrency` value higher than that limit, concurrent tests will be _queued_ until they have a slot to run.

-By fine-tunning `maxConcurrency`, we are able to improve the test performance even further to the whooping 123ms!
+By fine-tuning `maxConcurrency`, we are able to improve the test performance even further to a whopping 123ms!
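The `maxConcurrency` option mentioned above lives in the Vitest configuration. A minimal sketch (the exact value is an assumption; the right number depends on your hardware):

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Raise the cap on how many test cases may run at the same time.
    // 10 is an assumed value; benchmark to find what your machine sustains.
    maxConcurrency: 10,
  },
})
```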
```bash remove=1 add=2
Duration 271ms
@@ -66,14 +66,14 @@ By fine-tunning `maxConcurrency`, we are able to improve the test performance ev
While concurrency may improve performance, it can also make your tests _flaky_. Keep in mind that the main price you pay for concurrency is _writing isolated tests_.

-Here's a few guidelines on how to keep your tests concurrency-friendly:
+Here are a few guidelines on how to keep your tests concurrency-friendly:

-- **Do not rely on _shared state_ of any kind**. Never have multiple test modify the same data (even the `expect` function can become a shared state!). In practice, you achieve this through actions like:
+- **Do not rely on _shared state_ of any kind**. Never have multiple tests modify the same data (even the `expect` function can become a shared state!). In practice, you achieve this through actions like:
- Striving toward self-contained tests (never have one test rely on the result of another);
- Keeping the test setup next to the test itself;
- Binding mocks (e.g. databases or network) to the test.
- **Isolate side effects**. If your test absolutely must perform a side effect, like writing to a file, guarantee that those side effects are isolated and bound to the test (e.g. create a temporary file accessed only by this particular test).
-- **Abolish hard-coded values**. If two tests try to establish a mock server at the same port, one is destined to fail. Once again, procude test-specific fixtures (e.g. spawn a mock server at port `0` to get a random available port number).
+- **Abolish hard-coded values**. If two tests try to establish a mock server at the same port, one is destined to fail. Once again, produce test-specific fixtures (e.g. spawn a mock server at port `0` to get a random available port number).
It is worth mentioning that due to these considerations, not all test cases can be flipped to concurrent and call it a day. Concurrency can, however, be a useful factor to stress your tests and highlight the shortcomings in their design. You can then address those shortcomings in planned, incremental refactors to benefit from concurrency (among other things) once your tests are properly isolated.
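To tie these guidelines together, here is a small, self-contained sketch (the `addToCart` helper is made up) of concurrency-friendly tests that pull `expect` from their own test context:

```ts
import { test } from 'vitest'

// A made-up pure helper so the example has no shared state at all.
function addToCart<Item>(cart: Array<Item>, item: Item): Array<Item> {
  return [...cart, item]
}

// Each test destructures `expect` from its own context, so assertion
// state (e.g. snapshot counters) is never shared between tests running
// at the same time.
test.concurrent('adds the first item', async ({ expect }) => {
  expect(addToCart([], 'apple')).toHaveLength(1)
})

test.concurrent('keeps the existing items', async ({ expect }) => {
  expect(addToCart(['apple'], 'banana')).toHaveLength(2)
})
```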