
Commit 243e39c

fix typos and spelling errors (#11)
1 parent 14112f2 commit 243e39c

12 files changed: +26 −26 lines changed

exercises/01.setup/README.mdx

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@
 <callout-success>Proficiency with any tool starts from the proper configuration.</callout-success>

-Let's kick things off by getting your more productive with Vitest. Specifically, focusing on the following areas:
+Let's kick things off by getting you more productive with Vitest. Specifically, focusing on the following areas:

 1. Make you write, iterate, and debug tests faster;
 1. Reuse Vitest for testing different code with different requirements;

exercises/02.context/01.solution.custom-fixtures/README.mdx

Lines changed: 2 additions & 2 deletions

@@ -99,7 +99,7 @@ export const test = testBase.extend<Fixtures>({
 })
 ```

-Here, I am maping over the given `items` and making sure that each cart item has complete values. I am using the `faker` object to generate random values so I don't have to describe the entire cart item if my test case is interested only in some of its properties, like `price` and `quantity`, for example.
+Here, I am mapping over the given `items` and making sure that each cart item has complete values. I am using the `faker` object to generate random values so I don't have to describe the entire cart item if my test case is interested only in some of its properties, like `price` and `quantity`, for example.

 Finally, to use this custom test context and my fixture, I'll go to the `src/cart-utils.test.ts` test file and import the custom `test` function I've created:

@@ -154,7 +154,7 @@ The `cart` fixture effectively becomes a _shared state_. If you change its value
 <callout-success>Use fixtures to _help create values_ but always make the values _known in the context of the test_. Everything the test needs has to be known and controlled within that test.</callout-success>

-Once exception to this rule is _resused value that is never going to change_ within the same test run. For example, if you're testing against different locales of your application, you might want to set the `locale` before the test run and expose its value as a fixture:
+One exception to this rule is a _reused value that is never going to change_ within the same test run. For example, if you're testing against different locales of your application, you might want to set the `locale` before the test run and expose its value as a fixture:

 ```ts highlight=1
 test('...', ({ locale }) => {})

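The "fill in the blanks" behavior of the cart fixture above can be sketched outside of Vitest. The names `CartItem` and `createCartItem` below are illustrative, not the workshop's actual implementation (which generates values with `faker`); plain defaults keep the sketch self-contained:

```typescript
// Hypothetical sketch: a helper that completes a partial cart item,
// so a test only spells out the properties it actually cares about.
interface CartItem {
  id: string
  name: string
  price: number
  quantity: number
}

function createCartItem(overrides: Partial<CartItem> = {}): CartItem {
  return {
    // Random-ish defaults stand in for `faker`-generated values.
    id: 'item-' + Math.random().toString(36).slice(2),
    name: 'Test item',
    price: 1,
    quantity: 1,
    // The test's explicit values always win over the generated ones.
    ...overrides,
  }
}
```

A test interested only in `price` and `quantity` can then call `createCartItem({ price: 9.99, quantity: 2 })` and rely on the remaining properties being valid.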
exercises/02.context/02.problem.automatic-fixtures/README.mdx

Lines changed: 1 addition & 1 deletion

@@ -52,7 +52,7 @@ Once you access that fixture from the test context object, Vitest will know that
 But what about the tests that _don't_ use that fixture?

-Since they never reference it, _Vitest will skip its initalization_. That makes sense. If you don't need a temporary file for this test, there's no need to create and delete the temporary directory. Nothing is going to use it.
+Since they never reference it, _Vitest will skip its initialization_. That makes sense. If you don't need a temporary file for this test, there's no need to create and delete the temporary directory. Nothing is going to use it.

 <callout-info>In other words, all fixtures are _lazy_ by default. Their implementation won't be called unless you reference that fixture in your test.</callout-info>

exercises/03.assertions/01.problem.custom-matchers/README.mdx

Lines changed: 1 addition & 1 deletion

@@ -57,7 +57,7 @@ While this testing strategy works, there are two issues with it:
 1. **It's quite verbose**. Imagine employing this strategy to verify dozens of scenarios. You are paying 3 LOC for what is, conceptually, a single assertion;
 1. **It's distracting**. Parsing the object and validating the parsed result are _technical details_ exclusive to the intention. It's not the intention itself. It has nothing to do with the `fetchUser()` behaviors you're testing.

-Luckily, there are ways to redesign this approach to be more declartive and expressive by using a _custom matcher_.
+Luckily, there are ways to redesign this approach to be more declarative and expressive by using a _custom matcher_.

 ## Your task

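A custom matcher ultimately boils down to a function that returns a `pass`/`message` pair, which you register with `expect.extend()`. Here is a minimal standalone sketch of that shape; `toBeValidUser` and the checks it performs are hypothetical, not the workshop's matcher:

```typescript
// The result shape a matcher implementation returns.
interface MatcherResult {
  pass: boolean
  message: () => string
}

// Hypothetical matcher: collapses "parse, then validate" into one assertion.
function toBeValidUser(received: unknown): MatcherResult {
  const candidate = received as { id?: unknown; name?: unknown }
  const pass =
    typeof candidate === 'object' &&
    candidate !== null &&
    typeof candidate.id === 'string' &&
    typeof candidate.name === 'string'
  return {
    pass,
    // The message covers both the positive and the `.not` negated case.
    message: () =>
      pass
        ? 'expected value not to be a valid user'
        : 'expected value to be a valid user with string `id` and `name`',
  }
}
```

Once registered via `expect.extend({ toBeValidUser })`, the verbose three-line check becomes a single `expect(user).toBeValidUser()` assertion.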
exercises/03.assertions/02.problem.asymmetric-matchers/README.mdx

Lines changed: 2 additions & 2 deletions

@@ -2,7 +2,7 @@
 <EpicVideo url="https://www.epicweb.dev/workshops/advanced-vitest-patterns/assertions/03-02-problem" />

-_Assymetric matcher_ is the one where the `actual` value is literal while the `expected` value is an _expression_.
+_Asymmetric matcher_ is the one where the `actual` value is literal while the `expected` value is an _expression_.

 ```ts nonumber
 // 👇 Literal string

@@ -48,7 +48,7 @@ expect(user).toEqual({
 })
 ```

-> Here, the `user` object is expected to literally match the object with the `id` and `posts` properties. While the expectation toward the `id` property is literal, the `posts` proprety is described as an abstract `Array<{ id: string }>` object.
+> Here, the `user` object is expected to literally match the object with the `id` and `posts` properties. While the expectation toward the `id` property is literal, the `posts` property is described as an abstract `Array<{ id: string }>` object.

 ## `.toMatchSchema()`

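The contract behind an asymmetric matcher can be sketched as an object exposing a match method that the comparison calls instead of doing a literal equality check. Everything below (`stringMatching`, `looseEquals`) is an illustrative simplification, not Vitest's implementation:

```typescript
// Illustrative simplification: an "expected" value that is an
// expression, not a literal.
interface AsymmetricMatcher {
  asymmetricMatch(actual: unknown): boolean
}

// Sketch of something like `expect.stringMatching(pattern)`.
function stringMatching(pattern: RegExp): AsymmetricMatcher {
  return {
    asymmetricMatch: (actual) =>
      typeof actual === 'string' && pattern.test(actual),
  }
}

// A comparison that treats matcher objects as expressions
// and everything else as literals.
function looseEquals(actual: unknown, expected: unknown): boolean {
  if (
    typeof expected === 'object' &&
    expected !== null &&
    'asymmetricMatch' in expected
  ) {
    return (expected as AsymmetricMatcher).asymmetricMatch(actual)
  }
  return actual === expected
}
```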
exercises/03.assertions/03.problem.custom-quality-testers/README.mdx

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ It's a bit harder if those two are different.
 expect(new Measurement(1, 'in')).toEqual(new Measurement(2.54, 'cm')) //
 ```

-Semantically, these two measurements _are_ equal. 1 inch is, indeed, 2.54 centimeters. But syntanctically these two measurements produce different class instances that cannot be compared literally:
+Semantically, these two measurements _are_ equal. 1 inch is, indeed, 2.54 centimeters. But syntactically these two measurements produce different class instances that cannot be compared literally:

 ```ts nonumber
 // If you unwrap measurements, you can imagine them as plain objects.

exercises/03.assertions/03.solution.custom-equality-testers/README.mdx

Lines changed: 1 addition & 1 deletion

@@ -90,7 +90,7 @@ Let's iterate over the difference between _equality testers_ and _matchers_ to h
 | ------- | ------- |
 | Extends the `.toEqual()` logic. | Implement entirely custom logic. |
 | Automatically applied recursively (e.g. if your measurement is nested in an object). | Always applied explicitly. Nested usage is enabled through asymmetric matchers (`{ value: expect.myMatcher() }`). |
-| Must always be _synchronous_. | Can be both synchronous and asynchronous, utilizing the `.resolves.` and `.rejects.` chaning. |
+| Must always be _synchronous_. | Can be both synchronous and asynchronous, utilizing the `.resolves.` and `.rejects.` chaining. |

 Custom equality testers, as the name implies, are your go-to choice to help Vitest compare values that cannot otherwise be compared by sheer serialization (like our `Measurement`, or, for example, `Response` instances).

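The core of an equality tester for `Measurement` can be sketched standalone: normalize both values to a common unit before comparing. The class shape and the conversion table below are assumptions for illustration, not the workshop's code:

```typescript
// Hypothetical Measurement shape; only inches and centimeters are
// handled, enough to illustrate semantic (unit-aware) equality.
type Unit = 'in' | 'cm'

class Measurement {
  constructor(
    public value: number,
    public unit: Unit,
  ) {}
}

const CENTIMETERS_PER: Record<Unit, number> = { in: 2.54, cm: 1 }

// The equality tester's core: compare after normalizing to centimeters.
function areMeasurementsEqual(a: Measurement, b: Measurement): boolean {
  const aInCm = a.value * CENTIMETERS_PER[a.unit]
  const bInCm = b.value * CENTIMETERS_PER[b.unit]
  // Tolerate floating-point noise from the conversion.
  return Math.abs(aInCm - bInCm) < 1e-9
}
```

Registered through `expect.addEqualityTesters()`, logic like this lets `.toEqual()` treat `new Measurement(1, 'in')` and `new Measurement(2.54, 'cm')` as equal, including when the measurement is nested inside a larger object.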
exercises/03.assertions/04.problem.retryable-assertions/README.mdx

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ const response = await fetch('/api/songs')
 await expect(response.json()).resolves.toEqual(favoriteSongs)
 ```

-> While fetching the list of songs takes time, that eventuality is represented as a Promise that you can `await`. This guaratees that your test will not continue until the data is fetched. Quite the same applies to reading the response body stream.
+> While fetching the list of songs takes time, that eventuality is represented as a Promise that you can `await`. This guarantees that your test will not continue until the data is fetched. Quite the same applies to reading the response body stream.

 But not all systems are designed like that. And even the systems that _are_ designed like that may not expose you the right Promises to await.

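Vitest covers this gap with retryable assertions such as `expect.poll()` and `vi.waitFor()`. The idea underneath can be sketched as a standalone polling helper (an illustration, not Vitest's implementation): re-run a check until it passes or a timeout elapses.

```typescript
interface WaitForOptions {
  timeout?: number
  interval?: number
}

// Re-evaluate `predicate` every `interval` ms until it returns true,
// rejecting once `timeout` ms have passed without success.
async function waitFor(
  predicate: () => boolean,
  { timeout = 1000, interval = 50 }: WaitForOptions = {},
): Promise<void> {
  const deadline = Date.now() + timeout
  while (true) {
    if (predicate()) return
    if (Date.now() > deadline) {
      throw new Error(`Condition not met within ${timeout}ms`)
    }
    // Yield to the event loop before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, interval))
  }
}
```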
exercises/04.performance/01.solution.profiling-slow-tests/README.mdx

Lines changed: 7 additions & 7 deletions

@@ -61,7 +61,7 @@ This information is available each time you run your tests and is not exclusive
 - `environment`, the time it took to set up and tear down your test environment;
 - `prepare`, the time it took for Vitest to prepare the test runner.

-This overview is a fantastic starting point in indentifying which areas of your test run are the slowest. For instance, I can see that Vitest spent most time _running the tests_:
+This overview is a fantastic starting point in identifying which areas of your test run are the slowest. For instance, I can see that Vitest spent most time _running the tests_:

 ```txt nonumber highlight=4
 transform 18ms,

@@ -72,7 +72,7 @@ environment 0ms,
 prepare 32ms
 ```

-> Your test duration summary will likely be different. See which phases took the most time to know where you should direct your further investigation. For example, if the `setup` phase is too slow, it may be because your test setup is too heavy and should be refactored. If `collect` is lagging behind, it may mean that Vitest has trouble scrapping your large monorepo and you should help it locate the test files by providing explicit `include` and `exclude` options in your Vitest configuration.
+> Your test duration summary will likely be different. See which phases took the most time to know where you should direct your further investigation. For example, if the `setup` phase is too slow, it may be because your test setup is too heavy and should be refactored. If `collect` is lagging behind, it may mean that Vitest has trouble scraping your large monorepo and you should help it locate the test files by providing explicit `include` and `exclude` options in your Vitest configuration.

 With this covered, let's move on to the `vitest-profiler` report.

@@ -83,18 +83,18 @@ With this covered, let's move on to the `vitest-profiler` report.
 - **Main thread**, which is a Node.js process that spawned Vitest. This roughly corresponds to the `prepare`, `collect`, `transform`, and `environment` phases from the Vitest's time metrics;
 - **Tests**, which is individual threads/forks that ran your test files. This roughly corresponds to the `tests` time metric.

-These separate profiles allows you to take a peek behind the curtain of your test runtime. You can get an idea of what your testing framework is doing and what your tests are doing, and, hopefully, spot the root cause for that nasty parformance degradation.
+These separate profiles allow you to take a peek behind the curtain of your test runtime. You can get an idea of what your testing framework is doing and what your tests are doing, and, hopefully, spot the root cause for that nasty performance degradation.

 CPU and memory profiles reflect different aspects of your test run:

 - **CPU profile** shows you the CPU consumption. This will generally point you toward code that takes too much time to run;
-- **Memory (or heap) profile** shows you the memory consumption. This is handy to watch for memory leaks and heaps that can also negavtively impact your test performance.
+- **Memory (or heap) profile** shows you the memory consumption. This is handy to watch for memory leaks and heaps that can also negatively impact your test performance.

 Next, I will explore each individual profile in more detail.

 ### Main thread profiles

-One of the firts things the profiler reports is a CPU profile for the main thread:
+One of the first things the profiler reports is a CPU profile for the main thread:

 ```txt nonumber highlight=4
 Test profiling complete! Generated the following profiles:

@@ -115,7 +115,7 @@ Here's how the CPU profile for the main thread looks like:
 > Now, if this looks intimidating, don't worry. Profiles will often contain a big chunk of pointers and stack traces you don't know or understand because they reflect the state of the _entire process_.

-In these profiles, I am interested in spotting _abnormally long execution times_. Luckily, this report is sortred by "Total Time" automatically for me! That being said, I see nothing suspicious in the main thread so I proceed to the other profiles.
+In these profiles, I am interested in spotting _abnormally long execution times_. Luckily, this report is sorted by "Total Time" automatically for me! That being said, I see nothing suspicious in the main thread so I proceed to the other profiles.

 ### Test profiles

@@ -144,5 +144,5 @@ What I can also do is give you a rough idea about approaching issues based on th
 | CPU | Memory |
 | ------- | ------- |
 | Analyze your expensive logic and refactor it where appropriate. | Analyze the problematic logic to see why it leaks memory. |
-| Take advantage of asynchronicity and parallelization. | Fix improper memory management (e.g. rougue child processes, unterminated loops, forgotten timers and intervals, etc). |
+| Take advantage of asynchronicity and parallelization. | Fix improper memory management (e.g. rogue child processes, unterminated loops, forgotten timers and intervals, etc). |
 | Use caching where appropriate. | In your test setup, be cautious about closing test servers or databases. Prefer scoping mocks to individual tests and deleting them completely once the test is done. |

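For the `collect`-phase advice quoted above, narrowing where Vitest looks for test files might look like this (the globs are placeholders for your own project layout):

```typescript
// vitest.config.ts — a sketch; adjust the globs to your repository.
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Only scan known test locations...
    include: ['src/**/*.test.ts'],
    // ...and never crawl heavyweight directories.
    exclude: ['**/node_modules/**', '**/dist/**'],
  },
})
```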
exercises/04.performance/02.solution.concurrency/README.mdx

Lines changed: 7 additions & 7 deletions

@@ -12,11 +12,11 @@ test.concurrent(`${i}`, async () => {})
 By making our test cases concurrent, we can switch from a test waterfall to a flat test execution:

-![A diagram illustrating a test case waterfal without concurrency and a simultaneous test case execution with concurrency enabled](/assets/05-02-with-concurrency.png)
+![A diagram illustrating a test case waterfall without concurrency and a simultaneous test case execution with concurrency enabled](/assets/05-02-with-concurrency.png)

-Now that our test run at the same time, it is absolutely crucial we provision proper _test isolation_. We don't want tests to be stepping on each other's toes and becoming flaky as a result. This often comes down to eliminating or replacing any shared (or global) state in test cases.
+Now that our tests run at the same time, it is absolutely crucial we provision proper _test isolation_. We don't want tests to be stepping on each other's toes and becoming flaky as a result. This often comes down to eliminating or replacing any shared (or global) state in test cases.

-For example, the `expect()` function that you use to make assertions _might contain state_. It is essential we scoped that function to individual test case by accessing it from the test context object:
+For example, the `expect()` function that you use to make assertions _might contain state_. It is essential we scope that function to an individual test case by accessing it from the test context object:

 ```ts diff remove=1 add=3
 test.concurrent(`${i}`, async () => {

@@ -55,7 +55,7 @@ export default defineConfig({
 > 🦉 Bigger doesn't necessarily mean better with concurrency. There is a physical limit to any concurrency dictated by your hardware. If you set a `maxConcurrency` value higher than that limit, concurrent tests will be _queued_ until they have a slot to run.

-By fine-tunning `maxConcurrency`, we are able to improve the test performance even further to the whooping 123ms!
+By fine-tuning `maxConcurrency`, we are able to improve the test performance even further to the whopping 123ms!

 ```bash remove=1 add=2
 Duration 271ms

@@ -66,14 +66,14 @@ By fine-tunning `maxConcurrency`, we are able to improve the test performance ev
 While concurrency may improve performance, it can also make your tests _flaky_. Keep in mind that the main price you pay for concurrency is _writing isolated tests_.

-Here's a few guidelines on how to keep your tests concurrency-friendly:
+Here are a few guidelines on how to keep your tests concurrency-friendly:

-- **Do not rely on _shared state_ of any kind**. Never have multiple test modify the same data (even the `expect` function can become a shared state!). In practice, you achieve this through actions like:
+- **Do not rely on _shared state_ of any kind**. Never have multiple tests modify the same data (even the `expect` function can become a shared state!). In practice, you achieve this through actions like:
   - Striving toward self-contained tests (never have one test rely on the result of another);
   - Keeping the test setup next to the test itself;
   - Binding mocks (e.g. databases or network) to the test.
 - **Isolate side effects**. If your test absolutely must perform a side effect, like writing to a file, guarantee that those side effects are isolated and bound to the test (e.g. create a temporary file accessed only by this particular test).
-- **Abolish hard-coded values**. If two tests try to establish a mock server at the same port, one is destined to fail. Once again, procude test-specific fixtures (e.g. spawn a mock server at port `0` to get a random available port number).
+- **Abolish hard-coded values**. If two tests try to establish a mock server at the same port, one is destined to fail. Once again, produce test-specific fixtures (e.g. spawn a mock server at port `0` to get a random available port number).

 It is worth mentioning that due to these considerations, not all test cases can be flipped to concurrent and call it a day. Concurrency can, however, be a useful factor to stress your tests and highlight the shortcomings in their design. You can then address those shortcomings in planned, incremental refactors to benefit from concurrency (among other things) once your tests are properly isolated.

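The port `0` trick from the guidelines above can be sketched with Node's `net` module. `spawnTestServer` is a hypothetical helper, not from the workshop: the operating system picks a free port, so concurrent tests never fight over a hard-coded one.

```typescript
// Hypothetical sketch: let the OS assign a free port by listening on 0.
import { createServer, type AddressInfo } from 'node:net'

function spawnTestServer(): Promise<{ port: number; close: () => void }> {
  return new Promise((resolve, reject) => {
    const server = createServer()
    server.once('error', reject)
    // Port 0 asks the operating system for any available port.
    server.listen(0, () => {
      const { port } = server.address() as AddressInfo
      resolve({ port, close: () => server.close() })
    })
  })
}
```

Each test that needs a server calls `spawnTestServer()` and tears it down with `close()` when done, keeping the side effect scoped to that one test.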
0 commit comments