
Commit 61efadb

Copilot and mmcky authored
Migrate timing code to quantecon Timer context manager from legacy patterns (#391)
* Initial plan
* Migrate timing code from tic/toc to Timer context manager
* Remove Timer fallback code after quantecon v0.9.0 release
* Remove silent=True from Timer instances in numba lecture
* convert %time to qe.Timer()
* Convert remaining %time and %%time to qe.Timer() context manager
* add quantecon import for jax_intro
* add import
* Convert all %timeit instances to qe.timeit() with runs=3
* tst: new quantecon@copilot/fix-796
* Migrate qe.timeit instances back to qe.Timer() with milliseconds for SciPy
* Revert "tst: new quantecon@copilot/fix-796" (reverts commit 2ae4cbf)

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: mmcky <[email protected]>
1 parent 6d0df81

5 files changed: +103 −91 lines
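In short, the commit replaces the legacy `qe.tic()`/`qe.toc()` pattern and the `%time`/`%%time` IPython magics with the `qe.Timer()` context manager throughout the lectures. A minimal sketch of the before/after pattern, assuming only the `Timer` behavior visible in the diffs below (a context manager, available from quantecon v0.9.0 per the commit message, that accepts an optional message string and exposes the measurement as an `elapsed` attribute); the workload here is a hypothetical stand-in:

```python
import quantecon as qe

# Legacy pattern, removed by this commit:
#   qe.tic()
#   do_work()
#   elapsed = qe.toc()

# New pattern: time the body of a with-block and keep the measurement.
with qe.Timer() as timer:                       # optionally qe.Timer("label")
    total = sum(i * i for i in range(10**6))    # stand-in workload

elapsed = timer.elapsed                         # as used in lectures/numba.md below
```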

lectures/jax_intro.md

Lines changed: 30 additions & 23 deletions
@@ -18,7 +18,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 ```{code-cell} ipython3
 :tags: [hide-output]

-!pip install jax
+!pip install jax quantecon
 ```

 This lecture provides a short introduction to [Google JAX](https://github.com/jax-ml/jax).
@@ -52,6 +52,7 @@ The following import is standard, replacing `import numpy as np`:
 ```{code-cell} ipython3
 import jax
 import jax.numpy as jnp
+import quantecon as qe
 ```

 Now we can use `jnp` in place of `np` for the usual array operations:
@@ -304,7 +305,8 @@ x = jnp.ones(n)
 How long does the function take to execute?

 ```{code-cell} ipython3
-%time f(x).block_until_ready()
+with qe.Timer():
+    f(x).block_until_ready()
 ```

 ```{note}
@@ -318,7 +320,8 @@ allows the Python interpreter to run ahead of numerical computations.
 If we run it a second time it becomes faster again:

 ```{code-cell} ipython3
-%time f(x).block_until_ready()
+with qe.Timer():
+    f(x).block_until_ready()
 ```

 This is because the built in functions like `jnp.cos` are JIT compiled and the
@@ -341,7 +344,8 @@ y = jnp.ones(m)
 ```

 ```{code-cell} ipython3
-%time f(y).block_until_ready()
+with qe.Timer():
+    f(y).block_until_ready()
 ```

 Notice that the execution time increases, because now new versions of
@@ -352,14 +356,16 @@ If we run again, the code is dispatched to the correct compiled version and we
 get faster execution.

 ```{code-cell} ipython3
-%time f(y).block_until_ready()
+with qe.Timer():
+    f(y).block_until_ready()
 ```

 The compiled versions for the previous array size are still available in memory
 too, and the following call is dispatched to the correct compiled code.

 ```{code-cell} ipython3
-%time f(x).block_until_ready()
+with qe.Timer():
+    f(x).block_until_ready()
 ```

 ### Compiling the outer function
@@ -379,7 +385,8 @@ f_jit(x)
 And now let's time it.

 ```{code-cell} ipython3
-%time f_jit(x).block_until_ready()
+with qe.Timer():
+    f_jit(x).block_until_ready()
 ```

 Note the speed gain.
@@ -534,10 +541,10 @@ z_loops = np.empty((n, n))
 ```

 ```{code-cell} ipython3
-%%time
-for i in range(n):
-    for j in range(n):
-        z_loops[i, j] = f(x[i], y[j])
+with qe.Timer():
+    for i in range(n):
+        for j in range(n):
+            z_loops[i, j] = f(x[i], y[j])
 ```

 Even for this very small grid, the run time is extremely slow.
@@ -575,15 +582,15 @@ x_mesh, y_mesh = jnp.meshgrid(x, y)
 Now we get what we want and the execution time is very fast.

 ```{code-cell} ipython3
-%%time
-z_mesh = f(x_mesh, y_mesh).block_until_ready()
+with qe.Timer():
+    z_mesh = f(x_mesh, y_mesh).block_until_ready()
 ```

 Let's run again to eliminate compile time.

 ```{code-cell} ipython3
-%%time
-z_mesh = f(x_mesh, y_mesh).block_until_ready()
+with qe.Timer():
+    z_mesh = f(x_mesh, y_mesh).block_until_ready()
 ```

 Let's confirm that we got the right answer.
@@ -602,8 +609,8 @@ x_mesh, y_mesh = jnp.meshgrid(x, y)
 ```

 ```{code-cell} ipython3
-%%time
-z_mesh = f(x_mesh, y_mesh).block_until_ready()
+with qe.Timer():
+    z_mesh = f(x_mesh, y_mesh).block_until_ready()
 ```

 But there is one problem here: the mesh grids use a lot of memory.
@@ -641,8 +648,8 @@ f_vec = jax.vmap(f_vec_y, in_axes=(0, None))
 With this construction, we can now call the function $f$ on flat (low memory) arrays.

 ```{code-cell} ipython3
-%%time
-z_vmap = f_vec(x, y).block_until_ready()
+with qe.Timer():
+    z_vmap = f_vec(x, y).block_until_ready()
 ```

 The execution time is essentially the same as the mesh operation but we are using much less memory.
@@ -711,15 +718,15 @@ def compute_call_price_jax(β=β,
 Let's run it once to compile it:

 ```{code-cell} ipython3
-%%time
-compute_call_price_jax().block_until_ready()
+with qe.Timer():
+    compute_call_price_jax().block_until_ready()
 ```

 And now let's time it:

 ```{code-cell} ipython3
-%%time
-compute_call_price_jax().block_until_ready()
+with qe.Timer():
+    compute_call_price_jax().block_until_ready()
 ```

 ```{solution-end}
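Every timed JAX call in this file keeps `.block_until_ready()` inside the `with` block. A brief sketch of why, assuming only what the lecture text in this diff states (JAX dispatches work asynchronously, so the Python interpreter can run ahead of the numerical computation); the array size is illustrative:

```python
import jax.numpy as jnp
import quantecon as qe

x = jnp.ones(10_000_000)

# Without blocking, the timer may mostly measure dispatch, not computation:
# jnp.cos(x) can return before the numerical work has finished.
with qe.Timer():
    y = jnp.cos(x)

# Blocking inside the timed region measures the full computation.
with qe.Timer():
    jnp.cos(x).block_until_ready()
```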

lectures/numba.md

Lines changed: 25 additions & 24 deletions
@@ -41,6 +41,8 @@ import quantecon as qe
 import matplotlib.pyplot as plt
 ```

+
+
 ## Overview

 In an {doc}`earlier lecture <need_for_speed>` we learned about vectorization, which is one method to improve speed and efficiency in numerical work.
@@ -133,17 +135,17 @@ Let's time and compare identical function calls across these two versions, start
 ```{code-cell} ipython3
 n = 10_000_000

-qe.tic()
-qm(0.1, int(n))
-time1 = qe.toc()
+with qe.Timer() as timer1:
+    qm(0.1, int(n))
+time1 = timer1.elapsed
 ```

 Now let's try qm_numba

 ```{code-cell} ipython3
-qe.tic()
-qm_numba(0.1, int(n))
-time2 = qe.toc()
+with qe.Timer() as timer2:
+    qm_numba(0.1, int(n))
+time2 = timer2.elapsed
 ```

 This is already a very large speed gain.
@@ -153,9 +155,9 @@ In fact, the next time and all subsequent times it runs even faster as the funct
 (qm_numba_result)=

 ```{code-cell} ipython3
-qe.tic()
-qm_numba(0.1, int(n))
-time3 = qe.toc()
+with qe.Timer() as timer3:
+    qm_numba(0.1, int(n))
+time3 = timer3.elapsed
 ```

 ```{code-cell} ipython3
@@ -225,15 +227,13 @@ This is equivalent to adding `qm = jit(qm)` after the function definition.
 The following now uses the jitted version:

 ```{code-cell} ipython3
-%%time
-
-qm(0.1, 100_000)
+with qe.Timer():
+    qm(0.1, 100_000)
 ```

 ```{code-cell} ipython3
-%%time
-
-qm(0.1, 100_000)
+with qe.Timer():
+    qm(0.1, 100_000)
 ```

 Numba also provides several arguments for decorators to accelerate computation and cache functions -- see [here](https://numba.readthedocs.io/en/stable/user/performance-tips.html).
@@ -289,7 +289,8 @@ We can fix this error easily in this case by compiling `mean`.
 def mean(data):
     return np.mean(data)

-%time bootstrap(data, mean, n_resamples)
+with qe.Timer():
+    bootstrap(data, mean, n_resamples)
 ```

 ## Compiling Classes
@@ -534,11 +535,13 @@ def calculate_pi(n=1_000_000):
 Now let's see how fast it runs:

 ```{code-cell} ipython3
-%time calculate_pi()
+with qe.Timer():
+    calculate_pi()
 ```

 ```{code-cell} ipython3
-%time calculate_pi()
+with qe.Timer():
+    calculate_pi()
 ```

 If we switch off JIT compilation by removing `@njit`, the code takes around
@@ -639,9 +642,8 @@ This is (approximately) the right output.
 Now let's time it:

 ```{code-cell} ipython3
-qe.tic()
-compute_series(n)
-qe.toc()
+with qe.Timer():
+    compute_series(n)
 ```

 Next let's implement a Numba version, which is easy
@@ -660,9 +662,8 @@ print(np.mean(x == 0))
 Let's see the time

 ```{code-cell} ipython3
-qe.tic()
-compute_series_numba(n)
-qe.toc()
+with qe.Timer():
+    compute_series_numba(n)
 ```

 This is a nice speed improvement for one line of code!
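Because `Timer` exposes the measurement as `timer.elapsed` (see the `time1`/`time2`/`time3` changes above), the lecture can still compare runtimes after dropping `qe.toc()`. A self-contained sketch of that pattern; the quadratic-map function below is a hypothetical stand-in for the lecture's `qm`, not code from this diff:

```python
import numpy as np
import quantecon as qe
from numba import njit

def qm(x0, n):
    # Loop-heavy quadratic map: the kind of code Numba accelerates well.
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        x[t + 1] = 4 * x[t] * (1 - x[t])
    return x

qm_numba = njit(qm)
qm_numba(0.1, 10)            # warm-up call so compilation time is not measured

with qe.Timer() as timer1:
    qm(0.1, 10_000_000)
with qe.Timer() as timer2:
    qm_numba(0.1, 10_000_000)

print(f"speed gain: {timer1.elapsed / timer2.elapsed:.1f}x")
```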

lectures/numpy.md

Lines changed: 29 additions & 33 deletions
@@ -63,6 +63,8 @@ from mpl_toolkits.mplot3d.axes3d import Axes3D
 from matplotlib import cm
 ```

+
+
 (numpy_array)=
 ## NumPy Arrays

@@ -1190,21 +1192,19 @@ n = 1_000_000
 ```

 ```{code-cell} python3
-%%time
-
-y = 0 # Will accumulate and store sum
-for i in range(n):
-    x = random.uniform(0, 1)
-    y += x**2
+with qe.Timer():
+    y = 0 # Will accumulate and store sum
+    for i in range(n):
+        x = random.uniform(0, 1)
+        y += x**2
 ```

 The following vectorized code achieves the same thing.

 ```{code-cell} ipython
-%%time
-
-x = np.random.uniform(0, 1, n)
-y = np.sum(x**2)
+with qe.Timer():
+    x = np.random.uniform(0, 1, n)
+    y = np.sum(x**2)
 ```

 As you can see, the second code block runs much faster. Why?
@@ -1285,24 +1285,22 @@ grid = np.linspace(-3, 3, 1000)
 Here's a non-vectorized version that uses Python loops.

 ```{code-cell} python3
-%%time
-
-m = -np.inf
+with qe.Timer():
+    m = -np.inf

-for x in grid:
-    for y in grid:
-        z = f(x, y)
-        if z > m:
-            m = z
+    for x in grid:
+        for y in grid:
+            z = f(x, y)
+            if z > m:
+                m = z
 ```

 And here's a vectorized version

 ```{code-cell} python3
-%%time
-
-x, y = np.meshgrid(grid, grid)
-np.max(f(x, y))
+with qe.Timer():
+    x, y = np.meshgrid(grid, grid)
+    np.max(f(x, y))
 ```

 In the vectorized version, all the looping takes place in compiled code.
16361634
x = np.random.randn(1000, 100, 100)
16371635
y = np.random.randn(100)
16381636
1639-
qe.tic()
1640-
B = x / y
1641-
qe.toc()
1637+
with qe.Timer("Broadcasting operation"):
1638+
B = x / y
16421639
```
16431640

16441641
Here is the output
@@ -1696,14 +1693,13 @@ np.random.seed(123)
 x = np.random.randn(1000, 100, 100)
 y = np.random.randn(100)

-qe.tic()
-D = np.empty_like(x)
-d1, d2, d3 = x.shape
-for i in range(d1):
-    for j in range(d2):
-        for k in range(d3):
-            D[i, j, k] = x[i, j, k] / y[k]
-qe.toc()
+with qe.Timer("For loop operation"):
+    D = np.empty_like(x)
+    d1, d2, d3 = x.shape
+    for i in range(d1):
+        for j in range(d2):
+            for k in range(d3):
+                D[i, j, k] = x[i, j, k] / y[k]
 ```

 Note that the `for` loop takes much longer than the broadcasting operation.
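The last two hunks also show `Timer`'s optional message argument, which labels each printed timing so consecutive measurements stay distinguishable. A minimal sketch combining both timed blocks from this diff, with a final consistency check (the `allclose` assertion is an addition here, not part of the lecture):

```python
import numpy as np
import quantecon as qe

np.random.seed(123)
x = np.random.randn(1000, 100, 100)
y = np.random.randn(100)

with qe.Timer("Broadcasting operation"):
    B = x / y                  # y is broadcast across the trailing axis of x

with qe.Timer("For loop operation"):
    D = np.empty_like(x)
    d1, d2, d3 = x.shape
    for i in range(d1):
        for j in range(d2):
            for k in range(d3):
                D[i, j, k] = x[i, j, k] / y[k]

assert np.allclose(B, D)       # both approaches compute the same result
```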
