Commit 16a97e2

add further exercises
1 parent acd3b63 commit 16a97e2

File tree

5 files changed: +291 additions, -231 deletions


exercises/bayesflow-diffusion.ipynb

Lines changed: 206 additions & 174 deletions
Large diffs are not rendered by default.

exercises/bayesflow-normal.ipynb

Lines changed: 57 additions & 49 deletions
Large diffs are not rendered by default.

exercises/flow-matching-datasaurus.ipynb

Lines changed: 9 additions & 0 deletions
@@ -432,6 +432,15 @@
 "f=plt.xlim(-5, 5)\n",
 "f=plt.xlim(-5, 5)"
 ]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"# Further exercise\n",
+"\n",
+"The accuracy of the approximation is generally driven by how expressive the velocity network is and how well it is trained, but also by the accuracy of the integrator. Play around with the network complexity of the flow matching model, the simulation budget, and the number of steps taken by the ODE solver to see how they affect your ability to generate the datasaurus distribution."
+]
 }
 ],
 "metadata": {
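The new exercise above asks how the ODE solver's step count affects the generated samples. A minimal sketch of that effect, using a hypothetical closed-form field v(x, t) = x as a stand-in for the trained velocity network (its exact flow over t in [0, 1] is x_1 = e * x_0, so the Euler integration error is measurable):

```python
import numpy as np

def velocity(x, t):
    # Hypothetical stand-in for the learned velocity network.
    return x

def integrate(x0, n_steps):
    # Plain Euler integration of dx/dt = v(x, t) from t = 0 to t = 1,
    # as a fixed-step ODE solver would do when sampling from the flow.
    x, dt = np.asarray(x0, dtype=float), 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

x0 = np.ones(1)
for n in (4, 16, 64, 256):
    # Error against the exact flow shrinks as the step count grows.
    print(n, abs(integrate(x0, n)[0] - np.e))
```

With a learned, imperfect velocity field the same trade-off appears: too few steps and the integration error dominates, regardless of how well the network was trained.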

exercises/flow-matching-swiss-roll.ipynb

Lines changed: 18 additions & 7 deletions
@@ -263,7 +263,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -279,14 +279,15 @@
 "\n",
 " def __getitem__(self, index):\n",
 " data, _ = make_swiss_roll(self.batch_size, noise=1)\n",
-" data = data[:,[0, -1]]\n",
-" condition = np.random.choice([1, -1], size=(batch_size, 2))\n",
-" data = condition * data\n",
+" data=data[:,[0, -1]]\n",
+" \n",
+" condition=np.random.choice([1, -1], size=(batch_size, 2))\n",
+" data=condition * data\n",
 "\n",
-" base= make_ring(data.shape[0])\n",
+" base=make_ring(data.shape[0])\n",
 " \n",
-" t = np.random.uniform(low=0, high=1, size=data.shape[0])\n",
-" t = np.repeat(t[:,np.newaxis], repeats=data.shape[1], axis=1)\n",
+" t=np.random.uniform(low=0, high=1, size=data.shape[0])\n",
+" t=np.repeat(t[:,np.newaxis], repeats=data.shape[1], axis=1)\n",
 "\n",
 " target = data - base\n",
 " return dict(x_0=base, x_1=data, t=t, condition=condition), target"
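The `__getitem__` above builds the flow-matching training triples: base samples x_0 from a ring, data samples x_1 from the (conditioned) swiss roll, a per-sample time t, and the regression target x_1 - x_0. A standalone numpy sketch of that pairing, with a Gaussian blob standing in for the swiss-roll slice and a unit circle standing in for the notebook's `make_ring` helper, so it runs outside the notebook:

```python
import numpy as np

def make_ring(n):
    # Stand-in for the notebook's base-distribution helper: a unit circle.
    theta = np.random.uniform(0, 2 * np.pi, size=n)
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

n = 256
# The notebook draws x_1 via sklearn's make_swiss_roll; a Gaussian blob
# stands in here so the sketch needs only numpy.
data = np.random.normal(size=(n, 2))

base = make_ring(n)                            # x_0 ~ base distribution
t = np.random.uniform(low=0, high=1, size=n)   # one time per sample
t = np.repeat(t[:, np.newaxis], repeats=data.shape[1], axis=1)

target = data - base                           # regression target x_1 - x_0
x_t = (1 - t) * base + t * data                # point on the straight path
```

Because the interpolation path is a straight line, x_t plus the remaining fraction (1 - t) of the target velocity recovers x_1 exactly, which is what the trained network exploits at sampling time.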
@@ -438,6 +439,16 @@
 " axs[i,j].set_title(\"x scale: {}, y scale {}\".format(x_scale, y_scale))\n",
 "fig.tight_layout()"
 ]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Further exercises\n",
+"\n",
+"1. Here we only scaled the data by 2 values (-1 or 1) in two directions ($x$ and $y$). However, nothing stops us from using different values. Try replacing the line `condition=np.random.choice([1, -1], size=(batch_size, 2))` with some other transformation (for example, generate values from a uniform distribution between -1 and 1). What do you think the network will learn? Try it for yourself.\n",
+"2. The swiss roll distribution generates 3D data. In this exercise, we only used 2 of the axes and neglected the third one. Try to change the model so that you can actually reproduce the swiss roll in 3D (note: you will also need to change the base distribution to be 3D)."
+]
 }
 ],
 "metadata": {
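Exercise 1 above suggests swapping the discrete sign flips for a continuous condition. A minimal sketch of that one-line change, assuming only numpy (the surrounding Dataset class from the diff is left out):

```python
import numpy as np

batch_size = 128
rng = np.random.default_rng(seed=0)

# Original condition from the diff: each axis independently flipped or kept,
# producing the four mirror images of the roll.
condition_discrete = rng.choice([1, -1], size=(batch_size, 2))

# Suggested replacement: continuous scales drawn uniformly from [-1, 1].
# The network must now learn a whole family of squeezed and flipped rolls,
# interpolating between scales it never saw exactly during training.
condition_uniform = rng.uniform(low=-1, high=1, size=(batch_size, 2))

data = np.ones((batch_size, 2))       # placeholder for the swiss-roll slice
scaled = condition_uniform * data     # per-sample, per-axis scaling
```

Since the condition is fed to the network alongside x_t, the continuous version turns the model into a conditional generator over a two-parameter family of distributions rather than a four-way mixture.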

exercises/normalizing-flow.ipynb

Lines changed: 1 addition & 1 deletion
@@ -704,7 +704,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.10"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,

0 commit comments
