|
263 | 263 | }, |
264 | 264 | { |
265 | 265 | "cell_type": "code", |
266 | | - "execution_count": 7, |
| 266 | + "execution_count": null, |
267 | 267 | "metadata": {}, |
268 | 268 | "outputs": [], |
269 | 269 | "source": [ |
|
279 | 279 | "\n", |
280 | 280 | " def __getitem__(self, index):\n", |
281 | 281 | " data, _ = make_swiss_roll(self.batch_size, noise=1)\n", |
282 | | - " data = data[:,[0, -1]]\n", |
283 | | - " condition = np.random.choice([1, -1], size=(batch_size, 2))\n", |
284 | | - " data = condition * data\n", |
| 282 | + " data=data[:,[0, -1]]\n", |
| 283 | + " \n", |
| 284 | + "        condition=np.random.choice([1, -1], size=(self.batch_size, 2))\n", |
| 285 | + " data=condition * data\n", |
285 | 286 | "\n", |
286 | | - " base= make_ring(data.shape[0])\n", |
| 287 | + " base=make_ring(data.shape[0])\n", |
287 | 288 | " \n", |
288 | | - " t = np.random.uniform(low=0, high=1, size=data.shape[0])\n", |
289 | | - " t = np.repeat(t[:,np.newaxis], repeats=data.shape[1], axis=1)\n", |
| 289 | + " t=np.random.uniform(low=0, high=1, size=data.shape[0])\n", |
| 290 | + " t=np.repeat(t[:,np.newaxis], repeats=data.shape[1], axis=1)\n", |
290 | 291 | "\n", |
291 | 292 | " target = data - base\n", |
292 | 293 | " return dict(x_0=base, x_1=data, t=t, condition=condition), target" |
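The `__getitem__` above can be sketched as a standalone function. This is a minimal sketch, not the notebook's code: `make_swiss_roll_2d` and `make_ring` below are stand-ins reimplemented in plain NumPy (the notebook uses sklearn's `make_swiss_roll` and its own `make_ring`), and only the behaviour visible in the diff is reproduced.

```python
import numpy as np

def make_swiss_roll_2d(n, noise=1.0, rng=None):
    # Stand-in for sklearn's make_swiss_roll restricted to the
    # (first, last) columns, i.e. the 2-D slice data[:, [0, -1]].
    rng = np.random.default_rng(rng)
    t = 1.5 * np.pi * (1 + 2 * rng.uniform(size=n))
    data = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)
    return data + noise * rng.normal(size=data.shape)

def make_ring(n, radius=10.0, rng=None):
    # Hypothetical base distribution: points on a circle.
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)

def sample_batch(batch_size, rng=None):
    rng = np.random.default_rng(rng)
    data = make_swiss_roll_2d(batch_size, noise=1.0, rng=rng)
    # Condition: independent random sign flips along x and y.
    condition = rng.choice([1, -1], size=(batch_size, 2))
    data = condition * data
    base = make_ring(batch_size, rng=rng)
    # One interpolation time per sample, broadcast over both coordinates.
    t = rng.uniform(0.0, 1.0, size=(batch_size, 1)).repeat(2, axis=1)
    # Regression target: straight-line displacement from base to data.
    target = data - base
    return dict(x_0=base, x_1=data, t=t, condition=condition), target
```

The sign-flip condition maps each swiss-roll sample into one of the four quadrant reflections, which is what the network is later conditioned on.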
|
438 | 439 | "        axs[i,j].set_title(\"x scale: {}, y scale: {}\".format(x_scale, y_scale))\n", |
439 | 440 | "fig.tight_layout()" |
440 | 441 | ] |
| 442 | + }, |
| 443 | + { |
| 444 | + "cell_type": "markdown", |
| 445 | + "metadata": {}, |
| 446 | + "source": [ |
| 447 | + "## Further exercises\n", |
| 448 | + "\n", |
| 449 | + "1. Here we only scaled the data by 2 values (-1 or 1) in two directions ($x$ and $y$). However, nothing stops us from using other values. Try replacing the line `condition=np.random.choice([1, -1], size=(batch_size, 2))` with some other transformation (for example, draw values from a uniform distribution between -1 and 1). What do you think the network will learn? Try it for yourself.\n", |
| 450 | + "2. The swiss roll distribution generates 3D data. In this exercise, we used only two of the axes and neglected the third. Try changing the model so that it reproduces the swiss roll in 3D (note: you will also need to make the base distribution 3D)." |
| 451 | + ] |
441 | 452 | } |
442 | 453 | ], |
443 | 454 | "metadata": { |
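For the first exercise above, the suggested change might be sketched like this (a toy example; the random `data` here merely stands in for the 2-D swiss-roll slice, and all names are illustrative rather than taken from the notebook):

```python
import numpy as np

batch_size = 128
rng = np.random.default_rng(0)

# Toy 2-D points standing in for the swiss-roll slice data[:, [0, -1]].
data = rng.normal(size=(batch_size, 2))

# Exercise 1: continuous per-axis scaling drawn uniformly from [-1, 1],
# replacing the discrete sign flips np.random.choice([1, -1], ...).
condition = rng.uniform(-1.0, 1.0, size=(batch_size, 2))
scaled = condition * data
```

With a continuous condition the network has to learn a whole family of scalings rather than four discrete reflections, so it should interpolate between conditions it has seen during training.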
|