Compare commits
115 Commits
| SHA1 |
|---|
| e9fa659fee |
| 5db96e8a3d |
| 6ac37b5548 |
| 9acd27c8b3 |
| a481425005 |
| 41e60e8597 |
| ec748ecc18 |
| 467a36bed8 |
| 351b050e7e |
| 5f19405c51 |
| ad5339a972 |
| 1d8a9af439 |
| 55a96289fe |
| d72143ff42 |
| a1154e2262 |
| b419f52791 |
| 8c0bec30ff |
| 87cfb0a846 |
| 0fcfa4fe36 |
| 8e6fd1bf81 |
| 7a2a7cd2e8 |
| f46ae30bf2 |
| 24206afa50 |
| 77ec015669 |
| 1c40b09e48 |
| 972e035974 |
| d7a792785f |
| 80ef390e45 |
| f70dd6e60a |
| a7d840f204 |
| 50d1c5e255 |
| 9a3517629a |
| af77c76435 |
| 0515940ace |
| 5d578df07b |
| d174c9f34c |
| dbb3d4b666 |
| 39ad6413ce |
| 3e51d89714 |
| 5680e5d7f7 |
| 0a5a97f55d |
| 0dc94ead03 |
| c7e7e731b3 |
| 06f197c787 |
| ff29fc34e8 |
| 2653116c47 |
| f50de74496 |
| aa5d89adf3 |
| 18e827842c |
| 22b6b18660 |
| ed060b6b08 |
| df0132505b |
| 67fb0f5990 |
| ecd01f2992 |
| 70878c94be |
| 832a3c9104 |
| 68cd38bf2f |
| 964d26c684 |
| c1d56880a6 |
| 0cd321fb96 |
| 043969fb79 |
| 4052d29b0a |
| 19ee3afd90 |
| 91f99c7398 |
| 96fe95cea1 |
| fc35dc4168 |
| 80497e298d |
| 8c658ac321 |
| 4a08fa44bf |
| c5755efe68 |
| 55b425b41b |
| e000397470 |
| b4d0b49776 |
| f5c1b2af2e |
| a32260c39f |
| 6b2288665f |
| 1cf990bfe7 |
| ae967f8e7e |
| 690d338406 |
| 5b845f7d7b |
| d4de5f324d |
| 958b9d543e |
| 604260ebdf |
| fc7503e068 |
| 2f25abacdb |
| 14b7523a86 |
| d8d7eee7b1 |
| 9cbd56dff3 |
| 9f3099b97f |
| 516c7db675 |
| 4a74a9d0be |
| e1544de817 |
| 7d120525ad |
| e30926679d |
| 70748bde97 |
| 374db64950 |
| 80894cea1f |
| 45791c3951 |
| d7635a0fc8 |
| d93bc911ee |
| 7c06d80d1a |
| bc0f29c751 |
| 35150d7ec7 |
| 7e9954855c |
| f65e5c7fbb |
| fb742f12ee |
| bfc32d5cfa |
| 2fd5a0cd12 |
| 7f4cf1acb7 |
| b6bc481fec |
| 513fb0eba6 |
| 5bf3cf2474 |
| 6ae8b99899 |
| fdc1efce08 |
| 4199a20fd2 |
@@ -67,7 +67,7 @@
 "source": [
 "# Define a linear function with just one input, x\n",
 "def linear_function_1D(x,beta,omega):\n",
-"  # TODO -- replace the code lin below with formula for 1D linear equation\n",
+"  # TODO -- replace the code line below with formula for 1D linear equation\n",
 "  y = x\n",
 "\n",
 "  return y"
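The TODO in this hunk asks for the 1D linear equation; a minimal sketch of the intended completion, assuming the book's convention that `beta` is the y-intercept and `omega` the slope:

```python
def linear_function_1D(x, beta, omega):
    # 1D linear equation: y = beta + omega * x
    y = beta + omega * x
    return y

print(linear_function_1D(2.0, 1.0, 3.0))  # 7.0
```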
@@ -332,9 +332,7 @@
 "2. What is $\\mbox{exp}[1]$?\n",
 "3. What is $\\mbox{exp}[-\\infty]$?\n",
 "4. What is $\\mbox{exp}[+\\infty]$?\n",
-"5. A function is convex if we can draw a straight line between any two points on the\n",
-"function, and this line always lies above the function. Similarly, a function is concave\n",
-"if a straight line between any two points always lies below the function. Is the exponential function convex or concave or neither?\n"
+"5. A function is convex if we can draw a straight line between any two points on the function, and this line always lies above the function. Similarly, a function is concave if a straight line between any two points always lies below the function. Is the exponential function convex or concave or neither?\n"
 ]
 },
 {
@@ -343,7 +341,7 @@
 "id": "R6A4e5IxIWCu"
 },
 "source": [
-"Now let's consider the logarithm function $y=\\log[x]$. Throughout the book we always use natural (base $e$) logarithms. The log funcction maps non-negative numbers $[0,\\infty]$ to real numbers $[-\\infty,\\infty]$. It is the inverse of the exponential function. So when we compute $\\log[x]$ we are really asking \"What is the number $y$ so that $e^y=x$?\""
+"Now let's consider the logarithm function $y=\\log[x]$. Throughout the book we always use natural (base $e$) logarithms. The log function maps non-negative numbers $[0,\\infty]$ to real numbers $[-\\infty,\\infty]$. It is the inverse of the exponential function. So when we compute $\\log[x]$ we are really asking \"What is the number $y$ so that $e^y=x$?\""
 ]
 },
 {
@@ -384,15 +382,6 @@
 "6. What is $\\mbox{log}[-1]$?\n",
 "7. Is the logarithm function concave or convex?\n"
 ]
-},
-{
-"cell_type": "code",
-"source": [],
-"metadata": {
-"id": "XG0CKLiPJI7I"
-},
-"execution_count": null,
-"outputs": []
 }
 ],
 "metadata": {
File diff suppressed because one or more lines are too long
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyPFqKOqd6BjlymOawCRkmfn",
+"authorship_tag": "ABX9TyNk2dAhwwRxGpfVSC3b2Owv",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -182,7 +182,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"Now we'll extend this model to have two outputs $y_1$ and $y_2$, each of which can be visualized with a separate heatmap. You will now have sets of parameters $\\phi_{10}, \\phi_{11},\\phi_{12}$ and $\\phi_{2}, \\phi_{21},\\phi_{22}$ that correspond to each of these outputs."
+"Now we'll extend this model to have two outputs $y_1$ and $y_2$, each of which can be visualized with a separate heatmap. You will now have sets of parameters $\\phi_{10}, \\phi_{11},\\phi_{12}$ and $\\phi_{20}, \\phi_{21},\\phi_{22}$ that correspond to each of these outputs."
 ],
 "metadata": {
 "id": "Xl6LcrUyM7Lh"
@@ -238,7 +238,7 @@
 "def shallow_2_2_3(x1,x2, activation_fn, phi_10,phi_11,phi_12,phi_13, phi_20,phi_21,phi_22,phi_23, theta_10, theta_11,\\\n",
 "  theta_12, theta_20, theta_21, theta_22, theta_30, theta_31, theta_32):\n",
 "\n",
-"  # TODO -- write this function -- replace the dummy code blow\n",
+"  # TODO -- write this function -- replace the dummy code below\n",
 "  pre_1 = np.zeros_like(x1)\n",
 "  pre_2 = np.zeros_like(x1)\n",
 "  pre_3 = np.zeros_like(x1)\n",
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyMhLSGU8+odPS/CoW5PwKna",
+"authorship_tag": "ABX9TyNioITtfAcfxEfM3UOfQyb9",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -62,7 +62,7 @@
 "source": [
 "The number of regions $N$ created by a shallow neural network with $D_i$ inputs and $D$ hidden units is given by Zaslavsky's formula:\n",
 "\n",
-"\\begin{equation}N = \\sum_{j=1}^{D_{i}}\\binom{D}{j}=\\sum_{j=1}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} <br>\n",
+"\\begin{equation}N = \\sum_{j=0}^{D_{i}}\\binom{D}{j}=\\sum_{j=0}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} <br>\n",
 "\n"
 ],
 "metadata": {
@@ -79,7 +79,7 @@
 "source": [
 "def number_regions(Di, D):\n",
 "  # TODO -- implement Zaslavsky's formula\n",
-"  # You will need to use math.factorial() https://www.geeksforgeeks.org/factorial-in-python/\n",
+"  # You can use math.comb() https://www.w3schools.com/python/ref_math_comb.asp\n",
 "  # Replace this code\n",
 "  N = 1;\n",
 "\n",
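The `number_regions` cell above is left as a TODO; a minimal sketch of one possible completion using `math.comb`, the approach the updated comment points to:

```python
import math

def number_regions(Di, D):
    # Zaslavsky's formula: N = sum_{j=0}^{Di} C(D, j)
    # math.comb(D, j) returns 0 when j > D, so the sum also handles D < Di,
    # where it collapses to 2^D as the notebook discusses
    return sum(math.comb(D, j) for j in range(Di + 1))

# Three hidden units (hyperplanes) in a 2D input space: 1 + 3 + 3 = 7 regions
print(number_regions(2, 3))  # 7
```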
@@ -115,7 +115,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"This works but there is a complication. If the number of hidden units $D$ is fewer than the number of hidden dimensions $D_i$ , the formula will fail. When this is the case, there are just $2^D$ regions (see figure 3.10 to understand why).\n",
+"This works but there is a complication. If the number of hidden units $D$ is fewer than the number of input dimensions $D_i$ , the formula will fail. When this is the case, there are just $2^D$ regions (see figure 3.10 to understand why).\n",
 "\n",
 "Let's demonstrate this:"
 ],
@@ -142,7 +142,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Let's do the calculation properly when D<Di\n",
+"# Let's do the calculation properly when D<Di (see figure 3.10 from the book)\n",
 "D = 8; Di = 10\n",
 "N = np.power(2,D)\n",
 "# We can equivalently do this by calling number_regions with the D twice\n",
@@ -191,7 +191,7 @@
 "cell_type": "code",
 "source": [
 "# Now let's compute and plot the number of regions as a function of the number of parameters as in figure 3.9b\n",
-"# First let's write a function that computes the number of parameters as a function of the input dimension and number of hidden layers (assuming just one output)\n",
+"# First let's write a function that computes the number of parameters as a function of the input dimension and number of hidden units (assuming just one output)\n",
 "\n",
 "def number_parameters(D_i, D):\n",
 "  # TODO -- replace this code with the proper calculation\n",
@@ -210,7 +210,7 @@
 "source": [
 "# Now let's test the code\n",
 "N = number_parameters(10, 8)\n",
-"print(f\"Di=10, D=8, Number of parameters = {int(N)}, True value = 90\")"
+"print(f\"Di=10, D=8, Number of parameters = {int(N)}, True value = 97\")"
 ],
 "metadata": {
 "id": "VbhDmZ1gwkQj"
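The test above expects 97 parameters for $D_i=10$, $D=8$; a minimal sketch of a `number_parameters` completion consistent with that count (each of the $D$ hidden units has $D_i$ weights and a bias, and the single output has $D$ weights and a bias):

```python
def number_parameters(D_i, D):
    # D hidden units: D_i weights + 1 bias each; output unit: D weights + 1 bias
    return D * (D_i + 1) + D + 1

print(number_parameters(10, 8))  # 97
```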
@@ -233,7 +233,7 @@
 "  for c_hidden in range(1, 200):\n",
 "    # Iterate over different ranges of number hidden variables for different input sizes\n",
 "    D = int(c_hidden * 500 / D_i)\n",
-"    params[c_dim, c_hidden] = D_i * D +1 + D +1\n",
+"    params[c_dim, c_hidden] = D_i * D +D + D +1\n",
 "    regions[c_dim, c_hidden] = number_regions(np.min([D_i,D]), D)\n",
 "\n",
 "fig, ax = plt.subplots()\n",
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyOu5BvK3aFb7ZEQKG5vfOZ1",
+"authorship_tag": "ABX9TyPmra+JD+dm2M3gCqx3bMak",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -185,7 +185,7 @@
 "The ReLU isn't the only kind of activation function. For a long time, people used sigmoid functions. A logistic sigmoid function is defined by the equation\n",
 "\n",
 "\\begin{equation}\n",
-"f[h] = \\frac{1}{1+\\exp{[-10 z ]}}\n",
+"f[z] = \\frac{1}{1+\\exp{[-10 z ]}}\n",
 "\\end{equation}\n",
 "\n",
 "(Note that the factor of 10 is not standard -- but it allow us to plot on the same axes as the ReLU examples)"
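The corrected equation in this hunk is easy to check numerically; a small sketch (the function name `logistic_sigmoid` is ours, not the notebook's, and it keeps the notebook's non-standard factor of 10):

```python
import numpy as np

def logistic_sigmoid(z):
    # f[z] = 1 / (1 + exp(-10 z)); the factor of 10 steepens the curve
    # so it can be plotted on the same axes as the ReLU examples
    return 1.0 / (1.0 + np.exp(-10.0 * z))

print(logistic_sigmoid(0.0))  # 0.5
```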
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyMbJGN6f2+yKzzsVep/wi5U",
+"authorship_tag": "ABX9TyPyaqr0yJlxfIcTpfLSHDrP",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -274,7 +274,7 @@
 "cell_type": "code",
 "source": [
 "# define sizes\n",
-"D_i=4; D_1=5; D_2=2; D_3=1; D_o=1\n",
+"D_i=4; D_1=5; D_2=2; D_3=4; D_o=1\n",
 "# We'll choose the inputs and parameters of this network randomly using np.random.normal\n",
 "# For example, we'll set the input using\n",
 "n_data = 4;\n",
File diff suppressed because one or more lines are too long
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyNkBMOVt5gO7Awn9JMn4N8Z",
+"authorship_tag": "ABX9TyPX88BLalmJTle9GSAZMJcz",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -307,7 +307,7 @@
 "# Return the negative log likelihood of the data under the model\n",
 "def compute_negative_log_likelihood(y_train, mu, sigma):\n",
 "  # TODO -- compute the likelihood of the data -- don't use the likelihood function above -- compute the negative sum of the log probabilities\n",
-"  # Bottom line of equation 5.3 in the notes\n",
+"  # Equation 5.4 in the notes\n",
 "  # You will need np.sum(), np.log()\n",
 "  # Replace the line below\n",
 "  nll = 0\n",
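The `compute_negative_log_likelihood` cell above is left as a TODO; a minimal sketch of one possible completion, assuming the Gaussian likelihood this chapter works with (the closed-form log density is our assumption, not code from the notebook):

```python
import numpy as np

def compute_negative_log_likelihood(y_train, mu, sigma):
    # Negative sum of log normal densities: -sum_i log N(y_i | mu, sigma^2)
    log_probs = -0.5 * np.log(2 * np.pi * sigma ** 2) \
                - (y_train - mu) ** 2 / (2 * sigma ** 2)
    nll = -np.sum(log_probs)
    return nll
```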
Notebooks/Chap06/6_1_Line_Search.ipynb — new file, 192 lines
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOfxeJ15PMkIi4geDTRCz3c",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_1_Line_Search.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 6.1: Line search**\n",
"\n",
"This notebook investigates how to find the minimum of a 1D function using line search as described in Figure 6.10.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xhmIOLiZELV_"
},
"outputs": [],
"source": [
"# import libraries\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Let's create a simple 1D function\n",
"def loss_function(phi):\n",
"  return 1- 0.5 * np.exp(-(phi-0.65)*(phi-0.65)/0.1) - 0.45 *np.exp(-(phi-0.35)*(phi-0.35)/0.02)\n",
"\n",
"def draw_function(loss_function,a=None, b=None, c=None, d=None):\n",
"  # Plot the function\n",
"  phi_plot = np.arange(0,1,0.01);\n",
"  fig,ax = plt.subplots()\n",
"  ax.plot(phi_plot,loss_function(phi_plot),'r-')\n",
"  ax.set_xlim(0,1); ax.set_ylim(0,1)\n",
"  ax.set_xlabel('$\phi$'); ax.set_ylabel('$L[\phi]$')\n",
"  if a is not None and b is not None and c is not None and d is not None:\n",
"    plt.axvspan(a, d, facecolor='k', alpha=0.2)\n",
"    ax.plot([a,a],[0,1],'b-')\n",
"    ax.plot([b,b],[0,1],'b-')\n",
"    ax.plot([c,c],[0,1],'b-')\n",
"    ax.plot([d,d],[0,1],'b-')\n",
"  plt.show()\n"
],
"metadata": {
"id": "qFRe9POHF2le"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draw this function\n",
"draw_function(loss_function)"
],
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's create a line search procedure to find the minimum in the range 0,1"
],
"metadata": {
"id": "QU5mdGvpTtEG"
}
},
{
"cell_type": "code",
"source": [
"def line_search(loss_function, thresh=.0001, max_iter = 10, draw_flag = False):\n",
"\n",
"  # Initialize four points along the range we are going to search\n",
"  a = 0\n",
"  b = 0.33\n",
"  c = 0.66\n",
"  d = 1.0\n",
"  n_iter = 0;\n",
"\n",
"  # While we haven't found the minimum closely enough\n",
"  while np.abs(b-c) > thresh and n_iter < max_iter:\n",
"    # Increment iteration counter (just to prevent an infinite loop)\n",
"    n_iter = n_iter+1\n",
"\n",
"    # Calculate all four points\n",
"    lossa = loss_function(a)\n",
"    lossb = loss_function(b)\n",
"    lossc = loss_function(c)\n",
"    lossd = loss_function(d)\n",
"\n",
"    if draw_flag:\n",
"      draw_function(loss_function, a,b,c,d)\n",
"\n",
"    print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
"\n",
"    # Rule #1 If point A is less than points B, C, and D then halve values of B, C, and D\n",
"    # i.e. bring them closer to the original point\n",
"    # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
"    if (0):\n",
"      continue;\n",
"\n",
"\n",
"    # Rule #2 If point b is less than point c then\n",
"    # then point d becomes point c, and\n",
"    # point b becomes 1/3 between a and new d\n",
"    # point c becomes 2/3 between a and new d\n",
"    # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
"    if (0):\n",
"      continue;\n",
"\n",
"    # Rule #3 If point c is less than point b then\n",
"    # then point a becomes point b, and\n",
"    # point b becomes 1/3 between new a and d\n",
"    # point c becomes 2/3 between new a and d\n",
"    # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
"    if(0):\n",
"      continue\n",
"\n",
"  # TODO -- FINAL SOLUTION IS AVERAGE OF B and C\n",
"  # REPLACE THIS LINE\n",
"  soln = 1\n",
"\n",
"\n",
"  return soln"
],
"metadata": {
"id": "K-NTHpAAHlCl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"soln = line_search(loss_function, draw_flag=True)\n",
"print('Soln = %3.3f, loss = %3.3f'%(soln,loss_function(soln)))"
],
"metadata": {
"id": "YVq6rmaWRD2M"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "tOLd0gtdRLLS"
},
"execution_count": null,
"outputs": []
}
]
}
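The three bracketing rules in the notebook are left as TODOs; a minimal standalone sketch of one possible completion (the quadratic `loss_function` here is a hypothetical stand-in for the notebook's double-Gaussian loss, used only so the example is self-contained):

```python
import numpy as np

def loss_function(phi):
    # hypothetical 1D loss for illustration; minimum at phi = 0.35
    return (phi - 0.35) ** 2

def line_search(loss_function, thresh=.0001, max_iter=50):
    # Four bracketing points on the search range [0, 1]
    a, b, c, d = 0.0, 0.33, 0.66, 1.0
    n_iter = 0
    while np.abs(b - c) > thresh and n_iter < max_iter:
        n_iter += 1
        lossa, lossb, lossc, lossd = (loss_function(p) for p in (a, b, c, d))
        # Rule 1: minimum is near a -- halve b, c, d to bring them closer to a
        if lossa < lossb and lossa < lossc and lossa < lossd:
            b, c, d = b / 2, c / 2, d / 2
            continue
        # Rule 2: minimum lies left of c -- d becomes c
        if lossb < lossc:
            d = c
        # Rule 3: minimum lies right of b -- a becomes b
        else:
            a = b
        # Re-space b and c at 1/3 and 2/3 of the new bracket
        b = a + (d - a) / 3
        c = a + 2 * (d - a) / 3
    # Final solution is the average of b and c
    return (b + c) / 2

print('%.3f' % line_search(loss_function))  # close to 0.35
```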
Notebooks/Chap06/6_2_Gradient_Descent.ipynb — new file, 421 lines
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyM/FIXDTd6tZYs6WRzK00hB",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_2_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 6.2 Gradient descent**\n",
"\n",
"This notebook recreates the gradient descent algorithm as shown in figure 6.1.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xhmIOLiZELV_"
},
"outputs": [],
"source": [
"# import libraries\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib import cm\n",
"from matplotlib.colors import ListedColormap"
]
},
{
"cell_type": "code",
"source": [
"# Let's create our training data 12 pairs {x_i, y_i}\n",
"# We'll try to fit the straight line model to these data\n",
"data = np.array([[0.03,0.19,0.34,0.46,0.78,0.81,1.08,1.18,1.39,1.60,1.65,1.90],\n",
"                 [0.67,0.85,1.05,1.00,1.40,1.50,1.30,1.54,1.55,1.68,1.73,1.60]])"
],
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's define our model -- just a straight line with intercept phi[0] and slope phi[1]\n",
"def model(phi,x):\n",
"  y_pred = phi[0]+phi[1] * x\n",
"  return y_pred"
],
"metadata": {
"id": "WQUERmb2erAe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draw model\n",
"def draw_model(data,model,phi,title=None):\n",
"  x_model = np.arange(0,2,0.01)\n",
"  y_model = model(phi,x_model)\n",
"\n",
"  fig, ax = plt.subplots()\n",
"  ax.plot(data[0,:],data[1,:],'bo')\n",
"  ax.plot(x_model,y_model,'m-')\n",
"  ax.set_xlim([0,2]);ax.set_ylim([0,2])\n",
"  ax.set_xlabel('x'); ax.set_ylabel('y')\n",
"  ax.set_aspect('equal')\n",
"  if title is not None:\n",
"    ax.set_title(title)\n",
"  plt.show()"
],
"metadata": {
"id": "qFRe9POHF2le"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Initialize the parameters to some arbitrary values and draw the model\n",
"phi = np.zeros((2,1))\n",
"phi[0] = 0.6 # Intercept\n",
"phi[1] = -0.2 # Slope\n",
"draw_model(data,model,phi, \"Initial parameters\")\n"
],
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the sum of squares loss for the training data"
],
"metadata": {
"id": "QU5mdGvpTtEG"
}
},
{
"cell_type": "code",
"source": [
"def compute_loss(data_x, data_y, model, phi):\n",
"  # TODO -- Write this function -- replace the line below\n",
"  # First make model predictions from data x\n",
"  # Then compute the squared difference between the predictions and true y values\n",
"  # Then sum them all and return\n",
"  pred_y = 0\n",
"  loss = 0\n",
"\n",
"  return loss"
],
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's just test that we got that right"
],
"metadata": {
"id": "eB5DQvU5hYNx"
}
},
{
"cell_type": "code",
"source": [
"loss = compute_loss(data[0,:],data[1,:],model,np.array([[0.6],[-0.2]]))\n",
"print('Your loss = %3.3f, Correct loss = %3.3f'%(loss, 12.367))"
],
"metadata": {
"id": "Ty05UtEEg9tc"
},
"execution_count": null,
"outputs": []
},
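The `compute_loss` cell above is left as a TODO; a minimal self-contained sketch of one possible completion, reproducing the notebook's straight-line model and training data so the 12.367 check can be verified:

```python
import numpy as np

def model(phi, x):
    # straight line: intercept phi[0], slope phi[1]
    return phi[0] + phi[1] * x

def compute_loss(data_x, data_y, model, phi):
    # Sum of squared differences between predictions and true y values
    pred_y = model(phi, data_x)
    return np.sum((pred_y - data_y) ** 2)

data = np.array([[0.03,0.19,0.34,0.46,0.78,0.81,1.08,1.18,1.39,1.60,1.65,1.90],
                 [0.67,0.85,1.05,1.00,1.40,1.50,1.30,1.54,1.55,1.68,1.73,1.60]])
loss = compute_loss(data[0,:], data[1,:], model, np.array([[0.6],[-0.2]]))
print('%3.3f' % loss)  # 12.367, matching the notebook's check
```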
{
"cell_type": "markdown",
"source": [
"Now let's plot the whole loss function"
],
"metadata": {
"id": "F3trnavPiHpH"
}
},
{
"cell_type": "code",
"source": [
"def draw_loss_function(compute_loss, data, model, phi_iters = None):\n",
"  # Define pretty colormap\n",
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"  my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"  r = np.floor(my_colormap_vals_dec/(256*256))\n",
"  g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
"  b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"  my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
"\n",
"  # Make grid of intercept/slope values to plot\n",
"  intercepts_mesh, slopes_mesh = np.meshgrid(np.arange(0.0,2.0,0.02), np.arange(-1.0,1.0,0.002))\n",
"  loss_mesh = np.zeros_like(slopes_mesh)\n",
"  # Compute loss for every set of parameters\n",
"  for idslope, slope in np.ndenumerate(slopes_mesh):\n",
"     loss_mesh[idslope] = compute_loss(data[0,:], data[1,:], model, np.array([[intercepts_mesh[idslope]], [slope]]))\n",
"\n",
"  fig,ax = plt.subplots()\n",
"  fig.set_size_inches(8,8)\n",
"  ax.contourf(intercepts_mesh,slopes_mesh,loss_mesh,256,cmap=my_colormap)\n",
"  ax.contour(intercepts_mesh,slopes_mesh,loss_mesh,40,colors=['#80808080'])\n",
"  if phi_iters is not None:\n",
"    ax.plot(phi_iters[0,:], phi_iters[1,:],'go-')\n",
"  ax.set_ylim([1,-1])\n",
"  ax.set_xlabel('Intercept $\\phi_{0}$'); ax.set_ylabel('Slope, $\\phi_{1}$')\n",
"  plt.show()"
],
"metadata": {
"id": "K-NTHpAAHlCl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"draw_loss_function(compute_loss, data, model)"
],
"metadata": {
"id": "l8HbvIupnTME"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the gradient vector for a given set of parameters:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial L}{\\partial \\boldsymbol\\phi} = \\begin{bmatrix}\\frac{\\partial L}{\\partial \\phi_0} \\\\\\frac{\\partial L}{\\partial \\phi_1} \\end{bmatrix}.\n",
"\\end{equation}"
],
"metadata": {
"id": "s9Duf05WqqSC"
}
},
{
"cell_type": "code",
"source": [
"# These are in the lecture slides and notes, but worth trying to calculate them yourself to\n",
"# check that you get them right. Write out the expression for the sum of squares loss and take the\n",
"# derivative with respect to phi0 and phi1\n",
"def compute_gradient(data_x, data_y, phi):\n",
"    # TODO -- write this function, replacing the lines below\n",
"    dl_dphi0 = 0.0\n",
"    dl_dphi1 = 0.0\n",
"\n",
"    # Return the gradient\n",
"    return np.array([[dl_dphi0],[dl_dphi1]])"
],
"metadata": {
"id": "UpswmkL2qwBT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We can check we got this right using a trick known as **finite differences**. If we evaluate the function and then change one of the parameters by a very small amount and normalize by that amount, we get an approximation to the gradient, so:\n",
"\n",
"\\begin{eqnarray}\n",
"\\frac{\\partial L}{\\partial \\phi_{0}}&\\approx & \\frac{L[\\phi_0+\\delta, \\phi_1]-L[\\phi_0, \\phi_1]}{\\delta}\\\\\n",
"\\frac{\\partial L}{\\partial \\phi_{1}}&\\approx & \\frac{L[\\phi_0, \\phi_1+\\delta]-L[\\phi_0, \\phi_1]}{\\delta}\n",
"\\end{eqnarray}\n",
"\n",
"We can't do this when there are many parameters; for a million parameters, we would have to evaluate the loss function two million times, and usually computing the gradients directly is much more efficient."
],
"metadata": {
"id": "RS1nEcYVuEAM"
}
},
{
"cell_type": "code",
"source": [
"# Compute the gradient using your function\n",
"gradient = compute_gradient(data[0,:],data[1,:], phi)\n",
"print(\"Your gradients: (%3.3f,%3.3f)\"%(gradient[0],gradient[1]))\n",
"# Approximate the gradients with finite differences\n",
"delta = 0.0001\n",
"dl_dphi0_est = (compute_loss(data[0,:],data[1,:],model,phi+np.array([[delta],[0]])) - \\\n",
"                    compute_loss(data[0,:],data[1,:],model,phi))/delta\n",
"dl_dphi1_est = (compute_loss(data[0,:],data[1,:],model,phi+np.array([[0],[delta]])) - \\\n",
"                    compute_loss(data[0,:],data[1,:],model,phi))/delta\n",
"print(\"Approx gradients: (%3.3f,%3.3f)\"%(dl_dphi0_est,dl_dphi1_est))\n",
"# There might be small differences in the last significant figure because the finite-difference estimate is only an approximation\n"
],
"metadata": {
"id": "QuwAHN7yt-gi"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now we are ready to perform gradient descent. We'll need the line search routine from part I, which I've reproduced here, along with the helper function loss_function_1D that converts the 2D problem into a 1D problem along the search direction."
],
"metadata": {
"id": "5EIjMM9Fw2eT"
}
},
{
"cell_type": "code",
"source": [
"def loss_function_1D(dist_prop, data, model, phi_start, gradient):\n",
"  # Return the loss after moving this far\n",
"  return compute_loss(data[0,:], data[1,:], model, phi_start+ gradient * dist_prop)\n",
"\n",
"def line_search(data, model, phi, gradient, thresh=.00001, max_dist = 0.1, max_iter = 15, verbose=False):\n",
"    # Initialize four points along the range we are going to search\n",
"    a = 0\n",
"    b = 0.33 * max_dist\n",
"    c = 0.66 * max_dist\n",
"    d = 1.0 * max_dist\n",
"    n_iter =0;\n",
"\n",
"    # While we haven't found the minimum closely enough\n",
"    while np.abs(b-c) > thresh and n_iter < max_iter:\n",
"        # Increment iteration counter (just to prevent an infinite loop)\n",
"        n_iter = n_iter+1\n",
"        # Calculate all four points\n",
"        lossa = loss_function_1D(a, data, model, phi,gradient)\n",
"        lossb = loss_function_1D(b, data, model, phi,gradient)\n",
"        lossc = loss_function_1D(c, data, model, phi,gradient)\n",
"        lossd = loss_function_1D(d, data, model, phi,gradient)\n",
"\n",
"        if verbose:\n",
"          print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
"          print('a %f, b%f, c%f, d%f'%(lossa,lossb,lossc,lossd))\n",
"\n",
"        # Rule #1 If point A is less than points B, C, and D then halve points B,C, and D\n",
"        if np.argmin((lossa,lossb,lossc,lossd))==0:\n",
"          b = b/2\n",
"          c = c/2\n",
"          d = d/2\n",
"          continue;\n",
"\n",
"        # Rule #2 If point b is less than point c then\n",
"        #                     then point d becomes point c, and\n",
"        #                     point b becomes 1/3 between a and new d\n",
"        #                     point c becomes 2/3 between a and new d\n",
"        if lossb < lossc:\n",
"          d = c\n",
"          b = a+ (d-a)/3\n",
"          c = a+ 2*(d-a)/3\n",
"          continue\n",
"\n",
"        # Rule #3 If point c is less than point b then\n",
"        #                     then point a becomes point b, and\n",
"        #                     point b becomes 1/3 between new a and d\n",
"        #                     point c becomes 2/3 between new a and d\n",
"        a = b\n",
"        b = a+ (d-a)/3\n",
"        c = a+ 2*(d-a)/3\n",
"\n",
"    # Return average of two middle points\n",
"    return (b+c)/2.0"
],
"metadata": {
"id": "XrJ2gQjfw1XP"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def gradient_descent_step(phi, data, model):\n",
"  # TODO -- update Phi with the gradient descent step (equation 6.3)\n",
"  # 1. Compute the gradient\n",
"  # 2. Find the best step size alpha (use negative gradient as going downhill)\n",
"  # 3. Update the parameters phi\n",
"\n",
"  return phi"
],
"metadata": {
"id": "YVq6rmaWRD2M"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Initialize the parameters and draw the model\n",
"n_steps = 10\n",
"phi_all = np.zeros((2,n_steps+1))\n",
"phi_all[0,0] = 1.6\n",
"phi_all[1,0] = -0.5\n",
"\n",
"# Measure loss and draw initial model\n",
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
"\n",
"# Repeatedly take gradient descent steps\n",
"for c_step in range (n_steps):\n",
"  # Do gradient descent step\n",
"  phi_all[:,c_step+1:c_step+2] = gradient_descent_step(phi_all[:,c_step:c_step+1],data, model)\n",
"  # Measure loss and draw model\n",
"  loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
"  draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
"\n",
"# Draw the trajectory on the loss function\n",
"draw_loss_function(compute_loss, data, model,phi_all)\n"
],
"metadata": {
"id": "tOLd0gtdRLLS"
},
"execution_count": null,
"outputs": []
}
]
}
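The finite-difference check used in both notebooks to validate a hand-derived gradient can be sketched in isolation. This is a minimal sketch, not the notebooks' own code: `finite_difference_gradient`, `f`, and the quadratic toy loss are illustrative stand-ins, chosen only because the analytic gradient (2·phi) is known in closed form.

```python
import numpy as np

def finite_difference_gradient(f, phi, delta=1e-4):
    # Forward differences: perturb one parameter at a time
    # and measure the resulting change in the loss
    grad = np.zeros_like(phi, dtype=float)
    for i in range(phi.size):
        phi_plus = phi.astype(float).copy()
        phi_plus.flat[i] += delta
        grad.flat[i] = (f(phi_plus) - f(phi)) / delta
    return grad

# Toy loss sum(phi^2) with known analytic gradient 2 * phi
f = lambda p: float(np.sum(p * p))
phi = np.array([[1.0], [-2.0]])
approx = finite_difference_gradient(f, phi)
# Analytic gradient is [[2.0], [-4.0]]; the estimate agrees to roughly delta
```

As the notebooks note, the forward-difference estimate only agrees with the analytic gradient to about the size of `delta`, so small discrepancies in the last significant figures are expected.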
586
Notebooks/Chap06/6_3_Stochastic_Gradient_Descent.ipynb
Normal file
@@ -0,0 +1,586 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNk5FN4qlw3pk8BwDVWw1jN",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_3_Stochastic_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 6.3: Stochastic gradient descent**\n",
"\n",
"This notebook investigates gradient descent and stochastic gradient descent and recreates figure 6.5 from the book\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n",
"\n",
"\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xhmIOLiZELV_"
},
"outputs": [],
"source": [
"# import libraries\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib import cm\n",
"from matplotlib.colors import ListedColormap"
]
},
{
"cell_type": "code",
"source": [
"# Let's create our training data 30 pairs {x_i, y_i}\n",
"# We'll try to fit the Gabor model to these data\n",
"data = np.array([[-1.920e+00,-1.422e+01,1.490e+00,-1.940e+00,-2.389e+00,-5.090e+00,\n",
"                  -8.861e+00,3.578e+00,-6.010e+00,-6.995e+00,3.634e+00,8.743e-01,\n",
"                  -1.096e+01,4.073e-01,-9.467e+00,8.560e+00,1.062e+01,-1.729e-01,\n",
"                  1.040e+01,-1.261e+01,1.574e-01,-1.304e+01,-2.156e+00,-1.210e+01,\n",
"                  -1.119e+01,2.902e+00,-8.220e+00,-1.179e+01,-8.391e+00,-4.505e+00],\n",
"                 [-1.051e+00,-2.482e-02,8.896e-01,-4.943e-01,-9.371e-01,4.306e-01,\n",
"                  9.577e-03,-7.944e-02 ,1.624e-01,-2.682e-01,-3.129e-01,8.303e-01,\n",
"                  -2.365e-02,5.098e-01,-2.777e-01,3.367e-01,1.927e-01,-2.222e-01,\n",
"                  6.352e-02,6.888e-03,3.224e-02,1.091e-02,-5.706e-01,-5.258e-02,\n",
"                  -3.666e-02,1.709e-01,-4.805e-02,2.008e-01,-1.904e-01,5.952e-01]])"
],
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's define our model\n",
"def model(phi,x):\n",
"  sin_component = np.sin(phi[0] + 0.06 * phi[1] * x)\n",
"  gauss_component = np.exp(-(phi[0] + 0.06 * phi[1] * x) * (phi[0] + 0.06 * phi[1] * x) / 32)\n",
"  y_pred= sin_component * gauss_component\n",
"  return y_pred"
],
"metadata": {
"id": "WQUERmb2erAe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draw model\n",
"def draw_model(data,model,phi,title=None):\n",
"  x_model = np.arange(-15,15,0.1)\n",
"  y_model = model(phi,x_model)\n",
"\n",
"  fig, ax = plt.subplots()\n",
"  ax.plot(data[0,:],data[1,:],'bo')\n",
"  ax.plot(x_model,y_model,'m-')\n",
"  ax.set_xlim([-15,15]);ax.set_ylim([-1,1])\n",
"  ax.set_xlabel('x'); ax.set_ylabel('y')\n",
"  if title is not None:\n",
"    ax.set_title(title)\n",
"  plt.show()"
],
"metadata": {
"id": "qFRe9POHF2le"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Initialize the parameters and draw the model\n",
"phi = np.zeros((2,1))\n",
"phi[0] = -5     # Horizontal offset\n",
"phi[1] = 25     # Frequency\n",
"draw_model(data,model,phi, \"Initial parameters\")\n"
],
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the sum of squares loss for the training data"
],
"metadata": {
"id": "QU5mdGvpTtEG"
}
},
{
"cell_type": "code",
"source": [
"def compute_loss(data_x, data_y, model, phi):\n",
"  # TODO -- Write this function -- replace the line below\n",
"  # TODO -- First make model predictions from data x\n",
"  # TODO -- Then compute the squared difference between the predictions and true y values\n",
"  # TODO -- Then sum them all and return\n",
"  loss = 0\n",
"\n",
"  return loss"
],
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's just test that we got that right"
],
"metadata": {
"id": "eB5DQvU5hYNx"
}
},
{
"cell_type": "code",
"source": [
"loss = compute_loss(data[0,:],data[1,:],model,np.array([[0.6],[-0.2]]))\n",
"print('Your loss = %3.3f, Correct loss = %3.3f'%(loss, 16.419))"
],
"metadata": {
"id": "Ty05UtEEg9tc"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's plot the whole loss function"
],
"metadata": {
"id": "F3trnavPiHpH"
}
},
{
"cell_type": "code",
"source": [
"def draw_loss_function(compute_loss, data, model, phi_iters = None):\n",
"  # Define pretty colormap\n",
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"  my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"  r = np.floor(my_colormap_vals_dec/(256*256))\n",
"  g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
"  b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"  my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
"\n",
"  # Make grid of intercept/slope values to plot\n",
"  offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
"  loss_mesh = np.zeros_like(freqs_mesh)\n",
"  # Compute loss for every set of parameters\n",
"  for idslope, slope in np.ndenumerate(freqs_mesh):\n",
"     loss_mesh[idslope] = compute_loss(data[0,:], data[1,:], model, np.array([[offsets_mesh[idslope]], [slope]]))\n",
"\n",
"  fig,ax = plt.subplots()\n",
"  fig.set_size_inches(8,8)\n",
"  ax.contourf(offsets_mesh,freqs_mesh,loss_mesh,256,cmap=my_colormap)\n",
"  ax.contour(offsets_mesh,freqs_mesh,loss_mesh,20,colors=['#80808080'])\n",
"  if phi_iters is not None:\n",
"    ax.plot(phi_iters[0,:], phi_iters[1,:],'go-')\n",
"  ax.set_ylim([2.5,22.5])\n",
"  ax.set_xlabel('Offset $\\phi_{0}$'); ax.set_ylabel('Frequency, $\\phi_{1}$')\n",
"  plt.show()"
],
"metadata": {
"id": "K-NTHpAAHlCl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"draw_loss_function(compute_loss, data, model)"
],
"metadata": {
"id": "l8HbvIupnTME"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the gradient vector for a given set of parameters:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial L}{\\partial \\boldsymbol\\phi} = \\begin{bmatrix}\\frac{\\partial L}{\\partial \\phi_0} \\\\\\frac{\\partial L}{\\partial \\phi_1} \\end{bmatrix}.\n",
"\\end{equation}"
],
"metadata": {
"id": "s9Duf05WqqSC"
}
},
{
"cell_type": "code",
"source": [
"# These came from writing out the expression for the sum of squares loss and taking the\n",
"# derivative with respect to phi0 and phi1. It was a lot of hassle to get it right!\n",
"def gabor_deriv_phi0(data_x,data_y,phi0, phi1):\n",
"  x = 0.06 * phi1 * data_x + phi0\n",
"  y = data_y\n",
"  cos_component = np.cos(x)\n",
"  sin_component = np.sin(x)\n",
"  gauss_component = np.exp(-0.5 * x *x / 16)\n",
"  deriv = cos_component * gauss_component - sin_component * gauss_component * x / 16\n",
"  deriv = 2* deriv * (sin_component * gauss_component - y)\n",
"  return np.sum(deriv)\n",
"\n",
"def gabor_deriv_phi1(data_x, data_y,phi0, phi1):\n",
"  x = 0.06 * phi1 * data_x + phi0\n",
"  y = data_y\n",
"  cos_component = np.cos(x)\n",
"  sin_component = np.sin(x)\n",
"  gauss_component = np.exp(-0.5 * x *x / 16)\n",
"  deriv = 0.06 * data_x * cos_component * gauss_component - 0.06 * data_x*sin_component * gauss_component * x / 16\n",
"  deriv = 2*deriv * (sin_component * gauss_component - y)\n",
"  return np.sum(deriv)\n",
"\n",
"def compute_gradient(data_x, data_y, phi):\n",
"    dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])\n",
"    dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])\n",
"    # Return the gradient\n",
"    return np.array([[dl_dphi0],[dl_dphi1]])"
],
"metadata": {
"id": "UpswmkL2qwBT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We can check we got this right using a trick known as **finite differences**. If we evaluate the function and then change one of the parameters by a very small amount and normalize by that amount, we get an approximation to the gradient, so:\n",
"\n",
"\\begin{eqnarray}\n",
"\\frac{\\partial L}{\\partial \\phi_{0}}&\\approx & \\frac{L[\\phi_0+\\delta, \\phi_1]-L[\\phi_0, \\phi_1]}{\\delta}\\\\\n",
"\\frac{\\partial L}{\\partial \\phi_{1}}&\\approx & \\frac{L[\\phi_0, \\phi_1+\\delta]-L[\\phi_0, \\phi_1]}{\\delta}\n",
"\\end{eqnarray}\n",
"\n",
"We can't do this when there are many parameters; for a million parameters, we would have to evaluate the loss function two million times, and usually computing the gradients directly is much more efficient."
],
"metadata": {
"id": "RS1nEcYVuEAM"
}
},
{
"cell_type": "code",
"source": [
"# Compute the gradient using your function\n",
"gradient = compute_gradient(data[0,:],data[1,:], phi)\n",
"print(\"Your gradients: (%3.3f,%3.3f)\"%(gradient[0],gradient[1]))\n",
"# Approximate the gradients with finite differences\n",
"delta = 0.0001\n",
"dl_dphi0_est = (compute_loss(data[0,:],data[1,:],model,phi+np.array([[delta],[0]])) - \\\n",
"                    compute_loss(data[0,:],data[1,:],model,phi))/delta\n",
"dl_dphi1_est = (compute_loss(data[0,:],data[1,:],model,phi+np.array([[0],[delta]])) - \\\n",
"                    compute_loss(data[0,:],data[1,:],model,phi))/delta\n",
"print(\"Approx gradients: (%3.3f,%3.3f)\"%(dl_dphi0_est,dl_dphi1_est))\n"
],
"metadata": {
"id": "QuwAHN7yt-gi"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now we are ready to perform gradient descent. We'll need the line search routine from Notebook 6.1, which I've reproduced here, along with the helper function loss_function_1D that converts the 2D problem into a 1D problem along the search direction."
],
"metadata": {
"id": "5EIjMM9Fw2eT"
}
},
{
"cell_type": "code",
"source": [
"def loss_function_1D(dist_prop, data, model, phi_start, gradient):\n",
"  # Return the loss after moving this far\n",
"  return compute_loss(data[0,:], data[1,:], model, phi_start+ gradient * dist_prop)\n",
"\n",
"def line_search(data, model, phi, gradient, thresh=.00001, max_dist = 0.1, max_iter = 15, verbose=False):\n",
"    # Initialize four points along the range we are going to search\n",
"    a = 0\n",
"    b = 0.33 * max_dist\n",
"    c = 0.66 * max_dist\n",
"    d = 1.0 * max_dist\n",
"    n_iter =0;\n",
"\n",
"    # While we haven't found the minimum closely enough\n",
"    while np.abs(b-c) > thresh and n_iter < max_iter:\n",
"        # Increment iteration counter (just to prevent an infinite loop)\n",
"        n_iter = n_iter+1\n",
"        # Calculate all four points\n",
"        lossa = loss_function_1D(a, data, model, phi,gradient)\n",
"        lossb = loss_function_1D(b, data, model, phi,gradient)\n",
"        lossc = loss_function_1D(c, data, model, phi,gradient)\n",
"        lossd = loss_function_1D(d, data, model, phi,gradient)\n",
"\n",
"        if verbose:\n",
"          print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
"          print('a %f, b%f, c%f, d%f'%(lossa,lossb,lossc,lossd))\n",
"\n",
"        # Rule #1 If point A is less than points B, C, and D then halve points B,C, and D\n",
"        if np.argmin((lossa,lossb,lossc,lossd))==0:\n",
"          b = b/2\n",
"          c = c/2\n",
"          d = d/2\n",
"          continue;\n",
"\n",
"        # Rule #2 If point b is less than point c then\n",
"        #                     then point d becomes point c, and\n",
"        #                     point b becomes 1/3 between a and new d\n",
"        #                     point c becomes 2/3 between a and new d\n",
"        if lossb < lossc:\n",
"          d = c\n",
"          b = a+ (d-a)/3\n",
"          c = a+ 2*(d-a)/3\n",
"          continue\n",
"\n",
"        # Rule #3 If point c is less than point b then\n",
|
||||||
|
" # then point a becomes point b, and\n",
|
||||||
|
" # point b becomes 1/3 between new a and d\n",
|
||||||
|
" # point c becomes 2/3 between new a and d\n",
|
||||||
|
" a = b\n",
|
||||||
|
" b = a+ (d-a)/3\n",
|
||||||
|
" c = a+ 2*(d-a)/3\n",
|
||||||
|
"\n",
|
||||||
|
" # Return average of two middle points\n",
|
||||||
|
" return (b+c)/2.0"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "XrJ2gQjfw1XP"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def gradient_descent_step(phi, data, model):\n",
|
||||||
|
" # Step 1: Compute the gradient\n",
|
||||||
|
" gradient = compute_gradient(data[0,:],data[1,:], phi)\n",
|
||||||
|
" # Step 2: Update the parameters -- note we want to search in the negative (downhill direction)\n",
|
||||||
|
" alpha = line_search(data, model, phi, gradient*-1, max_dist = 2.0)\n",
|
||||||
|
" phi = phi - alpha * gradient\n",
|
||||||
|
" return phi"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YVq6rmaWRD2M"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"n_steps = 21\n",
|
||||||
|
"phi_all = np.zeros((2,n_steps+1))\n",
|
||||||
|
"phi_all[0,0] = -1.5\n",
|
||||||
|
"phi_all[1,0] = 8.5\n",
|
||||||
|
"\n",
|
||||||
|
"# Measure loss and draw initial model\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
|
||||||
|
"\n",
|
||||||
|
"for c_step in range (n_steps):\n",
|
||||||
|
" # Do gradient descent step\n",
|
||||||
|
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step(phi_all[:,c_step:c_step+1],data, model)\n",
|
||||||
|
" # Measure loss and draw model every 4th step\n",
|
||||||
|
" if c_step % 4 == 0:\n",
|
||||||
|
" loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
|
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
|
"\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model,phi_all)\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "tOLd0gtdRLLS"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO Experiment with starting the optimization in the previous cell in different places\n",
|
||||||
|
"# and show that it heads to a local minimum if we don't start it in the right valley"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Oi8ZlH0ptLqA"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def gradient_descent_step_fixed_learning_rate(phi, data, alpha):\n",
|
||||||
|
" # TODO -- fill in this routine so that we take a fixed size step of size alpha without using line search\n",
|
||||||
|
"\n",
|
||||||
|
" return phi"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4l-ueLk-oAxV"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
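One way the fixed-learning-rate TODO above could be completed (a sketch, not the book's reference solution; `compute_gradient` here is a hypothetical quadratic stand-in for the notebook's Gabor gradient so the snippet runs stand-alone):

```python
import numpy as np

def compute_gradient(data_x, data_y, phi):
    # Hypothetical stand-in for the notebook's Gabor gradient:
    # gradient of a quadratic loss with its minimum at phi = [1, 2]
    return 2.0 * (phi - np.array([[1.0], [2.0]]))

def gradient_descent_step_fixed_learning_rate(phi, data, alpha):
    # Compute the gradient at the current parameters ...
    gradient = compute_gradient(data[0, :], data[1, :], phi)
    # ... and take a fixed-size step of size alpha downhill (no line search)
    phi = phi - alpha * gradient
    return phi

data = np.zeros((2, 5))            # unused by the toy gradient
phi = np.array([[-1.5], [8.5]])
for _ in range(100):
    phi = gradient_descent_step_fixed_learning_rate(phi, data, alpha=0.2)
```

With the quadratic stand-in the iterate contracts toward the minimum by a factor of 0.6 per step; in the notebook the same routine is driven by the Gabor-model `compute_gradient` instead.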
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"n_steps = 21\n",
|
||||||
|
"phi_all = np.zeros((2,n_steps+1))\n",
|
||||||
|
"phi_all[0,0] = -1.5\n",
|
||||||
|
"phi_all[1,0] = 8.5\n",
|
||||||
|
"\n",
|
||||||
|
"# Measure loss and draw initial model\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
|
||||||
|
"\n",
|
||||||
|
"for c_step in range (n_steps):\n",
|
||||||
|
" # Do gradient descent step\n",
|
||||||
|
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step_fixed_learning_rate(phi_all[:,c_step:c_step+1],data, alpha =0.2)\n",
|
||||||
|
" # Measure loss and draw model every 4th step\n",
|
||||||
|
" if c_step % 4 == 0:\n",
|
||||||
|
" loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
|
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
|
"\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model,phi_all)\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "oi9MX_GRpM41"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO Experiment with the learning rate, alpha.\n",
|
||||||
|
"# What happens if you set it too large?\n",
|
||||||
|
"# What happens if you set it too small?"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "In6sQ5YCpMqn"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def stochastic_gradient_descent_step(phi, data, alpha, batch_size):\n",
|
||||||
|
" # TODO -- fill in this routine so that we take a fixed size step of size alpha but only using a subset (batch) of the data\n",
|
||||||
|
" # at each step\n",
|
||||||
|
" # You can use the function np.random.permutation to generate a random permutation of the n_data = data.shape[1] indices\n",
|
||||||
|
" # and then just choose the first n=batch_size of these indices. Then compute the gradient update\n",
|
||||||
|
" # from just the data with these indices. More properly, you should sample with replacement, but this will do for now.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return phi"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "VKTC9-1Gpm3N"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
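A possible completion of the stochastic step above (a sketch; the linear-model `compute_gradient` below is a hypothetical stand-in for the notebook's Gabor gradient so the snippet is self-contained):

```python
import numpy as np

def compute_gradient(data_x, data_y, phi):
    # Hypothetical stand-in: gradient of the least-squares loss for the
    # linear model y = phi0 + phi1 * x
    resid = phi[0] + phi[1] * data_x - data_y
    return np.array([[2.0 * np.sum(resid)],
                     [2.0 * np.sum(resid * data_x)]])

def stochastic_gradient_descent_step(phi, data, alpha, batch_size):
    # Random permutation of the n_data indices; keep the first batch_size
    batch_index = np.random.permutation(data.shape[1])[0:batch_size]
    # Gradient from the batch only, then a fixed-size downhill step
    gradient = compute_gradient(data[0, batch_index], data[1, batch_index], phi)
    return phi - alpha * gradient

np.random.seed(1)
x = np.linspace(-1.0, 1.0, 30)
data = np.vstack((x, 2.0 + 3.0 * x))   # noiseless line y = 2 + 3x
phi = np.zeros((2, 1))
for _ in range(500):
    phi = stochastic_gradient_descent_step(phi, data, alpha=0.01, batch_size=5)
```

Because the toy data are noiseless, the true parameters are a fixed point of every batch update, so the iterates settle there; with noisy data the path would instead wander around the minimum unless the learning rate is decayed.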
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the random number generator so you always get same numbers (disable if you don't want this)\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"n_steps = 41\n",
|
||||||
|
"phi_all = np.zeros((2,n_steps+1))\n",
|
||||||
|
"phi_all[0,0] = 3.5\n",
|
||||||
|
"phi_all[1,0] = 6.5\n",
|
||||||
|
"\n",
|
||||||
|
"# Measure loss and draw initial model\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
|
||||||
|
"\n",
|
||||||
|
"for c_step in range (n_steps):\n",
|
||||||
|
" # Do gradient descent step\n",
|
||||||
|
" phi_all[:,c_step+1:c_step+2] = stochastic_gradient_descent_step(phi_all[:,c_step:c_step+1],data, alpha =0.8, batch_size=5)\n",
|
||||||
|
" # Measure loss and draw model every 8th step\n",
|
||||||
|
" if c_step % 8 == 0:\n",
|
||||||
|
" loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
|
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
|
"\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model,phi_all)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "469OP_UHskJ4"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- Experiment with different learning rates, starting points, batch sizes, number of steps. Get a feel for this."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "LxE2kTa3s29p"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- Add a learning rate schedule. Reduce the learning rate by a factor of beta every M iterations"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "lw4QPOaQTh5e"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
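The notebook's final TODO (a step-decay schedule) could be sketched like this; `compute_gradient` is again a hypothetical quadratic stand-in, and `alpha0`, `beta`, `M` are illustrative choices:

```python
import numpy as np

def compute_gradient(data_x, data_y, phi):
    # Hypothetical stand-in for the notebook's gradient routine
    # (quadratic loss with its minimum at phi = [1, 2])
    return 2.0 * (phi - np.array([[1.0], [2.0]]))

alpha0, beta, M = 0.4, 0.5, 10     # initial rate, decay factor, decay period
data = np.zeros((2, 5))            # unused by the toy gradient
phi = np.array([[-1.5], [8.5]])
alphas = []
for c_step in range(40):
    # Reduce the learning rate by a factor of beta every M iterations
    alpha = alpha0 * beta ** (c_step // M)
    alphas.append(alpha)
    phi = phi - alpha * compute_gradient(data[0, :], data[1, :], phi)
```

The schedule holds the rate constant within each block of `M` iterations and multiplies it by `beta` at each block boundary, so late steps are small enough for stochastic updates to settle.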
|
||||||
382
Notebooks/Chap06/6_4_Momentum.ipynb
Normal file
@@ -0,0 +1,382 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMLS4qeqBTVHGdg9Sds9jND",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_4_Momentum.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 6.4: Momentum**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the use of momentum as illustrated in figure 6.7 from the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "el8l05WQEO46"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "xhmIOLiZELV_"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# import libraries\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"from matplotlib import cm\n",
|
||||||
|
"from matplotlib.colors import ListedColormap"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's create our training data 30 pairs {x_i, y_i}\n",
|
||||||
|
"# We'll try to fit the Gabor model to these data\n",
|
||||||
|
"data = np.array([[-1.920e+00,-1.422e+01,1.490e+00,-1.940e+00,-2.389e+00,-5.090e+00,\n",
|
||||||
|
" -8.861e+00,3.578e+00,-6.010e+00,-6.995e+00,3.634e+00,8.743e-01,\n",
|
||||||
|
" -1.096e+01,4.073e-01,-9.467e+00,8.560e+00,1.062e+01,-1.729e-01,\n",
|
||||||
|
" 1.040e+01,-1.261e+01,1.574e-01,-1.304e+01,-2.156e+00,-1.210e+01,\n",
|
||||||
|
" -1.119e+01,2.902e+00,-8.220e+00,-1.179e+01,-8.391e+00,-4.505e+00],\n",
|
||||||
|
" [-1.051e+00,-2.482e-02,8.896e-01,-4.943e-01,-9.371e-01,4.306e-01,\n",
|
||||||
|
" 9.577e-03,-7.944e-02 ,1.624e-01,-2.682e-01,-3.129e-01,8.303e-01,\n",
|
||||||
|
" -2.365e-02,5.098e-01,-2.777e-01,3.367e-01,1.927e-01,-2.222e-01,\n",
|
||||||
|
" 6.352e-02,6.888e-03,3.224e-02,1.091e-02,-5.706e-01,-5.258e-02,\n",
|
||||||
|
" -3.666e-02,1.709e-01,-4.805e-02,2.008e-01,-1.904e-01,5.952e-01]])"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4cRkrh9MZ58Z"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's define our model\n",
|
||||||
|
"def model(phi,x):\n",
|
||||||
|
" sin_component = np.sin(phi[0] + 0.06 * phi[1] * x)\n",
|
||||||
|
" gauss_component = np.exp(-(phi[0] + 0.06 * phi[1] * x) * (phi[0] + 0.06 * phi[1] * x) / 32)\n",
|
||||||
|
" y_pred= sin_component * gauss_component\n",
|
||||||
|
" return y_pred"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "WQUERmb2erAe"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Draw model\n",
|
||||||
|
"def draw_model(data,model,phi,title=None):\n",
|
||||||
|
" x_model = np.arange(-15,15,0.1)\n",
|
||||||
|
" y_model = model(phi,x_model)\n",
|
||||||
|
"\n",
|
||||||
|
" fix, ax = plt.subplots()\n",
|
||||||
|
" ax.plot(data[0,:],data[1,:],'bo')\n",
|
||||||
|
" ax.plot(x_model,y_model,'m-')\n",
|
||||||
|
" ax.set_xlim([-15,15]);ax.set_ylim([-1,1])\n",
|
||||||
|
" ax.set_xlabel('x'); ax.set_ylabel('y')\n",
|
||||||
|
" if title is not None:\n",
|
||||||
|
" ax.set_title(title)\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qFRe9POHF2le"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Initialize the parmaeters and draw the model\n",
|
||||||
|
"phi = np.zeros((2,1))\n",
|
||||||
|
"phi[0] = -5 # Horizontal offset\n",
|
||||||
|
"phi[1] = 25 # Frequency\n",
|
||||||
|
"draw_model(data,model,phi, \"Initial parameters\")\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "TXx1Tpd1Tl-I"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now lets compute the sum of squares loss for the training data and plot the loss function"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "QU5mdGvpTtEG"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_loss(data_x, data_y, model, phi):\n",
|
||||||
|
" pred_y = model(phi, data_x)\n",
|
||||||
|
" loss = np.sum((pred_y-data_y)*(pred_y-data_y))\n",
|
||||||
|
" return loss\n",
|
||||||
|
"\n",
|
||||||
|
"def draw_loss_function(compute_loss, data, model, phi_iters = None):\n",
|
||||||
|
" # Define pretty colormap\n",
|
||||||
|
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
|
||||||
|
" my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
|
||||||
|
" r = np.floor(my_colormap_vals_dec/(256*256))\n",
|
||||||
|
" g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
|
||||||
|
" b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
|
||||||
|
" my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
|
||||||
|
"\n",
|
||||||
|
" # Make grid of intercept/slope values to plot\n",
|
||||||
|
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
||||||
|
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
||||||
|
" # Compute loss for every set of parameters\n",
|
||||||
|
" for idslope, slope in np.ndenumerate(freqs_mesh):\n",
|
||||||
|
" loss_mesh[idslope] = compute_loss(data[0,:], data[1,:], model, np.array([[offsets_mesh[idslope]], [slope]]))\n",
|
||||||
|
"\n",
|
||||||
|
" fig,ax = plt.subplots()\n",
|
||||||
|
" fig.set_size_inches(8,8)\n",
|
||||||
|
" ax.contourf(offsets_mesh,freqs_mesh,loss_mesh,256,cmap=my_colormap)\n",
|
||||||
|
" ax.contour(offsets_mesh,freqs_mesh,loss_mesh,20,colors=['#80808080'])\n",
|
||||||
|
" if phi_iters is not None:\n",
|
||||||
|
" ax.plot(phi_iters[0,:], phi_iters[1,:],'go-')\n",
|
||||||
|
" ax.set_ylim([2.5,22.5])\n",
|
||||||
|
" ax.set_xlabel('Offset $\\phi_{0}$'); ax.set_ylabel('Frequency, $\\phi_{1}$')\n",
|
||||||
|
" plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "I7dqTY2Gg7CR"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"As before, we compute the gradient vector for a given set of parameters:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\frac{\\partial L}{\\partial \\boldsymbol\\phi} = \\begin{bmatrix}\\frac{\\partial L}{\\partial \\phi_0} \\\\\\frac{\\partial L}{\\partial \\phi_1} \\end{bmatrix}.\n",
|
||||||
|
"\\end{equation}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "s9Duf05WqqSC"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# These came from writing out the expression for the sum of squares loss and taking the\n",
|
||||||
|
"# derivative with respect to phi0 and phi1. It was a lot of hassle to get it right!\n",
|
||||||
|
"def gabor_deriv_phi0(data_x,data_y,phi0, phi1):\n",
|
||||||
|
" x = 0.06 * phi1 * data_x + phi0\n",
|
||||||
|
" y = data_y\n",
|
||||||
|
" cos_component = np.cos(x)\n",
|
||||||
|
" sin_component = np.sin(x)\n",
|
||||||
|
" gauss_component = np.exp(-0.5 * x *x / 16)\n",
|
||||||
|
" deriv = cos_component * gauss_component - sin_component * gauss_component * x / 16\n",
|
||||||
|
" deriv = 2* deriv * (sin_component * gauss_component - y)\n",
|
||||||
|
" return np.sum(deriv)\n",
|
||||||
|
"\n",
|
||||||
|
"def gabor_deriv_phi1(data_x, data_y,phi0, phi1):\n",
|
||||||
|
" x = 0.06 * phi1 * data_x + phi0\n",
|
||||||
|
" y = data_y\n",
|
||||||
|
" cos_component = np.cos(x)\n",
|
||||||
|
" sin_component = np.sin(x)\n",
|
||||||
|
" gauss_component = np.exp(-0.5 * x *x / 16)\n",
|
||||||
|
" deriv = 0.06 * data_x * cos_component * gauss_component - 0.06 * data_x*sin_component * gauss_component * x / 16\n",
|
||||||
|
" deriv = 2*deriv * (sin_component * gauss_component - y)\n",
|
||||||
|
" return np.sum(deriv)\n",
|
||||||
|
"\n",
|
||||||
|
"def compute_gradient(data_x, data_y, phi):\n",
|
||||||
|
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])\n",
|
||||||
|
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])\n",
|
||||||
|
" # Return the gradient\n",
|
||||||
|
" return np.array([[dl_dphi0],[dl_dphi1]])"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "UpswmkL2qwBT"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
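Hand-derived gradients like these are easy to get wrong ("a lot of hassle" indeed), so it is worth verifying them against finite differences. A sketch (re-defining the model, loss, and the phi0 derivative locally so it runs stand-alone; the random test data are arbitrary):

```python
import numpy as np

def model(phi, x):
    # Gabor model from the notebook
    u = phi[0] + 0.06 * phi[1] * x
    return np.sin(u) * np.exp(-u * u / 32.0)

def compute_loss(data_x, data_y, phi):
    pred = model(phi, data_x)
    return np.sum((pred - data_y) ** 2)

def gabor_deriv_phi0(data_x, data_y, phi0, phi1):
    # Analytic derivative of the loss w.r.t. phi0 (as derived above)
    u = 0.06 * phi1 * data_x + phi0
    sin_c, cos_c = np.sin(u), np.cos(u)
    gauss_c = np.exp(-0.5 * u * u / 16.0)
    deriv = cos_c * gauss_c - sin_c * gauss_c * u / 16.0
    return np.sum(2.0 * deriv * (sin_c * gauss_c - data_y))

np.random.seed(0)
data_x = np.random.uniform(-10.0, 10.0, 30)
data_y = np.random.uniform(-1.0, 1.0, 30)
phi = np.array([0.5, 8.0])

eps = 1e-5   # central finite difference in phi0
finite_diff = (compute_loss(data_x, data_y, phi + np.array([eps, 0.0]))
               - compute_loss(data_x, data_y, phi - np.array([eps, 0.0]))) / (2 * eps)
analytic = gabor_deriv_phi0(data_x, data_y, phi[0], phi[1])
```

If the two values agree to several significant figures at a few random parameter settings, the analytic derivative is almost certainly correct; the same check applies to the phi1 derivative.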
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's first run standard stochastic gradient descent."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "7Tv3d4zqAdZR"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the random number generator so you always get same numbers (disable if you don't want this)\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"n_steps = 81\n",
|
||||||
|
"batch_size = 5\n",
|
||||||
|
"alpha = 0.6\n",
|
||||||
|
"phi_all = np.zeros((2,n_steps+1))\n",
|
||||||
|
"phi_all[0,0] = -1.5\n",
|
||||||
|
"phi_all[1,0] = 6.5\n",
|
||||||
|
"\n",
|
||||||
|
"# Measure loss and draw initial model\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
|
||||||
|
"\n",
|
||||||
|
"for c_step in range (n_steps):\n",
|
||||||
|
" # Choose random batch indices\n",
|
||||||
|
" batch_index = np.random.permutation(data.shape[1])[0:batch_size]\n",
|
||||||
|
" # Compute the gradient\n",
|
||||||
|
" gradient = compute_gradient(data[0,batch_index], data[1,batch_index], phi_all[:,c_step:c_step+1] )\n",
|
||||||
|
" # Update the parameters\n",
|
||||||
|
" phi_all[:,c_step+1:c_step+2] = phi_all[:,c_step:c_step+1] - alpha * gradient\n",
|
||||||
|
"\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model,phi_all)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "469OP_UHskJ4"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's add momentum (equation 6.11)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nMILovgMFpdI"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the random number generator so you always get same numbers (disable if you don't want this)\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"n_steps = 81\n",
|
||||||
|
"batch_size = 5\n",
|
||||||
|
"alpha = 0.6\n",
|
||||||
|
"beta = 0.6\n",
|
||||||
|
"momentum = np.zeros([2,1])\n",
|
||||||
|
"phi_all = np.zeros((2,n_steps+1))\n",
|
||||||
|
"phi_all[0,0] = -1.5\n",
|
||||||
|
"phi_all[1,0] = 6.5\n",
|
||||||
|
"\n",
|
||||||
|
"# Measure loss and draw initial model\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
|
||||||
|
"\n",
|
||||||
|
"for c_step in range (n_steps):\n",
|
||||||
|
" # Choose random batch indices\n",
|
||||||
|
" batch_index = np.random.permutation(data.shape[1])[0:batch_size]\n",
|
||||||
|
" # Compute the gradient\n",
|
||||||
|
" gradient = compute_gradient(data[0,batch_index], data[1,batch_index], phi_all[:,c_step:c_step+1])\n",
|
||||||
|
" # TODO -- calculate momentum - replace the line below\n",
|
||||||
|
" momentum = np.zeros([2,1])\n",
|
||||||
|
"\n",
|
||||||
|
" # Update the parameters\n",
|
||||||
|
" phi_all[:,c_step+1:c_step+2] = phi_all[:,c_step:c_step+1] - alpha * momentum\n",
|
||||||
|
"\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model,phi_all)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "dWBU8ZbSFny9"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
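The momentum TODO could be completed as below (a sketch of equation 6.11: blend the new gradient into the running momentum, then step along the momentum; the quadratic `compute_gradient` is a hypothetical stand-in for the Gabor gradient so the snippet runs on its own):

```python
import numpy as np

def compute_gradient(data_x, data_y, phi):
    # Hypothetical stand-in for the notebook's Gabor gradient
    # (quadratic loss with its minimum at phi = [1, 2])
    return 2.0 * (phi - np.array([[1.0], [2.0]]))

alpha, beta = 0.2, 0.6
data = np.zeros((2, 5))            # unused by the toy gradient
phi = np.array([[-1.5], [8.5]])
momentum = np.zeros((2, 1))
for _ in range(200):
    gradient = compute_gradient(data[0, :], data[1, :], phi)
    # Equation 6.11: momentum is a weighted sum of previous momentum
    # and the current gradient
    momentum = beta * momentum + (1.0 - beta) * gradient
    phi = phi - alpha * momentum
```

Because the momentum averages successive gradients, the zig-zag components of a stochastic or poorly conditioned path partially cancel while consistent components accumulate.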
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Finally, we'll try Nesterov momentum"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nYIAomA-KPkU"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the random number generator so you always get same numbers (disable if you don't want this)\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"n_steps = 81\n",
|
||||||
|
"batch_size = 5\n",
|
||||||
|
"alpha = 0.6\n",
|
||||||
|
"beta = 0.6\n",
|
||||||
|
"momentum = np.zeros([2,1])\n",
|
||||||
|
"phi_all = np.zeros((2,n_steps+1))\n",
|
||||||
|
"phi_all[0,0] = -1.5\n",
|
||||||
|
"phi_all[1,0] = 6.5\n",
|
||||||
|
"\n",
|
||||||
|
"# Measure loss and draw initial model\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
|
||||||
|
"\n",
|
||||||
|
"for c_step in range (n_steps):\n",
|
||||||
|
" # Choose random batch indices\n",
|
||||||
|
" batch_index = np.random.permutation(data.shape[1])[0:batch_size]\n",
|
||||||
|
" # TODO -- calculate Nesterov momentum - replace the lines below\n",
|
||||||
|
" gradient = np.zeros([2,1])\n",
|
||||||
|
" momentum = np.zeros([2,1])\n",
|
||||||
|
"\n",
|
||||||
|
" # Update the parameters\n",
|
||||||
|
" phi_all[:,c_step+1:c_step+2] = phi_all[:,c_step:c_step+1] - alpha * momentum\n",
|
||||||
|
" # Measure loss and draw model every 8th step\n",
|
||||||
|
"\n",
|
||||||
|
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
|
"draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
|
"draw_loss_function(compute_loss, data, model,phi_all)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "XtwWeCZ5HLLh"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
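For the Nesterov TODO, one common formulation evaluates the gradient at a look-ahead point before updating the momentum. A sketch (the quadratic `compute_gradient` is again a hypothetical stand-in so the snippet is self-contained):

```python
import numpy as np

def compute_gradient(data_x, data_y, phi):
    # Hypothetical stand-in for the notebook's Gabor gradient
    # (quadratic loss with its minimum at phi = [1, 2])
    return 2.0 * (phi - np.array([[1.0], [2.0]]))

alpha, beta = 0.2, 0.6
data = np.zeros((2, 5))            # unused by the toy gradient
phi = np.array([[-1.5], [8.5]])
momentum = np.zeros((2, 1))
for _ in range(200):
    # Nesterov: measure the gradient where the momentum is about to
    # carry us, not where we are now
    lookahead = phi - alpha * beta * momentum
    gradient = compute_gradient(data[0, :], data[1, :], lookahead)
    momentum = beta * momentum + (1.0 - beta) * gradient
    phi = phi - alpha * momentum
```

Evaluating the gradient at the look-ahead point gives the update an early correction when the momentum is about to overshoot, which typically damps oscillation relative to plain momentum.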
|
||||||
288
Notebooks/Chap06/6_5_Adam.ipynb
Normal file
@@ -0,0 +1,288 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyNFsCOnucz1nQt7PBEnKeTV",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_5_Adam.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 6.5: Adam**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the Adam algorithm as illustrated in figure 6.9 from the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ysg9OHZq07YC"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "Hi_t5nCk01tx"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# import libraries\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"from matplotlib.colors import ListedColormap"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define function that we wish to find the minimum of (normally would be defined implicitly by data and loss)\n",
|
||||||
|
"def loss(phi0, phi1):\n",
|
||||||
|
" height = np.exp(-0.5 * (phi1 * phi1)*4.0)\n",
|
||||||
|
" height = height * np. exp(-0.5* (phi0-0.7) *(phi0-0.7)/4.0)\n",
|
||||||
|
" return 1.0-height\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the gradients of this function (for simplicity, I just used finite differences)\n",
|
||||||
|
"def get_loss_gradient(phi0, phi1):\n",
|
||||||
|
" delta_phi = 0.00001;\n",
|
||||||
|
" gradient = np.zeros((2,1));\n",
|
||||||
|
" gradient[0] = (loss(phi0+delta_phi/2.0, phi1) - loss(phi0-delta_phi/2.0, phi1))/delta_phi\n",
|
||||||
|
" gradient[1] = (loss(phi0, phi1+delta_phi/2.0) - loss(phi0, phi1-delta_phi/2.0))/delta_phi\n",
|
||||||
|
" return gradient[:,0];\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the loss function at a range of values of phi0 and phi1 for plotting\n",
|
||||||
|
"def get_loss_function_for_plot():\n",
|
||||||
|
" grid_values = np.arange(-1.0,1.0,0.01);\n",
|
||||||
|
" phi0mesh, phi1mesh = np.meshgrid(grid_values, grid_values)\n",
|
||||||
|
" loss_function = np.zeros((grid_values.size, grid_values.size))\n",
|
||||||
|
" for idphi0, phi0 in enumerate(grid_values):\n",
|
||||||
|
" for idphi1, phi1 in enumerate(grid_values):\n",
|
||||||
|
" loss_function[idphi0, idphi1] = loss(phi1,phi0)\n",
|
||||||
|
" return loss_function, phi0mesh, phi1mesh"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "GTrgOKhp16zw"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define fancy colormap\n",
|
||||||
|
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
|
||||||
|
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
|
||||||
|
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
|
||||||
|
"g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
|
||||||
|
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
|
||||||
|
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
|
||||||
|
"my_colormap = ListedColormap(my_colormap_vals)\n",
|
||||||
|
"\n",
|
||||||
|
"# Plotting function\n",
|
||||||
|
"def draw_function(phi0mesh, phi1mesh, loss_function, my_colormap, opt_path):\n",
|
||||||
|
" fig = plt.figure();\n",
|
||||||
|
" ax = plt.axes();\n",
|
||||||
|
" fig.set_size_inches(7,7)\n",
|
||||||
|
" ax.contourf(phi0mesh, phi1mesh, loss_function, 256, cmap=my_colormap);\n",
|
||||||
|
" ax.contour(phi0mesh, phi1mesh, loss_function, 20, colors=['#80808080'])\n",
|
||||||
|
" ax.plot(opt_path[0,:], opt_path[1,:],'-', color='#a0d9d3ff')\n",
|
||||||
|
" ax.plot(opt_path[0,:], opt_path[1,:],'.', color='#a0d9d3ff',markersize=10)\n",
|
||||||
|
" ax.set_xlabel(\"$\\phi_{0}$\")\n",
|
||||||
|
" ax.set_ylabel(\"$\\phi_1}$\")\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YKijFyuH4ZJD"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Simple fixed step size gradient descent\n",
|
||||||
|
"def grad_descent(start_posn, n_steps, alpha):\n",
|
||||||
|
" grad_path = np.zeros((2, n_steps+1));\n",
|
||||||
|
" grad_path[:,0] = start_posn[:,0];\n",
|
||||||
|
" for c_step in range(n_steps):\n",
|
||||||
|
" this_grad = get_loss_gradient(grad_path[0,c_step], grad_path[1,c_step]);\n",
|
||||||
|
" grad_path[:,c_step+1] = grad_path[:,c_step] - alpha * this_grad\n",
|
||||||
|
" return grad_path;"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Afxr7RqR8s7Q"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
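The fixed-step update rule above can be sketched on a one-dimensional quadratic (a toy loss of my own choosing, not from the notebook): for $L(\phi)=0.5\phi^2$ the gradient is $\phi$, so each step contracts the parameter by a factor $(1-\alpha)$.

```python
def toy_grad_descent(phi0, n_steps, alpha):
    # Same update rule as grad_descent above, but in one dimension
    phi = phi0
    for _ in range(n_steps):
        phi = phi - alpha * phi   # gradient of 0.5 * phi**2 is phi
    return phi

# Decays geometrically as (1 - alpha)**n_steps
final = toy_grad_descent(1.0, n_steps=50, alpha=0.1)
```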
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"We'll start by running gradient descent with a fixed step size for this loss function."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "MXZL8lu3-EUF"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"loss_function, phi0mesh, phi1mesh = get_loss_function_for_plot() ;\n",
|
||||||
|
"\n",
|
||||||
|
"start_posn = np.zeros((2,1));\n",
|
||||||
|
"start_posn[0,0] = -0.7; start_posn[1,0] = -0.9\n",
|
||||||
|
"\n",
|
||||||
|
"# Run gradient descent\n",
|
||||||
|
"grad_path1 = grad_descent(start_posn, n_steps=200, alpha = 0.08)\n",
|
||||||
|
"draw_function(phi0mesh, phi1mesh, loss_function, my_colormap, grad_path1)\n",
|
||||||
|
"grad_path2 = grad_descent(start_posn, n_steps=40, alpha= 1.0)\n",
|
||||||
|
"draw_function(phi0mesh, phi1mesh, loss_function, my_colormap, grad_path2)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "fgkwVEal8stH"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Because the function changes much faster in $\\phi_1$ than in $\\phi_0$, there is no step size that works well for both. If we set the step size so that we make sensible progress in the $\\phi_1$ direction, then it takes many iterations to converge. If we set the step size so that we make sensible progress in the $\\phi_{0}$ direction, then the path oscillates in the $\\phi_1$ direction. \n",
|
||||||
|
"\n",
|
||||||
|
"This motivates Adam. At the core of Adam is the idea that we should just determine which way is downhill along each axis (i.e. left/right for $\\phi_0$ or up/down for $\\phi_1$) and move a fixed distance in that direction."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "AN2uNxaa-bRX"
|
||||||
|
}
|
||||||
|
},
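The idea in the cell below -- normalizing each gradient component by the square root of its own square, so every axis moves a fixed distance downhill -- can be sketched directly (the gradient values here are made up):

```python
import numpy as np

grad = np.array([0.001, -50.0])      # shallow in phi_0, steep in phi_1
m = grad                             # gradient estimate
v = grad * grad                      # pointwise squared gradient

# Dividing by sqrt(v) leaves only the sign of each component,
# so both axes take a step of the same size alpha
step = m / (np.sqrt(v) + 1e-20)
```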
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def normalized_gradients(start_posn, n_steps, alpha, epsilon=1e-20):\n",
|
||||||
|
" grad_path = np.zeros((2, n_steps+1));\n",
|
||||||
|
" grad_path[:,0] = start_posn[:,0];\n",
|
||||||
|
" for c_step in range(n_steps):\n",
|
||||||
|
" # Measure the gradient as in equation 6.13 (first line)\n",
|
||||||
|
" m = get_loss_gradient(grad_path[0,c_step], grad_path[1,c_step]);\n",
|
||||||
|
" # TO DO -- compute the squared gradient as in equation 6.13 (second line)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" v = np.ones_like(grad_path[:,0])\n",
|
||||||
|
"\n",
|
||||||
|
" # TO DO -- apply the update rule (equation 6.14)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" grad_path[:,c_step+1] = grad_path[:,c_step]\n",
|
||||||
|
"\n",
|
||||||
|
" return grad_path;"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "IqX2zP_29gLF"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's try out normalized gradients\n",
|
||||||
|
"start_posn = np.zeros((2,1));\n",
|
||||||
|
"start_posn[0,0] = -0.7; start_posn[1,0] = -0.9\n",
|
||||||
|
"\n",
|
||||||
|
"# Run gradient descent\n",
|
||||||
|
"grad_path1 = normalized_gradients(start_posn, n_steps=40, alpha = 0.08)\n",
|
||||||
|
"draw_function(phi0mesh, phi1mesh, loss_function, my_colormap, grad_path1)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "wxe-dKW5Chv3"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"This moves towards the minimum at a sensible speed, but we never actually converge -- the solution just bounces back and forth between the last two points. To make it converge, we add momentum to both the estimate of the gradient and the estimate of the pointwise squared gradient. We also modify these statistics by a factor that depends on the time step to make sure the progress is not slow to start with."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_6KoKBJdGGI4"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def adam(start_posn, n_steps, alpha, beta=0.9, gamma=0.99, epsilon=1e-20):\n",
|
||||||
|
" grad_path = np.zeros((2, n_steps+1));\n",
|
||||||
|
" grad_path[:,0] = start_posn[:,0];\n",
|
||||||
|
" m = np.zeros_like(grad_path[:,0])\n",
|
||||||
|
" v = np.zeros_like(grad_path[:,0])\n",
|
||||||
|
" for c_step in range(n_steps):\n",
|
||||||
|
" # Measure the gradient\n",
|
||||||
|
" grad = get_loss_gradient(grad_path[0,c_step], grad_path[1,c_step])\n",
|
||||||
|
" # TODO -- Update the momentum based gradient estimate equation 6.15 (first line)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" m = m;\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO -- update the momentum based squared gradient estimate as in equation 6.15 (second line)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" v = v\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO -- Modify the statistics according to euation 6.16\n",
|
||||||
|
" # You will need the function np.power\n",
|
||||||
|
" # Replace these lines\n",
|
||||||
|
" m_tilde = m\n",
|
||||||
|
" v_tilde = v\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" # TO DO -- apply the update rule (equation 6.17)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" grad_path[:,c_step+1] = grad_path[:,c_step]\n",
|
||||||
|
"\n",
|
||||||
|
" return grad_path;"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "BKUhZSGgDEm0"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's try out our Adam algorithm\n",
|
||||||
|
"start_posn = np.zeros((2,1));\n",
|
||||||
|
"start_posn[0,0] = -0.7; start_posn[1,0] = -0.9\n",
|
||||||
|
"\n",
|
||||||
|
"# Run gradient descent\n",
|
||||||
|
"grad_path1 = adam(start_posn, n_steps=60, alpha = 0.05)\n",
|
||||||
|
"draw_function(phi0mesh, phi1mesh, loss_function, my_colormap, grad_path1)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "sg5X18P3IbYo"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
415
Notebooks/Chap07/7_1_Backpropagation_in_Toy_Model.ipynb
Normal file
@@ -0,0 +1,415 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyP5wHK5E7/el+vxU947K3q8",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap07/7_1_Backpropagation_in_Toy_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 7.1: Backpropagation in Toy Model**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook computes the derivatives of the toy function discussed in section 7.3 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pOZ6Djz0dhoy"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"We're going to investigate how to take the derivatives of functions where one operation is composed with another, which is composed with a third and so on. For example, consider the model:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
" \\mbox{f}[x,\\boldsymbol\\phi] = \\beta_3+\\omega_3\\cdot\\cos\\Bigl[\\beta_2+\\omega_2\\cdot\\exp\\bigl[\\beta_1+\\omega_1\\cdot\\sin[\\beta_0+\\omega_0x]\\bigr]\\Bigr],\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"with parameters $\\boldsymbol\\phi=\\{\\beta_0,\\omega_0,\\beta_1,\\omega_1,\\beta_2,\\omega_2,\\beta_3,\\omega_3\\}$.<br>\n",
|
||||||
|
"\n",
|
||||||
|
"This is a composition of the functions $\\cos[\\bullet],\\exp[\\bullet],\\sin[\\bullet]$. I chose these just because you probably already know the derivatives of these functions:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray*}\n",
|
||||||
|
" \\frac{\\partial \\cos[z]}{\\partial z} = -\\sin[z] \\quad\\quad \\frac{\\partial \\exp[z]}{\\partial z} = \\exp[z] \\quad\\quad \\frac{\\partial \\sin[z]}{\\partial z} = \\cos[z].\n",
|
||||||
|
"\\end{eqnarray*}\n",
|
||||||
|
"\n",
|
||||||
|
"Suppose that we have a least squares loss function:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation*}\n",
|
||||||
|
"\\ell_i = (\\mbox{f}[x_i,\\boldsymbol\\phi]-y_i)^2,\n",
|
||||||
|
"\\end{equation*}\n",
|
||||||
|
"\n",
|
||||||
|
"Assume that we know the current values of $\\beta_{0},\\beta_{1},\\beta_{2},\\beta_{3},\\omega_{0},\\omega_{1},\\omega_{2},\\omega_{3}$, $x_i$ and $y_i$. We could obviously calculate $\\ell_i$. But we also want to know how $\\ell_i$ changes when we make a small change to $\\beta_{0},\\beta_{1},\\beta_{2},\\beta_{3},\\omega_{0},\\omega_{1},\\omega_{2}$, or $\\omega_{3}$. In other words, we want to compute the eight derivatives:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray*}\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial \\beta_{0}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\beta_{1}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\beta_{2}}, \\quad \\frac{\\partial \\ell_i }{\\partial \\beta_{3}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{0}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{1}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{2}}, \\quad\\mbox{and} \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{3}}.\n",
|
||||||
|
"\\end{eqnarray*}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "1DmMo2w63CmT"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# import library\n",
|
||||||
|
"import numpy as np"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "RIPaoVN834Lj"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's first define the original function for $y$ and the likelihood term:"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "32-ufWhc3v2c"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "AakK_qen3BpU"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"def fn(x, beta0, beta1, beta2, beta3, omega0, omega1, omega2, omega3):\n",
|
||||||
|
" return beta3+omega3 * np.cos(beta2 + omega2 * np.exp(beta1 + omega1 * np.sin(beta0 + omega0 * x)))\n",
|
||||||
|
"\n",
|
||||||
|
"def likelihood(x, y, beta0, beta1, beta2, beta3, omega0, omega1, omega2, omega3):\n",
|
||||||
|
" diff = fn(x, beta0, beta1, beta2, beta3, omega0, omega1, omega2, omega3) - y\n",
|
||||||
|
" return diff * diff"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now we'll choose some values for the betas and the omegas and x and compute the output of the function:"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "y7tf0ZMt5OXt"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"beta0 = 1.0; beta1 = 2.0; beta2 = -3.0; beta3 = 0.4\n",
|
||||||
|
"omega0 = 0.1; omega1 = -0.4; omega2 = 2.0; omega3 = 3.0\n",
|
||||||
|
"x = 2.3; y =2.0\n",
|
||||||
|
"l_i_func = likelihood(x,y,beta0,beta1,beta2,beta3,omega0,omega1,omega2,omega3)\n",
|
||||||
|
"print('l_i=%3.3f'%l_i_func)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pwvOcCxr41X_"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# Computing derivatives by hand\n",
|
||||||
|
"\n",
|
||||||
|
"We could compute expressions for the derivatives by hand and write code to compute them directly, but some of these expressions are very complicated, even for this relatively simple original function. For example:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray*}\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial \\omega_{0}} &=& -2 \\left( \\beta_3+\\omega_3\\cdot\\cos\\Bigl[\\beta_2+\\omega_2\\cdot\\exp\\bigl[\\beta_1+\\omega_1\\cdot\\sin[\\beta_0+\\omega_0\\cdot x_i]\\bigr]\\Bigr]-y_i\\right)\\nonumber \\\\\n",
|
||||||
|
"&&\\hspace{0.5cm}\\cdot \\omega_1\\omega_2\\omega_3\\cdot x_i\\cdot\\cos[\\beta_0+\\omega_0 \\cdot x_i]\\cdot\\exp\\Bigl[\\beta_1 + \\omega_1 \\cdot \\sin[\\beta_0+\\omega_0\\cdot x_i]\\Bigr]\\nonumber\\\\\n",
|
||||||
|
"&& \\hspace{1cm}\\cdot \\sin\\biggl[\\beta_2+\\omega_2\\cdot \\exp\\Bigl[\\beta_1 + \\omega_1 \\cdot \\sin[\\beta_0+\\omega_0\\cdot x_i]\\Bigr]\\biggr].\n",
|
||||||
|
"\\end{eqnarray*}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "u5w69NeT64yV"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"dldbeta3_func = 2 * (beta3 +omega3 * np.cos(beta2 + omega2 * np.exp(beta1+omega1 * np.sin(beta0+omega0 * x)))-y)\n",
|
||||||
|
"dldomega0_func = -2 *(beta3 +omega3 * np.cos(beta2 + omega2 * np.exp(beta1+omega1 * np.sin(beta0+omega0 * x)))-y) * \\\n",
|
||||||
|
" omega1 * omega2 * omega3 * x * np.cos(beta0 + omega0 * x) * np.exp(beta1 +omega1 * np.sin(beta0 + omega0 * x)) *\\\n",
|
||||||
|
" np.sin(beta2 + omega2 * np.exp(beta1+ omega1* np.sin(beta0+omega0 * x)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "7t22hALp5zkq"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's make sure this is correct using finite differences:"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "iRh4hnu3-H3n"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"dldomega0_fd = (likelihood(x,y,beta0,beta1,beta2,beta3,omega0+0.00001,omega1,omega2,omega3)-likelihood(x,y,beta0,beta1,beta2,beta3,omega0,omega1,omega2,omega3))/0.00001\n",
|
||||||
|
"\n",
|
||||||
|
"print('dldomega0: Function value = %3.3f, Finite difference value = %3.3f'%(dldomega0_func,dldomega0_fd))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "1O3XmXMx-HlZ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
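The same finite-difference idea can be sketched on a function whose derivative we know in closed form (my own example, not from the notebook): $\frac{d}{dz}\sin[z] = \cos[z]$, and the forward difference should agree up to an error of roughly the step size.

```python
import numpy as np

def finite_difference(f, z, h=1e-5):
    # Forward difference approximation to df/dz, with O(h) error
    return (f(z + h) - f(z)) / h

z = 0.7
fd = finite_difference(np.sin, z)
exact = np.cos(z)
```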
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The code to calculate $\\partial \\ell_i/ \\partial \\omega_0$ is a bit of a nightmare. It's easy to make mistakes, and you can see that some parts of it are repeated (for example, the $\\sin[\\bullet]$ term), which suggests some kind of redundancy in the calculations. The goal of this practical is to compute the derivatives in a much simpler way. There will be three steps:"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "wS4IPjZAKWTN"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"**Step 1:** Write the original equations as a series of intermediate calculations.\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"f_{0} &=& \\beta_{0} + \\omega_{0} x_i\\nonumber\\\\\n",
|
||||||
|
"h_{1} &=& \\sin[f_{0}]\\nonumber\\\\\n",
|
||||||
|
"f_{1} &=& \\beta_{1} + \\omega_{1}h_{1}\\nonumber\\\\\n",
|
||||||
|
"h_{2} &=& \\exp[f_{1}]\\nonumber\\\\\n",
|
||||||
|
"f_{2} &=& \\beta_{2} + \\omega_{2} h_{2}\\nonumber\\\\\n",
|
||||||
|
"h_{3} &=& \\cos[f_{2}]\\nonumber\\\\\n",
|
||||||
|
"f_{3} &=& \\beta_{3} + \\omega_{3}h_{3}\\nonumber\\\\\n",
|
||||||
|
"l_i &=& (f_3-y_i)^2\n",
|
||||||
|
"\\end{eqnarray}\n",
|
||||||
|
"\n",
|
||||||
|
"and compute and store all of these intermediate quantities. We'll need them to compute the derivatives.<br> This is called the **forward pass**."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "8UWhvDeNDudz"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO compute all the f_k and h_k terms\n",
|
||||||
|
"# Replace the code below\n",
|
||||||
|
"\n",
|
||||||
|
"f0 = 0\n",
|
||||||
|
"h1 = 0\n",
|
||||||
|
"f1 = 0\n",
|
||||||
|
"h2 = 0\n",
|
||||||
|
"f2 = 0\n",
|
||||||
|
"h3 = 0\n",
|
||||||
|
"f3 = 0\n",
|
||||||
|
"l_i = 0\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ZWKAq6HC90qV"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's check we got that right:\n",
|
||||||
|
"print(\"f0: true value = %3.3f, your value = %3.3f\"%(1.230, f0))\n",
|
||||||
|
"print(\"h1: true value = %3.3f, your value = %3.3f\"%(0.942, h1))\n",
|
||||||
|
"print(\"f1: true value = %3.3f, your value = %3.3f\"%(1.623, f1))\n",
|
||||||
|
"print(\"h2: true value = %3.3f, your value = %3.3f\"%(5.068, h2))\n",
|
||||||
|
"print(\"f2: true value = %3.3f, your value = %3.3f\"%(7.137, f2))\n",
|
||||||
|
"print(\"h3: true value = %3.3f, your value = %3.3f\"%(0.657, h3))\n",
|
||||||
|
"print(\"f3: true value = %3.3f, your value = %3.3f\"%(2.372, f3))\n",
|
||||||
|
"print(\"like original = %3.3f, like from forward pass = %3.3f\"%(l_i_func, l_i))\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ibxXw7TUW4Sx"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"**Step 2:** Compute the derivatives of $\\ell_i$ with respect to the intermediate quantities that we just calculated, but in reverse order:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"\\quad \\frac{\\partial \\ell_i}{\\partial f_3}, \\quad \\frac{\\partial \\ell_i}{\\partial h_3}, \\quad \\frac{\\partial \\ell_i}{\\partial f_2}, \\quad\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial h_2}, \\quad \\frac{\\partial \\ell_i}{\\partial f_1}, \\quad \\frac{\\partial \\ell_i}{\\partial h_1}, \\quad\\mbox{and} \\quad \\frac{\\partial \\ell_i}{\\partial f_0}.\n",
|
||||||
|
"\\end{eqnarray}\n",
|
||||||
|
"\n",
|
||||||
|
"The first of these derivatives is straightforward:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial f_{3}} = 2 (f_3-y).\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"The second derivative can be calculated using the chain rule:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial h_{3}} =\\frac{\\partial f_{3}}{\\partial h_{3}} \\frac{\\partial \\ell_i}{\\partial f_{3}} .\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"The left-hand side asks how $\\ell_i$ changes when $h_{3}$ changes. The right-hand side says we can decompose this into (i) how $\\ell_i$ changes when $f_{3}$ changes and (ii) how $f_{3}$ changes when $h_{3}$ changes. So you get a chain of events happening: $h_{3}$ changes $f_{3}$, which changes $\\ell_i$, and the derivatives represent the effects of this chain. Notice that we already computed the first of these derivatives: it is $2 (f_3-y)$, where $f_{3}$ was calculated in step 1. The second term is the derivative of $\\beta_{3} + \\omega_{3}h_{3}$ with respect to $h_3$, which is simply $\\omega_3$. \n",
|
||||||
|
"\n",
|
||||||
|
"We can continue in this way, computing the derivatives of the output with respect to these intermediate quantities:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial f_{2}} &=& \\frac{\\partial h_{3}}{\\partial f_{2}}\\left(\n",
|
||||||
|
"\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right)\n",
|
||||||
|
"\\nonumber \\\\\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial h_{2}} &=& \\frac{\\partial f_{2}}{\\partial h_{2}}\\left(\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}}\\right)\\nonumber \\\\\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial f_{1}} &=& \\frac{\\partial h_{2}}{\\partial f_{1}}\\left( \\frac{\\partial f_{2}}{\\partial h_{2}}\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right)\\nonumber \\\\\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial h_{1}} &=& \\frac{\\partial f_{1}}{\\partial h_{1}}\\left(\\frac{\\partial h_{2}}{\\partial f_{1}} \\frac{\\partial f_{2}}{\\partial h_{2}}\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right)\\nonumber \\\\\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial f_{0}} &=& \\frac{\\partial h_{1}}{\\partial f_{0}}\\left(\\frac{\\partial f_{1}}{\\partial h_{1}}\\frac{\\partial h_{2}}{\\partial f_{1}} \\frac{\\partial f_{2}}{\\partial h_{2}}\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right).\n",
|
||||||
|
"\\end{eqnarray}\n",
|
||||||
|
"\n",
|
||||||
|
"In each case, we have already computed all of the terms except the last one in the previous step, and the last term is simple to evaluate. This is called the **backward pass**."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "jay8NYWdFHuZ"
|
||||||
|
}
|
||||||
|
},
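The backward pass described above can be sketched on a shorter composition (a made-up two-function model, not the notebook's answers): for $\ell = (\omega_1\sin[\beta_0+\omega_0 x]-y)^2$ we work outward-in, reusing each partial product exactly as the equations show.

```python
import numpy as np

beta0, omega0, omega1, x, y = 1.0, 0.1, -0.4, 2.3, 2.0

# Forward pass: store the intermediate quantities
f0 = beta0 + omega0 * x
h1 = np.sin(f0)
f1 = omega1 * h1
l = (f1 - y) ** 2

# Backward pass: outermost derivative first, then chain backwards
dldf1 = 2 * (f1 - y)
dldh1 = omega1 * dldf1          # multiply by df1/dh1 = omega1
dldf0 = np.cos(f0) * dldh1      # multiply by dh1/df0 = cos(f0)
```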
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- Compute the derivatives of the output with respect\n",
|
||||||
|
"# to the intermediate computations h_k and f_k (i.e, run the backward pass)\n",
|
||||||
|
"# I've done the first two for you. You replace the code below:\n",
|
||||||
|
"dldf3 = 2* (f3 - y)\n",
|
||||||
|
"dldh3 = omega3 * dldf3\n",
|
||||||
|
"# Replace the code below\n",
|
||||||
|
"dldf2 = 1\n",
|
||||||
|
"dldh2 = 1\n",
|
||||||
|
"dldf1 = 1\n",
|
||||||
|
"dldh1 = 1\n",
|
||||||
|
"dldf0 = 1\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "gCQJeI--Egdl"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's check we got that right\n",
|
||||||
|
"print(\"dldf3: true value = %3.3f, your value = %3.3f\"%(0.745, dldf3))\n",
|
||||||
|
"print(\"dldh3: true value = %3.3f, your value = %3.3f\"%(2.234, dldh3))\n",
|
||||||
|
"print(\"dldf2: true value = %3.3f, your value = %3.3f\"%(-1.683, dldf2))\n",
|
||||||
|
"print(\"dldh2: true value = %3.3f, your value = %3.3f\"%(-3.366, dldh2))\n",
|
||||||
|
"print(\"dldf1: true value = %3.3f, your value = %3.3f\"%(-17.060, dldf1))\n",
|
||||||
|
"print(\"dldh1: true value = %3.3f, your value = %3.3f\"%(6.824, dldh1))\n",
|
||||||
|
"print(\"dldf0: true value = %3.3f, your value = %3.3f\"%(2.281, dldf0))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "dS1OrLtlaFr7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"**Step 3:** Finally, we consider how the loss $\\ell_{i}$ changes when we change the parameters $\\beta_{\\bullet}$ and $\\omega_{\\bullet}$. Once more, we apply the chain rule:\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial \\beta_{k}} &=& \\frac{\\partial f_{k}}{\\partial \\beta_{k}}\\frac{\\partial \\ell_i}{\\partial f_{k}}\\nonumber \\\\\n",
|
||||||
|
"\\frac{\\partial \\ell_i}{\\partial \\omega_{k}} &=& \\frac{\\partial f_{k}}{\\partial \\omega_{k}}\\frac{\\partial \\ell_i}{\\partial f_{k}}.\n",
|
||||||
|
"\\end{eqnarray}\n",
|
||||||
|
"\n",
|
||||||
|
"In each case, the second term on the right-hand side was computed in step 2. When $k>0$, we have $f_{k}=\\beta_{k}+\\omega_k \\cdot h_{k}$, so:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"\\frac{\\partial f_{k}}{\\partial \\beta_{k}} &=& 1 \\quad\\quad\\mbox{and}\\quad \\quad \\frac{\\partial f_{k}}{\\partial \\omega_{k}} = h_{k}.\n",
|
||||||
|
"\\end{eqnarray}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FlzlThQPGpkU"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- Calculate the final derivatives with respect to the beta and omega terms\n",
|
||||||
|
"\n",
|
||||||
|
"dldbeta3 = 1\n",
|
||||||
|
"dldomega3 = 1\n",
|
||||||
|
"dldbeta2 = 1\n",
|
||||||
|
"dldomega2 = 1\n",
|
||||||
|
"dldbeta1 = 1\n",
|
||||||
|
"dldomega1 = 1\n",
|
||||||
|
"dldbeta0 = 1\n",
|
||||||
|
"dldomega0 = 1\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "1I2BhqZhGMK6"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's check we got them right\n",
|
||||||
|
"print('dldbeta3: Your value = %3.3f, True value = %3.3f'%(dldbeta3, 0.745))\n",
|
||||||
|
"print('dldomega3: Your value = %3.3f, True value = %3.3f'%(dldomega3, 0.489))\n",
|
||||||
|
"print('dldbeta2: Your value = %3.3f, True value = %3.3f'%(dldbeta2, -1.683))\n",
|
||||||
|
"print('dldomega2: Your value = %3.3f, True value = %3.3f'%(dldomega2, -8.530))\n",
|
||||||
|
"print('dldbeta1: Your value = %3.3f, True value = %3.3f'%(dldbeta1, -17.060))\n",
|
||||||
|
"print('dldomega1: Your value = %3.3f, True value = %3.3f'%(dldomega1, -16.079))\n",
|
||||||
|
"print('dldbeta0: Your value = %3.3f, True value = %3.3f'%(dldbeta0, 2.281))\n",
|
||||||
|
"print('dldomega0: Your value = %3.3f, Function value = %3.3f, Finite difference value = %3.3f'%(dldomega0, dldomega0_func, dldomega0_fd))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "38eiOn2aHgHI"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Using this method, we can compute the derivatives quite easily without needing to compute very complicated expressions. In the next practical, we'll apply this same method to a deep neural network."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "N2ZhrR-2fNa1"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
345
Notebooks/Chap07/7_2_Backpropagation.ipynb
Normal file
@@ -0,0 +1,345 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyN2nPVR0imZntgj4Oasyvmo",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap07/7_2_Backpropagation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 7.2: Backpropagation**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook runs the backpropagation algorithm on a deep neural network as described in section 7.4 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "L6chybAVFJW2"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "LdIDglk1FFcG"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"First let's define a neural network. We'll just choose the weights and biases randomly for now."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nnUoI0m6GyjC"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set seed so we always get the same random numbers\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Number of layers\n",
|
||||||
|
"K = 5\n",
|
||||||
|
"# Number of neurons per layer\n",
|
||||||
|
"D = 6\n",
|
||||||
|
"# Input layer\n",
|
||||||
|
"D_i = 1\n",
|
||||||
|
"# Output layer\n",
|
||||||
|
"D_o = 1\n",
|
||||||
|
"\n",
|
||||||
|
"# Make empty lists\n",
|
||||||
|
"all_weights = [None] * (K+1)\n",
|
||||||
|
"all_biases = [None] * (K+1)\n",
|
||||||
|
"\n",
|
||||||
|
"# Create input and output layers\n",
|
||||||
|
"all_weights[0] = np.random.normal(size=(D, D_i))\n",
|
||||||
|
"all_weights[-1] = np.random.normal(size=(D_o, D))\n",
|
||||||
|
"all_biases[0] = np.random.normal(size =(D,1))\n",
|
||||||
|
"all_biases[-1]= np.random.normal(size =(D_o,1))\n",
|
||||||
|
"\n",
|
||||||
|
"# Create intermediate layers\n",
|
||||||
|
"for layer in range(1,K):\n",
|
||||||
|
" all_weights[layer] = np.random.normal(size=(D,D))\n",
|
||||||
|
" all_biases[layer] = np.random.normal(size=(D,1))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "WVM4Tc_jGI0Q"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
"  activation = preactivation.clip(0.0)\n",
"  return activation"
],
"metadata": {
"id": "jZh-7bPXIDq4"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's run our random network. The weight matrices $\\boldsymbol\\Omega_{1\\ldots K}$ are the entries of the list \"all_weights\" and the biases $\\boldsymbol\\beta_{1\\ldots K}$ are the entries of the list \"all_biases\".\n",
"\n",
"We know that we will need the preactivations $\\mathbf{f}_{0\\ldots K}$ and the activations $\\mathbf{h}_{1\\ldots K}$ from the forward pass to compute the backward pass of backpropagation, so we'll store and return these as well.\n"
],
"metadata": {
"id": "5irtyxnLJSGX"
}
},
{
"cell_type": "code",
"source": [
"def compute_network_output(net_input, all_weights, all_biases):\n",
"\n",
"  # Retrieve number of layers\n",
"  K = len(all_weights) - 1\n",
"\n",
"  # We'll store the pre-activations at each layer in a list \"all_f\"\n",
"  # and the activations in a second list \"all_h\".\n",
"  all_f = [None] * (K+1)\n",
"  all_h = [None] * (K+1)\n",
"\n",
"  # For convenience, we'll set\n",
"  # all_h[0] to be the input, and all_f[K] will be the output\n",
"  all_h[0] = net_input\n",
"\n",
"  # Run through the layers, calculating all_f[0...K-1] and all_h[1...K]\n",
"  for layer in range(K):\n",
"    # Update preactivations and activations at this layer according to eqn 7.16\n",
"    # Remember to use np.matmul for matrix multiplications\n",
"    # TODO -- Replace the lines below\n",
"    all_f[layer] = all_h[layer]\n",
"    all_h[layer+1] = all_f[layer]\n",
"\n",
"  # Compute the output from the last hidden layer\n",
"  # TODO -- Replace the line below\n",
"  all_f[K] = np.zeros_like(all_biases[-1])\n",
"\n",
"  # Retrieve the output\n",
"  net_output = all_f[K]\n",
"\n",
"  return net_output, all_f, all_h"
],
"metadata": {
"id": "LgquJUJvJPaN"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"# Define an input\n",
"net_input = np.ones((D_i,1)) * 1.2\n",
"# Compute network output\n",
"net_output, all_f, all_h = compute_network_output(net_input, all_weights, all_biases)\n",
"print(\"True output = %3.3f, Your answer = %3.3f\"%(1.907, net_output[0,0]))"
],
"metadata": {
"id": "IN6w5m2ZOhnB"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "markdown",
"source": [
"Now let's define a loss function. We'll just use the least squares loss function. We'll also write a function to compute dloss_doutput."
],
"metadata": {
"id": "SxVTKp3IcoBF"
}
},
{
"cell_type": "code",
"source": [
"def least_squares_loss(net_output, y):\n",
"  return np.sum((net_output-y) * (net_output-y))\n",
"\n",
"def d_loss_d_output(net_output, y):\n",
"  return 2*(net_output - y)"
],
"metadata": {
"id": "6XqWSYWJdhQR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"y = np.ones((D_o,1)) * 20.0\n",
"loss = least_squares_loss(net_output, y)\n",
"print(\"y = %3.3f Loss = %3.3f\"%(y[0,0], loss))"
],
"metadata": {
"id": "njF2DUQmfttR"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the derivatives of the network. We already computed the forward pass. Let's compute the backward pass."
],
"metadata": {
"id": "98WmyqFYWA-0"
}
},
{
"cell_type": "code",
"source": [
"# We'll need the indicator function\n",
"def indicator_function(x):\n",
"  x_in = np.array(x)\n",
"  x_in[x_in>=0] = 1\n",
"  x_in[x_in<0] = 0\n",
"  return x_in\n",
"\n",
"# Main backward pass routine\n",
"def backward_pass(all_weights, all_biases, all_f, all_h, y):\n",
"  # We'll store the derivatives dl_dweights and dl_dbiases in lists as well\n",
"  all_dl_dweights = [None] * (K+1)\n",
"  all_dl_dbiases = [None] * (K+1)\n",
"  # And we'll store the derivatives of the loss with respect to the activations and preactivations in lists\n",
"  all_dl_df = [None] * (K+1)\n",
"  all_dl_dh = [None] * (K+1)\n",
"  # Again for convenience we'll stick with the convention that all_h[0] is the net input and all_f[K] is the net output\n",
"\n",
"  # Compute derivative of the loss with respect to the net output\n",
"  all_dl_df[K] = np.array(d_loss_d_output(all_f[K],y))\n",
"\n",
"  # Now work backwards through the network\n",
"  for layer in range(K,-1,-1):\n",
"    # TODO Calculate the derivatives of the biases at this layer from all_dl_df[layer]. (eq 7.21)\n",
"    # NOTE! To take a copy of matrix X, use Z=np.array(X)\n",
"    # REPLACE THIS LINE\n",
"    all_dl_dbiases[layer] = np.zeros_like(all_biases[layer])\n",
"\n",
"    # TODO Calculate the derivatives of the weights at this layer from all_dl_df[layer] and all_h[layer] (eq 7.22)\n",
"    # Don't forget to use np.matmul\n",
"    # REPLACE THIS LINE\n",
"    all_dl_dweights[layer] = np.zeros_like(all_weights[layer])\n",
"\n",
"    # TODO Calculate the derivatives of the activations from the weights and the derivatives of the next preactivations (eq 7.20)\n",
"    # REPLACE THIS LINE\n",
"    all_dl_dh[layer] = np.zeros_like(all_h[layer])\n",
"\n",
"    if layer > 0:\n",
"      # TODO Calculate the derivatives of the preactivations f with respect to the activations h (derivative of the ReLU function)\n",
"      # REPLACE THIS LINE\n",
"      all_dl_df[layer-1] = np.zeros_like(all_f[layer-1])\n",
"\n",
"  return all_dl_dweights, all_dl_dbiases"
],
"metadata": {
"id": "LJng7WpRPLMz"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"all_dl_dweights, all_dl_dbiases = backward_pass(all_weights, all_biases, all_f, all_h, y)"
],
"metadata": {
"id": "9A9MHc4sQvbp"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"np.set_printoptions(precision=3)\n",
"# Make space for derivatives computed by finite differences\n",
"all_dl_dweights_fd = [None] * (K+1)\n",
"all_dl_dbiases_fd = [None] * (K+1)\n",
"\n",
"# Let's test if we have the derivatives right using finite differences\n",
"delta_fd = 0.000001\n",
"\n",
"# Test the derivatives of the bias vectors\n",
"for layer in range(K):\n",
"  dl_dbias = np.zeros_like(all_dl_dbiases[layer])\n",
"  # For every element in the bias\n",
"  for row in range(all_biases[layer].shape[0]):\n",
"    # Take a copy of the biases. We'll change one element each time\n",
"    all_biases_copy = [np.array(x) for x in all_biases]\n",
"    all_biases_copy[layer][row] += delta_fd\n",
"    network_output_1, *_ = compute_network_output(net_input, all_weights, all_biases_copy)\n",
"    network_output_2, *_ = compute_network_output(net_input, all_weights, all_biases)\n",
"    dl_dbias[row] = (least_squares_loss(network_output_1, y) - least_squares_loss(network_output_2,y))/delta_fd\n",
"  all_dl_dbiases_fd[layer] = np.array(dl_dbias)\n",
"  print(\"Bias %d, derivatives from backprop:\"%(layer))\n",
"  print(all_dl_dbiases[layer])\n",
"  print(\"Bias %d, derivatives from finite differences:\"%(layer))\n",
"  print(all_dl_dbiases_fd[layer])\n",
"\n",
"\n",
"# Test the derivatives of the weight matrices\n",
"for layer in range(K):\n",
"  dl_dweight = np.zeros_like(all_dl_dweights[layer])\n",
"  # For every element in the weight matrix\n",
"  for row in range(all_weights[layer].shape[0]):\n",
"    for col in range(all_weights[layer].shape[1]):\n",
"      # Take a copy of the weights. We'll change one element each time\n",
"      all_weights_copy = [np.array(x) for x in all_weights]\n",
"      all_weights_copy[layer][row][col] += delta_fd\n",
"      network_output_1, *_ = compute_network_output(net_input, all_weights_copy, all_biases)\n",
"      network_output_2, *_ = compute_network_output(net_input, all_weights, all_biases)\n",
"      dl_dweight[row][col] = (least_squares_loss(network_output_1, y) - least_squares_loss(network_output_2,y))/delta_fd\n",
"  all_dl_dweights_fd[layer] = np.array(dl_dweight)\n",
"  print(\"Weight %d, derivatives from backprop:\"%(layer))\n",
"  print(all_dl_dweights[layer])\n",
"  print(\"Weight %d, derivatives from finite differences:\"%(layer))\n",
"  print(all_dl_dweights_fd[layer])"
],
"metadata": {
"id": "PK-UtE3hreAK"
},
"execution_count": null,
"outputs": []
}
]
}
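The notebook above leaves the forward and backward passes as TODOs and then checks them against finite differences. For readers following along, the same idea can be run end to end as a standalone sketch (outside the notebook, with smaller dimensions than it uses; the update rules follow equations 7.20-7.22 as the comments in the diff describe):

```python
import numpy as np

np.random.seed(0)
K, D, D_i, D_o = 3, 4, 1, 1  # layers, hidden width, input dim, output dim

# Weights and biases in the same list layout as the notebook
weights = [np.random.normal(size=(D, D_i))] \
        + [np.random.normal(size=(D, D)) for _ in range(K - 1)] \
        + [np.random.normal(size=(D_o, D))]
biases = [np.random.normal(size=(D, 1)) for _ in range(K)] \
       + [np.random.normal(size=(D_o, 1))]

def forward(x):
    # Convention from the notebook: all_h[0] is the input, all_f[K] the output
    all_f = [None] * (K + 1)
    all_h = [None] * (K + 1)
    all_h[0] = x
    for layer in range(K):
        all_f[layer] = biases[layer] + np.matmul(weights[layer], all_h[layer])
        all_h[layer + 1] = np.maximum(all_f[layer], 0.0)  # ReLU
    all_f[K] = biases[K] + np.matmul(weights[K], all_h[K])
    return all_f[K], all_f, all_h

def backward(all_f, all_h, y):
    dl_dw = [None] * (K + 1)
    dl_db = [None] * (K + 1)
    dl_df = 2.0 * (all_f[K] - y)                 # derivative of least-squares loss
    for layer in range(K, -1, -1):
        dl_db[layer] = np.array(dl_df)                    # bias derivatives (eq 7.21)
        dl_dw[layer] = np.matmul(dl_df, all_h[layer].T)   # weight derivatives (eq 7.22)
        dl_dh = np.matmul(weights[layer].T, dl_df)        # back to activations (eq 7.20)
        if layer > 0:
            dl_df = (all_f[layer - 1] > 0) * dl_dh        # ReLU derivative
    return dl_dw, dl_db

x = np.array([[1.2]])
y = np.array([[20.0]])
out, all_f, all_h = forward(x)
dl_dw, dl_db = backward(all_f, all_h, y)

# Finite-difference check on one bias element, as the notebook does for all of them
delta = 1e-6
biases[1][0] += delta
out_perturbed, *_ = forward(x)
biases[1][0] -= delta
fd = (np.sum((out_perturbed - y) ** 2) - np.sum((out - y) ** 2)) / delta
bp = dl_db[1][0, 0]
print("backprop %.5f vs finite difference %.5f" % (bp, fd))
```

The two printed numbers should agree to several decimal places; a mismatch at one layer usually means the ReLU derivative or a transpose in that layer's update is wrong.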
354
Notebooks/Chap07/7_3_Initialization.ipynb
Normal file
@@ -0,0 +1,354 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNHLXFpiSnUzAbzhtOk+bxu",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap07/7_3_Initialization.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 7.3: Initialization**\n",
"\n",
"This notebook explores weight initialization in deep neural networks as described in section 7.5 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "L6chybAVFJW2"
}
},
|
||||||
|
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LdIDglk1FFcG"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"source": [
"First let's define a neural network. We'll just choose the weights and biases randomly for now."
],
"metadata": {
"id": "nnUoI0m6GyjC"
}
},
{
"cell_type": "code",
"source": [
"def init_params(K, D, sigma_sq_omega):\n",
"  # Set seed so we always get the same random numbers\n",
"  np.random.seed(0)\n",
"\n",
"  # Input layer\n",
"  D_i = 1\n",
"  # Output layer\n",
"  D_o = 1\n",
"\n",
"  # Make empty lists\n",
"  all_weights = [None] * (K+1)\n",
"  all_biases = [None] * (K+1)\n",
"\n",
"  # Create input and output layers\n",
"  all_weights[0] = np.random.normal(size=(D, D_i)) * np.sqrt(sigma_sq_omega)\n",
"  all_weights[-1] = np.random.normal(size=(D_o, D)) * np.sqrt(sigma_sq_omega)\n",
"  all_biases[0] = np.zeros((D,1))\n",
"  all_biases[-1] = np.zeros((D_o,1))\n",
"\n",
"  # Create intermediate layers\n",
"  for layer in range(1,K):\n",
"    all_weights[layer] = np.random.normal(size=(D,D)) * np.sqrt(sigma_sq_omega)\n",
"    all_biases[layer] = np.zeros((D,1))\n",
"\n",
"  return all_weights, all_biases"
],
"metadata": {
"id": "WVM4Tc_jGI0Q"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
"  activation = preactivation.clip(0.0)\n",
"  return activation"
],
"metadata": {
"id": "jZh-7bPXIDq4"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
"cell_type": "code",
"source": [
"def compute_network_output(net_input, all_weights, all_biases):\n",
"\n",
"  # Retrieve number of layers\n",
"  K = len(all_weights) - 1\n",
"\n",
"  # We'll store the pre-activations at each layer in a list \"all_f\"\n",
"  # and the activations in a second list \"all_h\".\n",
"  all_f = [None] * (K+1)\n",
"  all_h = [None] * (K+1)\n",
"\n",
"  # For convenience, we'll set\n",
"  # all_h[0] to be the input, and all_f[K] will be the output\n",
"  all_h[0] = net_input\n",
"\n",
"  # Run through the layers, calculating all_f[0...K-1] and all_h[1...K]\n",
"  for layer in range(K):\n",
"    # Update preactivations and activations at this layer according to eqn 7.5\n",
"    all_f[layer] = all_biases[layer] + np.matmul(all_weights[layer], all_h[layer])\n",
"    all_h[layer+1] = ReLU(all_f[layer])\n",
"\n",
"  # Compute the output from the last hidden layer\n",
"  all_f[K] = all_biases[K] + np.matmul(all_weights[K], all_h[K])\n",
"\n",
"  # Retrieve the output\n",
"  net_output = all_f[K]\n",
"\n",
"  return net_output, all_f, all_h"
],
"metadata": {
"id": "LgquJUJvJPaN"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
"cell_type": "markdown",
"source": [
"Now let's investigate how the size of the outputs varies as we change the initialization variance:\n"
],
"metadata": {
"id": "bIUrcXnOqChl"
}
},
|
||||||
|
{
"cell_type": "code",
"source": [
"# Number of layers\n",
"K = 5\n",
"# Number of neurons per layer\n",
"D = 8\n",
"# Input layer\n",
"D_i = 1\n",
"# Output layer\n",
"D_o = 1\n",
"# Set variance of initial weights to 1\n",
"sigma_sq_omega = 1.0\n",
"# Initialize parameters\n",
"all_weights, all_biases = init_params(K,D,sigma_sq_omega)\n",
"\n",
"n_data = 1000\n",
"data_in = np.random.normal(size=(1,n_data))\n",
"net_output, all_f, all_h = compute_network_output(data_in, all_weights, all_biases)\n",
"\n",
"for layer in range(K):\n",
"  print(\"Layer %d, std of hidden units = %3.3f\"%(layer, np.std(all_h[layer])))"
],
"metadata": {
"id": "A55z3rKBqO7M"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
"cell_type": "code",
"source": [
"# You can see that the values of the hidden units are increasing on average (the variance is across all\n",
"# hidden units at the layer and the 1000 training examples)\n",
"\n",
"# TO DO\n",
"# Change this to 50 layers with 80 hidden units per layer\n",
"\n",
"# TO DO\n",
"# Now experiment with sigma_sq_omega to try to stop the variance of the forward computation from exploding"
],
"metadata": {
"id": "VL_SO4tar3DC"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
"cell_type": "markdown",
"source": [
"Now let's define a loss function. We'll just use the least squares loss function. We'll also write a function to compute dloss_doutput.\n"
],
"metadata": {
"id": "SxVTKp3IcoBF"
}
},
{
"cell_type": "code",
"source": [
"def least_squares_loss(net_output, y):\n",
"  return np.sum((net_output-y) * (net_output-y))\n",
"\n",
"def d_loss_d_output(net_output, y):\n",
"  return 2*(net_output - y)"
],
"metadata": {
"id": "6XqWSYWJdhQR"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
"cell_type": "markdown",
"source": [
"Here's the code for the backward pass."
],
"metadata": {
"id": "98WmyqFYWA-0"
}
},
{
"cell_type": "code",
"source": [
"# We'll need the indicator function\n",
"def indicator_function(x):\n",
"  x_in = np.array(x)\n",
"  x_in[x_in>=0] = 1\n",
"  x_in[x_in<0] = 0\n",
"  return x_in\n",
"\n",
"# Main backward pass routine\n",
"def backward_pass(all_weights, all_biases, all_f, all_h, y):\n",
"  # We'll store the derivatives dl_dweights and dl_dbiases in lists as well\n",
"  all_dl_dweights = [None] * (K+1)\n",
"  all_dl_dbiases = [None] * (K+1)\n",
"  # And we'll store the derivatives of the loss with respect to the activations and preactivations in lists\n",
"  all_dl_df = [None] * (K+1)\n",
"  all_dl_dh = [None] * (K+1)\n",
"  # Again for convenience we'll stick with the convention that all_h[0] is the net input and all_f[K] is the net output\n",
"\n",
"  # Compute derivative of the loss with respect to the net output\n",
"  all_dl_df[K] = np.array(d_loss_d_output(all_f[K],y))\n",
"\n",
"  # Now work backwards through the network\n",
"  for layer in range(K,-1,-1):\n",
"    # Calculate the derivatives of the biases at this layer from all_dl_df[layer] (eq 7.13, line 1)\n",
"    all_dl_dbiases[layer] = np.array(all_dl_df[layer])\n",
"    # Calculate the derivatives of the weights at this layer from all_dl_df[layer] and all_h[layer] (eq 7.13, line 2)\n",
"    all_dl_dweights[layer] = np.matmul(all_dl_df[layer], all_h[layer].transpose())\n",
"\n",
"    # Calculate the derivatives of the activations from the weights and the derivatives of the next preactivations (eq 7.13, line 3, second part)\n",
"    all_dl_dh[layer] = np.matmul(all_weights[layer].transpose(), all_dl_df[layer])\n",
"    # Calculate the derivatives of the preactivations f with respect to the activations h (eq 7.13, line 3, first part)\n",
"    if layer > 0:\n",
"      all_dl_df[layer-1] = indicator_function(all_f[layer-1]) * all_dl_dh[layer]\n",
"\n",
"  return all_dl_dweights, all_dl_dbiases, all_dl_dh, all_dl_df"
],
"metadata": {
"id": "LJng7WpRPLMz"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "markdown",
"source": [
"Now let's look at what happens to the magnitude of the gradients on the way back."
],
"metadata": {
"id": "phFnbthqwhFi"
}
},
|
||||||
|
{
"cell_type": "code",
"source": [
"# Number of layers\n",
"K = 5\n",
"# Number of neurons per layer\n",
"D = 8\n",
"# Input layer\n",
"D_i = 1\n",
"# Output layer\n",
"D_o = 1\n",
"# Set variance of initial weights to 1\n",
"sigma_sq_omega = 1.0\n",
"# Initialize parameters\n",
"all_weights, all_biases = init_params(K,D,sigma_sq_omega)\n",
"\n",
"# For simplicity we'll just consider the gradients of the weights and biases between the first and last hidden layer\n",
"n_data = 100\n",
"aggregate_dl_df = [None] * (K+1)\n",
"for layer in range(1,K):\n",
"  # These arrays will store the gradients for every data point\n",
"  aggregate_dl_df[layer] = np.zeros((D,n_data))\n",
"\n",
"\n",
"# We'll have to compute the derivatives of the parameters for each data point separately\n",
"for c_data in range(n_data):\n",
"  data_in = np.random.normal(size=(1,1))\n",
"  y = np.zeros((1,1))\n",
"  net_output, all_f, all_h = compute_network_output(data_in, all_weights, all_biases)\n",
"  all_dl_dweights, all_dl_dbiases, all_dl_dh, all_dl_df = backward_pass(all_weights, all_biases, all_f, all_h, y)\n",
"  for layer in range(1,K):\n",
"    aggregate_dl_df[layer][:,c_data] = np.squeeze(all_dl_df[layer])\n",
"\n",
"for layer in range(1,K):\n",
"  print(\"Layer %d, std of dl_df = %3.3f\"%(layer, np.std(aggregate_dl_df[layer].ravel())))\n"
],
"metadata": {
"id": "9A9MHc4sQvbp"
},
"execution_count": null,
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"# You can see that the magnitude of the gradients is increasing on average (the variance is across all\n",
"# hidden units at the layer and the 100 training examples)\n",
"\n",
"# TO DO\n",
"# Change this to 50 layers with 80 hidden units per layer\n",
"\n",
"# TO DO\n",
"# Now experiment with sigma_sq_omega to try to stop the variance of the gradients from exploding\n"
],
"metadata": {
"id": "gtokc0VX0839"
},
"execution_count": null,
"outputs": []
}
]
}
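The two "TO DO" experiments in this notebook (50 layers, 80 hidden units, tune `sigma_sq_omega`) both point at the same answer developed in section 7.5: He initialization, i.e. `sigma_sq_omega = 2/D` for ReLU networks. A minimal standalone sketch of the forward experiment (not part of the diff; it mirrors `init_params` with zero biases):

```python
import numpy as np

np.random.seed(0)
K, D = 50, 80                     # 50 layers, 80 hidden units, as the TO DO asks
sigma_sq_omega = 2.0 / D          # He initialization for ReLU networks

# Input-to-hidden, then hidden-to-hidden weight matrices (biases are zero)
weights = [np.random.normal(size=(D, 1)) * np.sqrt(sigma_sq_omega)] \
        + [np.random.normal(size=(D, D)) * np.sqrt(sigma_sq_omega) for _ in range(K - 1)]

h = np.random.normal(size=(1, 1000))          # 1000 random scalar inputs
stds = []
for W in weights:
    h = np.maximum(np.matmul(W, h), 0.0)      # preactivation, then ReLU
    stds.append(np.std(h))

print("std at layer 1 = %3.3f, at layer %d = %3.3f" % (stds[0], K, stds[-1]))
```

With `sigma_sq_omega = 1.0` the standard deviation grows by roughly a factor of $\sqrt{D/2}$ per layer and overflows long before layer 50; with `2/D` it stays of roughly constant magnitude, which is the point of the exercise.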
238
Notebooks/Chap08/8_1_MNIST_1D_Performance.ipynb
Normal file
@@ -0,0 +1,238 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"gpuType": "T4",
"authorship_tag": "ABX9TyNLj3HOpVB87nRu7oSLuBaU",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap08/8_1_MNIST_1D_Performance.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 8.1: MNIST_1D_Performance**\n",
"\n",
"This notebook runs a simple neural network on the MNIST1D dataset as in figure 8.2a. It uses code from https://github.com/greydanus/mnist1d to generate the data.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "L6chybAVFJW2"
}
},
|
||||||
|
{
"cell_type": "code",
"source": [
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
"!git clone https://github.com/greydanus/mnist1d"
],
"metadata": {
"id": "ifVjS4cTOqKz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import torch, torch.nn as nn\n",
"from torch.utils.data import TensorDataset, DataLoader\n",
"from torch.optim.lr_scheduler import StepLR\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import mnist1d"
],
"metadata": {
"id": "qyE7G1StPIqO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's generate a training and test dataset using the MNIST1D code. The dataset gets saved as a .pkl file so it doesn't have to be regenerated each time."
],
"metadata": {
"id": "F7LNq72SP6jO"
}
},
|
||||||
|
{
"cell_type": "code",
"source": [
"args = mnist1d.data.get_dataset_args()\n",
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
"\n",
"# The training and test input and outputs are in\n",
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
],
"metadata": {
"id": "YLxf7dJfPaqw"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
"cell_type": "code",
"source": [
"D_i = 40   # Input dimensions\n",
"D_k = 100  # Hidden dimensions\n",
"D_o = 10   # Output dimensions\n",
"# TO DO:\n",
"# Define a model with two hidden layers of size 100\n",
"# and ReLU activations between them\n",
"# Replace this line (see Figure 7.8 of the book for help):\n",
"model = torch.nn.Sequential(torch.nn.Linear(D_i, D_o))\n",
"\n",
"\n",
"def weights_init(layer_in):\n",
"  # TO DO:\n",
"  # Initialize the parameters with He initialization\n",
"  # Replace this line (see Figure 7.8 of the book for help)\n",
"  print(\"Initializing layer\")\n",
"\n",
"\n",
"# Call the function you just defined\n",
"model.apply(weights_init)\n"
],
"metadata": {
"id": "FxaB5vc0uevl"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# choose cross entropy loss function (equation 5.24)\n",
|
||||||
|
"loss_function = torch.nn.CrossEntropyLoss()\n",
|
||||||
|
"# construct SGD optimizer and initialize learning rate and momentum\n",
|
||||||
|
"optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
|
||||||
|
"# object that decreases learning rate by half every 10 epochs\n",
|
||||||
|
"scheduler = StepLR(optimizer, step_size=10, gamma=0.5)\n",
|
||||||
|
"# create 100 dummy data points and store in data loader class\n",
|
||||||
|
"x_train = torch.tensor(data['x'].astype('float32'))\n",
|
||||||
|
"y_train = torch.tensor(data['y'].transpose().astype('long'))\n",
"x_test= torch.tensor(data['x_test'].astype('float32'))\n",
"y_test = torch.tensor(data['y_test'].astype('long'))\n",
"\n",
"# load the data into a class that creates the batches\n",
"data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
"\n",
"# Initialize model weights\n",
"model.apply(weights_init)\n",
"\n",
"# loop over the dataset n_epoch times\n",
"n_epoch = 50\n",
"# store the loss and the % error at each epoch\n",
"losses_train = np.zeros((n_epoch))\n",
"errors_train = np.zeros((n_epoch))\n",
"losses_test = np.zeros((n_epoch))\n",
"errors_test = np.zeros((n_epoch))\n",
"\n",
"for epoch in range(n_epoch):\n",
" # loop over batches\n",
" for i, batch in enumerate(data_loader):\n",
" # retrieve inputs and labels for this batch\n",
" x_batch, y_batch = batch\n",
" # zero the parameter gradients\n",
" optimizer.zero_grad()\n",
" # forward pass -- calculate model output\n",
" pred = model(x_batch)\n",
" # compute the loss\n",
" loss = loss_function(pred, y_batch)\n",
" # backward pass\n",
" loss.backward()\n",
" # SGD update\n",
" optimizer.step()\n",
"\n",
" # Run whole dataset to get statistics -- normally wouldn't do this\n",
" pred_train = model(x_train)\n",
" pred_test = model(x_test)\n",
" _, predicted_train_class = torch.max(pred_train.data, 1)\n",
" _, predicted_test_class = torch.max(pred_test.data, 1)\n",
" errors_train[epoch] = 100 - 100 * (predicted_train_class == y_train).float().sum() / len(y_train)\n",
" errors_test[epoch]= 100 - 100 * (predicted_test_class == y_test).float().sum() / len(y_test)\n",
" losses_train[epoch] = loss_function(pred_train, y_train).item()\n",
" losses_test[epoch]= loss_function(pred_test, y_test).item()\n",
" print(f'Epoch {epoch:5d}, train loss {losses_train[epoch]:.6f}, train error {errors_train[epoch]:3.2f}, test loss {losses_test[epoch]:.6f}, test error {errors_test[epoch]:3.2f}')\n",
"\n",
" # tell scheduler to consider updating learning rate\n",
" scheduler.step()"
],
"metadata": {
"id": "_rX6N3VyyQTY"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Plot the results\n",
"fig, ax = plt.subplots()\n",
"ax.plot(errors_train,'r-',label='train')\n",
"ax.plot(errors_test,'b-',label='test')\n",
"ax.set_ylim(0,100); ax.set_xlim(0,n_epoch)\n",
"ax.set_xlabel('Epoch'); ax.set_ylabel('Error')\n",
"ax.set_title('Train Error %3.2f, Test Error %3.2f'%(errors_train[-1],errors_test[-1]))\n",
"ax.legend()\n",
"plt.show()\n",
"\n",
"# Plot the results\n",
"fig, ax = plt.subplots()\n",
"ax.plot(losses_train,'r-',label='train')\n",
"ax.plot(losses_test,'b-',label='test')\n",
"ax.set_xlim(0,n_epoch)\n",
"ax.set_xlabel('Epoch'); ax.set_ylabel('Loss')\n",
"ax.set_title('Train loss %3.2f, Test loss %3.2f'%(losses_train[-1],losses_test[-1]))\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "yI-l6kA_EH9G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**TO DO**\n",
"\n",
"Play with the model -- try changing the number of layers, hidden units, learning rate, batch size, momentum or anything else you like. See if you can improve the test results.\n",
"\n",
"Is it a good idea to optimize the hyperparameters in this way? Will the final result be a good estimate of the true test performance?"
],
"metadata": {
"id": "q-yT6re6GZS4"
}
}
]
}
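The closing markdown cell above asks whether tuning hyperparameters against the test set yields a good estimate of true test performance. As an editor's aside outside the notebook JSON, here is a minimal sketch of the standard remedy: tune on a separate validation split and touch the test set only once at the end. All names below (`split_indices`, the split fractions) are illustrative, not part of the notebooks.

```python
# Sketch: three-way split so hyperparameter choices never see the test set.
import numpy as np

def split_indices(n, frac_train=0.8, frac_val=0.1, seed=0):
    """Shuffle indices 0..n-1 and split into train/validation/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(frac_train * n)
    n_val = int(frac_val * n)
    # tune hyperparameters on val; report the test error exactly once
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(4000)
print(len(train_idx), len(val_idx), len(test_idx))  # 3200 400 400
```

If the test error itself drives the hyperparameter search, the reported number is optimistically biased, because the hyperparameters have effectively been fit to the test set.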
350
Notebooks/Chap08/8_2_Bias_Variance_Trade_Off.ipynb
Normal file
@@ -0,0 +1,350 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPz1B8kFc21JvGTDwqniloA",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap08/8_2_Bias_Variance_Trade_Off.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 8.2: Bias-Variance Trade-Off**\n",
"\n",
"This notebook investigates the bias-variance trade-off for the toy model used throughout chapter 8 and reproduces the bias/variance trade-off curves seen in figure 8.9.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "L6chybAVFJW2"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "01Cu4SGZOVAi"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# The true function that we are trying to estimate, defined on [0,1]\n",
"def true_function(x):\n",
" y = np.exp(np.sin(x*(2*3.1413)))\n",
" return y"
],
"metadata": {
"id": "bSK2_EGyOgHu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Generate some data points with or without noise\n",
"def generate_data(n_data, sigma_y=0.3):\n",
" # Generate x values quasi-uniformly\n",
" x = np.ones(n_data)\n",
" for i in range(n_data):\n",
" x[i] = np.random.uniform(i/n_data, (i+1)/n_data, 1)\n",
"\n",
" # y value from running through function and adding noise\n",
" y = np.ones(n_data)\n",
" for i in range(n_data):\n",
" y[i] = true_function(x[i])\n",
" y[i] += np.random.normal(0, sigma_y, 1)\n",
" return x,y\n"
],
"metadata": {
"id": "yzZr2tcJO5pq"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draw the fitted function, together with the uncertainty used to generate the points\n",
"def plot_function(x_func, y_func, x_data=None,y_data=None, x_model = None, y_model =None, sigma_func = None, sigma_model=None):\n",
"\n",
" fig,ax = plt.subplots()\n",
" ax.plot(x_func, y_func, 'k-')\n",
" if sigma_func is not None:\n",
" ax.fill_between(x_func, y_func-2*sigma_func, y_func+2*sigma_func, color='lightgray')\n",
"\n",
" if x_data is not None:\n",
" ax.plot(x_data, y_data, 'o', color='#d18362')\n",
"\n",
" if x_model is not None:\n",
" ax.plot(x_model, y_model, '-', color='#7fe7de')\n",
"\n",
" if sigma_model is not None:\n",
" ax.fill_between(x_model, y_model-2*sigma_model, y_model+2*sigma_model, color='lightgray')\n",
"\n",
" ax.set_xlim(0,1)\n",
" ax.set_xlabel('Input, $x$')\n",
" ax.set_ylabel('Output, $y$')\n",
" plt.show()"
],
"metadata": {
"id": "xfq1SD_ZOi6G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Generate true function\n",
"x_func = np.linspace(0, 1.0, 100)\n",
"y_func = true_function(x_func);\n",
"\n",
"# Generate some data points\n",
"np.random.seed(1)\n",
"sigma_func = 0.3\n",
"n_data = 15\n",
"x_data,y_data = generate_data(n_data, sigma_func)\n",
"\n",
"# Plot the function, data and uncertainty\n",
"plot_function(x_func, y_func, x_data, y_data, sigma_func=sigma_func)"
],
"metadata": {
"id": "2tP-p7B6Qnuf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define model -- beta is a scalar and omega has size n_hidden,1\n",
"def network(x, beta, omega):\n",
" # Retrieve number of hidden units\n",
" n_hidden = omega.shape[0]\n",
"\n",
" y = np.zeros_like(x)\n",
" for c_hidden in range(n_hidden):\n",
" # Evaluate activations based on shifted lines (figure 8.4b-d)\n",
" line_vals = x - c_hidden/n_hidden\n",
" h = line_vals * (line_vals > 0)\n",
" # Weight activations by omega parameters and sum\n",
" y = y + omega[c_hidden] * h\n",
" # Add bias, beta\n",
" y = y + beta\n",
"\n",
" return y"
],
"metadata": {
"id": "zYMLtS3nT-0y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# This fits the n_hidden+1 parameters (see fig 8.4a) in closed form.\n",
"# If you have studied linear algebra, then you will know it is a least\n",
"# squares solution of the form (A^TA)^-1A^Tb. If you don't recognize that,\n",
"# then just take it on trust that this gives you the best possible solution.\n",
"def fit_model_closed_form(x,y,n_hidden):\n",
" n_data = len(x)\n",
" A = np.ones((n_data, n_hidden+1))\n",
" for i in range(n_data):\n",
" for j in range(1,n_hidden+1):\n",
" A[i,j] = x[i]-(j-1)/n_hidden\n",
" if A[i,j] < 0:\n",
" A[i,j] = 0;\n",
"\n",
" ATA = np.matmul(np.transpose(A), A)\n",
" ATAInv = np.linalg.inv(ATA)\n",
" ATAInvAT = np.matmul(ATAInv, np.transpose(A))\n",
" beta_omega = np.matmul(ATAInvAT,y)\n",
" beta = beta_omega[0]\n",
" omega = beta_omega[1:]\n",
"\n",
" return beta, omega\n"
],
"metadata": {
"id": "MinJxLh1XTHx"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Closed form solution\n",
"beta, omega = fit_model_closed_form(x_data,y_data,n_hidden=3)\n",
"\n",
"# Get prediction for model across graph range\n",
"x_model = np.linspace(0,1,100);\n",
"y_model = network(x_model, beta, omega)\n",
"\n",
"# Draw the function and the model\n",
"plot_function(x_func, y_func, x_data,y_data, x_model, y_model)"
],
"metadata": {
"id": "HP7fiwNFSfWz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Run the model many times with different datasets and return the mean and variance\n",
"def get_model_mean_variance(n_data, n_datasets, n_hidden, sigma_func):\n",
"\n",
" # Create array that stores model results in rows\n",
" y_model_all = np.zeros((n_datasets, x_model.shape[0]))\n",
"\n",
" for c_dataset in range(n_datasets):\n",
" # TODO -- Generate n_data x,y pairs with standard deviation sigma_func\n",
" # Replace this line\n",
" x_data,y_data = np.zeros([1,n_data]),np.zeros([1,n_data])\n",
"\n",
" # TODO -- Fit the model\n",
" # Replace this line:\n",
" beta = 0; omega = np.zeros([n_hidden,1])\n",
"\n",
" # TODO -- Run the fitted model on x_model\n",
" # Replace this line\n",
" y_model = np.zeros_like(x_model);\n",
"\n",
" # Store the model results\n",
" y_model_all[c_dataset,:] = y_model\n",
"\n",
" # Get mean and standard deviation of model\n",
" mean_model = np.mean(y_model_all,axis=0)\n",
" std_model = np.std(y_model_all,axis=0)\n",
"\n",
" # Return the mean and standard deviation of the fitted model\n",
" return mean_model, std_model"
],
"metadata": {
"id": "bL553uSaYidy"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's generate N random data sets, fit the model N times and look at the mean and variance\n",
"n_datasets = 100\n",
"n_data = 15\n",
"sigma_func = 0.3\n",
"n_hidden = 5\n",
"\n",
"# Get mean and variance of fitted model\n",
"np.random.seed(1)\n",
"mean_model, std_model = get_model_mean_variance(n_data, n_datasets, n_hidden, sigma_func) ;\n",
"\n",
"# Plot the results\n",
"plot_function(x_func, y_func, x_data,y_data, x_model, mean_model, sigma_model=std_model)"
],
"metadata": {
"id": "Wxk64t2SoX9c"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO -- Experiment with changing the number of data points and the number of hidden variables\n",
"# in the model. Get a feeling for what happens in terms of the bias (squared deviation between cyan and black lines)\n",
"# and the variance (gray region) as we manipulate these quantities."
],
"metadata": {
"id": "QO6mFaKNJ3J_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Plot the noise, bias and variance as a function of capacity\n",
"hidden_variables = [1,2,3,4,5,6,7,8,9,10,11,12]\n",
"bias = np.zeros((len(hidden_variables),1)) ;\n",
"variance = np.zeros((len(hidden_variables),1)) ;\n",
"\n",
"n_datasets = 100\n",
"n_data = 15\n",
"sigma_func = 0.3\n",
"n_hidden = 5\n",
"\n",
"# Set random seed so that we get the same result every time\n",
"np.random.seed(1)\n",
"\n",
"for c_hidden in range(len(hidden_variables)):\n",
" # Get mean and variance of fitted model\n",
" mean_model, std_model = get_model_mean_variance(n_data, n_datasets, hidden_variables[c_hidden], sigma_func) ;\n",
" # TODO -- Estimate bias and variance\n",
" # Replace these lines\n",
"\n",
" # Compute variance -- average of the model variance (average squared deviation of fitted models around mean fitted model)\n",
" variance[c_hidden] = 0\n",
" # Compute bias (average squared deviation of mean fitted model around true function)\n",
" bias[c_hidden] = 0\n",
"\n",
"# Plot the results\n",
"fig,ax = plt.subplots()\n",
"ax.plot(hidden_variables, variance, 'k-')\n",
"ax.plot(hidden_variables, bias, 'r-')\n",
"ax.plot(hidden_variables, variance+bias, 'g-')\n",
"ax.set_xlim(0,12)\n",
"ax.set_ylim(0,0.5)\n",
"ax.set_xlabel(\"Model capacity\")\n",
"ax.set_ylabel(\"Variance\")\n",
"ax.legend(['Variance', 'Bias', 'Bias + Variance'])\n",
"plt.show()\n"
],
"metadata": {
"id": "ICKjqAlx3Ka9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "WKUyOAywL_b2"
},
"execution_count": null,
"outputs": []
}
]
}
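The TODOs in `get_model_mean_variance` above ask for repeated fits over fresh noisy datasets, followed by bias and variance estimates. As an editor's aside outside the notebook JSON, here is a self-contained sketch of that recipe with a toy degree-1 polynomial standing in for the notebook's ReLU network; the fitting method and all names here are illustrative substitutions, not the notebook's solution.

```python
# Sketch: estimate bias^2 and variance by refitting a simple model on many
# noisy datasets drawn from the same true function (toy stand-in model).
import numpy as np

def true_function(x):
    return np.exp(np.sin(x * 2.0 * np.pi))

rng = np.random.default_rng(1)
x_model = np.linspace(0, 1, 100)
y_true = true_function(x_model)

n_datasets, n_data, sigma = 100, 15, 0.3
y_model_all = np.zeros((n_datasets, x_model.size))
for d in range(n_datasets):
    x = rng.uniform(0, 1, n_data)
    y = true_function(x) + rng.normal(0, sigma, n_data)
    coeffs = np.polyfit(x, y, deg=1)            # fit the toy model
    y_model_all[d] = np.polyval(coeffs, x_model)

mean_model = y_model_all.mean(axis=0)
# variance: average squared deviation of fitted models around the mean fit
variance = np.mean(np.var(y_model_all, axis=0))
# bias^2: average squared deviation of the mean fit from the true function
bias_sq = np.mean((mean_model - y_true) ** 2)
print(f"variance {variance:.3f}, bias^2 {bias_sq:.3f}")
```

A linear fit is too rigid for this true function, so the bias term dominates; richer stand-in models would shift weight from bias to variance, which is exactly the trade-off the notebook plots.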
270
Notebooks/Chap08/8_3_Double_Descent.ipynb
Normal file
@@ -0,0 +1,270 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"gpuType": "T4",
"authorship_tag": "ABX9TyN/KUpEObCKnHZ/4Onp5sHG",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap08/8_3_Double_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 8.3: Double Descent**\n",
"\n",
"This notebook investigates double descent as described in section 8.4 of the book.\n",
"\n",
"It uses the MNIST-1D database which can be found at https://github.com/greydanus/mnist1d\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "L6chybAVFJW2"
}
},
{
"cell_type": "code",
"source": [
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
"!git clone https://github.com/greydanus/mnist1d"
],
"metadata": {
"id": "fn9BP5N5TguP"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import torch, torch.nn as nn\n",
"from torch.utils.data import TensorDataset, DataLoader\n",
"from torch.optim.lr_scheduler import StepLR\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import mnist1d\n",
"import random\n",
"random.seed(0)\n",
"\n",
"# Try attaching to GPU -- use \"Change runtime type\" to change to GPU\n",
"DEVICE = str(torch.device('cuda' if torch.cuda.is_available() else 'cpu'))\n",
"print('Using:', DEVICE)"
],
"metadata": {
"id": "hFxuHpRqTgri"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"args = mnist1d.data.get_dataset_args()\n",
"args.num_samples = 8000\n",
"args.train_split = 0.5\n",
"args.corr_noise_scale = 0.25\n",
"args.iid_noise_scale=2e-2\n",
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=True)\n",
"\n",
"# Add 15% noise to training labels\n",
"for c_y in range(len(data['y'])):\n",
" random_number = random.random()\n",
" if random_number < 0.15 :\n",
" random_int = int(random.random() * 10)\n",
" data['y'][c_y] = random_int\n",
"\n",
"# The training and test input and outputs are in\n",
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
],
"metadata": {
"id": "PW2gyXL5UkLU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Initialize the parameters with He initialization\n",
"def weights_init(layer_in):\n",
" if isinstance(layer_in, nn.Linear):\n",
" nn.init.kaiming_uniform_(layer_in.weight)\n",
" layer_in.bias.data.fill_(0.0)\n",
"\n",
"# Return an initialized model with two hidden layers and n_hidden hidden units at each\n",
"def get_model(n_hidden):\n",
"\n",
" D_i = 40 # Input dimensions\n",
" D_k = n_hidden # Hidden dimensions\n",
" D_o = 10 # Output dimensions\n",
"\n",
" # Define a model with two hidden layers of size D_k\n",
" # and ReLU activations between them\n",
" model = nn.Sequential(\n",
" nn.Linear(D_i, D_k),\n",
" nn.ReLU(),\n",
" nn.Linear(D_k, D_k),\n",
" nn.ReLU(),\n",
" nn.Linear(D_k, D_o))\n",
"\n",
" # Initialize the weights with the function defined above\n",
" model.apply(weights_init)\n",
"\n",
" # Return the model\n",
" return model ;"
],
"metadata": {
"id": "hAIvZOAlTnk9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def fit_model(model, data):\n",
"\n",
" # choose cross entropy loss function (equation 5.24)\n",
" loss_function = torch.nn.CrossEntropyLoss()\n",
" # construct SGD optimizer and initialize learning rate and momentum\n",
" # optimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n",
" optimizer = torch.optim.SGD(model.parameters(), lr = 0.01, momentum=0.9)\n",
"\n",
"\n",
" # convert the training and test data to torch tensors\n",
" x_train = torch.tensor(data['x'].astype('float32'))\n",
" y_train = torch.tensor(data['y'].transpose().astype('long'))\n",
" x_test= torch.tensor(data['x_test'].astype('float32'))\n",
" y_test = torch.tensor(data['y_test'].astype('long'))\n",
"\n",
" # load the data into a class that creates the batches\n",
" data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
"\n",
" # loop over the dataset n_epoch times\n",
" n_epoch = 1000\n",
"\n",
" for epoch in range(n_epoch):\n",
" # loop over batches\n",
" for i, batch in enumerate(data_loader):\n",
" # retrieve inputs and labels for this batch\n",
" x_batch, y_batch = batch\n",
" # zero the parameter gradients\n",
" optimizer.zero_grad()\n",
" # forward pass -- calculate model output\n",
" pred = model(x_batch)\n",
" # compute the loss\n",
" loss = loss_function(pred, y_batch)\n",
" # backward pass\n",
" loss.backward()\n",
" # SGD update\n",
" optimizer.step()\n",
"\n",
" # Run whole dataset to get statistics -- normally wouldn't do this\n",
" pred_train = model(x_train)\n",
" pred_test = model(x_test)\n",
" _, predicted_train_class = torch.max(pred_train.data, 1)\n",
" _, predicted_test_class = torch.max(pred_test.data, 1)\n",
" errors_train = 100 - 100 * (predicted_train_class == y_train).float().sum() / len(y_train)\n",
" errors_test= 100 - 100 * (predicted_test_class == y_test).float().sum() / len(y_test)\n",
" losses_train = loss_function(pred_train, y_train).item()\n",
" losses_test= loss_function(pred_test, y_test).item()\n",
" if epoch%100 ==0 :\n",
" print(f'Epoch {epoch:5d}, train loss {losses_train:.6f}, train error {errors_train:3.2f}, test loss {losses_test:.6f}, test error {errors_test:3.2f}')\n",
"\n",
" return errors_train, errors_test\n"
],
"metadata": {
"id": "AazlQhheWmHk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The following code produces the double descent curve by training the model with different numbers of hidden units and plotting the test error.\n",
"\n",
"TO DO:\n",
"\n",
"*Before* you run the code, and considering that there are 4000 training examples, predict:<br>\n",
"\n",
"1. At what capacity do you think the training error will become zero?\n",
"2. At what capacity do you expect the first minimum of the double descent curve to appear?\n",
"3. At what capacity do you expect the maximum of the double descent curve to appear?"
],
"metadata": {
"id": "IcP4UPMudxPS"
}
},
{
"cell_type": "code",
"source": [
"# This code will take a while (~30 mins on GPU) to run! Go and make a cup of coffee!\n",
"\n",
"hidden_variables = np.array([2,4,6,8,10,14,18,22,26,30,35,40,45,50,55,60,70,80,90,100,120,140,160,180,200,250,300,400]) ;\n",
"# Use float storage -- zeros_like on the integer array above would truncate the error percentages\n",
"errors_train_all = np.zeros_like(hidden_variables, dtype=float)\n",
"errors_test_all = np.zeros_like(hidden_variables, dtype=float)\n",
"\n",
"# For each hidden variable size\n",
"for c_hidden in range(len(hidden_variables)):\n",
" print(f'Training model with {hidden_variables[c_hidden]:3d} hidden variables')\n",
" # Get a model\n",
" model = get_model(hidden_variables[c_hidden]) ;\n",
" # Train the model\n",
" errors_train, errors_test = fit_model(model, data)\n",
" # Store the results\n",
" errors_train_all[c_hidden] = errors_train\n",
" errors_test_all[c_hidden]= errors_test"
],
"metadata": {
"id": "K4OmBZGHWXpk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Plot the results\n",
"fig, ax = plt.subplots()\n",
"ax.plot(hidden_variables, errors_train_all,'r-',label='train')\n",
"ax.plot(hidden_variables, errors_test_all,'b-',label='test')\n",
"ax.set_ylim(0,100);\n",
"ax.set_xlabel('Number of hidden variables'); ax.set_ylabel('Error')\n",
"ax.legend()\n",
"plt.show()\n"
],
"metadata": {
"id": "Rw-iRboTXbck"
},
"execution_count": null,
"outputs": []
}
]
}
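The predictions requested in the markdown cell above hinge on the parameter count of the two-hidden-layer network relative to the 4000 training examples. As an editor's aside outside the notebook JSON, the count follows directly from the layer sizes in `get_model` (input 40, hidden n, output 10, weights plus biases); a small sketch to check it:

```python
# Sketch: parameter count of an MLP with layers 40 -> n -> n -> 10.
def n_params(n_hidden, d_in=40, d_out=10):
    layer1 = d_in * n_hidden + n_hidden      # weights + biases, input layer
    layer2 = n_hidden * n_hidden + n_hidden  # hidden-to-hidden layer
    layer3 = n_hidden * d_out + d_out        # output layer
    return layer1 + layer2 + layer3          # = n^2 + 52n + 10 here

for n in [10, 40, 100, 200, 400]:
    print(n, n_params(n))
```

The interpolation threshold, where the model can first fit all 4000 (partly mislabeled) training examples, is expected near the capacity where this count passes the number of training examples.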
236
Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb
Normal file
@@ -0,0 +1,236 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPXPDEQiwNw+kYhWfg4kjz6",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 8.4: High-dimensional spaces**\n",
"\n",
"This notebook investigates the strange properties of high-dimensional spaces as discussed in the notes at the end of chapter 8.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "EjLK-kA1KnYX"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4ESMmnkYEVAb"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import scipy.special as sci"
]
},
{
"cell_type": "markdown",
"source": [
"# How close are points in high dimensions?\n",
"\n",
"In this part of the notebook, we investigate how close random points are in 2D, 100D, and 1000D. In each case, we generate 1000 points and calculate the Euclidean distance between each pair. "
],
"metadata": {
"id": "MonbPEitLNgN"
}
},
{
"cell_type": "code",
"source": [
"# Fix the random seed so we all have the same random numbers\n",
"np.random.seed(0)\n",
"n_data = 1000\n",
"# Create 1000 data examples (columns) each with 2 dimensions (rows)\n",
"n_dim = 2\n",
"x_2D = np.random.normal(size=(n_dim,n_data))\n",
"# Create 1000 data examples (columns) each with 100 dimensions (rows)\n",
"n_dim = 100\n",
"x_100D = np.random.normal(size=(n_dim,n_data))\n",
"# Create 1000 data examples (columns) each with 1000 dimensions (rows)\n",
"n_dim = 1000\n",
"x_1000D = np.random.normal(size=(n_dim,n_data))"
],
"metadata": {
"id": "vZSHVmcWEk14"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def distance_ratio(x):\n",
" # TODO -- replace the two lines below to calculate the largest and smallest Euclidean distance between\n",
" # the data points in the columns of x. DO NOT include the distance between the data point\n",
" # and itself (which is obviously zero)\n",
" smallest_dist = 1.0\n",
" largest_dist = 1.0\n",
"\n",
" # Calculate the ratio and return\n",
" dist_ratio = largest_dist / smallest_dist\n",
" return dist_ratio"
],
"metadata": {
"id": "PhVmnUs8ErD9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print('Ratio of largest to smallest distance 2D: %3.3f'%(distance_ratio(x_2D)))\n",
"print('Ratio of largest to smallest distance 100D: %3.3f'%(distance_ratio(x_100D)))\n",
"print('Ratio of largest to smallest distance 1000D: %3.3f'%(distance_ratio(x_1000D)))\n"
],
"metadata": {
"id": "0NdPxfn5GQuJ"
},
"execution_count": null,
"outputs": []
},
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"If you did this correctly, you will see that in high dimensions the distances between the closest and the farthest pairs of points are almost the same."
],
"metadata": {
"id": "uT68c0k8uBxs"
}
},
{
"cell_type": "markdown",
"source": [
"# Volume of a hypersphere\n",
"\n",
"In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the number of dimensions. Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
],
"metadata": {
"id": "b2FYKV1SL4Z7"
}
},
{
"cell_type": "code",
"source": [
"def volume_of_hypersphere(diameter, dimensions):\n",
" # Formula given in Problem 8.7 of the book\n",
" # You will need scipy.special.gamma()\n",
" # Check out: https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.gamma.html\n",
" # Also use this value for pi\n",
" pi = np.pi\n",
" # TODO replace this code with formula for the volume of a hypersphere\n",
" volume = 1.0\n",
"\n",
" return volume\n"
],
"metadata": {
"id": "CZoNhD8XJaHR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"diameter = 1.0\n",
"for c_dim in range(1,11):\n",
" print(\"Volume of unit diameter hypersphere in %d dimensions is %3.3f\"%(c_dim, volume_of_hypersphere(diameter, c_dim)))"
],
"metadata": {
"id": "fNTBlg_GPEUh"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You should see that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1)."
],
"metadata": {
"id": "PtaeGSNBunJl"
}
},
{
"cell_type": "markdown",
"source": [
"# Proportion of hypersphere in outer shell\n",
"\n",
"In the third part of the notebook you will calculate what proportion of the volume of a hypersphere lies in the outer 1% of the radius. Compute the volume of a hypersphere, then the volume of a hypersphere with 0.99 times the radius, and take the ratio."
],
"metadata": {
"id": "GdyMeOBmoXyF"
}
},
{
"cell_type": "code",
"source": [
"def get_prop_of_volume_in_outer_1_percent(dimension):\n",
" # TODO -- replace this line\n",
" proportion = 1.0\n",
"\n",
" return proportion"
],
"metadata": {
"id": "8_CxZ2AIpQ8w"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# While we're here, let's look at how much of the volume is in the outer 1% of the radius\n",
"for c_dim in [1,2,10,20,50,100,150,200,250,300]:\n",
" print('Proportion of volume in outer 1 percent of radius in %d dimensions =%3.3f'%(c_dim, get_prop_of_volume_in_outer_1_percent(c_dim)))"
],
"metadata": {
"id": "LtMDIn2qPVfJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You should see that by the time we get to 300 dimensions most of the volume is in the outer 1 percent. <br><br>\n",
"\n",
"The conclusion of all of this is that in high dimensions you should be sceptical of your intuitions about how things work. I have tried to visualize many things in one or two dimensions in the book, but you should also be sceptical about these visualizations!"
],
"metadata": {
"id": "n_FVeb-ZwzxD"
}
}
]
}
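The TODO cells in the notebook above can be completed in several ways; the sketch below is one possible solution, kept outside the notebook so the exercises stay intact. It assumes only NumPy and the standard library (using `math.gamma` in place of the `scipy.special.gamma` the notebook suggests; the two agree on real arguments).

```python
import math
import numpy as np

def distance_ratio(x):
    # x has shape (n_dim, n_data); columns are data points.
    # Pairwise squared distances via |a-b|^2 = |a|^2 + |b|^2 - 2 a.b
    sq_norms = np.sum(x * x, axis=0)
    sq_dist = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (x.T @ x)
    dist = np.sqrt(np.maximum(sq_dist, 0.0))  # guard against tiny negative rounding errors
    np.fill_diagonal(dist, np.inf)            # exclude each point's distance to itself
    smallest_dist = np.min(dist)
    np.fill_diagonal(dist, 0.0)
    largest_dist = np.max(dist)
    return largest_dist / smallest_dist

def volume_of_hypersphere(diameter, dimensions):
    # V_D(r) = r^D * pi^(D/2) / Gamma(D/2 + 1)
    radius = diameter / 2.0
    return (radius ** dimensions) * (np.pi ** (dimensions / 2.0)) \
        / math.gamma(dimensions / 2.0 + 1.0)

def get_prop_of_volume_in_outer_1_percent(dimension):
    # A sphere with 0.99 of the radius has 0.99^D of the volume,
    # so the outer 1% shell holds the rest
    return 1.0 - 0.99 ** dimension
```

With these definitions the 2D check from the text works out: `volume_of_hypersphere(1.0, 2)` matches the area of a circle of radius 0.5, and the outer-shell proportion approaches 1 as the dimension grows.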
538 Notebooks/Chap09/9_1_L2_Regularization.ipynb Normal file
@@ -0,0 +1,538 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPJzymRTuvoWggIskM2Kamc",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap09/9_1_L2_Regularization.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 9.1: L2 Regularization**\n",
"\n",
"This notebook investigates adding L2 regularization to the loss function for the Gabor model as in figure 9.1.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xhmIOLiZELV_"
},
"outputs": [],
"source": [
"# import libraries\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib import cm\n",
"from matplotlib.colors import ListedColormap"
]
},
{
"cell_type": "code",
"source": [
"# Let's create our training data: 30 pairs {x_i, y_i}\n",
"# We'll try to fit the Gabor model to these data\n",
"data = np.array([[-1.920e+00,-1.422e+01,1.490e+00,-1.940e+00,-2.389e+00,-5.090e+00,\n",
" -8.861e+00,3.578e+00,-6.010e+00,-6.995e+00,3.634e+00,8.743e-01,\n",
" -1.096e+01,4.073e-01,-9.467e+00,8.560e+00,1.062e+01,-1.729e-01,\n",
" 1.040e+01,-1.261e+01,1.574e-01,-1.304e+01,-2.156e+00,-1.210e+01,\n",
" -1.119e+01,2.902e+00,-8.220e+00,-1.179e+01,-8.391e+00,-4.505e+00],\n",
" [-1.051e+00,-2.482e-02,8.896e-01,-4.943e-01,-9.371e-01,4.306e-01,\n",
" 9.577e-03,-7.944e-02,1.624e-01,-2.682e-01,-3.129e-01,8.303e-01,\n",
" -2.365e-02,5.098e-01,-2.777e-01,3.367e-01,1.927e-01,-2.222e-01,\n",
" 6.352e-02,6.888e-03,3.224e-02,1.091e-02,-5.706e-01,-5.258e-02,\n",
" -3.666e-02,1.709e-01,-4.805e-02,2.008e-01,-1.904e-01,5.952e-01]])"
],
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Gabor model definition\n",
"def model(phi,x):\n",
" sin_component = np.sin(phi[0] + 0.06 * phi[1] * x)\n",
" gauss_component = np.exp(-(phi[0] + 0.06 * phi[1] * x) * (phi[0] + 0.06 * phi[1] * x) / 32)\n",
" y_pred = sin_component * gauss_component\n",
" return y_pred"
],
"metadata": {
"id": "WQUERmb2erAe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draw model\n",
"def draw_model(data,model,phi,title=None):\n",
" x_model = np.arange(-15,15,0.1)\n",
" y_model = model(phi,x_model)\n",
"\n",
" fig, ax = plt.subplots()\n",
" ax.plot(data[0,:],data[1,:],'bo')\n",
" ax.plot(x_model,y_model,'m-')\n",
" ax.set_xlim([-15,15]);ax.set_ylim([-1,1])\n",
" ax.set_xlabel('x'); ax.set_ylabel('y')\n",
" if title is not None:\n",
"  ax.set_title(title)\n",
" plt.show()"
],
"metadata": {
"id": "qFRe9POHF2le"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Initialize the parameters and draw the model\n",
"phi = np.zeros((2,1))\n",
"phi[0] = -5 # Horizontal offset\n",
"phi[1] = 25 # Frequency\n",
"draw_model(data,model,phi, \"Initial parameters\")\n"
],
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the sum of squares loss for the training data."
],
"metadata": {
"id": "QU5mdGvpTtEG"
}
},
{
"cell_type": "code",
"source": [
"def compute_loss(data_x, data_y, model, phi):\n",
" pred_y = model(phi, data_x)\n",
" loss = np.sum((pred_y-data_y)*(pred_y-data_y))\n",
" return loss"
],
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's plot the whole loss function"
],
"metadata": {
"id": "F3trnavPiHpH"
}
},
{
"cell_type": "code",
"source": [
"# Define pretty colormap\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
"g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
"\n",
"def draw_loss_function(compute_loss, data, model, my_colormap, phi_iters = None):\n",
"\n",
" # Make grid of offset/frequency values to plot\n",
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
" loss_mesh = np.zeros_like(freqs_mesh)\n",
" # Compute loss for every set of parameters\n",
" for idslope, slope in np.ndenumerate(freqs_mesh):\n",
"  loss_mesh[idslope] = compute_loss(data[0,:], data[1,:], model, np.array([[offsets_mesh[idslope]], [slope]]))\n",
"\n",
" fig,ax = plt.subplots()\n",
" fig.set_size_inches(8,8)\n",
" ax.contourf(offsets_mesh,freqs_mesh,loss_mesh,256,cmap=my_colormap)\n",
" ax.contour(offsets_mesh,freqs_mesh,loss_mesh,20,colors=['#80808080'])\n",
" if phi_iters is not None:\n",
"  ax.plot(phi_iters[0,:], phi_iters[1,:],'go-')\n",
" ax.set_ylim([2.5,22.5])\n",
" ax.set_xlabel('Offset $\\phi_{0}$'); ax.set_ylabel('Frequency, $\\phi_{1}$')\n",
" plt.show()"
],
"metadata": {
"id": "K-NTHpAAHlCl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"draw_loss_function(compute_loss, data, model, my_colormap)"
],
"metadata": {
"id": "l8HbvIupnTME"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the gradient vector for a given set of parameters:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial L}{\\partial \\boldsymbol\\phi} = \\begin{bmatrix}\\frac{\\partial L}{\\partial \\phi_0} \\\\\\frac{\\partial L}{\\partial \\phi_1} \\end{bmatrix}.\n",
"\\end{equation}"
],
"metadata": {
"id": "s9Duf05WqqSC"
}
},
{
"cell_type": "code",
"source": [
"# These came from writing out the expression for the sum of squares loss and taking the\n",
"# derivative with respect to phi0 and phi1. It was a lot of hassle to get it right!\n",
"def gabor_deriv_phi0(data_x,data_y,phi0, phi1):\n",
" x = 0.06 * phi1 * data_x + phi0\n",
" y = data_y\n",
" cos_component = np.cos(x)\n",
" sin_component = np.sin(x)\n",
" gauss_component = np.exp(-0.5 * x *x / 16)\n",
" deriv = cos_component * gauss_component - sin_component * gauss_component * x / 16\n",
" deriv = 2* deriv * (sin_component * gauss_component - y)\n",
" return np.sum(deriv)\n",
"\n",
"def gabor_deriv_phi1(data_x, data_y,phi0, phi1):\n",
" x = 0.06 * phi1 * data_x + phi0\n",
" y = data_y\n",
" cos_component = np.cos(x)\n",
" sin_component = np.sin(x)\n",
" gauss_component = np.exp(-0.5 * x *x / 16)\n",
" deriv = 0.06 * data_x * cos_component * gauss_component - 0.06 * data_x*sin_component * gauss_component * x / 16\n",
" deriv = 2*deriv * (sin_component * gauss_component - y)\n",
" return np.sum(deriv)\n",
"\n",
"def compute_gradient(data_x, data_y, phi):\n",
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])\n",
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])\n",
" # Return the gradient\n",
" return np.array([[dl_dphi0],[dl_dphi1]])"
],
"metadata": {
"id": "UpswmkL2qwBT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now we are ready to find the minimum. For simplicity, we'll just use regular (non-stochastic) gradient descent with a fixed learning rate."
],
"metadata": {
"id": "5EIjMM9Fw2eT"
}
},
{
"cell_type": "code",
"source": [
"def gradient_descent_step(phi, data, model):\n",
" # Step 1: Compute the gradient\n",
" gradient = compute_gradient(data[0,:],data[1,:], phi)\n",
" # Step 2: Update the parameters -- note we want to search in the negative (downhill) direction\n",
" alpha = 0.1\n",
" phi = phi - alpha * gradient\n",
" return phi"
],
"metadata": {
"id": "YVq6rmaWRD2M"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Initialize the parameters\n",
"n_steps = 41\n",
"phi_all = np.zeros((2,n_steps+1))\n",
"phi_all[0,0] = 2.6\n",
"phi_all[1,0] = 8.5\n",
"\n",
"# Measure loss and draw initial model\n",
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,0:1])\n",
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
"\n",
"for c_step in range (n_steps):\n",
" # Do gradient descent step\n",
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step(phi_all[:,c_step:c_step+1],data, model)\n",
" # Measure loss and draw model every 8th step\n",
" if c_step % 8 == 0:\n",
"  loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
"  draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
"\n",
"draw_loss_function(compute_loss, data, model, my_colormap, phi_all)\n"
],
"metadata": {
"id": "tOLd0gtdRLLS"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Unfortunately, when we start from this position, the solution descends to a local minimum and the final model doesn't fit well.<br><br>\n",
"\n",
"But what if we had some weak knowledge that the solution was in the vicinity of $\\phi_0=0.0$, $\\phi_{1} = 12.5$ (the center of the plot)?\n",
"\n",
"Let's add a term to the loss function that penalizes solutions that deviate from this point.\n",
"\n",
"\\begin{equation}\n",
"L'[\\boldsymbol\\phi] = L[\\boldsymbol\\phi]+ \\lambda\\cdot \\Bigl(\\phi_{0}^2+(\\phi_1-12.5)^2\\Bigr)\n",
"\\end{equation}\n",
"\n",
"where $\\lambda$ controls the relative importance of the original loss and the regularization term"
],
"metadata": {
"id": "3kKW2D5vEwhA"
}
},
{
"cell_type": "code",
"source": [
"# Computes the regularization term\n",
"def compute_reg_term(phi0,phi1):\n",
" # TODO compute the regularization term (term in large brackets in the above equation)\n",
" # Replace this line\n",
" reg_term = 0.0\n",
"\n",
" return reg_term\n",
"\n",
"# Define the loss function\n",
"# Note I called the weighting lambda_ to avoid confusing it with python lambda functions\n",
"def compute_loss2(data_x, data_y, model, phi, lambda_):\n",
" pred_y = model(phi, data_x)\n",
" loss = np.sum((pred_y-data_y)*(pred_y-data_y))\n",
" # Add the new term to the loss\n",
" loss = loss + lambda_ * compute_reg_term(phi[0],phi[1])\n",
"\n",
" return loss"
],
"metadata": {
"id": "4IgsQelgDdQ-"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Code to draw the regularization function\n",
"def draw_reg_function():\n",
"\n",
" # Make grid of offset/frequency values to plot\n",
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
" loss_mesh = np.zeros_like(freqs_mesh)\n",
" # Compute loss for every set of parameters\n",
" for idslope, slope in np.ndenumerate(freqs_mesh):\n",
"  loss_mesh[idslope] = compute_reg_term(offsets_mesh[idslope], slope)\n",
"\n",
" fig,ax = plt.subplots()\n",
" fig.set_size_inches(8,8)\n",
" ax.contourf(offsets_mesh,freqs_mesh,loss_mesh,256,cmap=my_colormap)\n",
" ax.contour(offsets_mesh,freqs_mesh,loss_mesh,20,colors=['#80808080'])\n",
" ax.set_ylim([2.5,22.5])\n",
" ax.set_xlabel('Offset $\\phi_{0}$'); ax.set_ylabel('Frequency, $\\phi_{1}$')\n",
" plt.show()\n",
"\n",
"# Draw the regularization function. It should look similar to figure 9.1b\n",
"draw_reg_function()"
],
"metadata": {
"id": "PFl9zWzLNjuK"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Code to draw loss function with regularization\n",
"def draw_loss_function_reg(data, model, lambda_, my_colormap, phi_iters = None):\n",
"\n",
" # Make grid of offset/frequency values to plot\n",
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
" loss_mesh = np.zeros_like(freqs_mesh)\n",
" # Compute loss for every set of parameters\n",
" for idslope, slope in np.ndenumerate(freqs_mesh):\n",
"  loss_mesh[idslope] = compute_loss2(data[0,:], data[1,:], model, np.array([[offsets_mesh[idslope]], [slope]]), lambda_)\n",
"\n",
" fig,ax = plt.subplots()\n",
" fig.set_size_inches(8,8)\n",
" ax.contourf(offsets_mesh,freqs_mesh,loss_mesh,256,cmap=my_colormap)\n",
" ax.contour(offsets_mesh,freqs_mesh,loss_mesh,20,colors=['#80808080'])\n",
" if phi_iters is not None:\n",
"  ax.plot(phi_iters[0,:], phi_iters[1,:],'go-')\n",
" ax.set_ylim([2.5,22.5])\n",
" ax.set_xlabel('Offset $\\phi_{0}$'); ax.set_ylabel('Frequency, $\\phi_{1}$')\n",
" plt.show()\n",
"\n",
"# This should look something like figure 9.1c\n",
"draw_loss_function_reg(data, model, 0.2, my_colormap)"
],
"metadata": {
"id": "mQdEWCQdN5Mt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO -- Experiment with different values of the regularization weight lambda_\n",
"# What do you predict will happen when it is very small (e.g. 0.01)?\n",
"# What do you predict will happen when it is large (e.g. 1.0)?\n",
"# What happens to the loss at the global minimum when we add the regularization term?\n",
"# Does it go up? Go down? Stay the same?"
],
"metadata": {
"id": "da047xjZQqj6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now we'll compute the derivatives $\\frac{\\partial L'}{\\partial\\phi_0}$ and $\\frac{\\partial L'}{\\partial\\phi_1}$ of the regularized loss function:\n",
"\n",
"\\begin{equation}\n",
"L'[\\boldsymbol\\phi] = L[\\boldsymbol\\phi]+ \\lambda\\cdot \\Bigl(\\phi_{0}^2+(\\phi_1-12.5)^2\\Bigr)\n",
"\\end{equation}\n",
"\n",
"so that we can perform gradient descent."
],
"metadata": {
"id": "z7k0QHRNRwtD"
}
},
{
"cell_type": "code",
"source": [
"def dldphi0(phi, lambda_):\n",
" # TODO compute the derivative with respect to phi0\n",
" # Replace this line:\n",
" deriv = 0\n",
"\n",
" return deriv\n",
"\n",
"def dldphi1(phi, lambda_):\n",
" # TODO compute the derivative with respect to phi1\n",
" # Replace this line:\n",
" deriv = 0\n",
"\n",
"\n",
" return deriv\n",
"\n",
"\n",
"def compute_gradient2(data_x, data_y, phi, lambda_):\n",
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])+dldphi0(np.squeeze(phi), lambda_)\n",
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])+dldphi1(np.squeeze(phi), lambda_)\n",
" # Return the gradient\n",
" return np.array([[dl_dphi0],[dl_dphi1]])\n",
"\n",
"def gradient_descent_step2(phi, lambda_, data, model):\n",
" # Step 1: Compute the gradient\n",
" gradient = compute_gradient2(data[0,:],data[1,:], phi, lambda_)\n",
" # Step 2: Update the parameters -- note we want to search in the negative (downhill) direction\n",
" alpha = 0.1\n",
" phi = phi - alpha * gradient\n",
" return phi"
],
"metadata": {
"id": "0OStdqo3Rv0a"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Finally, let's run gradient descent and draw the result\n",
"# Initialize the parameters\n",
"n_steps = 41\n",
"phi_all = np.zeros((2,n_steps+1))\n",
"phi_all[0,0] = 2.6\n",
"phi_all[1,0] = 8.5\n",
"lambda_ = 0.2\n",
"\n",
"# Measure loss and draw initial model\n",
"loss = compute_loss2(data[0,:], data[1,:], model, phi_all[:,0:1], lambda_)\n",
"draw_model(data,model,phi_all[:,0:1], \"Initial parameters, Loss = %f\"%(loss))\n",
"\n",
"for c_step in range (n_steps):\n",
" # Do gradient descent step\n",
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step2(phi_all[:,c_step:c_step+1],lambda_, data, model)\n",
" # Measure loss and draw model every 8th step\n",
" if c_step % 8 == 0:\n",
"  loss = compute_loss2(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2], lambda_)\n",
"  draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
"\n",
"draw_loss_function_reg(data, model, lambda_, my_colormap, phi_all)"
],
"metadata": {
"id": "c_V-Gv5hWgTE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You should see that the gradient descent algorithm now finds the correct minimum. By applying a tiny bit of domain knowledge (the parameter phi0 tends to be near zero and the parameter phi1 tends to be near 12.5), we get a better solution. However, the cost is that this solution is slightly biased towards this prior knowledge."
],
"metadata": {
"id": "wrszSLrqZG4k"
}
}
]
}
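For reference, the TODO functions in the notebook above (the regularization term from the displayed equation and its partial derivatives) might be completed as in the sketch below. This is kept outside the notebook so the exercises stay blank; the center point (0, 12.5) comes directly from the equation in the text.

```python
# Regularization term from the equation above: phi0^2 + (phi1 - 12.5)^2
def compute_reg_term(phi0, phi1):
    return phi0 * phi0 + (phi1 - 12.5) ** 2

# Partial derivatives of lambda_ * compute_reg_term with respect to phi0 and phi1;
# these are the extra terms added to the Gabor loss derivatives in compute_gradient2
def dldphi0(phi, lambda_):
    return 2.0 * lambda_ * phi[0]

def dldphi1(phi, lambda_):
    return 2.0 * lambda_ * (phi[1] - 12.5)
```

At the preferred point (0, 12.5) the term and both derivatives vanish, so the regularizer pulls the parameters toward that point without disturbing a solution that is already there.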
402 Notebooks/Chap09/9_2_Implicit_Regularization.ipynb Normal file
File diff suppressed because one or more lines are too long
326 Notebooks/Chap09/9_3_Ensembling.ipynb Normal file
@@ -0,0 +1,326 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyNuR7X+PMWRddy+WQr4gr5f",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap09/9_3_Ensembling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Notebook 9.3: Ensembling**\n",
        "\n",
        "This notebook investigates how ensembling can improve the performance of models. We'll work with the simplified neural network model (figure 8.4 of the book), which we can fit in closed form, and so we can eliminate any errors due to not finding the global maximum.\n",
        "\n",
        "Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen, or write code to complete the functions.\n",
        "\n",
        "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
      ],
      "metadata": {
        "id": "el8l05WQEO46"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xhmIOLiZELV_"
      },
      "outputs": [],
      "source": [
        "# Import libraries\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "# Define seed so we get the same results each time\n",
        "np.random.seed(1)"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# The true function that we are trying to estimate, defined on [0,1]\n",
        "def true_function(x):\n",
        "  y = np.exp(np.sin(x*(2*3.1413)))\n",
        "  return y"
      ],
      "metadata": {
        "id": "3hpqmFyQNrbt"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Generate some data points with or without noise\n",
        "def generate_data(n_data, sigma_y=0.3):\n",
        "  # Generate x values quasi-uniformly\n",
        "  x = np.ones(n_data)\n",
        "  for i in range(n_data):\n",
        "    x[i] = np.random.uniform(i/n_data, (i+1)/n_data, 1)\n",
        "\n",
        "  # y value from running through the function and adding noise\n",
        "  y = np.ones(n_data)\n",
        "  for i in range(n_data):\n",
        "    y[i] = true_function(x[i])\n",
        "    y[i] += np.random.normal(0, sigma_y, 1)\n",
        "  return x,y"
      ],
      "metadata": {
        "id": "skZMM5TbNwq4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Draw the fitted function, together with the uncertainty used to generate the points\n",
        "def plot_function(x_func, y_func, x_data=None, y_data=None, x_model=None, y_model=None, sigma_func=None, sigma_model=None):\n",
        "\n",
        "  fig, ax = plt.subplots()\n",
        "  ax.plot(x_func, y_func, 'k-')\n",
        "  if sigma_func is not None:\n",
        "    ax.fill_between(x_func, y_func-2*sigma_func, y_func+2*sigma_func, color='lightgray')\n",
        "\n",
        "  if x_data is not None:\n",
        "    ax.plot(x_data, y_data, 'o', color='#d18362')\n",
        "\n",
        "  if x_model is not None:\n",
        "    ax.plot(x_model, y_model, '-', color='#7fe7de')\n",
        "\n",
        "  if sigma_model is not None:\n",
        "    ax.fill_between(x_model, y_model-2*sigma_model, y_model+2*sigma_model, color='lightgray')\n",
        "\n",
        "  ax.set_xlim(0,1)\n",
        "  ax.set_xlabel('Input, $x$')\n",
        "  ax.set_ylabel('Output, $y$')\n",
        "  plt.show()"
      ],
      "metadata": {
        "id": "ziwD_R7lN0DY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Generate true function\n",
        "x_func = np.linspace(0, 1.0, 100)\n",
        "y_func = true_function(x_func)\n",
        "\n",
        "# Generate some data points\n",
        "np.random.seed(1)\n",
        "sigma_func = 0.3\n",
        "n_data = 15\n",
        "x_data, y_data = generate_data(n_data, sigma_func)\n",
        "\n",
        "# Plot the function, data, and uncertainty\n",
        "plot_function(x_func, y_func, x_data, y_data, sigma_func=sigma_func)"
      ],
      "metadata": {
        "id": "2CgKanwaN3NM"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Define model -- beta is a scalar and omega has size n_hidden,1\n",
        "def network(x, beta, omega):\n",
        "  # Retrieve number of hidden units\n",
        "  n_hidden = omega.shape[0]\n",
        "\n",
        "  y = np.zeros_like(x)\n",
        "  for c_hidden in range(n_hidden):\n",
        "    # Evaluate activations based on shifted lines (figure 8.4b-d)\n",
        "    line_vals = x - c_hidden/n_hidden\n",
        "    h = line_vals * (line_vals > 0)\n",
        "    # Weight activations by omega parameters and sum\n",
        "    y = y + omega[c_hidden] * h\n",
        "  # Add bias, beta\n",
        "  y = y + beta\n",
        "\n",
        "  return y"
      ],
      "metadata": {
        "id": "gorZ6i97N7AR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# This fits the n_hidden+1 parameters (see fig 8.4a) in closed form.\n",
        "# If you have studied linear algebra, then you will know it is a least\n",
        "# squares solution of the form (A^TA)^-1A^Tb. If you don't recognize that,\n",
        "# then just take it on trust that this gives you the best possible solution.\n",
        "def fit_model_closed_form(x, y, n_hidden):\n",
        "  n_data = len(x)\n",
        "  A = np.ones((n_data, n_hidden+1))\n",
        "  for i in range(n_data):\n",
        "    for j in range(1, n_hidden+1):\n",
        "      A[i,j] = x[i]-(j-1)/n_hidden\n",
        "      if A[i,j] < 0:\n",
        "        A[i,j] = 0\n",
        "\n",
        "  # Add a tiny bit of regularization\n",
        "  reg_value = 0.00001\n",
        "  regMat = reg_value * np.identity(n_hidden+1)\n",
        "  regMat[0,0] = 0\n",
        "\n",
        "  ATA = np.matmul(np.transpose(A), A) + regMat\n",
        "  ATAInv = np.linalg.inv(ATA)\n",
        "  ATAInvAT = np.matmul(ATAInv, np.transpose(A))\n",
        "  beta_omega = np.matmul(ATAInvAT, y)\n",
        "  beta = beta_omega[0]\n",
        "  omega = beta_omega[1:]\n",
        "\n",
        "  return beta, omega"
      ],
      "metadata": {
        "id": "bMrLZUIqOwiM"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Closed-form solution\n",
        "beta, omega = fit_model_closed_form(x_data, y_data, n_hidden=14)\n",
        "\n",
        "# Get prediction for model across graph range\n",
        "x_model = np.linspace(0, 1, 100)\n",
        "y_model = network(x_model, beta, omega)\n",
        "\n",
        "# Draw the function and the model\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model)\n",
        "\n",
        "# Compute the mean squared error between the fitted model (cyan) and the true curve (black)\n",
        "mean_sq_error = np.mean((y_model-y_func) * (y_model-y_func))\n",
        "print(f\"Mean square error = {mean_sq_error:3.3f}\")"
      ],
      "metadata": {
        "id": "mzmtdY8DOz16"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Now let's resample the data with replacement four times.\n",
        "n_model = 4\n",
        "# Array to store the predictions from all of our models\n",
        "all_y_model = np.zeros((n_model, len(y_model)))\n",
        "\n",
        "# For each model\n",
        "for c_model in range(n_model):\n",
        "  # TODO Sample data indices with replacement (use np.random.choice)\n",
        "  # Replace this line\n",
        "  resampled_indices = np.arange(0, n_data, 1)\n",
        "\n",
        "  # Extract the resampled x and y data\n",
        "  x_data_resampled = x_data[resampled_indices]\n",
        "  y_data_resampled = y_data[resampled_indices]\n",
        "\n",
        "  # Fit the model\n",
        "  beta, omega = fit_model_closed_form(x_data_resampled, y_data_resampled, n_hidden=14)\n",
        "\n",
        "  # Run the model\n",
        "  y_model_resampled = network(x_model, beta, omega)\n",
        "\n",
        "  # Store the results\n",
        "  all_y_model[c_model,:] = y_model_resampled\n",
        "\n",
        "  # Draw the function and the model\n",
        "  plot_function(x_func, y_func, x_data, y_data, x_model, y_model_resampled)\n",
        "\n",
        "  # Compute the mean squared error between the fitted model (cyan) and the true curve (black)\n",
        "  mean_sq_error = np.mean((y_model_resampled-y_func) * (y_model_resampled-y_func))\n",
        "  print(f\"Mean square error = {mean_sq_error:3.3f}\")"
      ],
      "metadata": {
        "id": "UKrAOEiKO8Go"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Plot the median of the results\n",
        "# TODO -- find the median prediction\n",
        "# Replace this line\n",
        "y_model_median = all_y_model[0,:]\n",
        "\n",
        "# Draw the function and the model\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model_median)\n",
        "\n",
        "# Compute the mean squared error between the fitted model (cyan) and the true curve (black)\n",
        "mean_sq_error = np.mean((y_model_median-y_func) * (y_model_median-y_func))\n",
        "print(f\"Mean square error = {mean_sq_error:3.3f}\")"
      ],
      "metadata": {
        "id": "cUTaW_GMS6nb"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Plot the mean of the results\n",
        "# TODO -- find the mean prediction\n",
        "# Replace this line\n",
        "y_model_mean = all_y_model[0,:]\n",
        "\n",
        "# Draw the function and the model\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model_mean)\n",
        "\n",
        "# Compute the mean squared error between the fitted model (cyan) and the true curve (black)\n",
        "mean_sq_error = np.mean((y_model_mean-y_func) * (y_model_mean-y_func))\n",
        "print(f\"Mean square error = {mean_sq_error:3.3f}\")"
      ],
      "metadata": {
        "id": "2MKxwVxuRvCx"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "You should see that both the median and mean models are better than any of the individual models. We have improved our performance at the cost of four times as much training time, storage, and inference time."
      ],
      "metadata": {
        "id": "K-jDZrfjWwBa"
      }
    }
  ]
}
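Editor's note: the TODOs in notebook 9.3 above amount to bootstrap aggregation (bagging): sample the training indices with replacement, refit, and combine the predictions with a median or mean. A minimal standalone sketch of that idea, not the notebook's solution -- the polynomial fit here is a stand-in assumption for the notebook's closed-form ReLU model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: noisy samples of a smooth function (stand-in for the notebook's setup)
x = np.linspace(0, 1, 15)
y = np.exp(np.sin(2 * np.pi * x)) + rng.normal(0, 0.3, size=x.shape)

x_grid = np.linspace(0, 1, 100)
n_model = 4
all_preds = np.zeros((n_model, x_grid.size))

for m in range(n_model):
    # Bootstrap: draw n indices with replacement (the notebook's first TODO)
    idx = rng.choice(x.size, size=x.size, replace=True)
    # Fit a model to the resampled data (polynomial fit as a stand-in)
    coeffs = np.polyfit(x[idx], y[idx], deg=5)
    all_preds[m] = np.polyval(coeffs, x_grid)

# Combine the ensemble across models (the remaining TODOs)
y_median = np.median(all_preds, axis=0)
y_mean = np.mean(all_preds, axis=0)
```

The median is more robust than the mean to a single badly fitted bootstrap model, which is why the notebook asks you to compare both combinations.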
424 Notebooks/Chap09/9_4_Bayesian_Approach.ipynb Normal file
@@ -0,0 +1,424 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyMB8B4269DVmrcLoCWrhzKF",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap09/9_4_Bayesian_Approach.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Notebook 9.4: Bayesian approach**\n",
        "\n",
        "This notebook investigates the Bayesian approach to model fitting and reproduces figure 9.11 from the book.\n",
        "\n",
        "Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen, or write code to complete the functions.\n",
        "\n",
        "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
      ],
      "metadata": {
        "id": "el8l05WQEO46"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xhmIOLiZELV_"
      },
      "outputs": [],
      "source": [
        "# Import libraries\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "# Define seed so we get the same results each time\n",
        "np.random.seed(1)"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "# The true function that we are trying to estimate, defined on [0,1]\n",
        "def true_function(x):\n",
        "  y = np.exp(np.sin(x*(2*3.1413)))\n",
        "  return y"
      ],
      "metadata": {
        "id": "3hpqmFyQNrbt"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Generate some data points with or without noise\n",
        "def generate_data(n_data, sigma_y=0.3):\n",
        "  # Generate x values quasi-uniformly\n",
        "  x = np.ones(n_data)\n",
        "  for i in range(n_data):\n",
        "    x[i] = np.random.uniform(i/n_data, (i+1)/n_data, 1)\n",
        "\n",
        "  # y value from running through the function and adding noise\n",
        "  y = np.ones(n_data)\n",
        "  for i in range(n_data):\n",
        "    y[i] = true_function(x[i])\n",
        "    y[i] += np.random.normal(0, sigma_y, 1)\n",
        "  return x,y"
      ],
      "metadata": {
        "id": "skZMM5TbNwq4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Draw the fitted function, together with the uncertainty used to generate the points\n",
        "def plot_function(x_func, y_func, x_data=None, y_data=None, x_model=None, y_model=None, sigma_func=None, sigma_model=None):\n",
        "\n",
        "  fig, ax = plt.subplots()\n",
        "  ax.plot(x_func, y_func, 'k-')\n",
        "  if sigma_func is not None:\n",
        "    ax.fill_between(x_func, y_func-2*sigma_func, y_func+2*sigma_func, color='lightgray')\n",
        "\n",
        "  if x_data is not None:\n",
        "    ax.plot(x_data, y_data, 'o', color='#d18362')\n",
        "\n",
        "  if x_model is not None:\n",
        "    ax.plot(x_model, y_model, '-', color='#7fe7de')\n",
        "\n",
        "  if sigma_model is not None:\n",
        "    ax.fill_between(x_model, y_model-2*sigma_model, y_model+2*sigma_model, color='lightgray')\n",
        "\n",
        "  ax.set_xlim(0,1)\n",
        "  ax.set_xlabel('Input, $x$')\n",
        "  ax.set_ylabel('Output, $y$')\n",
        "  plt.show()"
      ],
      "metadata": {
        "id": "ziwD_R7lN0DY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Generate true function\n",
        "x_func = np.linspace(0, 1.0, 100)\n",
        "y_func = true_function(x_func)\n",
        "\n",
        "# Generate some data points\n",
        "np.random.seed(1)\n",
        "sigma_func = 0.3\n",
        "n_data = 15\n",
        "x_data, y_data = generate_data(n_data, sigma_func)\n",
        "\n",
        "# Plot the function, data, and uncertainty\n",
        "plot_function(x_func, y_func, x_data, y_data, sigma_func=sigma_func)"
      ],
      "metadata": {
        "id": "2CgKanwaN3NM"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Define model -- beta is a scalar and omega has size n_hidden,1\n",
        "def network(x, beta, omega):\n",
        "  # Retrieve number of hidden units\n",
        "  n_hidden = omega.shape[0]\n",
        "\n",
        "  y = np.zeros_like(x)\n",
        "  for c_hidden in range(n_hidden):\n",
        "    # Evaluate activations based on shifted lines (figure 8.4b-d)\n",
        "    line_vals = x - c_hidden/n_hidden\n",
        "    h = line_vals * (line_vals > 0)\n",
        "    # Weight activations by omega parameters and sum\n",
        "    y = y + omega[c_hidden] * h\n",
        "  # Add bias, beta\n",
        "  y = y + beta\n",
        "\n",
        "  return y"
      ],
      "metadata": {
        "id": "gorZ6i97N7AR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now let's compute a probability distribution over the model parameters using Bayes's rule:\n",
        "\n",
        "\\begin{equation}\n",
        " Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) = \\frac{\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) Pr(\\boldsymbol\\phi)}{\\int \\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) Pr(\\boldsymbol\\phi)d\\boldsymbol\\phi }.\n",
        "\\end{equation}\n",
        "\n",
        "We'll define the prior $Pr(\\boldsymbol\\phi)$ as:\n",
        "\n",
        "\\begin{equation}\n",
        "Pr(\\boldsymbol\\phi) = \\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[\\mathbf{0},\\sigma^2_p\\mathbf{I}\\bigr],\n",
        "\\end{equation}\n",
        "\n",
        "where $\\boldsymbol\\phi=[\\omega_1,\\omega_2,\\ldots, \\omega_n, \\beta]^T$ and $\\sigma^2_{p}$ is the prior variance.\n",
        "\n",
        "The likelihood term $\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi)$ is given by:\n",
        "\n",
        "\\begin{eqnarray}\n",
        "\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) &=& \\prod_{i=1}^{I} \\mbox{Norm}_{y_i}\\bigl[\\mbox{f}[\\mathbf{x}_{i},\\boldsymbol\\phi],\\sigma^2\\bigr]\\\\\n",
        "&=& \\prod_{i=1}^{I} \\mbox{Norm}_{y_i}\\bigl[\\boldsymbol\\omega\\mathbf{h}_i+\\beta,\\sigma^2\\bigr]\\\\\n",
        "&=& \\mbox{Norm}_{\\mathbf{y}}\\bigl[\\mathbf{H}^T\\boldsymbol\\phi,\\sigma^2\\mathbf{I}\\bigr],\n",
        "\\end{eqnarray}\n",
        "\n",
        "where $\\sigma^2$ is the measurement noise and $\\mathbf{h}_{i}$ is the column vector of hidden variables for the $i^{th}$ input. Here the vector $\\mathbf{y}$ and matrix $\\mathbf{H}$ are defined as:\n",
        "\n",
        "\\begin{equation}\n",
        "\\mathbf{y} = \\begin{bmatrix}y_1\\\\y_2\\\\\\vdots\\\\y_{I}\\end{bmatrix}\\quad\\mbox{and}\\quad \\mathbf{H} = \\begin{bmatrix}\\mathbf{h}_{1}&\\mathbf{h}_{2}&\\cdots&\\mathbf{h}_{I}\\\\1&1&\\cdots &1\\end{bmatrix}.\n",
        "\\end{equation}\n"
      ],
      "metadata": {
        "id": "i8T_QduzeBmM"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "To make progress, we use the change-of-variables relation (Appendix C.3.4 of the book) to rewrite the likelihood term as a normal distribution in the parameters $\\boldsymbol\\phi$:\n",
        "\n",
        "\\begin{eqnarray}\n",
        "\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi)\n",
        "&=& \\mbox{Norm}_{\\mathbf{y}}\\bigl[\\mathbf{H}^T\\boldsymbol\\phi,\\sigma^2\\mathbf{I}\\bigr]\\\\\n",
        "&\\propto& \\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[(\\mathbf{H}\\mathbf{H}^T)^{-1}\\mathbf{H}\\mathbf{y},\\sigma^2(\\mathbf{H}\\mathbf{H}^T)^{-1}\\bigr].\n",
        "\\end{eqnarray}\n"
      ],
      "metadata": {
        "id": "JojV6ueRk49G"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Finally, we can combine the likelihood and prior terms using the product of two normal distributions relation (Appendix C.3.3):\n",
        "\n",
        "\\begin{eqnarray}\n",
        " Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) &\\propto& \\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) Pr(\\boldsymbol\\phi)\\\\\n",
        " &\\propto&\\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[(\\mathbf{H}\\mathbf{H}^T)^{-1}\\mathbf{H}\\mathbf{y},\\sigma^2(\\mathbf{H}\\mathbf{H}^T)^{-1}\\bigr] \\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[\\mathbf{0},\\sigma^2_p\\mathbf{I}\\bigr]\\\\\n",
        " &\\propto&\\mbox{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
        "\\end{eqnarray}\n",
        "\n",
        "In fact, since this is already a normal distribution, the constant of proportionality must be one, and we can write\n",
        "\n",
        "\\begin{eqnarray}\n",
        " Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) &=& \\mbox{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
        "\\end{eqnarray}\n",
        "\n",
        "TODO -- On a piece of paper, use the relations in Appendix C.3.3 and C.3.4 to fill in the missing steps and establish that this is the correct formula for the posterior."
      ],
      "metadata": {
        "id": "YX0O_Ciwp4W1"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def compute_H(x_data, n_hidden):\n",
        "  psi1 = np.ones((n_hidden+1,1))\n",
        "  psi0 = np.linspace(0.0, 1.0, num=n_hidden, endpoint=False) * -1\n",
        "\n",
        "  n_data = x_data.size\n",
        "  # First compute the hidden variables\n",
        "  H = np.ones((n_hidden+1, n_data))\n",
        "  for i in range(n_hidden):\n",
        "    for j in range(n_data):\n",
        "      # Compute preactivation\n",
        "      H[i,j] = psi1[i] * x_data[j] + psi0[i]\n",
        "      # Apply ReLU to get activation\n",
        "      if H[i,j] < 0:\n",
        "        H[i,j] = 0\n",
        "\n",
        "  return H\n",
        "\n",
        "def compute_param_mean_covar(x_data, y_data, n_hidden, sigma_sq, sigma_p_sq):\n",
        "  # Retrieve the matrix containing the hidden variables\n",
        "  H = compute_H(x_data, n_hidden)\n",
        "\n",
        "  # TODO -- Compute the covariance matrix (you will need np.transpose(), np.matmul(), np.linalg.inv())\n",
        "  # Replace this line\n",
        "  phi_covar = np.identity(n_hidden+1)\n",
        "\n",
        "  # TODO -- Compute the mean vector\n",
        "  # Replace this line\n",
        "  phi_mean = np.zeros((n_hidden+1,1))\n",
        "\n",
        "  return phi_mean, phi_covar"
      ],
      "metadata": {
        "id": "nF1AcgNDwm4t"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now we can draw samples from this distribution."
      ],
      "metadata": {
        "id": "GjPnlG4q0UFK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Define parameters\n",
        "n_hidden = 5\n",
        "sigma_sq = sigma_func * sigma_func\n",
        "# Arbitrary large value reflecting the fact that we are uncertain about the\n",
        "# parameters before we see any data\n",
        "sigma_p_sq = 1000\n",
        "\n",
        "# Compute the mean and covariance matrix\n",
        "phi_mean, phi_covar = compute_param_mean_covar(x_data, y_data, n_hidden, sigma_sq, sigma_p_sq)\n",
        "\n",
        "# Let's draw the mean model\n",
        "x_model = x_func\n",
        "y_model_mean = network(x_model, phi_mean[-1], phi_mean[0:n_hidden])\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model_mean)"
      ],
      "metadata": {
        "id": "K4vYc82D0BMq"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# TODO Draw two samples from the normal distribution over the parameters\n",
        "# Replace these lines\n",
        "phi_sample1 = np.zeros((n_hidden+1,1))\n",
        "phi_sample2 = np.zeros((n_hidden+1,1))\n",
        "\n",
        "# Run the network for these two sample sets of parameters\n",
        "y_model_sample1 = network(x_model, phi_sample1[-1], phi_sample1[0:n_hidden])\n",
        "y_model_sample2 = network(x_model, phi_sample2[-1], phi_sample2[0:n_hidden])\n",
        "\n",
        "# Draw the two models\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model_sample1)\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model_sample2)"
      ],
      "metadata": {
        "id": "TVIjhubkSw-R"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Now we need to perform inference for a new data point $\\mathbf{x}^*$ with corresponding hidden values $\\mathbf{h}^*$. Instead of having a single estimate of the parameters, we have a distribution over the possible parameters. So we marginalize (integrate) over this distribution to account for all possible values:\n",
        "\n",
        "\\begin{eqnarray}\n",
        "Pr(y^*|\\mathbf{x}^*) &=& \\int Pr(y^{*}|\\mathbf{x}^*,\\boldsymbol\\phi)Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) d\\boldsymbol\\phi\\\\\n",
        "&=& \\int \\mbox{Norm}_{y^*}\\bigl[\\begin{bmatrix}\\mathbf{h}^{*T}&1\\end{bmatrix}\\boldsymbol\\phi,\\sigma^2\\bigr]\\cdot\\mbox{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr]d\\boldsymbol\\phi\\\\\n",
        "&=& \\mbox{Norm}_{y^*}\\biggl[\\frac{1}{\\sigma^2} \\begin{bmatrix}\\mathbf{h}^{*T}&1\\end{bmatrix}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y}, \\begin{bmatrix}\\mathbf{h}^{*T}&1\\end{bmatrix}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\begin{bmatrix}\\mathbf{h}^*\\\\1\\end{bmatrix}\\biggr].\n",
        "\\end{eqnarray}\n",
        "\n",
        "To compute this, we reformulated the integrand using the relations from Appendices C.3.3 and C.3.4 as the product of a normal distribution in $\\boldsymbol\\phi$ and a constant with respect to $\\boldsymbol\\phi$. The integral of the normal distribution must be one, and so the final result is just the constant. This constant is itself a normal distribution in $y^*$. <br>\n",
        "\n",
        "If you feel so inclined, you can work through the math of this yourself."
      ],
      "metadata": {
        "id": "GiNg5EroUiUb"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Predict mean and variance of y_star from x_star\n",
        "def inference(x_star, x_data, y_data, sigma_sq, sigma_p_sq, n_hidden):\n",
        "\n",
        "  # Compute hidden variables\n",
        "  h_star = compute_H(x_star, n_hidden)\n",
        "  H = compute_H(x_data, n_hidden)\n",
        "\n",
        "  # TODO: Compute mean and variance of y*\n",
        "  # Replace these lines:\n",
        "  y_star_mean = 0\n",
        "  y_star_var = 1\n",
        "\n",
        "  return y_star_mean, y_star_var"
      ],
      "metadata": {
        "id": "ILxT4EfW2lUm"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "x_model = x_func\n",
        "y_model = np.zeros_like(x_model)\n",
        "y_model_std = np.zeros_like(x_model)\n",
        "for c_model in range(len(x_model)):\n",
        "  y_star_mean, y_star_var = inference(x_model[c_model]*np.ones((1,1)), x_data, y_data, sigma_sq, sigma_p_sq, n_hidden)\n",
        "  y_model[c_model] = y_star_mean\n",
        "  y_model_std[c_model] = np.sqrt(y_star_var)\n",
        "\n",
        "# Draw the model\n",
        "plot_function(x_func, y_func, x_data, y_data, x_model, y_model, sigma_model=y_model_std)\n"
      ],
      "metadata": {
        "id": "87cjUjMaixHZ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "TODO:\n",
        "\n",
        "1. Experiment with running this again with different numbers of hidden units. Make a prediction for what will happen when you increase / decrease them.\n",
        "2. Experiment with what happens if you set the prior variance $\\sigma^2_p$ to a smaller value like 1. How do you explain the results?"
      ],
      "metadata": {
        "id": "8Hcbe_16sK0F"
      }
    }
  ]
}
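Editor's note: the posterior derived in notebook 9.4 above reduces to two matrix expressions -- covariance $(\mathbf{H}\mathbf{H}^T/\sigma^2 + \mathbf{I}/\sigma_p^2)^{-1}$ and mean $(1/\sigma^2)\,\text{covar}\,\mathbf{H}\mathbf{y}$. A minimal sketch of those formulas with a random stand-in `H` (the matrix sizes and values here are illustrative assumptions, not the notebook's data or its solution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n_hidden + 1 parameter rows, I data columns
n_params, n_data = 6, 15
H = rng.normal(size=(n_params, n_data))  # stand-in for the hidden-unit matrix
y = rng.normal(size=n_data)
sigma_sq, sigma_p_sq = 0.09, 1000.0      # noise and prior variances

# Posterior covariance: (H H^T / sigma^2 + I / sigma_p^2)^-1
phi_covar = np.linalg.inv(H @ H.T / sigma_sq + np.identity(n_params) / sigma_p_sq)

# Posterior mean: (1 / sigma^2) * covar @ H @ y
phi_mean = phi_covar @ H @ y / sigma_sq

# Predictive distribution for a new point with augmented hidden vector h_star
h_star = rng.normal(size=n_params)
y_star_mean = h_star @ phi_mean
y_star_var = h_star @ phi_covar @ h_star
```

Because `H @ H.T / sigma_sq` is positive semi-definite and the prior term adds a strictly positive diagonal, the matrix being inverted is positive definite, so `phi_covar` is a valid covariance and `y_star_var` is positive.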
346 Notebooks/Chap09/9_5_Augmentation.ipynb Normal file
@@ -0,0 +1,346 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyM3wq9CHLjekkIXIgXRxueE",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap09/9_5_Augmentation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Notebook 9.5: Augmentation**\n",
        "\n",
        "This notebook investigates data augmentation for the MNIST-1D model.\n",
        "\n",
        "Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen, or write code to complete the functions.\n",
        "\n",
        "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
      ],
      "metadata": {
        "id": "el8l05WQEO46"
      }
    },
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
|
||||||
|
"!git clone https://github.com/greydanus/mnist1d"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "syvgxgRr3myY"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import torch, torch.nn as nn\n",
|
||||||
|
"from torch.utils.data import TensorDataset, DataLoader\n",
|
||||||
|
"from torch.optim.lr_scheduler import StepLR\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"import mnist1d\n",
|
||||||
|
"import random"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ckrNsYd13pMe"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"args = mnist1d.data.get_dataset_args()\n",
|
||||||
|
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
|
||||||
|
"\n",
|
||||||
|
"# The training and test input and outputs are in\n",
|
||||||
|
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
|
||||||
|
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
|
||||||
|
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
|
||||||
|
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "D_Woo9U730lZ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"D_i = 40 # Input dimensions\n",
|
||||||
|
"D_k = 200 # Hidden dimensions\n",
|
||||||
|
"D_o = 10 # Output dimensions\n",
|
||||||
|
"\n",
|
||||||
|
"# Define a model with two hidden layers of size 100\n",
|
||||||
|
"# And ReLU activations between them\n",
|
||||||
|
"model = nn.Sequential(\n",
|
||||||
|
"nn.Linear(D_i, D_k),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(D_k, D_k),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(D_k, D_o))\n",
|
||||||
|
"\n",
|
||||||
|
"def weights_init(layer_in):\n",
|
||||||
|
" # Initialize the parameters with He initialization\n",
|
||||||
|
" if isinstance(layer_in, nn.Linear):\n",
|
||||||
|
" nn.init.kaiming_uniform_(layer_in.weight)\n",
|
||||||
|
" layer_in.bias.data.fill_(0.0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Call the function you just defined\n",
|
||||||
|
"model.apply(weights_init)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "JfIFWFIL33eF"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# choose cross entropy loss function (equation 5.24)\n",
|
||||||
|
"loss_function = torch.nn.CrossEntropyLoss()\n",
|
||||||
|
"# construct SGD optimizer and initialize learning rate and momentum\n",
|
||||||
|
"optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
|
||||||
|
"# object that decreases learning rate by half every 10 epochs\n",
|
||||||
|
"scheduler = StepLR(optimizer, step_size=10, gamma=0.5)\n",
|
||||||
|
"# create 100 dummy data points and store in data loader class\n",
|
||||||
|
"x_train = torch.tensor(data['x'].astype('float32'))\n",
|
||||||
|
"y_train = torch.tensor(data['y'].transpose().astype('long'))\n",
|
||||||
|
"x_test= torch.tensor(data['x_test'].astype('float32'))\n",
|
||||||
|
"y_test = torch.tensor(data['y_test'].astype('long'))\n",
|
||||||
|
"\n",
|
||||||
|
"# load the data into a class that creates the batches\n",
|
||||||
|
"data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize model weights\n",
|
||||||
|
"model.apply(weights_init)\n",
|
||||||
|
"\n",
|
||||||
|
"# loop over the dataset n_epoch times\n",
|
||||||
|
"n_epoch = 50\n",
|
||||||
|
"# store the loss and the % correct at each epoch\n",
|
||||||
|
"errors_train = np.zeros((n_epoch))\n",
|
||||||
|
"errors_test = np.zeros((n_epoch))\n",
|
||||||
|
"\n",
|
||||||
|
"for epoch in range(n_epoch):\n",
|
||||||
|
" # loop over batches\n",
|
||||||
|
" for i, batch in enumerate(data_loader):\n",
|
||||||
|
" # retrieve inputs and labels for this batch\n",
|
||||||
|
" x_batch, y_batch = batch\n",
|
||||||
|
" # zero the parameter gradients\n",
|
||||||
|
" optimizer.zero_grad()\n",
|
||||||
|
" # forward pass -- calculate model output\n",
|
||||||
|
" pred = model(x_batch)\n",
|
||||||
|
" # compute the loss\n",
|
||||||
|
" loss = loss_function(pred, y_batch)\n",
|
||||||
|
" # backward pass\n",
|
||||||
|
" loss.backward()\n",
|
||||||
|
" # SGD update\n",
|
||||||
|
" optimizer.step()\n",
|
||||||
|
"\n",
|
||||||
|
" # Run whole dataset to get statistics -- normally wouldn't do this\n",
|
||||||
|
" pred_train = model(x_train)\n",
|
||||||
|
" pred_test = model(x_test)\n",
|
||||||
|
" _, predicted_train_class = torch.max(pred_train.data, 1)\n",
|
||||||
|
" _, predicted_test_class = torch.max(pred_test.data, 1)\n",
|
||||||
|
" errors_train[epoch] = 100 - 100 * (predicted_train_class == y_train).float().sum() / len(y_train)\n",
|
||||||
|
" errors_test[epoch]= 100 - 100 * (predicted_test_class == y_test).float().sum() / len(y_test)\n",
|
||||||
|
" print(f'Epoch {epoch:5d}, train error {errors_train[epoch]:3.2f}, test error {errors_test[epoch]:3.2f}')"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YFfVbTPE4BkJ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Plot the results\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(errors_train,'r-',label='train')\n",
|
||||||
|
"ax.plot(errors_test,'b-',label='test')\n",
|
||||||
|
"ax.set_ylim(0,100); ax.set_xlim(0,n_epoch)\n",
|
||||||
|
"ax.set_xlabel('Epoch'); ax.set_ylabel('Error')\n",
|
||||||
|
"ax.set_title('TrainError %3.2f, Test Error %3.2f'%(errors_train[-1],errors_test[-1]))\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FmGDd4vB8LyM"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The best test performance is about 33%. Let's see if we can improve on that by augmenting the data."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "55XvoPDO8Qp-"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def augment(data_in):\n",
|
||||||
|
" # Create output vector\n",
|
||||||
|
" data_out = np.zeros_like(data_in)\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO: Shift the input data by a random offset\n",
|
||||||
|
" # (rotating, so points that would go off the end, are added back to the beginning)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" data_out = np.zeros_like(data_in) ;\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO: # Randomly scale the data by a factor drawn from a uniform distribution over [0.8,1.2]\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" data_out = np.array(data_out)\n",
|
||||||
|
"\n",
|
||||||
|
" return data_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "IP6z2iox8MOF"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"n_data_orig = data['x'].shape[0]\n",
|
||||||
|
"# We'll double the amount o fdata\n",
|
||||||
|
"n_data_augment = n_data_orig+4000\n",
|
||||||
|
"augmented_x = np.zeros((n_data_augment, D_i))\n",
|
||||||
|
"augmented_y = np.zeros(n_data_augment)\n",
|
||||||
|
"# First n_data_orig rows are original data\n",
|
||||||
|
"augmented_x[0:n_data_orig,:] = data['x']\n",
|
||||||
|
"augmented_y[0:n_data_orig] = data['y']\n",
|
||||||
|
"\n",
|
||||||
|
"# Fill in rest of with augmented data\n",
|
||||||
|
"for c_augment in range(n_data_orig, n_data_augment):\n",
|
||||||
|
" # Choose a data point randomly\n",
|
||||||
|
" random_data_index = random.randint(0, n_data_orig-1)\n",
|
||||||
|
" # Augment the point and store\n",
|
||||||
|
" augmented_x[c_augment,:] = augment(data['x'][random_data_index,:])\n",
|
||||||
|
" augmented_y[c_augment] = data['y'][random_data_index]\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "bzN0lu5J95AJ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# choose cross entropy loss function (equation 5.24)\n",
|
||||||
|
"loss_function = torch.nn.CrossEntropyLoss()\n",
|
||||||
|
"# construct SGD optimizer and initialize learning rate and momentum\n",
|
||||||
|
"optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
|
||||||
|
"# object that decreases learning rate by half every 50 epochs\n",
|
||||||
|
"scheduler = StepLR(optimizer, step_size=10, gamma=0.5)\n",
|
||||||
|
"# create 100 dummy data points and store in data loader class\n",
|
||||||
|
"x_train = torch.tensor(augmented_x.astype('float32'))\n",
|
||||||
|
"y_train = torch.tensor(augmented_y.transpose().astype('long'))\n",
|
||||||
|
"x_test= torch.tensor(data['x_test'].astype('float32'))\n",
|
||||||
|
"y_test = torch.tensor(data['y_test'].astype('long'))\n",
|
||||||
|
"\n",
|
||||||
|
"# load the data into a class that creates the batches\n",
|
||||||
|
"data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize model weights\n",
|
||||||
|
"model.apply(weights_init)\n",
|
||||||
|
"\n",
|
||||||
|
"# loop over the dataset n_epoch times\n",
|
||||||
|
"n_epoch = 50\n",
|
||||||
|
"# store the loss and the % correct at each epoch\n",
|
||||||
|
"errors_train_aug = np.zeros((n_epoch))\n",
|
||||||
|
"errors_test_aug = np.zeros((n_epoch))\n",
|
||||||
|
"\n",
|
||||||
|
"for epoch in range(n_epoch):\n",
|
||||||
|
" # loop over batches\n",
|
||||||
|
" for i, batch in enumerate(data_loader):\n",
|
||||||
|
" # retrieve inputs and labels for this batch\n",
|
||||||
|
" x_batch, y_batch = batch\n",
|
||||||
|
" # zero the parameter gradients\n",
|
||||||
|
" optimizer.zero_grad()\n",
|
||||||
|
" # forward pass -- calculate model output\n",
|
||||||
|
" pred = model(x_batch)\n",
|
||||||
|
" # compute the loss\n",
|
||||||
|
" loss = loss_function(pred, y_batch)\n",
|
||||||
|
" # backward pass\n",
|
||||||
|
" loss.backward()\n",
|
||||||
|
" # SGD update\n",
|
||||||
|
" optimizer.step()\n",
|
||||||
|
"\n",
|
||||||
|
" # Run whole dataset to get statistics -- normally wouldn't do this\n",
|
||||||
|
" pred_train = model(x_train)\n",
|
||||||
|
" pred_test = model(x_test)\n",
|
||||||
|
" _, predicted_train_class = torch.max(pred_train.data, 1)\n",
|
||||||
|
" _, predicted_test_class = torch.max(pred_test.data, 1)\n",
|
||||||
|
" errors_train_aug[epoch] = 100 - 100 * (predicted_train_class == y_train).float().sum() / len(y_train)\n",
|
||||||
|
" errors_test_aug[epoch]= 100 - 100 * (predicted_test_class == y_test).float().sum() / len(y_test)\n",
|
||||||
|
" print(f'Epoch {epoch:5d}, train error {errors_train_aug[epoch]:3.2f}, test error {errors_test_aug[epoch]:3.2f}')"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "hZUNrXpS_kRs"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Plot the results\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(errors_train,'r-',label='train')\n",
|
||||||
|
"ax.plot(errors_test,'b-',label='test')\n",
|
||||||
|
"ax.plot(errors_test_aug,'g-',label='test (augmented)')\n",
|
||||||
|
"ax.set_ylim(0,100); ax.set_xlim(0,n_epoch)\n",
|
||||||
|
"ax.set_xlabel('Epoch'); ax.set_ylabel('Error')\n",
|
||||||
|
"ax.set_title('TrainError %3.2f, Test Error %3.2f'%(errors_train_aug[-1],errors_test_aug[-1]))\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "IcnAW4ixBnuc"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Hopefully, you should see an improvement in performance when we augment the data."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "jgsR7ScJHc9b"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
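The `augment` function in notebook 9.5 above is deliberately left as an exercise, so it is not filled in here. For reference outside the notebook, one possible completion under the stated spec (a random circular shift, then a uniform scale factor in [0.8, 1.2]); the standalone name `augment_1d` and the explicit `rng` argument are my own additions, not the notebook's:

```python
import numpy as np

def augment_1d(data_in, rng):
    """Circularly shift a 1D example by a random offset, then rescale it."""
    n = len(data_in)
    shift = rng.integers(0, n)       # random offset; np.roll wraps points around the ends
    data_out = np.roll(data_in, shift)
    scale = rng.uniform(0.8, 1.2)    # random amplitude factor
    return data_out * scale
```

In the notebook's augmentation loop one would call this as `augment_1d(data['x'][random_data_index, :], rng)` with a shared `rng = np.random.default_rng()`.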
376
Notebooks/Chap10/10_1_1D_Convolution.ipynb
Normal file
@@ -0,0 +1,376 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPHUNRkJMI5LujaxIXNV60m",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_1_1D_Convolution.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 10.1: 1D Convolution**\n",
"\n",
"This notebook investigates 1D convolutional layers.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "nw7k5yCtOzoK"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a signal that we can apply convolution to\n",
"x = [5.2, 5.3, 5.4, 5.1, 10.1, 10.3, 9.9, 10.3, 3.2, 3.4, 3.3, 3.1]"
],
"metadata": {
"id": "lSSHuoEqO3Ly"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draw the signal\n",
"fig,ax = plt.subplots()\n",
"ax.plot(x, 'k-')\n",
"ax.set_xlim(0,11)\n",
"ax.set_ylim(0, 12)\n",
"plt.show()"
],
"metadata": {
"id": "zVssv_wiREc2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's define a zero-padded convolution operation\n",
"# with a convolution kernel size of 3, a stride of 1, and a dilation of 0\n",
"# as in figure 10.2a-c. Write it yourself, don't call a library routine!\n",
"# Don't forget that Python arrays are indexed from zero, not from 1 as in the book figures\n",
"def conv_3_1_0_zp(x_in, omega):\n",
"  x_out = np.zeros_like(x_in)\n",
"  # TODO -- write this function\n",
"  # replace this line\n",
"  x_out = x_out\n",
"\n",
"  return x_out"
],
"metadata": {
"id": "MmfXED12RvNq"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's see what kind of things convolution can do.\n",
"First, it can average nearby values, smoothing the function:"
],
"metadata": {
"id": "Fof_Rs98Zovq"
}
},
{
"cell_type": "code",
"source": [
"omega = [0.33,0.33,0.33]\n",
"h = conv_3_1_0_zp(x, omega)\n",
"\n",
"# Check that you have computed this correctly\n",
"print(f\"Sum of output is {np.sum(h):3.3}, should be 71.1\")\n",
"\n",
"# Draw the signal\n",
"fig,ax = plt.subplots()\n",
"ax.plot(x, 'k-',label='before')\n",
"ax.plot(h, 'r-',label='after')\n",
"ax.set_xlim(0,11)\n",
"ax.set_ylim(0, 12)\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "HOcPZR6iWXsa"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Notice how the red function is a smoothed version of the black one as it has averaged adjacent values. The first and last outputs are considerably lower than the original curve though. Make sure that you understand why!<br><br>\n",
"\n",
"With different weights, the convolution can be used to find sharp changes in the function:"
],
"metadata": {
"id": "PBkNKUylZr-k"
}
},
{
"cell_type": "code",
"source": [
"omega = [-0.5,0,0.5]\n",
"h2 = conv_3_1_0_zp(x, omega)\n",
"\n",
"# Draw the signal\n",
"fig,ax = plt.subplots()\n",
"ax.plot(x, 'k-',label='before')\n",
"ax.plot(h2, 'r-',label='after')\n",
"ax.set_xlim(0,11)\n",
"# ax.set_ylim(0, 12)\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "o8T5WKeuZrgS"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Notice that the convolution has a peak where the original function went up and a trough where it went down. It is roughly zero where the function is locally flat. This convolution approximates a derivative. <br> <br>\n",
"\n",
"Now let's define the convolutions from figure 10.3."
],
"metadata": {
"id": "ogfCVThJgtPx"
}
},
{
"cell_type": "code",
"source": [
"# Now let's define a zero-padded convolution operation\n",
"# with a convolution kernel size of 3, a stride of 2, and a dilation of 0\n",
"# as in figure 10.2a-c. Write it yourself, don't call a library routine!\n",
"def conv_3_2_0_zp(x_in, omega):\n",
"  x_out = np.zeros(int(np.ceil(len(x_in)/2)))\n",
"  # TODO -- write this function\n",
"  # replace this line\n",
"  x_out = x_out\n",
"\n",
"  return x_out"
],
"metadata": {
"id": "5QYrQmFMiDBj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"omega = [0.33,0.33,0.33]\n",
"h3 = conv_3_2_0_zp(x, omega)\n",
"\n",
"# If you have done this right, the output length should be six and it should\n",
"# contain every other value from the original convolution with stride 1\n",
"print(h)\n",
"print(h3)"
],
"metadata": {
"id": "CD96lnDHX72A"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's define a zero-padded convolution operation\n",
"# with a convolution kernel size of 5, a stride of 1, and a dilation of 0\n",
"# as in figure 10.2a-c. Write it yourself, don't call a library routine!\n",
"def conv_5_1_0_zp(x_in, omega):\n",
"  x_out = np.zeros_like(x_in)\n",
"  # TODO -- write this function\n",
"  # replace this line\n",
"  x_out = x_out\n",
"\n",
"  return x_out"
],
"metadata": {
"id": "lw46-gNUjDw7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"omega2 = [0.2, 0.2, 0.2, 0.2, 0.2]\n",
"h4 = conv_5_1_0_zp(x, omega2)\n",
"\n",
"# Check that you have computed this correctly\n",
"print(f\"Sum of output is {np.sum(h4):3.3}, should be 69.6\")\n",
"\n",
"# Draw the signal\n",
"fig,ax = plt.subplots()\n",
"ax.plot(x, 'k-',label='before')\n",
"ax.plot(h4, 'r-',label='after')\n",
"ax.set_xlim(0,11)\n",
"ax.set_ylim(0, 12)\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "JkKBL-nFk4bf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Finally let's define a zero-padded convolution operation\n",
"# with a convolution kernel size of 3, a stride of 1, and a dilation of 1\n",
"# as in figure 10.2a-c. Write it yourself, don't call a library routine!\n",
"# Don't forget that Python arrays are indexed from zero, not from 1 as in the book figures\n",
"def conv_3_1_1_zp(x_in, omega):\n",
"  x_out = np.zeros_like(x_in)\n",
"  # TODO -- write this function\n",
"  # replace this line\n",
"  x_out = x_out\n",
"\n",
"  return x_out"
],
"metadata": {
"id": "_aBcW46AljI0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"omega = [0.33,0.33,0.33]\n",
"h5 = conv_3_1_1_zp(x, omega)\n",
"\n",
"# Check that you have computed this correctly\n",
"print(f\"Sum of output is {np.sum(h5):3.3}, should be 68.3\")\n",
"\n",
"# Draw the signal\n",
"fig,ax = plt.subplots()\n",
"ax.plot(x, 'k-',label='before')\n",
"ax.plot(h5, 'r-',label='after')\n",
"ax.set_xlim(0,11)\n",
"ax.set_ylim(0, 12)\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "En-ByCqWlvMI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Finally, let's investigate representing convolutions as full matrices, and show we get the same answer."
],
"metadata": {
"id": "loBwu125lXx1"
}
},
{
"cell_type": "code",
"source": [
"# Compute matrix in figure 10.4 d\n",
"def get_conv_mat_3_1_0_zp(n_out, omega):\n",
"  omega_mat = np.zeros((n_out,n_out))\n",
"  # TODO Fill in this matrix\n",
"  # Replace this line:\n",
"  omega_mat = omega_mat\n",
"\n",
"  return omega_mat"
],
"metadata": {
"id": "U2RFWfGgs72j"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Run original convolution\n",
"omega = np.array([-1.0,0.5,-0.2])\n",
"h6 = conv_3_1_0_zp(x, omega)\n",
"print(h6)\n",
"\n",
"# If you have done this right, you should get the same answer\n",
"omega_mat = get_conv_mat_3_1_0_zp(len(x), omega)\n",
"h7 = np.matmul(omega_mat, x)\n",
"print(h7)\n"
],
"metadata": {
"id": "20IYxku8lMty"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"TODO: What do you expect to happen if we apply the last convolution twice? Can this be represented as a single convolution? If so, then what is it?"
],
"metadata": {
"id": "rYoQVhBfu8R4"
}
}
]
}
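The convolution functions in notebook 10.1 above are left as exercises, so they are not filled in there. As a reference outside the notebook, a minimal sketch of a zero-padded, kernel-size-3, stride-1 convolution (the standalone name `conv3_zero_pad` is my own) that reproduces the checksum the notebook quotes for the smoothing kernel:

```python
import numpy as np

def conv3_zero_pad(x_in, omega):
    """Kernel-size-3, stride-1 convolution with zero padding at both ends."""
    x_pad = np.concatenate(([0.0], np.asarray(x_in, dtype=float), [0.0]))
    w = np.asarray(omega, dtype=float)
    # each output is a weighted sum over a window of three padded inputs
    return np.array([w @ x_pad[i:i + 3] for i in range(len(x_in))])

# The notebook's test signal and smoothing weights; its check expects a sum near 71.1
x = [5.2, 5.3, 5.4, 5.1, 10.1, 10.3, 9.9, 10.3, 3.2, 3.4, 3.3, 3.1]
h = conv3_zero_pad(x, [0.33, 0.33, 0.33])
print(np.sum(h))
```

The stride-2 variant only differs in stepping the window start by two, and the dilated variant in spacing the three taps two samples apart.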
253
Notebooks/Chap10/10_2_Convolution_for_MNIST_1D.ipynb
Normal file
@@ -0,0 +1,253 @@
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyOgDisWDe/zHpfTGCH8AZ3i",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_2_Convolution_for_MNIST_1D.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 10.2: Convolution for MNIST-1D**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates a 1D convolutional network for MNIST-1D as in figure 10.7 and 10.8a.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
|
||||||
|
"!git clone https://github.com/greydanus/mnist1d"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "D5yLObtZCi9J"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import os\n",
|
||||||
|
"import torch, torch.nn as nn\n",
|
||||||
|
"from torch.utils.data import TensorDataset, DataLoader\n",
|
||||||
|
"from torch.optim.lr_scheduler import StepLR\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"import mnist1d\n",
|
||||||
|
"import random"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YrXWAH7sUWvU"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"args = mnist1d.data.get_dataset_args()\n",
|
||||||
|
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
|
||||||
|
"\n",
|
||||||
|
"# The training and test input and outputs are in\n",
|
||||||
|
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
|
||||||
|
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
|
||||||
|
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
|
||||||
|
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "twI72ZCrCt5z"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Load in the data\n",
|
||||||
|
"train_data_x = data['x'].transpose()\n",
|
||||||
|
"train_data_y = data['y']\n",
|
||||||
|
"val_data_x = data['x_test'].transpose()\n",
|
||||||
|
"val_data_y = data['y_test']\n",
|
||||||
|
"# Print out sizes\n",
|
||||||
|
"print(\"Train data: %d examples (columns), each of which has %d dimensions (rows)\"%((train_data_x.shape[1],train_data_x.shape[0])))\n",
|
||||||
|
"print(\"Validation data: %d examples (columns), each of which has %d dimensions (rows)\"%((val_data_x.shape[1],val_data_x.shape[0])))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "8bKADvLHbiV5"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Define the network"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_sFvRDGrl4qe"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# There are 40 input dimensions and 10 output dimensions for this data\n",
|
||||||
|
"# The inputs correspond to the 40 offsets in the MNIST1D template.\n",
|
||||||
|
"D_i = 40\n",
|
||||||
|
"# The outputs correspond to the 10 digits\n",
|
||||||
|
"D_o = 10\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO Create a model with the following layers\n",
|
||||||
|
"# 1. Convolutional layer, (input=length 40 and 1 channel, kernel size 3x3, stride 2, padding=\"valid\", 15 output channels )\n",
|
||||||
|
"# 2. ReLU\n",
|
||||||
|
"# 3. Convolutional layer, (input=length 19 and 15 channels, kernel size 3x3, stride 2, padding=\"valid\", 15 output channels )\n",
|
||||||
|
"# 4. ReLU\n",
|
||||||
|
"# 5. Convolutional layer, (input=length 9 and 15 channels, kernel size 3x3, stride 2, padding=\"valid\", 15 output channels)\n",
|
||||||
|
"# 6. ReLU\n",
|
||||||
|
"# 7. Flatten (converts 4x15 to length 60)\n",
|
||||||
|
"# 8. Linear layer (input size = 60, output size = 10)\n",
|
||||||
|
"# References:\n",
|
||||||
|
"# https://pytorch.org/docs/1.13/generated/torch.nn.Conv1d.html?highlight=conv1d#torch.nn.Conv1d\n",
|
||||||
|
"# https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html\n",
|
||||||
|
"# https://pytorch.org/docs/1.13/generated/torch.nn.Linear.html?highlight=linear#torch.nn.Linear\n",
|
||||||
|
"\n",
|
||||||
|
"# Replace the following function:\n",
|
||||||
|
"model = nn.Sequential(\n",
|
||||||
|
"nn.Flatten(),\n",
|
||||||
|
"nn.Linear(40, 100),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(100, 100),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(100, 10))\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FslroPJJffrh"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
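The TODO cell above describes the target architecture in comments. Here is a minimal sketch of one way to complete it with `torch.nn.Conv1d` (this is an illustrative completion under the stated layer sizes, not the official solution; layer lengths follow from `floor((L - 3)/2) + 1`):

```python
import torch
import torch.nn as nn

# Sketch of the network described in the TODO comments:
# three 1D convolutions (kernel size 3, stride 2, no padding), then flatten + linear.
model = nn.Sequential(
    nn.Conv1d(1, 15, kernel_size=3, stride=2, padding=0),   # length 40 -> 19
    nn.ReLU(),
    nn.Conv1d(15, 15, kernel_size=3, stride=2, padding=0),  # length 19 -> 9
    nn.ReLU(),
    nn.Conv1d(15, 15, kernel_size=3, stride=2, padding=0),  # length 9 -> 4
    nn.ReLU(),
    nn.Flatten(),                                           # 15 channels x length 4 -> 60
    nn.Linear(60, 10))

x = torch.randn(2, 1, 40)   # batch of 2 signals, 1 channel, length 40
print(model(x).shape)       # torch.Size([2, 10])
```

A quick shape check like this is a cheap way to verify the kernel/stride arithmetic before training.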
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# He initialization of weights\n",
|
||||||
|
"def weights_init(layer_in):\n",
|
||||||
|
" if isinstance(layer_in, nn.Linear):\n",
|
||||||
|
" nn.init.kaiming_uniform_(layer_in.weight)\n",
|
||||||
|
" layer_in.bias.data.fill_(0.0)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YgLaex1pfhqz"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# choose cross entropy loss function (equation 5.24 in the loss notes)\n",
|
||||||
|
"loss_function = nn.CrossEntropyLoss()\n",
|
||||||
|
"# construct SGD optimizer and initialize learning rate and momentum\n",
|
||||||
|
"optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
|
||||||
|
"# object that decreases learning rate by half every 20 epochs\n",
|
||||||
|
"scheduler = StepLR(optimizer, step_size=20, gamma=0.5)\n",
|
||||||
|
"# create 100 dummy data points and store in data loader class\n",
|
||||||
|
"x_train = torch.tensor(train_data_x.transpose().astype('float32'))\n",
|
||||||
|
"y_train = torch.tensor(train_data_y.astype('int64'))\n",
|
||||||
|
"x_val= torch.tensor(val_data_x.transpose().astype('float32'))\n",
|
||||||
|
"y_val = torch.tensor(val_data_y.astype('int64'))\n",
|
||||||
|
"\n",
|
||||||
|
"# load the data into a class that creates the batches\n",
|
||||||
|
"data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize model weights\n",
|
||||||
|
"model.apply(weights_init)\n",
|
||||||
|
"\n",
|
||||||
|
"# loop over the dataset n_epoch times\n",
|
||||||
|
"n_epoch = 100\n",
|
||||||
|
"# store the loss and the % correct at each epoch\n",
|
||||||
|
"losses_train = np.zeros((n_epoch))\n",
|
||||||
|
"errors_train = np.zeros((n_epoch))\n",
|
||||||
|
"losses_val = np.zeros((n_epoch))\n",
|
||||||
|
"errors_val = np.zeros((n_epoch))\n",
|
||||||
|
"\n",
|
||||||
|
"for epoch in range(n_epoch):\n",
|
||||||
|
" # loop over batches\n",
|
||||||
|
" for i, data in enumerate(data_loader):\n",
|
||||||
|
" # retrieve inputs and labels for this batch\n",
|
||||||
|
" x_batch, y_batch = data\n",
|
||||||
|
" # zero the parameter gradients\n",
|
||||||
|
" optimizer.zero_grad()\n",
|
||||||
|
" # forward pass -- calculate model output\n",
|
||||||
|
" pred = model(x_batch[:,None,:])\n",
|
||||||
|
" # compute the loss\n",
|
||||||
|
" loss = loss_function(pred, y_batch)\n",
|
||||||
|
" # backward pass\n",
|
||||||
|
" loss.backward()\n",
|
||||||
|
" # SGD update\n",
|
||||||
|
" optimizer.step()\n",
|
||||||
|
"\n",
|
||||||
|
" # Run whole dataset to get statistics -- normally wouldn't do this\n",
|
||||||
|
" pred_train = model(x_train[:,None,:])\n",
|
||||||
|
" pred_val = model(x_val[:,None,:])\n",
|
||||||
|
" _, predicted_train_class = torch.max(pred_train.data, 1)\n",
|
||||||
|
" _, predicted_val_class = torch.max(pred_val.data, 1)\n",
|
||||||
|
" errors_train[epoch] = 100 - 100 * (predicted_train_class == y_train).float().sum() / len(y_train)\n",
|
||||||
|
" errors_val[epoch]= 100 - 100 * (predicted_val_class == y_val).float().sum() / len(y_val)\n",
|
||||||
|
" losses_train[epoch] = loss_function(pred_train, y_train).item()\n",
|
||||||
|
" losses_val[epoch]= loss_function(pred_val, y_val).item()\n",
|
||||||
|
" print(f'Epoch {epoch:5d}, train loss {losses_train[epoch]:.6f}, train error {errors_train[epoch]:3.2f}, val loss {losses_val[epoch]:.6f}, percent error {errors_val[epoch]:3.2f}')\n",
|
||||||
|
"\n",
|
||||||
|
" # tell scheduler to consider updating learning rate\n",
|
||||||
|
" scheduler.step()\n",
|
||||||
|
"\n",
|
||||||
|
"# Plot the results\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(errors_train,'r-',label='train')\n",
|
||||||
|
"ax.plot(errors_val,'b-',label='validation')\n",
|
||||||
|
"ax.set_ylim(0,100); ax.set_xlim(0,n_epoch)\n",
|
||||||
|
"ax.set_xlabel('Epoch'); ax.set_ylabel('Error')\n",
|
||||||
|
"ax.set_title('Part I: Validation Result %3.2f'%(errors_val[-1]))\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "NYw8I_3mmX5c"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
431
Notebooks/Chap10/10_3_2D_Convolution.ipynb
Normal file
@@ -0,0 +1,431 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMmbD0cKYvIHXbKX4AupA1x",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_3_2D_Convolution.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 10.3: 2D Convolution**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the 2D convolution operation. It asks you to hand code the convolution so we can be sure that we are computing the same thing as in PyTorch. The next notebook uses the convolutional layers in PyTorch directly.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "VB_crnDGASX-"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import torch\n",
|
||||||
|
"# Set to print in reasonable form\n",
|
||||||
|
"np.set_printoptions(precision=3, floatmode=\"fixed\")\n",
|
||||||
|
"torch.set_printoptions(precision=3)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YAoWDUb_DezG"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"This routine performs convolution in PyTorch"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "eAwYWXzAElHG"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Perform convolution in PyTorch\n",
|
||||||
|
"def conv_pytorch(image, conv_weights, stride=1, pad =1):\n",
|
||||||
|
" # Convert image and kernel to tensors\n",
|
||||||
|
" image_tensor = torch.from_numpy(image) # (batchSize, channelsIn, imageHeightIn, imageWidthIn)\n",
|
||||||
|
" conv_weights_tensor = torch.from_numpy(conv_weights) # (channelsOut, channelsIn, kernelHeight, kernelWidth)\n",
|
||||||
|
" # Do the convolution\n",
|
||||||
|
" output_tensor = torch.nn.functional.conv2d(image_tensor, conv_weights_tensor, stride=stride, padding=pad)\n",
|
||||||
|
" # Convert back from PyTorch and return\n",
|
||||||
|
" return(output_tensor.numpy()) # (batchSize, channelsOut, imageHeightOut, imageWidthOut)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "xsmUIN-3BlWr"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"First we'll start with the simplest 2D convolution. Just one channel in and one channel out. A single image in the batch."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "A3Sm8bUWtDNO"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"def conv_numpy_1(image, weights, pad=1):\n",
|
||||||
|
"\n",
|
||||||
|
" # Perform zero padding\n",
|
||||||
|
" if pad != 0:\n",
|
||||||
|
" image = np.pad(image, ((0, 0), (0 ,0), (pad, pad), (pad, pad)),'constant')\n",
|
||||||
|
"\n",
|
||||||
|
" # Get sizes of image array and kernel weights\n",
|
||||||
|
" batchSize, channelsIn, imageHeightIn, imageWidthIn = image.shape\n",
|
||||||
|
" channelsOut, channelsIn, kernelHeight, kernelWidth = weights.shape\n",
|
||||||
|
"\n",
|
||||||
|
" # Get size of output arrays\n",
|
||||||
|
" imageHeightOut = np.floor(1 + imageHeightIn - kernelHeight).astype(int)\n",
|
||||||
|
" imageWidthOut = np.floor(1 + imageWidthIn - kernelWidth).astype(int)\n",
|
||||||
|
"\n",
|
||||||
|
" # Create output\n",
|
||||||
|
" out = np.zeros((batchSize, channelsOut, imageHeightOut, imageWidthOut), dtype=np.float32)\n",
|
||||||
|
"\n",
|
||||||
|
" for c_y in range(imageHeightOut):\n",
|
||||||
|
" for c_x in range(imageWidthOut):\n",
|
||||||
|
" for c_kernel_y in range(kernelHeight):\n",
|
||||||
|
" for c_kernel_x in range(kernelWidth):\n",
|
||||||
|
" # TODO -- Retrieve the image pixel and the weight from the convolution\n",
|
||||||
|
" # Only one image in batch, one input channel and one output channel, so these indices should all be zero\n",
|
||||||
|
" # Replace the two lines below\n",
|
||||||
|
" this_pixel_value = 1.0\n",
|
||||||
|
" this_weight = 1.0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" # Multiply these together and add to the output at this position\n",
|
||||||
|
" out[0, 0, c_y, c_x] += np.sum(this_pixel_value * this_weight)\n",
|
||||||
|
"\n",
|
||||||
|
" return out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EF8FWONVLo1Q"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
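The two TODO lines in the cell above ask for the pixel and weight at each loop position. A sketch of a completed routine (same structure as `conv_numpy_1`; the window for output position `(c_y, c_x)` starts at that position in the padded image, and the weight is indexed by the kernel offsets):

```python
import numpy as np

def conv_numpy_1_completed(image, weights, pad=1):
    # Zero-pad the two spatial dimensions only
    if pad != 0:
        image = np.pad(image, ((0, 0), (0, 0), (pad, pad), (pad, pad)), 'constant')
    batchSize, channelsIn, imageHeightIn, imageWidthIn = image.shape
    channelsOut, channelsIn, kernelHeight, kernelWidth = weights.shape
    imageHeightOut = imageHeightIn - kernelHeight + 1
    imageWidthOut = imageWidthIn - kernelWidth + 1
    out = np.zeros((batchSize, channelsOut, imageHeightOut, imageWidthOut), dtype=np.float32)
    for c_y in range(imageHeightOut):
        for c_x in range(imageWidthOut):
            for c_kernel_y in range(kernelHeight):
                for c_kernel_x in range(kernelWidth):
                    # One image, one channel in/out, so the first indices are zero
                    this_pixel_value = image[0, 0, c_y + c_kernel_y, c_x + c_kernel_x]
                    this_weight = weights[0, 0, c_kernel_y, c_kernel_x]
                    out[0, 0, c_y, c_x] += this_pixel_value * this_weight
    return out

# Tiny worked example: 3x3 image of 0..8, 2x2 kernel of ones, no padding
img = np.arange(9, dtype=np.float32).reshape(1, 1, 3, 3)
k = np.ones((1, 1, 2, 2), dtype=np.float32)
print(conv_numpy_1_completed(img, k, pad=0))  # windows sum to 8, 12, 20, 24
```

Note this computes cross-correlation (no kernel flip), which is what `torch.nn.functional.conv2d` also computes, so the comparison in the next cell is like-for-like.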
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set random seed so we always get same answer\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"n_batch = 1\n",
|
||||||
|
"image_height = 4\n",
|
||||||
|
"image_width = 6\n",
|
||||||
|
"channels_in = 1\n",
|
||||||
|
"kernel_size = 3\n",
|
||||||
|
"channels_out = 1\n",
|
||||||
|
"\n",
|
||||||
|
"# Create random input image\n",
|
||||||
|
"input_image= np.random.normal(size=(n_batch, channels_in, image_height, image_width))\n",
|
||||||
|
"# Create random convolution kernel weights\n",
|
||||||
|
"conv_weights = np.random.normal(size=(channels_out, channels_in, kernel_size, kernel_size))\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution using PyTorch\n",
|
||||||
|
"conv_results_pytorch = conv_pytorch(input_image, conv_weights, stride=1, pad=1)\n",
|
||||||
|
"print(\"PyTorch Results\")\n",
|
||||||
|
"print(conv_results_pytorch)\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"print(\"Your results\")\n",
|
||||||
|
"conv_results_numpy = conv_numpy_1(input_image, conv_weights)\n",
|
||||||
|
"print(conv_results_numpy)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "iw9KqXZTHN8v"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's now add in the possibility of using different strides"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "IYj_lxeGzaHX"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"def conv_numpy_2(image, weights, stride=1, pad=1):\n",
|
||||||
|
"\n",
|
||||||
|
" # Perform zero padding\n",
|
||||||
|
" if pad != 0:\n",
|
||||||
|
" image = np.pad(image, ((0, 0), (0 ,0), (pad, pad), (pad, pad)),'constant')\n",
|
||||||
|
"\n",
|
||||||
|
" # Get sizes of image array and kernel weights\n",
|
||||||
|
" batchSize, channelsIn, imageHeightIn, imageWidthIn = image.shape\n",
|
||||||
|
" channelsOut, channelsIn, kernelHeight, kernelWidth = weights.shape\n",
|
||||||
|
"\n",
|
||||||
|
" # Get size of output arrays\n",
|
||||||
|
" imageHeightOut = np.floor(1 + (imageHeightIn - kernelHeight) / stride).astype(int)\n",
|
||||||
|
" imageWidthOut = np.floor(1 + (imageWidthIn - kernelWidth) / stride).astype(int)\n",
|
||||||
|
"\n",
|
||||||
|
" # Create output\n",
|
||||||
|
" out = np.zeros((batchSize, channelsOut, imageHeightOut, imageWidthOut), dtype=np.float32)\n",
|
||||||
|
"\n",
|
||||||
|
" for c_y in range(imageHeightOut):\n",
|
||||||
|
" for c_x in range(imageWidthOut):\n",
|
||||||
|
" for c_kernel_y in range(kernelHeight):\n",
|
||||||
|
" for c_kernel_x in range(kernelWidth):\n",
|
||||||
|
" # TODO -- Retrieve the image pixel and the weight from the convolution\n",
|
||||||
|
" # Only one image in batch, one input channel and one output channel, so these indices should all be zero\n",
|
||||||
|
" # Replace the two lines below\n",
|
||||||
|
" this_pixel_value = 1.0\n",
|
||||||
|
" this_weight = 1.0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" # Multiply these together and add to the output at this position\n",
|
||||||
|
" out[0, 0, c_y, c_x] += np.sum(this_pixel_value * this_weight)\n",
|
||||||
|
"\n",
|
||||||
|
" return out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "GiujmLhqHN1F"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
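With a stride, the only change to the indexing is where each receptive field starts: output position `(c_y, c_x)` reads from `(c_y * stride, c_x * stride)` in the padded image. A sketch of the completed strided routine:

```python
import numpy as np

def conv_numpy_2_completed(image, weights, stride=1, pad=1):
    if pad != 0:
        image = np.pad(image, ((0, 0), (0, 0), (pad, pad), (pad, pad)), 'constant')
    batchSize, channelsIn, imageHeightIn, imageWidthIn = image.shape
    channelsOut, channelsIn, kernelHeight, kernelWidth = weights.shape
    # Output size formula from the cell above: floor(1 + (in - kernel) / stride)
    imageHeightOut = 1 + (imageHeightIn - kernelHeight) // stride
    imageWidthOut = 1 + (imageWidthIn - kernelWidth) // stride
    out = np.zeros((batchSize, channelsOut, imageHeightOut, imageWidthOut), dtype=np.float32)
    for c_y in range(imageHeightOut):
        for c_x in range(imageWidthOut):
            for c_kernel_y in range(kernelHeight):
                for c_kernel_x in range(kernelWidth):
                    # Window origin scales with the stride
                    this_pixel_value = image[0, 0, c_y * stride + c_kernel_y,
                                             c_x * stride + c_kernel_x]
                    this_weight = weights[0, 0, c_kernel_y, c_kernel_x]
                    out[0, 0, c_y, c_x] += this_pixel_value * this_weight
    return out

# Example: 4x4 ones image, 2x2 ones kernel, stride 2, no padding -> 2x2 output of 4s
img = np.ones((1, 1, 4, 4), dtype=np.float32)
k = np.ones((1, 1, 2, 2), dtype=np.float32)
print(conv_numpy_2_completed(img, k, stride=2, pad=0))
```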
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set random seed so we always get same answer\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"n_batch = 1\n",
|
||||||
|
"image_height = 12\n",
|
||||||
|
"image_width = 10\n",
|
||||||
|
"channels_in = 1\n",
|
||||||
|
"kernel_size = 3\n",
|
||||||
|
"channels_out = 1\n",
|
||||||
|
"stride = 2\n",
|
||||||
|
"\n",
|
||||||
|
"# Create random input image\n",
|
||||||
|
"input_image= np.random.normal(size=(n_batch, channels_in, image_height, image_width))\n",
|
||||||
|
"# Create random convolution kernel weights\n",
|
||||||
|
"conv_weights = np.random.normal(size=(channels_out, channels_in, kernel_size, kernel_size))\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution using PyTorch\n",
|
||||||
|
"conv_results_pytorch = conv_pytorch(input_image, conv_weights, stride, pad=1)\n",
|
||||||
|
"print(\"PyTorch Results\")\n",
|
||||||
|
"print(conv_results_pytorch)\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"print(\"Your results\")\n",
|
||||||
|
"conv_results_numpy = conv_numpy_2(input_image, conv_weights, stride, pad=1)\n",
|
||||||
|
"print(conv_results_numpy)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FeJy6Bvozgxq"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now we'll introduce multiple input and output channels"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "3flq1Wan2gX-"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"def conv_numpy_3(image, weights, stride=1, pad=1):\n",
|
||||||
|
"\n",
|
||||||
|
" # Perform zero padding\n",
|
||||||
|
" if pad != 0:\n",
|
||||||
|
" image = np.pad(image, ((0, 0), (0 ,0), (pad, pad), (pad, pad)),'constant')\n",
|
||||||
|
"\n",
|
||||||
|
" # Get sizes of image array and kernel weights\n",
|
||||||
|
" batchSize, channelsIn, imageHeightIn, imageWidthIn = image.shape\n",
|
||||||
|
" channelsOut, channelsIn, kernelHeight, kernelWidth = weights.shape\n",
|
||||||
|
"\n",
|
||||||
|
" # Get size of output arrays\n",
|
||||||
|
" imageHeightOut = np.floor(1 + (imageHeightIn - kernelHeight) / stride).astype(int)\n",
|
||||||
|
" imageWidthOut = np.floor(1 + (imageWidthIn - kernelWidth) / stride).astype(int)\n",
|
||||||
|
"\n",
|
||||||
|
" # Create output\n",
|
||||||
|
" out = np.zeros((batchSize, channelsOut, imageHeightOut, imageWidthOut), dtype=np.float32)\n",
|
||||||
|
"\n",
|
||||||
|
" for c_y in range(imageHeightOut):\n",
|
||||||
|
" for c_x in range(imageWidthOut):\n",
|
||||||
|
" for c_channel_out in range(channelsOut):\n",
|
||||||
|
" for c_channel_in in range(channelsIn):\n",
|
||||||
|
" for c_kernel_y in range(kernelHeight):\n",
|
||||||
|
" for c_kernel_x in range(kernelWidth):\n",
|
||||||
|
" # TODO -- Retrieve the image pixel and the weight from the convolution\n",
|
||||||
|
" # Only one image in batch so this index should be zero\n",
|
||||||
|
" # Replace the two lines below\n",
|
||||||
|
" this_pixel_value = 1.0\n",
|
||||||
|
" this_weight = 1.0\n",
|
||||||
|
"\n",
|
||||||
|
" # Multiply these together and add to the output at this position\n",
|
||||||
|
" out[0, c_channel_out, c_y, c_x] += np.sum(this_pixel_value * this_weight)\n",
|
||||||
|
" return out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "AvdRWGiU2ppX"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set random seed so we always get same answer\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"n_batch = 1\n",
|
||||||
|
"image_height = 4\n",
|
||||||
|
"image_width = 6\n",
|
||||||
|
"channels_in = 5\n",
|
||||||
|
"kernel_size = 3\n",
|
||||||
|
"channels_out = 2\n",
|
||||||
|
"\n",
|
||||||
|
"# Create random input image\n",
|
||||||
|
"input_image= np.random.normal(size=(n_batch, channels_in, image_height, image_width))\n",
|
||||||
|
"# Create random convolution kernel weights\n",
|
||||||
|
"conv_weights = np.random.normal(size=(channels_out, channels_in, kernel_size, kernel_size))\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution using PyTorch\n",
|
||||||
|
"conv_results_pytorch = conv_pytorch(input_image, conv_weights, stride=1, pad=1)\n",
|
||||||
|
"print(\"PyTorch Results\")\n",
|
||||||
|
"print(conv_results_pytorch)\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"print(\"Your results\")\n",
|
||||||
|
"conv_results_numpy = conv_numpy_3(input_image, conv_weights, stride=1, pad=1)\n",
|
||||||
|
"print(conv_results_numpy)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "mdSmjfvY4li2"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now we'll do the full convolution with multiple images (batch size > 1), and multiple input channels, multiple output channels."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Q2MUFebdsJbH"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"def conv_numpy_4(image, weights, stride=1, pad=1):\n",
|
||||||
|
"\n",
|
||||||
|
" # Perform zero padding\n",
|
||||||
|
" if pad != 0:\n",
|
||||||
|
" image = np.pad(image, ((0, 0), (0 ,0), (pad, pad), (pad, pad)),'constant')\n",
|
||||||
|
"\n",
|
||||||
|
" # Get sizes of image array and kernel weights\n",
|
||||||
|
" batchSize, channelsIn, imageHeightIn, imageWidthIn = image.shape\n",
|
||||||
|
" channelsOut, channelsIn, kernelHeight, kernelWidth = weights.shape\n",
|
||||||
|
"\n",
|
||||||
|
" # Get size of output arrays\n",
|
||||||
|
" imageHeightOut = np.floor(1 + (imageHeightIn - kernelHeight) / stride).astype(int)\n",
|
||||||
|
" imageWidthOut = np.floor(1 + (imageWidthIn - kernelWidth) / stride).astype(int)\n",
|
||||||
|
"\n",
|
||||||
|
" # Create output\n",
|
||||||
|
" out = np.zeros((batchSize, channelsOut, imageHeightOut, imageWidthOut), dtype=np.float32)\n",
|
||||||
|
"\n",
|
||||||
|
" for c_batch in range(batchSize):\n",
|
||||||
|
" for c_y in range(imageHeightOut):\n",
|
||||||
|
" for c_x in range(imageWidthOut):\n",
|
||||||
|
" for c_channel_out in range(channelsOut):\n",
|
||||||
|
" for c_channel_in in range(channelsIn):\n",
|
||||||
|
" for c_kernel_y in range(kernelHeight):\n",
|
||||||
|
" for c_kernel_x in range(kernelWidth):\n",
|
||||||
|
" # TODO -- Retrieve the image pixel and the weight from the convolution\n",
|
||||||
|
" # Replace the two lines below\n",
|
||||||
|
" this_pixel_value = 1.0\n",
|
||||||
|
" this_weight = 1.0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" # Multiply these together and add to the output at this position\n",
|
||||||
|
" out[c_batch, c_channel_out, c_y, c_x] += np.sum(this_pixel_value * this_weight)\n",
|
||||||
|
" return out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "5WePF-Y-sC1y"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "1w2GEBtqAM2P"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
|
"source": [
|
||||||
|
"# Set random seed so we always get same answer\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"n_batch = 2\n",
|
||||||
|
"image_height = 4\n",
|
||||||
|
"image_width = 6\n",
|
||||||
|
"channels_in = 5\n",
|
||||||
|
"kernel_size = 3\n",
|
||||||
|
"channels_out = 2\n",
|
||||||
|
"\n",
|
||||||
|
"# Create random input image\n",
|
||||||
|
"input_image= np.random.normal(size=(n_batch, channels_in, image_height, image_width))\n",
|
||||||
|
"# Create random convolution kernel weights\n",
|
||||||
|
"conv_weights = np.random.normal(size=(channels_out, channels_in, kernel_size, kernel_size))\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution using PyTorch\n",
|
||||||
|
"conv_results_pytorch = conv_pytorch(input_image, conv_weights, stride=1, pad=1)\n",
|
||||||
|
"print(\"PyTorch Results\")\n",
|
||||||
|
"print(conv_results_pytorch)\n",
|
||||||
|
"\n",
|
||||||
|
"# Perform convolution in numpy\n",
|
||||||
|
"print(\"Your results\")\n",
|
||||||
|
"conv_results_numpy = conv_numpy_4(input_image, conv_weights, stride=1, pad=1)\n",
|
||||||
|
"print(conv_results_numpy)"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
520
Notebooks/Chap10/10_4_Downsampling_and_Upsampling.ipynb
Normal file
@@ -0,0 +1,520 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMbSR8fzpXvO6TIQdO7bI0H",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_4_Downsampling_and_Upsampling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 10.4: Downsampling and Upsampling**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the downsampling and upsampling methods discussed in section 10.4 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"from PIL import Image\n",
|
||||||
|
"from numpy import asarray"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YrXWAH7sUWvU"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define 4 by 4 original patch\n",
|
||||||
|
"orig_4_4 = np.array([[1, 3, 5,3 ], [6,2,0,8], [4,6,1,4], [2,8,0,3]])\n",
|
||||||
|
"print(orig_4_4)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "WPRoJcC_JXE2"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def subsample(x_in):\n",
|
||||||
|
" x_out = np.zeros(( int(np.ceil(x_in.shape[0]/2)), int(np.ceil(x_in.shape[1]/2)) ))\n",
|
||||||
|
" # TO DO -- write the subsampling routine\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_out = x_out\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return x_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qneyOiZRJubi"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
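One way to complete the subsampling TODO above is plain strided slicing: keep every second pixel in each direction (a sketch; the `ceil`-shaped output in the skeleton matches `x_in[0::2, 0::2]` exactly):

```python
import numpy as np

def subsample(x_in):
    # Retain every second row and column, starting from (0, 0)
    return x_in[0::2, 0::2]

orig_4_4 = np.array([[1, 3, 5, 3], [6, 2, 0, 8], [4, 6, 1, 4], [2, 8, 0, 3]])
print(subsample(orig_4_4))
# [[1 5]
#  [4 1]]
```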
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print(\"Original:\")\n",
|
||||||
|
"print(orig_4_4)\n",
|
||||||
|
"print(\"Subsampled:\")\n",
|
||||||
|
"print(subsample(orig_4_4))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "O_i0y72_JwGZ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's try that on an image to get a feel for how it works:"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "AobyC8IILbCO"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap10/test_image.png"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "3dJEo-6DM-Py"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# load the image\n",
|
||||||
|
"image = Image.open('test_image.png')\n",
|
||||||
|
"# convert image to numpy array\n",
|
||||||
|
"data = asarray(image)\n",
|
||||||
|
"data_subsample = subsample(data);\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_subsample, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_subsample2 = subsample(data_subsample)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_subsample2, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_subsample3 = subsample(data_subsample2)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_subsample3, cmap='gray')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "HCZVutk6NB6B"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's try max-pooling\n",
|
||||||
|
"def maxpool(x_in):\n",
|
||||||
|
" x_out = np.zeros(( int(np.floor(x_in.shape[0]/2)), int(np.floor(x_in.shape[1]/2)) ))\n",
|
||||||
|
" # TO DO -- write the maxpool routine\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_out = x_out\n",
|
||||||
|
"\n",
|
||||||
|
" return x_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Z99uYehaPtJa"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
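A sketch of one way to complete the max-pooling TODO above: take the maximum over each non-overlapping 2x2 block (using the same floor-based output size as the skeleton):

```python
import numpy as np

def maxpool(x_in):
    h, w = x_in.shape[0] // 2, x_in.shape[1] // 2
    x_out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Maximum over the 2x2 block starting at (2y, 2x)
            x_out[y, x] = np.max(x_in[2 * y:2 * y + 2, 2 * x:2 * x + 2])
    return x_out

orig_4_4 = np.array([[1, 3, 5, 3], [6, 2, 0, 8], [4, 6, 1, 4], [2, 8, 0, 3]])
print(maxpool(orig_4_4))
# [[6. 8.]
#  [8. 4.]]
```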
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print(\"Original:\")\n",
|
||||||
|
"print(orig_4_4)\n",
|
||||||
|
"print(\"Maxpooled:\")\n",
|
||||||
|
"print(maxpool(orig_4_4))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "J4KMTMmG9P44"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's see what Rick looks like:\n",
|
||||||
|
"data_maxpool = maxpool(data);\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_maxpool, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_maxpool2 = maxpool(data_maxpool)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_maxpool2, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_maxpool3 = maxpool(data_maxpool2)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_maxpool3, cmap='gray')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "0ES0sB8t9Wyv"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can see that the stripes on his shirt gradually turn to white because we keep retaining the brightest local pixels."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nMtSdBGlAktq"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Finally, let's try mean pooling\n",
|
||||||
|
"def meanpool(x_in):\n",
|
||||||
|
" x_out = np.zeros(( int(np.floor(x_in.shape[0]/2)), int(np.floor(x_in.shape[1]/2)) ))\n",
|
||||||
|
" # TO DO -- write the meanpool routine\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_out = x_out\n",
|
||||||
|
"\n",
|
||||||
|
" return x_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ZQBjBtmB_aGQ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print(\"Original:\")\n",
|
||||||
|
"print(orig_4_4)\n",
|
||||||
|
"print(\"Meanpooled:\")\n",
|
||||||
|
"print(meanpool(orig_4_4))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "N4VDlWNt_8dp"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's see what Rick looks like:\n",
|
||||||
|
"data_meanpool = meanpool(data);\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_meanpool, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_meanpool2 = meanpool(data_maxpool)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_meanpool2, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_meanpool3 = meanpool(data_meanpool2)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_meanpool3, cmap='gray')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Lkg5zUYo_-IV"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Notice that the three low resolution images look quite different. <br>\n",
|
||||||
|
"\n",
|
||||||
|
"Now let's upscale them again"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "J7VssF4pBf2y"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define 2 by 2 original patch\n",
|
||||||
|
"orig_2_2 = np.array([[2, 4], [4,8]])\n",
|
||||||
|
"print(orig_2_2)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Q4N7i76FA_YH"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's first use the duplication method\n",
|
||||||
|
"def duplicate(x_in):\n",
|
||||||
|
" x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
|
||||||
|
" # TO DO -- write the duplication routine\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_out = x_out\n",
|
||||||
|
"\n",
|
||||||
|
" return x_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "6eSjnl3cB5g4"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
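A possible duplication routine for the TO DO above (a sketch; `np.repeat` does all the work):

```python
import numpy as np

def duplicate(x_in):
    # Copy every pixel into a 2x2 block: repeat rows, then repeat columns
    return np.repeat(np.repeat(x_in, 2, axis=0), 2, axis=1)
```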
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print(\"Original:\")\n",
|
||||||
|
"print(orig_2_2)\n",
|
||||||
|
"print(\"Duplicated:\")\n",
|
||||||
|
"print(duplicate(orig_2_2))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4FtRcvXrFLg7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's re-upsample, sub-sampled rick\n",
|
||||||
|
"data_duplicate = duplicate(data_subsample3);\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_subsample3, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_duplicate, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_duplicate2 = duplicate(data_duplicate)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_duplicate2, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_duplicate3 = duplicate(data_duplicate2)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_duplicate3, cmap='gray')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "agq0YN34FQfA"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"They look the same, but if you look at the axes, you'll see that the pixels are just duplicated."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "bCQrJ_M8GUFs"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's try max pooling back up\n",
|
||||||
|
"# The input x_high_res is the original high res image, from which you can deduce the position of the maximum index\n",
|
||||||
|
"def max_unpool(x_in, x_high_res):\n",
|
||||||
|
" x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
|
||||||
|
" # TO DO -- write the subsampling routine\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_out = x_out\n",
|
||||||
|
"\n",
|
||||||
|
" return x_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "uDUDChmBF71_"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
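A sketch of the max unpooling TO DO: each low-resolution value is written back to the position that held the maximum in the corresponding 2×2 window of the high-resolution image, and the other three positions stay zero.

```python
import numpy as np

def max_unpool(x_in, x_high_res):
    x_out = np.zeros((x_in.shape[0] * 2, x_in.shape[1] * 2))
    for i in range(x_in.shape[0]):
        for j in range(x_in.shape[1]):
            window = x_high_res[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            # Position of the maximum within this 2x2 window
            di, dj = np.unravel_index(np.argmax(window), window.shape)
            x_out[2 * i + di, 2 * j + dj] = x_in[i, j]
    return x_out
```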
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print(\"Original:\")\n",
|
||||||
|
"print(orig_2_2)\n",
|
||||||
|
"print(\"Max unpooled:\")\n",
|
||||||
|
"print(max_unpool(orig_2_2,orig_4_4))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EmjptCVNHq74"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's re-upsample, sub-sampled rick\n",
|
||||||
|
"data_max_unpool= max_unpool(data_maxpool3,data_maxpool2);\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_maxpool3, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_max_unpool, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_max_unpool2 = max_unpool(data_max_unpool, data_maxpool)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_max_unpool2, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_max_unpool3 = max_unpool(data_max_unpool2, data)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_max_unpool3, cmap='gray')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "SSPhTuV6H4ZH"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Finally, we'll try upsampling using bilinear interpolation. We'll treat the positions off the image as zeros by padding the original image and round fractional values upwards using np.ceil()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "sBx36bvbJHrK"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def bilinear(x_in):\n",
|
||||||
|
" x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
|
||||||
|
" x_in_pad = np.zeros((x_in.shape[0]+1, x_in.shape[1]+1))\n",
|
||||||
|
" x_in_pad[0:x_in.shape[0],0:x_in.shape[1]] = x_in\n",
|
||||||
|
" # TO DO -- write the duplication routine\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_out = x_out\n",
|
||||||
|
"\n",
|
||||||
|
" return x_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "00XpfQo3Ivdf"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
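One possible bilinear routine for the TO DO above, following the conventions stated in the text (positions off the image are treated as zeros via padding, and fractional values are rounded upwards with np.ceil); a sketch, not a definitive answer:

```python
import numpy as np

def bilinear(x_in):
    h, w = x_in.shape
    # Zero-pad one extra row and column so off-image positions read as zero
    x_pad = np.zeros((h + 1, w + 1))
    x_pad[:h, :w] = x_in
    x_out = np.zeros((h * 2, w * 2))
    for i in range(h):
        for j in range(w):
            # Original pixel, then interpolations with the neighbours below,
            # to the right, and diagonally (rounding up with np.ceil)
            x_out[2 * i, 2 * j] = x_pad[i, j]
            x_out[2 * i + 1, 2 * j] = np.ceil((x_pad[i, j] + x_pad[i + 1, j]) / 2.0)
            x_out[2 * i, 2 * j + 1] = np.ceil((x_pad[i, j] + x_pad[i, j + 1]) / 2.0)
            x_out[2 * i + 1, 2 * j + 1] = np.ceil((x_pad[i, j] + x_pad[i + 1, j]
                                                   + x_pad[i, j + 1] + x_pad[i + 1, j + 1]) / 4.0)
    return x_out
```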
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print(\"Original:\")\n",
|
||||||
|
"print(orig_2_2)\n",
|
||||||
|
"print(\"Bilinear:\")\n",
|
||||||
|
"print(bilinear(orig_2_2))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qI5oRVCCNRob"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's re-upsample, sub-sampled rick\n",
|
||||||
|
"data_bilinear = bilinear(data_meanpool3);\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_meanpool3, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_bilinear, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_bilinear2 = bilinear(data_bilinear)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_bilinear2, cmap='gray')\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"data_bilinear3 = duplicate(data_bilinear2)\n",
|
||||||
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
"plt.imshow(data_bilinear3, cmap='gray')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4m0bkhdmNRec"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
296
Notebooks/Chap10/10_5_Convolution_For_MNIST.ipynb
Normal file
@@ -0,0 +1,296 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyNAcc98STMeyQgh9SbVHWG+",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_5_Convolution_For_MNIST.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 10.5: Convolution for MNIST**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook builds a proper network for 2D convolution. It works with the MNIST dataset (figure 15.15a), which was the original classic dataset for classifying images. The network will take a 28x28 grayscale image and classify it into one of 10 classes representing a digit.\n",
|
||||||
|
"\n",
|
||||||
|
"The code is adapted from https://nextjournal.com/gkoehler/pytorch-mnist\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import torch\n",
|
||||||
|
"import torchvision\n",
|
||||||
|
"import torch.nn as nn\n",
|
||||||
|
"import torch.nn.functional as F\n",
|
||||||
|
"import torch.optim as optim\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"import random"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YrXWAH7sUWvU"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run this once to load the train and test data straight into a dataloader class\n",
|
||||||
|
"# that will provide the batches\n",
|
||||||
|
"batch_size_train = 64\n",
|
||||||
|
"batch_size_test = 1000\n",
|
||||||
|
"train_loader = torch.utils.data.DataLoader(\n",
|
||||||
|
" torchvision.datasets.MNIST('/files/', train=True, download=True,\n",
|
||||||
|
" transform=torchvision.transforms.Compose([\n",
|
||||||
|
" torchvision.transforms.ToTensor(),\n",
|
||||||
|
" torchvision.transforms.Normalize(\n",
|
||||||
|
" (0.1307,), (0.3081,))\n",
|
||||||
|
" ])),\n",
|
||||||
|
" batch_size=batch_size_train, shuffle=True)\n",
|
||||||
|
"\n",
|
||||||
|
"test_loader = torch.utils.data.DataLoader(\n",
|
||||||
|
" torchvision.datasets.MNIST('/files/', train=False, download=True,\n",
|
||||||
|
" transform=torchvision.transforms.Compose([\n",
|
||||||
|
" torchvision.transforms.ToTensor(),\n",
|
||||||
|
" torchvision.transforms.Normalize(\n",
|
||||||
|
" (0.1307,), (0.3081,))\n",
|
||||||
|
" ])),\n",
|
||||||
|
" batch_size=batch_size_test, shuffle=True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "wScBGXXFVadm"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's draw some of the training data\n",
|
||||||
|
"examples = enumerate(test_loader)\n",
|
||||||
|
"batch_idx, (example_data, example_targets) = next(examples)\n",
|
||||||
|
"\n",
|
||||||
|
"fig = plt.figure()\n",
|
||||||
|
"for i in range(6):\n",
|
||||||
|
" plt.subplot(2,3,i+1)\n",
|
||||||
|
" plt.tight_layout()\n",
|
||||||
|
" plt.imshow(example_data[i][0], cmap='gray', interpolation='none')\n",
|
||||||
|
" plt.title(\"Ground Truth: {}\".format(example_targets[i]))\n",
|
||||||
|
" plt.xticks([])\n",
|
||||||
|
" plt.yticks([])\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "8bKADvLHbiV5"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Define the network. This is a more typical way to define a network than the sequential structure. We define a class for the network, and define the parameters in the constructor. Then we use a function called forward to actually run the network. It's easy to see how you might use residual connections in this format."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_sFvRDGrl4qe"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"from os import X_OK\n",
|
||||||
|
"# TODO Change this class to implement\n",
|
||||||
|
"# 1. A valid convolution with kernel size 5, 1 input channel and 10 output channels\n",
|
||||||
|
"# 2. A max pooling operation over a 2x2 area\n",
|
||||||
|
"# 3. A Relu\n",
|
||||||
|
"# 4. A valid convolution with kernel size 5, 10 input channels and 20 output channels\n",
|
||||||
|
"# 5. A 2D Dropout layer\n",
|
||||||
|
"# 6. A max pooling operation over a 2x2 area\n",
|
||||||
|
"# 7. A relu\n",
|
||||||
|
"# 8. A flattening operation\n",
|
||||||
|
"# 9. A fully connected layer mapping from (whatever dimensions we are at-- find out using .shape) to 50\n",
|
||||||
|
"# 10. A ReLU\n",
|
||||||
|
"# 11. A fully connected layer mapping from 50 to 10 dimensions\n",
|
||||||
|
"# 12. A softmax function.\n",
|
||||||
|
"\n",
|
||||||
|
"# Replace this class which implements a minimal network (which still does okay)\n",
|
||||||
|
"class Net(nn.Module):\n",
|
||||||
|
" def __init__(self):\n",
|
||||||
|
" super(Net, self).__init__()\n",
|
||||||
|
" # Valid convolution, 1 channel in, 2 channels out, stride 1, kernel size = 3\n",
|
||||||
|
" self.conv1 = nn.Conv2d(1, 2, kernel_size=3)\n",
|
||||||
|
" # Dropout for convolutions\n",
|
||||||
|
" self.drop = nn.Dropout2d()\n",
|
||||||
|
" # Fully connected layer\n",
|
||||||
|
" self.fc1 = nn.Linear(338, 10)\n",
|
||||||
|
"\n",
|
||||||
|
" def forward(self, x):\n",
|
||||||
|
" x = self.conv1(x)\n",
|
||||||
|
" x = self.drop(x)\n",
|
||||||
|
" x = F.max_pool2d(x,2)\n",
|
||||||
|
" x = F.relu(x)\n",
|
||||||
|
" x = x.flatten(1)\n",
|
||||||
|
" x = self.fc1(x)\n",
|
||||||
|
" x = F.log_softmax(x)\n",
|
||||||
|
" return x\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EQkvw2KOPVl7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
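A network satisfying steps 1-12 above could look like the following sketch (adapted from the same nextjournal tutorial). The 320 in the first fully connected layer is what `.shape` reveals: the 28×28 input shrinks to 24 after the first valid convolution, 12 after pooling, 8 after the second convolution, and 4 after pooling, with 20 channels, so 20×4×4 = 320.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1. Valid convolution, kernel size 5, 1 input channel, 10 output channels
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        # 4. Valid convolution, kernel size 5, 10 input channels, 20 output channels
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        # 5. 2D dropout layer
        self.drop = nn.Dropout2d()
        # 9. Fully connected layer, 20*4*4 = 320 -> 50
        self.fc1 = nn.Linear(320, 50)
        # 11. Fully connected layer, 50 -> 10
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))             # steps 1-3
        x = F.relu(F.max_pool2d(self.drop(self.conv2(x)), 2))  # steps 4-7
        x = x.flatten(1)                                       # step 8
        x = F.relu(self.fc1(x))                                # steps 9-10
        x = self.fc2(x)                                        # step 11
        # 12. Log-softmax rather than plain softmax, because training
        # uses F.nll_loss, which expects log-probabilities
        return F.log_softmax(x, dim=1)
```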
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# He initialization of weights\n",
|
||||||
|
"def weights_init(layer_in):\n",
|
||||||
|
" if isinstance(layer_in, nn.Linear):\n",
|
||||||
|
" nn.init.kaiming_uniform_(layer_in.weight)\n",
|
||||||
|
" layer_in.bias.data.fill_(0.0)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qWZtkCZcU_dg"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
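Note that the cell above only initializes `nn.Linear` layers; convolutional weights keep PyTorch's defaults. If you also want He initialization for the convolutions (an optional variant, not required by the notebook), one way is:

```python
import torch.nn as nn

def weights_init(layer_in):
    # He (Kaiming) initialization for both fully connected and convolutional layers
    if isinstance(layer_in, (nn.Linear, nn.Conv2d)):
        nn.init.kaiming_uniform_(layer_in.weight)
        layer_in.bias.data.fill_(0.0)

layer = nn.Conv2d(1, 10, kernel_size=5)
weights_init(layer)
```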
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Create network\n",
|
||||||
|
"model = Net()\n",
|
||||||
|
"# Initialize model weights\n",
|
||||||
|
"model.apply(weights_init)\n",
|
||||||
|
"# Define optimizer\n",
|
||||||
|
"optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FslroPJJffrh"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Main training routine\n",
|
||||||
|
"def train(epoch):\n",
|
||||||
|
" model.train()\n",
|
||||||
|
" # Get each\n",
|
||||||
|
" for batch_idx, (data, target) in enumerate(train_loader):\n",
|
||||||
|
" optimizer.zero_grad()\n",
|
||||||
|
" output = model(data)\n",
|
||||||
|
" loss = F.nll_loss(output, target)\n",
|
||||||
|
" loss.backward()\n",
|
||||||
|
" optimizer.step()\n",
|
||||||
|
" # Store results\n",
|
||||||
|
" if batch_idx % 10 == 0:\n",
|
||||||
|
" print('Train Epoch: {} [{}/{}]\\tLoss: {:.6f}'.format(\n",
|
||||||
|
" epoch, batch_idx * len(data), len(train_loader.dataset), loss.item()))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "xKQd9PzkQ766"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run on test data\n",
|
||||||
|
"def test():\n",
|
||||||
|
" model.eval()\n",
|
||||||
|
" test_loss = 0\n",
|
||||||
|
" correct = 0\n",
|
||||||
|
" with torch.no_grad():\n",
|
||||||
|
" for data, target in test_loader:\n",
|
||||||
|
" output = model(data)\n",
|
||||||
|
" test_loss += F.nll_loss(output, target, size_average=False).item()\n",
|
||||||
|
" pred = output.data.max(1, keepdim=True)[1]\n",
|
||||||
|
" correct += pred.eq(target.data.view_as(pred)).sum()\n",
|
||||||
|
" test_loss /= len(test_loader.dataset)\n",
|
||||||
|
" print('\\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n",
|
||||||
|
" test_loss, correct, len(test_loader.dataset),\n",
|
||||||
|
" 100. * correct / len(test_loader.dataset)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Byn-f7qWRLxX"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Get initial performance\n",
|
||||||
|
"test()\n",
|
||||||
|
"# Train for three epochs\n",
|
||||||
|
"n_epochs = 3\n",
|
||||||
|
"for epoch in range(1, n_epochs + 1):\n",
|
||||||
|
" train(epoch)\n",
|
||||||
|
" test()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YgLaex1pfhqz"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run network on data we got before and show predictions\n",
|
||||||
|
"output = model(example_data)\n",
|
||||||
|
"\n",
|
||||||
|
"fig = plt.figure()\n",
|
||||||
|
"for i in range(10):\n",
|
||||||
|
" plt.subplot(5,5,i+1)\n",
|
||||||
|
" plt.tight_layout()\n",
|
||||||
|
" plt.imshow(example_data[i][0], cmap='gray', interpolation='none')\n",
|
||||||
|
" plt.title(\"Prediction: {}\".format(\n",
|
||||||
|
" output.data.max(1, keepdim=True)[1][i].item()))\n",
|
||||||
|
" plt.xticks([])\n",
|
||||||
|
" plt.yticks([])\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "o7fRUAy9Se1B"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
BIN
Notebooks/Chap10/test_image.png
Normal file
Binary file not shown. (Size: 59 KiB)
392
Notebooks/Chap11/11_1_Shattered_Gradients.ipynb
Normal file
@@ -0,0 +1,392 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMrF4rB2hTKq7XzLuYsURdL",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap11/11_1_Shattered_Gradients.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 11.1: Shattered gradients**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the phenomenon of shattered gradients as discussed in section 11.1.1. It replicates some of the experiments in [Balduzzi et al. (2017)](https://arxiv.org/abs/1702.08591).\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pOZ6Djz0dhoy"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "iaFyNGhU21VJ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"First let's define a neural network. We'll initialize both the weights and biases randomly with Glorot initialization (He initialization without the factor of two)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YcNlAxnE3XXn"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# K is width, D is number of hidden units in each layer\n",
|
||||||
|
"def init_params(K, D):\n",
|
||||||
|
" # Set seed so we always get the same random numbers\n",
|
||||||
|
" np.random.seed(1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Input layer\n",
|
||||||
|
" D_i = 1\n",
|
||||||
|
" # Output layer\n",
|
||||||
|
" D_o = 1\n",
|
||||||
|
"\n",
|
||||||
|
" # Glorot initialization\n",
|
||||||
|
" sigma_sq_omega = 1.0/D\n",
|
||||||
|
"\n",
|
||||||
|
" # Make empty lists\n",
|
||||||
|
" all_weights = [None] * (K+1)\n",
|
||||||
|
" all_biases = [None] * (K+1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Create parameters for input and output layers\n",
|
||||||
|
" all_weights[0] = np.random.normal(size=(D, D_i))*np.sqrt(sigma_sq_omega)\n",
|
||||||
|
" all_weights[-1] = np.random.normal(size=(D_o, D)) * np.sqrt(sigma_sq_omega)\n",
|
||||||
|
" all_biases[0] = np.random.normal(size=(D,1))* np.sqrt(sigma_sq_omega)\n",
|
||||||
|
" all_biases[-1]= np.random.normal(size=(D_o,1))* np.sqrt(sigma_sq_omega)\n",
|
||||||
|
"\n",
|
||||||
|
" # Create intermediate layers\n",
|
||||||
|
" for layer in range(1,K):\n",
|
||||||
|
" all_weights[layer] = np.random.normal(size=(D,D))*np.sqrt(sigma_sq_omega)\n",
|
||||||
|
" all_biases[layer] = np.random.normal(size=(D,1))* np.sqrt(sigma_sq_omega)\n",
|
||||||
|
"\n",
|
||||||
|
" return all_weights, all_biases"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "kr-q7hc23Bn9"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The next two functions define the forward pass of the algorithm"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "kwcn5z7-dq_1"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define the Rectified Linear Unit (ReLU) function\n",
|
||||||
|
"def ReLU(preactivation):\n",
|
||||||
|
" activation = preactivation.clip(0.0)\n",
|
||||||
|
" return activation\n",
|
||||||
|
"\n",
|
||||||
|
"def forward_pass(net_input, all_weights, all_biases):\n",
|
||||||
|
"\n",
|
||||||
|
" # Retrieve number of layers\n",
|
||||||
|
" K = len(all_weights) -1\n",
|
||||||
|
"\n",
|
||||||
|
" # We'll store the pre-activations at each layer in a list \"all_f\"\n",
|
||||||
|
" # and the activations in a second list[all_h].\n",
|
||||||
|
" all_f = [None] * (K+1)\n",
|
||||||
|
" all_h = [None] * (K+1)\n",
|
||||||
|
"\n",
|
||||||
|
" #For convenience, we'll set\n",
|
||||||
|
" # all_h[0] to be the input, and all_f[K] will be the output\n",
|
||||||
|
" all_h[0] = net_input\n",
|
||||||
|
"\n",
|
||||||
|
" # Run through the layers, calculating all_f[0...K-1] and all_h[1...K]\n",
|
||||||
|
" for layer in range(K):\n",
|
||||||
|
" # Update preactivations and activations at this layer according to eqn 7.5\n",
|
||||||
|
" all_f[layer] = all_biases[layer] + np.matmul(all_weights[layer], all_h[layer])\n",
|
||||||
|
" all_h[layer+1] = ReLU(all_f[layer])\n",
|
||||||
|
"\n",
|
||||||
|
" # Compute the output from the last hidden layer\n",
|
||||||
|
" all_f[K] = all_biases[K] + np.matmul(all_weights[K], all_h[K])\n",
|
||||||
|
"\n",
|
||||||
|
" # Retrieve the output\n",
|
||||||
|
" net_output = all_f[K]\n",
|
||||||
|
"\n",
|
||||||
|
" return net_output, all_f, all_h"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_2w-Tr7G3sYq"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The next two functions compute the gradient of the output with respect to the input using the back propagation algorithm."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "aM2l7QafeC8T"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# We'll need the indicator function\n",
|
||||||
|
"def indicator_function(x):\n",
|
||||||
|
" x_in = np.array(x)\n",
|
||||||
|
" x_in[x_in>=0] = 1\n",
|
||||||
|
" x_in[x_in<0] = 0\n",
|
||||||
|
" return x_in\n",
|
||||||
|
"\n",
|
||||||
|
"# Main backward pass routine\n",
|
||||||
|
"def calc_input_output_gradient(x_in, all_weights, all_biases):\n",
|
||||||
|
"\n",
|
||||||
|
" # Run the forward pass\n",
|
||||||
|
" y, all_f, all_h = forward_pass(x_in, all_weights, all_biases)\n",
|
||||||
|
"\n",
|
||||||
|
" # We'll store the derivatives dl_dweights and dl_dbiases in lists as well\n",
|
||||||
|
" all_dl_dweights = [None] * (K+1)\n",
|
||||||
|
" all_dl_dbiases = [None] * (K+1)\n",
|
||||||
|
" # And we'll store the derivatives of the loss with respect to the activation and preactivations in lists\n",
|
||||||
|
" all_dl_df = [None] * (K+1)\n",
|
||||||
|
" all_dl_dh = [None] * (K+1)\n",
|
||||||
|
" # Again for convenience we'll stick with the convention that all_h[0] is the net input and all_f[k] in the net output\n",
|
||||||
|
"\n",
|
||||||
|
" # Compute derivatives of net output with respect to loss\n",
|
||||||
|
" all_dl_df[K] = np.ones_like(all_f[K])\n",
|
||||||
|
"\n",
|
||||||
|
" # Now work backwards through the network\n",
|
||||||
|
" for layer in range(K,-1,-1):\n",
|
||||||
|
" all_dl_dbiases[layer] = np.array(all_dl_df[layer])\n",
|
||||||
|
" all_dl_dweights[layer] = np.matmul(all_dl_df[layer], all_h[layer].transpose())\n",
|
||||||
|
"\n",
|
||||||
|
" all_dl_dh[layer] = np.matmul(all_weights[layer].transpose(), all_dl_df[layer])\n",
|
||||||
|
"\n",
|
||||||
|
" if layer > 0:\n",
|
||||||
|
" all_dl_df[layer-1] = indicator_function(all_f[layer-1]) * all_dl_dh[layer]\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return all_dl_dh[0],y"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "DwR3eGMgV8bl"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Double check we have the gradient correct using finite differences"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Ar_VmraReSWe"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"D = 200; K = 3\n",
|
||||||
|
"# Initialize parameters\n",
|
||||||
|
"all_weights, all_biases = init_params(K,D)\n",
|
||||||
|
"\n",
|
||||||
|
"x = np.ones((1,1))\n",
|
||||||
|
"dydx,y = calc_input_output_gradient(x, all_weights, all_biases)\n",
|
||||||
|
"\n",
|
||||||
|
"# Offset for finite gradients\n",
|
||||||
|
"delta = 0.00000001\n",
|
||||||
|
"x1 = x\n",
|
||||||
|
"y1,*_ = forward_pass(x1, all_weights, all_biases)\n",
|
||||||
|
"x2 = x+delta\n",
|
||||||
|
"y2,*_ = forward_pass(x2, all_weights, all_biases)\n",
|
||||||
|
"# Finite difference calculation\n",
|
||||||
|
"dydx_fd = (y2-y1)/delta\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Gradient calculation=%f, Finite difference gradient=%f\"%(dydx,dydx_fd))\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "KJpQPVd36Haq"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Helper function that computes the derivatives for a 1D array of input values and plots them."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YC-LAYRKtbxp"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def plot_derivatives(K, D):\n",
|
||||||
|
"\n",
|
||||||
|
" # Initialize parameters\n",
|
||||||
|
" all_weights, all_biases = init_params(K,D)\n",
|
||||||
|
"\n",
|
||||||
|
" x_in = np.arange(-2,2, 4.0/256.0)\n",
|
||||||
|
" x_in = np.resize(x_in, (1,len(x_in)))\n",
|
||||||
|
" dydx,y = calc_input_output_gradient(x_in, all_weights, all_biases)\n",
|
||||||
|
"\n",
|
||||||
|
" fig,ax = plt.subplots()\n",
|
||||||
|
" ax.plot(np.squeeze(x_in), np.squeeze(dydx), 'b-')\n",
|
||||||
|
" ax.set_xlim(-2,2)\n",
|
||||||
|
" ax.set_xlabel('Input, $x$')\n",
|
||||||
|
" ax.set_ylabel('Gradient, $dy/dx$')\n",
|
||||||
|
" ax.set_title('No layers = %d'%(K))\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "uJr5eDe648jF"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Build a model with one hidden layer and 200 neurons and plot derivatives\n",
|
||||||
|
"D = 200; K = 1\n",
|
||||||
|
"plot_derivatives(K,D)\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Interpret this result\n",
|
||||||
|
"# Why does the plot have some flat regions?\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Add code to plot the derivatives for models with 24 and 50 hidden layers\n",
|
||||||
|
"# with 200 neurons per layer\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Why does this graph not have visible flat regions?\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Why does the magnitude of the gradients decrease as we increase the number\n",
|
||||||
|
"# of hidden layers\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Do you find this a convincing replication of the experiment in the original paper? (I don't)\n",
|
||||||
|
"# Can you help me find why I have failed to replicate this result? udlbookmail@gmail.com"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "56gTMTCb49KO"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's look at the autocorrelation function now"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "f_0zjQbxuROQ"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def autocorr(dydx):\n",
|
||||||
|
" # TODO -- compute the autocorrelation function\n",
|
||||||
|
" # Use the numpy function \"correlate\" with the mode set to \"same\"\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" ac = np.ones((256,1))\n",
|
||||||
|
"\n",
|
||||||
|
" return ac"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ggnO8hfoRN1e"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Helper function to plot the autocorrelation function and normalize so correlation is one with offset of zero"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EctWSV1RuddK"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def plot_autocorr(K, D):\n",
|
||||||
|
"\n",
|
||||||
|
" # Initialize parameters\n",
|
||||||
|
" all_weights, all_biases = init_params(K,D)\n",
|
||||||
|
"\n",
|
||||||
|
" x_in = np.arange(-2.0,2.0, 4.0/256)\n",
|
||||||
|
" x_in = np.resize(x_in, (1,len(x_in)))\n",
|
||||||
|
" dydx,y = calc_input_output_gradient(x_in, all_weights, all_biases)\n",
|
||||||
|
" ac = autocorr(np.squeeze(dydx))\n",
|
||||||
|
" ac = ac / ac[128]\n",
|
||||||
|
"\n",
|
||||||
|
" y = ac[128:]\n",
|
||||||
|
" x = np.squeeze(x_in)[128:]\n",
|
||||||
|
" fig,ax = plt.subplots()\n",
|
||||||
|
" ax.plot(x,y, 'b-')\n",
|
||||||
|
" ax.set_xlim([0,2])\n",
|
||||||
|
" ax.set_xlabel('Distance')\n",
|
||||||
|
" ax.set_ylabel('Autocorrelation')\n",
|
||||||
|
" ax.set_title('No layers = %d'%(K))\n",
|
||||||
|
" plt.show()\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "2LKlZ9u_WQXN"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Plot the autocorrelation functions\n",
|
||||||
|
"D = 200; K =1\n",
|
||||||
|
"plot_autocorr(K,D)\n",
|
||||||
|
"D = 200; K =50\n",
|
||||||
|
"plot_autocorr(K,D)\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Do you find this a convincing replication of the experiment in the original paper? (I don't)\n",
|
||||||
|
"# Can you help me find why I have failed to replicate this result?"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "RD9JTdjNWw6p"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
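The `autocorr` TODO above asks students to use numpy's `correlate` with the mode set to `"same"`. As a standalone sketch of that pattern (not the notebook's solution; `signal` here is a made-up test array), the zero-offset value of a `"same"`-mode autocorrelation of a length-256 signal sits at index 128, which is why the plotting helper divides by `ac[128]`:

```python
import numpy as np

# Autocorrelation of a 1-D signal via np.correlate with mode "same",
# normalized so the zero-offset value is one (as the plotting helper expects).
def autocorr_sketch(signal):
    signal = signal - np.mean(signal)               # remove the mean first
    ac = np.correlate(signal, signal, mode="same")  # length matches the input
    return ac / ac[len(ac) // 2]                    # zero offset is at the center

signal = np.sin(np.linspace(0, 8 * np.pi, 256))     # hypothetical test signal
ac = autocorr_sketch(signal)
print(ac.shape)   # (256,)
print(ac[128])    # 1.0 -- normalized zero-offset correlation
```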
Notebooks/Chap11/11_2_Residual_Networks.ipynb (new file, 277 lines)
@@ -0,0 +1,277 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyObut1y9atNUuowPT6dMY+I",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap11/11_2_Residual_Networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 11.2: Residual Networks**\n",
"\n",
"This notebook adapts the networks for MNIST1D to use residual connections.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n"
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
"!git clone https://github.com/greydanus/mnist1d"
],
"metadata": {
"id": "D5yLObtZCi9J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import os\n",
"import torch, torch.nn as nn\n",
"from torch.utils.data import TensorDataset, DataLoader\n",
"from torch.optim.lr_scheduler import StepLR\n",
"import matplotlib.pyplot as plt\n",
"import mnist1d\n",
"import random"
],
"metadata": {
"id": "YrXWAH7sUWvU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"args = mnist1d.data.get_dataset_args()\n",
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
"\n",
"# The training and test input and outputs are in\n",
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
],
"metadata": {
"id": "twI72ZCrCt5z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Load in the data\n",
"train_data_x = data['x'].transpose()\n",
"train_data_y = data['y']\n",
"val_data_x = data['x_test'].transpose()\n",
"val_data_y = data['y_test']\n",
"# Print out sizes\n",
"print(\"Train data: %d examples (columns), each of which has %d dimensions (rows)\"%((train_data_x.shape[1],train_data_x.shape[0])))\n",
"print(\"Validation data: %d examples (columns), each of which has %d dimensions (rows)\"%((val_data_x.shape[1],val_data_x.shape[0])))"
],
"metadata": {
"id": "8bKADvLHbiV5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Define the network"
],
"metadata": {
"id": "_sFvRDGrl4qe"
}
},
{
"cell_type": "code",
"source": [
"# There are 40 input dimensions and 10 output dimensions for this data\n",
"# The inputs correspond to the 40 offsets in the MNIST1D template.\n",
"D_i = 40\n",
"# The outputs correspond to the 10 digits\n",
"D_o = 10\n",
"\n",
"\n",
"# We will adapt this model to have residual connections around the linear layers\n",
"# This is the same model we used in practical 8.1, but we can't use the sequential\n",
"# class for residual networks (which aren't strictly sequential). Hence, I've rewritten\n",
"# it as a model that inherits from a base class\n",
"\n",
"class ResidualNetwork(torch.nn.Module):\n",
"  def __init__(self, input_size, output_size, hidden_size=100):\n",
"    super(ResidualNetwork, self).__init__()\n",
"    self.linear1 = nn.Linear(input_size, hidden_size)\n",
"    self.linear2 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear3 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear4 = nn.Linear(hidden_size, output_size)\n",
"    print(\"Initialized ResidualNetwork model with {} parameters\".format(self.count_params()))\n",
"\n",
"  def count_params(self):\n",
"    return sum([p.view(-1).shape[0] for p in self.parameters()])\n",
"\n",
"# # TODO -- Add residual connections to this model\n",
"# # The order of operations should be similar to figure 11.5b\n",
"# # linear1 first, ReLU+linear2 in first residual block, ReLU+linear3 in second residual block, linear4 at end\n",
"# # Replace this function\n",
"  def forward(self, x):\n",
"    h1 = self.linear1(x).relu()\n",
"    h2 = self.linear2(h1).relu()\n",
"    h3 = self.linear3(h2).relu()\n",
"    return self.linear4(h3)\n"
],
"metadata": {
"id": "FslroPJJffrh"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# He initialization of weights\n",
"def weights_init(layer_in):\n",
"  if isinstance(layer_in, nn.Linear):\n",
"    nn.init.kaiming_uniform_(layer_in.weight)\n",
"    layer_in.bias.data.fill_(0.0)"
],
"metadata": {
"id": "YgLaex1pfhqz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the model\n",
"model = ResidualNetwork(40, 10)\n",
"\n",
"# choose cross entropy loss function (equation 5.24 in the loss notes)\n",
"loss_function = nn.CrossEntropyLoss()\n",
"# construct SGD optimizer and initialize learning rate and momentum\n",
"optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
"# object that decreases learning rate by half every 20 epochs\n",
"scheduler = StepLR(optimizer, step_size=20, gamma=0.5)\n",
"# convert data to torch tensors\n",
"x_train = torch.tensor(train_data_x.transpose().astype('float32'))\n",
"y_train = torch.tensor(train_data_y.astype('long'))\n",
"x_val = torch.tensor(val_data_x.transpose().astype('float32'))\n",
"y_val = torch.tensor(val_data_y.astype('long'))\n",
"\n",
"# load the data into a class that creates the batches\n",
"data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
"\n",
"# Initialize model weights\n",
"model.apply(weights_init)\n",
"\n",
"# loop over the dataset n_epoch times\n",
"n_epoch = 100\n",
"# store the loss and the % correct at each epoch\n",
"losses_train = np.zeros((n_epoch))\n",
"errors_train = np.zeros((n_epoch))\n",
"losses_val = np.zeros((n_epoch))\n",
"errors_val = np.zeros((n_epoch))\n",
"\n",
"for epoch in range(n_epoch):\n",
"  # loop over batches\n",
"  for i, data in enumerate(data_loader):\n",
"    # retrieve inputs and labels for this batch\n",
"    x_batch, y_batch = data\n",
"    # zero the parameter gradients\n",
"    optimizer.zero_grad()\n",
"    # forward pass -- calculate model output\n",
"    pred = model(x_batch)\n",
"    # compute the loss\n",
"    loss = loss_function(pred, y_batch)\n",
"    # backward pass\n",
"    loss.backward()\n",
"    # SGD update\n",
"    optimizer.step()\n",
"\n",
"  # Run whole dataset to get statistics -- normally wouldn't do this\n",
"  pred_train = model(x_train)\n",
"  pred_val = model(x_val)\n",
"  _, predicted_train_class = torch.max(pred_train.data, 1)\n",
"  _, predicted_val_class = torch.max(pred_val.data, 1)\n",
"  errors_train[epoch] = 100 - 100 * (predicted_train_class == y_train).float().sum() / len(y_train)\n",
"  errors_val[epoch] = 100 - 100 * (predicted_val_class == y_val).float().sum() / len(y_val)\n",
"  losses_train[epoch] = loss_function(pred_train, y_train).item()\n",
"  losses_val[epoch] = loss_function(pred_val, y_val).item()\n",
"  print(f'Epoch {epoch:5d}, train loss {losses_train[epoch]:.6f}, train error {errors_train[epoch]:3.2f}, val loss {losses_val[epoch]:.6f}, val error {errors_val[epoch]:3.2f}')\n",
"\n",
"  # tell scheduler to consider updating learning rate\n",
"  scheduler.step()"
],
"metadata": {
"id": "NYw8I_3mmX5c"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Plot the results\n",
"fig, ax = plt.subplots()\n",
"ax.plot(errors_train,'r-',label='train')\n",
"ax.plot(errors_val,'b-',label='test')\n",
"ax.set_ylim(0,100); ax.set_xlim(0,n_epoch)\n",
"ax.set_xlabel('Epoch'); ax.set_ylabel('Error')\n",
"ax.set_title('Train Error %3.2f, Val Error %3.2f'%(errors_train[-1],errors_val[-1]))\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "CcP_VyEmE2sv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The primary motivation of residual networks is to allow training of much deeper networks.\n",
"\n",
"TODO: Try running this network with and without the residual connections. Does adding the residual connections change the performance?"
],
"metadata": {
"id": "wMmqhmxuAx0M"
}
}
]
}
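The TODO in the notebook above asks for the order of operations from figure 11.5b: `linear1` first, then residual blocks that each add a ReLU+linear branch onto the running value, then `linear4` at the end. A minimal plain-numpy sketch of that ordering (illustrative shapes and weights, not the notebook's solution):

```python
import numpy as np

# Hypothetical shapes: 4 inputs, 3 hidden units, 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)
W3, b3 = rng.normal(size=(3, 3)), np.zeros(3)
W4, b4 = rng.normal(size=(2, 3)), np.zeros(2)

def relu(z):
    return np.maximum(z, 0)

def residual_forward(x):
    h = W1 @ x + b1                  # linear1 (no residual around it)
    h = h + (W2 @ relu(h) + b2)      # first residual block: ReLU + linear2
    h = h + (W3 @ relu(h) + b3)      # second residual block: ReLU + linear3
    return W4 @ h + b4               # linear4 at the end

x = rng.normal(size=4)
y = residual_forward(x)
print(y.shape)   # (2,)
```

Each residual block adds its branch output to the stream rather than replacing it, which is what distinguishes this forward pass from the plain sequential one in the notebook.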
Notebooks/Chap11/11_3_Batch_Normalization.ipynb (new file, 328 lines)
@@ -0,0 +1,328 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOoGS+lY+EhGthebSO4smpj",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap11/11_3_Batch_Normalization.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 11.3: Batch normalization**\n",
"\n",
"This notebook investigates the use of batch normalization in residual networks.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n"
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
"!git clone https://github.com/greydanus/mnist1d"
],
"metadata": {
"id": "D5yLObtZCi9J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import os\n",
"import torch, torch.nn as nn\n",
"from torch.utils.data import TensorDataset, DataLoader\n",
"from torch.optim.lr_scheduler import StepLR\n",
"import matplotlib.pyplot as plt\n",
"import mnist1d\n",
"import random"
],
"metadata": {
"id": "YrXWAH7sUWvU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"args = mnist1d.data.get_dataset_args()\n",
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
"\n",
"# The training and test input and outputs are in\n",
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
],
"metadata": {
"id": "twI72ZCrCt5z"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Load in the data\n",
"train_data_x = data['x'].transpose()\n",
"train_data_y = data['y']\n",
"val_data_x = data['x_test'].transpose()\n",
"val_data_y = data['y_test']\n",
"# Print out sizes\n",
"print(\"Train data: %d examples (columns), each of which has %d dimensions (rows)\"%((train_data_x.shape[1],train_data_x.shape[0])))\n",
"print(\"Validation data: %d examples (columns), each of which has %d dimensions (rows)\"%((val_data_x.shape[1],val_data_x.shape[0])))"
],
"metadata": {
"id": "8bKADvLHbiV5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def print_variance(name, data):\n",
"  # First dimension (rows) is batch elements\n",
"  # Second dimension (columns) is neurons.\n",
"  np_data = data.detach().numpy()\n",
"  # Compute the variance of each neuron across the batch, then average over neurons\n",
"  neuron_variance = np.mean(np.var(np_data, axis=0))\n",
"  # Print out the name and the variance\n",
"  print(\"%s variance=%f\"%(name,neuron_variance))"
],
"metadata": {
"id": "3bBpJIV-N-lt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# He initialization of weights\n",
"def weights_init(layer_in):\n",
"  if isinstance(layer_in, nn.Linear):\n",
"    nn.init.kaiming_uniform_(layer_in.weight)\n",
"    layer_in.bias.data.fill_(0.0)"
],
"metadata": {
"id": "YgLaex1pfhqz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def run_one_step_of_model(model, x_train, y_train):\n",
"  # choose cross entropy loss function (equation 5.24 in the loss notes)\n",
"  loss_function = nn.CrossEntropyLoss()\n",
"  # construct SGD optimizer and initialize learning rate and momentum\n",
"  optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
"\n",
"  # load the data into a class that creates the batches\n",
"  data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=200, shuffle=True, worker_init_fn=np.random.seed(1))\n",
"\n",
"  # Initialize model weights\n",
"  model.apply(weights_init)\n",
"\n",
"  # Get a batch\n",
"  for i, data in enumerate(data_loader):\n",
"    # retrieve inputs and labels for this batch\n",
"    x_batch, y_batch = data\n",
"    # zero the parameter gradients\n",
"    optimizer.zero_grad()\n",
"    # forward pass -- calculate model output\n",
"    pred = model(x_batch)\n",
"    # compute the loss\n",
"    loss = loss_function(pred, y_batch)\n",
"    # backward pass\n",
"    loss.backward()\n",
"    # SGD update\n",
"    optimizer.step()\n",
"    # Break out of this loop -- we just want to see the first\n",
"    # iteration, but usually we would continue\n",
"    break"
],
"metadata": {
"id": "DFlu45pORQEz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# convert training data to torch tensors\n",
"x_train = torch.tensor(train_data_x.transpose().astype('float32'))\n",
"y_train = torch.tensor(train_data_y.astype('long'))"
],
"metadata": {
"id": "i7Q0ScWgRe4G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# This is a simple residual model with 5 residual branches in a row\n",
"class ResidualNetwork(torch.nn.Module):\n",
"  def __init__(self, input_size, output_size, hidden_size=100):\n",
"    super(ResidualNetwork, self).__init__()\n",
"    self.linear1 = nn.Linear(input_size, hidden_size)\n",
"    self.linear2 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear3 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear4 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear5 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear6 = nn.Linear(hidden_size, output_size)\n",
"\n",
"  def count_params(self):\n",
"    return sum([p.view(-1).shape[0] for p in self.parameters()])\n",
"\n",
"  def forward(self, x):\n",
"    print_variance(\"Input\",x)\n",
"    f = self.linear1(x)\n",
"    print_variance(\"First preactivation\",f)\n",
"    res1 = f + self.linear2(f.relu())\n",
"    print_variance(\"After first residual connection\",res1)\n",
"    res2 = res1 + self.linear3(res1.relu())\n",
"    print_variance(\"After second residual connection\",res2)\n",
"    res3 = res2 + self.linear4(res2.relu())\n",
"    print_variance(\"After third residual connection\",res3)\n",
"    res4 = res3 + self.linear4(res3.relu())\n",
"    print_variance(\"After fourth residual connection\",res4)\n",
"    res5 = res4 + self.linear4(res4.relu())\n",
"    print_variance(\"After fifth residual connection\",res5)\n",
"    return self.linear6(res5)"
],
"metadata": {
"id": "FslroPJJffrh"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the model and run for one step\n",
"# Monitoring the variance at each point in the network\n",
"n_hidden = 100\n",
"n_input = 40\n",
"n_output = 10\n",
"model = ResidualNetwork(n_input, n_output, n_hidden)\n",
"run_one_step_of_model(model, x_train, y_train)"
],
"metadata": {
"id": "NYw8I_3mmX5c"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Notice that the variance roughly doubles at each step, so it increases exponentially, as in figure 11.6b in the book."
],
"metadata": {
"id": "0kZUlWkkW8jE"
}
},
{
"cell_type": "code",
"source": [
"# TODO Adapt the residual network below to add a batch norm operation\n",
"# before the contents of each residual link as in figure 11.6c in the book\n",
"# Use the torch function nn.BatchNorm1d\n",
"class ResidualNetworkWithBatchNorm(torch.nn.Module):\n",
"  def __init__(self, input_size, output_size, hidden_size=100):\n",
"    super(ResidualNetworkWithBatchNorm, self).__init__()\n",
"    self.linear1 = nn.Linear(input_size, hidden_size)\n",
"    self.linear2 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear3 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear4 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear5 = nn.Linear(hidden_size, hidden_size)\n",
"    self.linear6 = nn.Linear(hidden_size, output_size)\n",
"\n",
"  def count_params(self):\n",
"    return sum([p.view(-1).shape[0] for p in self.parameters()])\n",
"\n",
"  def forward(self, x):\n",
"    print_variance(\"Input\",x)\n",
"    f = self.linear1(x)\n",
"    print_variance(\"First preactivation\",f)\n",
"    res1 = f + self.linear2(f.relu())\n",
"    print_variance(\"After first residual connection\",res1)\n",
"    res2 = res1 + self.linear3(res1.relu())\n",
"    print_variance(\"After second residual connection\",res2)\n",
"    res3 = res2 + self.linear4(res2.relu())\n",
"    print_variance(\"After third residual connection\",res3)\n",
"    res4 = res3 + self.linear4(res3.relu())\n",
"    print_variance(\"After fourth residual connection\",res4)\n",
"    res5 = res4 + self.linear4(res4.relu())\n",
"    print_variance(\"After fifth residual connection\",res5)\n",
"    return self.linear6(res5)"
],
"metadata": {
"id": "5JvMmaRITKGd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the model\n",
"n_hidden = 100\n",
"n_input = 40\n",
"n_output = 10\n",
"model = ResidualNetworkWithBatchNorm(n_input, n_output, n_hidden)\n",
"run_one_step_of_model(model, x_train, y_train)"
],
"metadata": {
"id": "2U3DnlH9Uw6c"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Note that the variance now increases linearly as in figure 11.6c."
],
"metadata": {
"id": "R_ucFq9CXq8D"
}
}
]
}
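The notebook above observes that the variance roughly doubles at each residual addition (exponential growth, figure 11.6b), while normalizing each branch to unit variance makes the total grow only linearly (figure 11.6c). A toy numpy illustration of that arithmetic, independent of the notebook's networks (the sampling setup here is a made-up stand-in for the hidden activations):

```python
import numpy as np

# Adding an independent branch whose variance matches the current stream
# doubles the variance each step; a branch normalized to unit variance
# (what batch norm provides) makes the total grow only linearly.
rng = np.random.default_rng(1)
n_steps, n_samples = 5, 100_000

res_plain = rng.normal(size=n_samples)   # stream starts with variance 1
res_bn = rng.normal(size=n_samples)
var_plain, var_bn = [], []
for k in range(n_steps):
    # un-normalized branch: std scales with the current stream
    res_plain = res_plain + rng.normal(size=n_samples) * res_plain.std()
    var_plain.append(res_plain.var())
    # "batch-normalized" branch: always unit variance
    res_bn = res_bn + rng.normal(size=n_samples)
    var_bn.append(res_bn.var())

print(np.round(var_plain, 1))   # roughly 2, 4, 8, 16, 32 (exponential)
print(np.round(var_bn, 1))      # roughly 2, 3, 4, 5, 6   (linear)
```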
Notebooks/Chap12/12_1_Self_Attention.ipynb (new file, 375 lines)
@@ -0,0 +1,375 @@
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyOKrX9gmuhl9+KwscpZKr3u",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap12/12_1_Self_Attention.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 12.1: Self Attention**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook builds a self-attnetion mechanism from scratch, as discussed in section 12.2 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The self-attention mechanism maps $N$ inputs $\\mathbf{x}_{n}\\in\\mathbb{R}^{D}$ and returns $N$ outputs $\\mathbf{x}'_{n}\\in \\mathbb{R}^{D}$. \n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "9OJkkoNqCVK2"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set seed so we get the same random numbers\n",
|
||||||
|
"np.random.seed(3)\n",
|
||||||
|
"# Number of inputs\n",
|
||||||
|
"N = 3\n",
|
||||||
|
"# Number of dimensions of each input\n",
|
||||||
|
"D = 4\n",
|
||||||
|
"# Create an empty list\n",
|
||||||
|
"all_x = []\n",
|
||||||
|
"# Create elements x_n and append to list\n",
|
||||||
|
"for n in range(N):\n",
|
||||||
|
" all_x.append(np.random.normal(size=(D,1)))\n",
|
||||||
|
"# Print out the list\n",
|
||||||
|
"print(all_x)\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "oAygJwLiCSri"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"We'll also need the weights and biases for the keys, queries, and values (equations 12.2 and 12.4)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "W2iHFbtKMaDp"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set seed so we get the same random numbers\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Choose random values for the parameters\n",
|
||||||
|
"omega_q = np.random.normal(size=(D,D))\n",
|
||||||
|
"omega_k = np.random.normal(size=(D,D))\n",
|
||||||
|
"omega_v = np.random.normal(size=(D,D))\n",
|
||||||
|
"beta_q = np.random.normal(size=(D,1))\n",
|
||||||
|
"beta_k = np.random.normal(size=(D,1))\n",
|
||||||
|
"beta_v = np.random.normal(size=(D,1))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "79TSK7oLMobe"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's compute the queries, keys, and values for each input"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "VxaKQtP3Ng6R"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Make three lists to store queries, keys, and values\n",
|
||||||
|
"all_queries = []\n",
|
||||||
|
"all_keys = []\n",
|
||||||
|
"all_values = []\n",
|
||||||
|
"# For every input\n",
|
||||||
|
"for x in all_x:\n",
|
||||||
|
" # TODO -- compute the keys, queries and values.\n",
|
||||||
|
" # Replace these three lines\n",
|
||||||
|
" query = np.ones_like(x)\n",
|
||||||
|
" key = np.ones_like(x)\n",
|
||||||
|
" value = np.ones_like(x)\n",
|
||||||
|
"\n",
|
||||||
|
" all_queries.append(query)\n",
|
||||||
|
" all_keys.append(key)\n",
|
||||||
|
" all_values.append(value)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "TwDK2tfdNmw9"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
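A sketch of how the TODO above could be completed, assuming the linear relations of equations 12.2 and 12.4 (each query, key, and value is a bias plus a weight matrix times the input); the setup from the earlier cells is repeated so the block stands alone:

```python
import numpy as np

# Regenerate the inputs and parameters as in the cells above
np.random.seed(3)
D, N = 4, 3
all_x = [np.random.normal(size=(D, 1)) for _ in range(N)]

np.random.seed(0)
omega_q = np.random.normal(size=(D, D))
omega_k = np.random.normal(size=(D, D))
omega_v = np.random.normal(size=(D, D))
beta_q = np.random.normal(size=(D, 1))
beta_k = np.random.normal(size=(D, 1))
beta_v = np.random.normal(size=(D, 1))

all_queries, all_keys, all_values = [], [], []
for x in all_x:
    # Linear transformations of the input (equations 12.2 and 12.4)
    query = beta_q + omega_q @ x
    key = beta_k + omega_k @ x
    value = beta_v + omega_v @ x
    all_queries.append(query)
    all_keys.append(key)
    all_values.append(value)
```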
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"We'll need a softmax function (equation 12.5) -- here, it will take a list of arbitrary numbers and return a list where the elements are non-negative and sum to one\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Se7DK6PGPSUk"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def softmax(items_in):\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO Compute the elements of items_out\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" items_out = items_in.copy()\n",
|
||||||
|
"\n",
|
||||||
|
" return items_out ;"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "u93LIcE5PoiM"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
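One way to fill in the softmax stub above -- a standard sketch that uses the usual max-subtraction trick for numerical stability (which leaves the result unchanged):

```python
import numpy as np

def softmax(items_in):
    # Convert to an array, subtract the max for numerical stability,
    # exponentiate, and normalize so the outputs sum to one
    items_in = np.array(items_in, dtype=float)
    exp_values = np.exp(items_in - np.max(items_in))
    items_out = exp_values / np.sum(exp_values)
    return items_out

print(softmax([1.0, 2.0, 3.0]))
```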
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now compute the self attention values:"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "8aJVhbKDW7lm"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Create empty list for output\n",
|
||||||
|
"all_x_prime = []\n",
|
||||||
|
"\n",
|
||||||
|
"# For each output\n",
|
||||||
|
"for n in range(N):\n",
|
||||||
|
" # Create list for dot products of query n with all keys\n",
|
||||||
|
" all_km_qn = []\n",
|
||||||
|
" # Compute the dot products\n",
|
||||||
|
" for key in all_keys:\n",
|
||||||
|
" # TODO -- compute the appropriate dot product\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" dot_product = 1\n",
|
||||||
|
"\n",
|
||||||
|
" # Store dot product\n",
|
||||||
|
" all_km_qn.append(dot_product)\n",
|
||||||
|
"\n",
|
||||||
|
" # Compute attention weights from the dot products\n",
|
||||||
|
" attention = softmax(all_km_qn)\n",
|
||||||
|
" # Print result (should be positive and sum to one)\n",
|
||||||
|
" print(\"Attentions for output \", n)\n",
|
||||||
|
" print(attention)\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO: Compute a weighted sum of all of the values according to the attention\n",
|
||||||
|
" # (equation 12.3)\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x_prime = np.zeros((D,1))\n",
|
||||||
|
"\n",
|
||||||
|
" all_x_prime.append(x_prime)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"# Print out true values to check you have it correct\n",
|
||||||
|
"print(\"x_prime_0_calculated:\", all_x_prime[0].transpose())\n",
|
||||||
|
"print(\"x_prime_0_true: [[ 0.94744244 -0.24348429 -0.91310441 -0.44522983]]\")\n",
|
||||||
|
"print(\"x_prime_1_calculated:\", all_x_prime[1].transpose())\n",
|
||||||
|
"print(\"x_prime_1_true: [[ 1.64201168 -0.08470004 4.02764044 2.18690791]]\")\n",
|
||||||
|
"print(\"x_prime_2_calculated:\", all_x_prime[2].transpose())\n",
|
||||||
|
"print(\"x_prime_2_true: [[ 1.61949281 -0.06641533 3.96863308 2.15858316]]\")\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "yimz-5nCW6vQ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's compute the same thing, but using matrix calculations. We'll store the $N$ inputs $\\mathbf{x}_{n}\\in\\mathbb{R}^{D}$ in the columns of a $D\\times N$ matrix, using equations 12.6 and 12.7/8.\n",
|
||||||
|
"\n",
|
||||||
|
"Note: The book uses column vectors (for compatibility with the rest of the text), but in the wider literature it is more normal to store the inputs in the rows of a matrix; in this case, the computation is the same, but all the matrices are transposed and the operations proceed in the reverse order."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "PJ2vCQ_7C38K"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define softmax operation that works independently on each column\n",
|
||||||
|
"def softmax_cols(data_in):\n",
|
||||||
|
" # Exponentiate all of the values\n",
|
||||||
|
" exp_values = np.exp(data_in) ;\n",
|
||||||
|
" # Sum over columns\n",
|
||||||
|
" denom = np.sum(exp_values, axis = 0);\n",
|
||||||
|
" # Replicate denominator to N rows\n",
|
||||||
|
" denom = np.matmul(np.ones((data_in.shape[0],1)), denom[np.newaxis,:])\n",
|
||||||
|
" # Compute softmax\n",
|
||||||
|
" softmax = exp_values / denom\n",
|
||||||
|
" # return the answer\n",
|
||||||
|
" return softmax"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "obaQBdUAMXXv"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's compute self attention in matrix form\n",
|
||||||
|
"def self_attention(X,omega_v, omega_q, omega_k, beta_v, beta_q, beta_k):\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO -- Write this function\n",
|
||||||
|
" # 1. Compute queries, keys, and values\n",
|
||||||
|
" # 2. Compute dot products\n",
|
||||||
|
" # 3. Apply softmax to calculate attentions\n",
|
||||||
|
" # 4. Weight values by attentions\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" X_prime = np.zeros_like(X);\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return X_prime"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "gb2WvQ3SiH8r"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
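The matrix-form TODO above might be completed as in this sketch, assuming (as in the loop version) that queries, keys, and values are linear functions of the inputs and that the softmax is applied separately to each column of the dot-product matrix. The example call uses fresh random parameters rather than the notebook's seeds, so the printed numbers will differ:

```python
import numpy as np

def softmax_cols(data_in):
    # Column-wise softmax, stabilized by subtracting the column max
    exp_values = np.exp(data_in - np.max(data_in, axis=0, keepdims=True))
    return exp_values / np.sum(exp_values, axis=0, keepdims=True)

def self_attention(X, omega_v, omega_q, omega_k, beta_v, beta_q, beta_k):
    # 1. Queries, keys, and values (one column per input)
    Q = beta_q + omega_q @ X
    K = beta_k + omega_k @ X
    V = beta_v + omega_v @ X
    # 2. Dot products of every key with every query
    dot_products = K.T @ Q
    # 3. Softmax over each column gives the attention weights
    attention = softmax_cols(dot_products)
    # 4. Weighted sum of the values
    return V @ attention

# Example with random data
np.random.seed(1)
D, N = 4, 3
X = np.random.normal(size=(D, N))
omega_q, omega_k, omega_v = [np.random.normal(size=(D, D)) for _ in range(3)]
beta_q, beta_k, beta_v = [np.random.normal(size=(D, 1)) for _ in range(3)]
X_prime = self_attention(X, omega_v, omega_q, omega_k, beta_v, beta_q, beta_k)
print(X_prime.shape)
```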
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Copy data into matrix\n",
|
||||||
|
"X = np.zeros((D, N))\n",
|
||||||
|
"X[:,0] = np.squeeze(all_x[0])\n",
|
||||||
|
"X[:,1] = np.squeeze(all_x[1])\n",
|
||||||
|
"X[:,2] = np.squeeze(all_x[2])\n",
|
||||||
|
"\n",
|
||||||
|
"# Run the self attention mechanism\n",
|
||||||
|
"X_prime = self_attention(X,omega_v, omega_q, omega_k, beta_v, beta_q, beta_k)\n",
|
||||||
|
"\n",
|
||||||
|
"# Print out the results\n",
|
||||||
|
"print(X_prime)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "MUOJbgJskUpl"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"If you did this correctly, the values should be the same as above.\n",
|
||||||
|
"\n",
|
||||||
|
"TODO: \n",
|
||||||
|
"\n",
|
||||||
|
"Print out the attention matrix.\n",
|
||||||
|
"You will see that the values are quite extreme (one is very close to one and the others are very close to zero). Now we'll fix this problem by using scaled dot-product attention."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "as_lRKQFpvz0"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's compute self attention in matrix form\n",
|
||||||
|
"def scaled_dot_product_self_attention(X,omega_v, omega_q, omega_k, beta_v, beta_q, beta_k):\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO -- Write this function\n",
|
||||||
|
" # 1. Compute queries, keys, and values\n",
|
||||||
|
" # 2. Compute dot products\n",
|
||||||
|
" # 3. Scale the dot products as in equation 12.9\n",
|
||||||
|
" # 4. Apply softmax to calculate attentions\n",
|
||||||
|
" # 5. Weight values by attentions\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" X_prime = np.zeros_like(X);\n",
|
||||||
|
"\n",
|
||||||
|
" return X_prime"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "kLU7PUnnqvIh"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
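A corresponding sketch of the scaled version: the only change from plain dot-product self-attention is dividing the dot products by the square root of the query dimension before the softmax (equation 12.9). Again, fresh random parameters are used here rather than the notebook's seeds:

```python
import numpy as np

def softmax_cols(data_in):
    # Column-wise softmax, stabilized by subtracting the column max
    exp_values = np.exp(data_in - np.max(data_in, axis=0, keepdims=True))
    return exp_values / np.sum(exp_values, axis=0, keepdims=True)

def scaled_dot_product_self_attention(X, omega_v, omega_q, omega_k, beta_v, beta_q, beta_k):
    Q = beta_q + omega_q @ X
    K = beta_k + omega_k @ X
    V = beta_v + omega_v @ X
    # Scale by sqrt(D_q) so the dot products stay moderate and the
    # softmax does not saturate as the dimension grows
    scaled = (K.T @ Q) / np.sqrt(Q.shape[0])
    return V @ softmax_cols(scaled)

np.random.seed(1)
D, N = 4, 3
X = np.random.normal(size=(D, N))
omega_q, omega_k, omega_v = [np.random.normal(size=(D, D)) for _ in range(3)]
beta_q, beta_k, beta_v = [np.random.normal(size=(D, 1)) for _ in range(3)]
X_prime = scaled_dot_product_self_attention(X, omega_v, omega_q, omega_k, beta_v, beta_q, beta_k)
print(X_prime.shape)
```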
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run the self attention mechanism\n",
|
||||||
|
"X_prime = scaled_dot_product_self_attention(X,omega_v, omega_q, omega_k, beta_v, beta_q, beta_k)\n",
|
||||||
|
"\n",
|
||||||
|
"# Print out the results\n",
|
||||||
|
"print(X_prime)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "n18e3XNzmVgL"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"TODO -- Investigate whether the self-attention mechanism is covariant with respect to permutation.\n",
|
||||||
|
"If it is, when we permute the columns of the input matrix $\\mathbf{X}$, the columns of the output matrix $\\mathbf{X}'$ will also be permuted.\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "QDEkIrcgrql-"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
212
Notebooks/Chap12/12_2_Multihead_Self_Attention.ipynb
Normal file
@@ -0,0 +1,212 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMSk8qTqDYqFnRJVZKlsue0",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap12/12_2_Multihead_Self_Attention.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 12.2: Multihead Self-Attention**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook builds a multihead self-attention mechanism as in figure 12.6.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The multihead self-attention mechanism maps $N$ inputs $\\mathbf{x}_{n}\\in\\mathbb{R}^{D}$ and returns $N$ outputs $\\mathbf{x}'_{n}\\in \\mathbb{R}^{D}$. \n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "9OJkkoNqCVK2"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set seed so we get the same random numbers\n",
|
||||||
|
"np.random.seed(3)\n",
|
||||||
|
"# Number of inputs\n",
|
||||||
|
"N = 6\n",
|
||||||
|
"# Number of dimensions of each input\n",
|
||||||
|
"D = 8\n",
|
||||||
|
"# Create an empty list\n",
|
||||||
|
"X = np.random.normal(size=(D,N))\n",
|
||||||
|
"# Print X\n",
|
||||||
|
"print(X)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "oAygJwLiCSri"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"We'll use two heads. We'll need the weights and biases for the keys, queries, and values (equations 12.2 and 12.4). As in the figure, we'll make the queries, keys, and values of size D/H."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "W2iHFbtKMaDp"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Number of heads\n",
|
||||||
|
"H = 2\n",
|
||||||
|
"# QKV dimension\n",
|
||||||
|
"H_D = int(D/H)\n",
|
||||||
|
"\n",
|
||||||
|
"# Set seed so we get the same random numbers\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Choose random values for the parameters for the first head\n",
|
||||||
|
"omega_q1 = np.random.normal(size=(H_D,D))\n",
|
||||||
|
"omega_k1 = np.random.normal(size=(H_D,D))\n",
|
||||||
|
"omega_v1 = np.random.normal(size=(H_D,D))\n",
|
||||||
|
"beta_q1 = np.random.normal(size=(H_D,1))\n",
|
||||||
|
"beta_k1 = np.random.normal(size=(H_D,1))\n",
|
||||||
|
"beta_v1 = np.random.normal(size=(H_D,1))\n",
|
||||||
|
"\n",
|
||||||
|
"# Choose random values for the parameters for the second head\n",
|
||||||
|
"omega_q2 = np.random.normal(size=(H_D,D))\n",
|
||||||
|
"omega_k2 = np.random.normal(size=(H_D,D))\n",
|
||||||
|
"omega_v2 = np.random.normal(size=(H_D,D))\n",
|
||||||
|
"beta_q2 = np.random.normal(size=(H_D,1))\n",
|
||||||
|
"beta_k2 = np.random.normal(size=(H_D,1))\n",
|
||||||
|
"beta_v2 = np.random.normal(size=(H_D,1))\n",
|
||||||
|
"\n",
|
||||||
|
"# Choose random values for the parameters\n",
|
||||||
|
"omega_c = np.random.normal(size=(D,D))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "79TSK7oLMobe"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's compute the multihead self-attention"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "VxaKQtP3Ng6R"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define softmax operation that works independently on each column\n",
|
||||||
|
"def softmax_cols(data_in):\n",
|
||||||
|
" # Exponentiate all of the values\n",
|
||||||
|
" exp_values = np.exp(data_in) ;\n",
|
||||||
|
" # Sum over columns\n",
|
||||||
|
" denom = np.sum(exp_values, axis = 0);\n",
|
||||||
|
" # Replicate denominator to N rows\n",
|
||||||
|
" denom = np.matmul(np.ones((data_in.shape[0],1)), denom[np.newaxis,:])\n",
|
||||||
|
" # Compute softmax\n",
|
||||||
|
" softmax = exp_values / denom\n",
|
||||||
|
" # return the answer\n",
|
||||||
|
" return softmax"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "obaQBdUAMXXv"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's compute self attention in matrix form\n",
|
||||||
|
"def multihead_scaled_self_attention(X,omega_v1, omega_q1, omega_k1, beta_v1, beta_q1, beta_k1, omega_v2, omega_q2, omega_k2, beta_v2, beta_q2, beta_k2, omega_c):\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO Write the multihead scaled self-attention mechanism.\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" X_prime = np.zeros_like(X) ;\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return X_prime"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "gb2WvQ3SiH8r"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
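The multihead TODO above could be sketched as follows: run each head as an independent scaled dot-product self-attention of size D/H, stack the two outputs vertically, and recombine with omega_c. The random draws here do not follow the notebook's exact ordering, so the printed numbers will not match the "True values" cell; only the structure of the computation is illustrated:

```python
import numpy as np

def softmax_cols(data_in):
    # Column-wise softmax, stabilized by subtracting the column max
    exp_values = np.exp(data_in - np.max(data_in, axis=0, keepdims=True))
    return exp_values / np.sum(exp_values, axis=0, keepdims=True)

def head(X, omega_v, omega_q, omega_k, beta_v, beta_q, beta_k):
    # One scaled dot-product self-attention head (equation 12.9)
    Q = beta_q + omega_q @ X
    K = beta_k + omega_k @ X
    V = beta_v + omega_v @ X
    return V @ softmax_cols((K.T @ Q) / np.sqrt(Q.shape[0]))

def multihead_scaled_self_attention(X, omega_v1, omega_q1, omega_k1, beta_v1, beta_q1, beta_k1,
                                    omega_v2, omega_q2, omega_k2, beta_v2, beta_q2, beta_k2, omega_c):
    # Run both heads, stack the (D/H x N) outputs vertically, recombine with omega_c
    head1 = head(X, omega_v1, omega_q1, omega_k1, beta_v1, beta_q1, beta_k1)
    head2 = head(X, omega_v2, omega_q2, omega_k2, beta_v2, beta_q2, beta_k2)
    return omega_c @ np.vstack((head1, head2))

# Example with the same dimensions as above: D = 8, N = 6, two heads of size D/H = 4
np.random.seed(0)
D, N, H_D = 8, 6, 4
X = np.random.normal(size=(D, N))
oq1, ok1, ov1 = [np.random.normal(size=(H_D, D)) for _ in range(3)]
bq1, bk1, bv1 = [np.random.normal(size=(H_D, 1)) for _ in range(3)]
oq2, ok2, ov2 = [np.random.normal(size=(H_D, D)) for _ in range(3)]
bq2, bk2, bv2 = [np.random.normal(size=(H_D, 1)) for _ in range(3)]
omega_c = np.random.normal(size=(D, D))
X_prime = multihead_scaled_self_attention(X, ov1, oq1, ok1, bv1, bq1, bk1,
                                          ov2, oq2, ok2, bv2, bq2, bk2, omega_c)
print(X_prime.shape)
```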
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run the self attention mechanism\n",
|
||||||
|
"X_prime = multihead_scaled_self_attention(X,omega_v1, omega_q1, omega_k1, beta_v1, beta_q1, beta_k1, omega_v2, omega_q2, omega_k2, beta_v2, beta_q2, beta_k2, omega_c)\n",
|
||||||
|
"\n",
|
||||||
|
"# Print out the results\n",
|
||||||
|
"np.set_printoptions(precision=3)\n",
|
||||||
|
"print(\"Your answer:\")\n",
|
||||||
|
"print(X_prime)\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"True values:\")\n",
|
||||||
|
"print(\"[[-21.207 -5.373 -20.933 -9.179 -11.319 -17.812]\")\n",
|
||||||
|
"print(\" [ -1.995 7.906 -10.516 3.452 9.863 -7.24 ]\")\n",
|
||||||
|
"print(\" [ 5.479 1.115 9.244 0.453 5.656 7.089]\")\n",
|
||||||
|
"print(\" [ -7.413 -7.416 0.363 -5.573 -6.736 -0.848]\")\n",
|
||||||
|
"print(\" [-11.261 -9.937 -4.848 -8.915 -13.378 -5.761]\")\n",
|
||||||
|
"print(\" [ 3.548 10.036 -2.244 1.604 12.113 -2.557]\")\n",
|
||||||
|
"print(\" [ 4.888 -5.814 2.407 3.228 -4.232 3.71 ]\")\n",
|
||||||
|
"print(\" [ 1.248 18.894 -6.409 3.224 19.717 -5.629]]\")\n",
|
||||||
|
"\n",
|
||||||
|
"# If your answers don't match, then make sure that you are doing the scaling, and make sure the scaling value is correct"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "MUOJbgJskUpl"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
341
Notebooks/Chap12/12_3_Tokenization.ipynb
Normal file
@@ -0,0 +1,341 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyP0/KodWM9Dtr2x+8MdXXH1",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap12/12_3_Tokenization.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 12.3: Tokenization**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook builds a set of tokens from a text string as in figure 12.8 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"I adapted this code from *SOMEWHERE*. If anyone recognizes it, can you let me know and I will give the proper attribution or rewrite if the license is not permissive.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import re, collections"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "3_WkaFO3OfLi"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"text = \"a sailor went to sea sea sea \"+\\\n",
|
||||||
|
" \"to see what he could see see see \"+\\\n",
|
||||||
|
" \"but all that he could see see see \"+\\\n",
|
||||||
|
" \"was the bottom of the deep blue sea sea sea\""
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "tVZVuauIXmJk"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Tokenize the input sentence. To begin with, the tokens are the individual letters and the </w> whitespace token. So, we represent each word in terms of these tokens, with spaces between the tokens to delineate them.\n",
|
||||||
|
"\n",
|
||||||
|
"The tokenized text is stored in a structure that represents each word as tokens together with the count of how often that word occurs. We'll call this the *vocabulary*."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "fF2RBrouWV5w"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def initialize_vocabulary(text):\n",
|
||||||
|
" vocab = collections.defaultdict(int)\n",
|
||||||
|
" words = text.strip().split()\n",
|
||||||
|
" for word in words:\n",
|
||||||
|
" vocab[' '.join(list(word)) + ' </w>'] += 1\n",
|
||||||
|
" return vocab"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OfvXkLSARk4_"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"vocab = initialize_vocabulary(text)\n",
|
||||||
|
"print('Vocabulary: {}'.format(vocab))\n",
|
||||||
|
"print('Size of vocabulary: {}'.format(len(vocab)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "aydmNqaoOpSm"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Find all the tokens in the current vocabulary and their frequencies"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "fJAiCjphWsI9"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def get_tokens_and_frequencies(vocab):\n",
|
||||||
|
" tokens = collections.defaultdict(int)\n",
|
||||||
|
" for word, freq in vocab.items():\n",
|
||||||
|
" word_tokens = word.split()\n",
|
||||||
|
" for token in word_tokens:\n",
|
||||||
|
" tokens[token] += freq\n",
|
||||||
|
" return tokens"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qYi6F_K3RYsW"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"tokens = get_tokens_and_frequencies(vocab)\n",
|
||||||
|
"print('Tokens: {}'.format(tokens))\n",
|
||||||
|
"print('Number of tokens: {}'.format(len(tokens)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Y4LCVGnvXIwp"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Find each pair of adjacent tokens in the vocabulary\n",
|
||||||
|
"and count them. We will subsequently merge the most frequently occurring pair."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_-Rh1mD_Ww3b"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def get_pairs_and_counts(vocab):\n",
|
||||||
|
" pairs = collections.defaultdict(int)\n",
|
||||||
|
" for word, freq in vocab.items():\n",
|
||||||
|
" symbols = word.split()\n",
|
||||||
|
" for i in range(len(symbols)-1):\n",
|
||||||
|
" pairs[symbols[i],symbols[i+1]] += freq\n",
|
||||||
|
" return pairs"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OqJTB3UFYubH"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"pairs = get_pairs_and_counts(vocab)\n",
|
||||||
|
"print('Pairs: {}'.format(pairs))\n",
|
||||||
|
"print('Number of distinct pairs: {}'.format(len(pairs)))\n",
|
||||||
|
"\n",
|
||||||
|
"most_frequent_pair = max(pairs, key=pairs.get)\n",
|
||||||
|
"print('Most frequent pair: {}'.format(most_frequent_pair))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "d-zm0JBcZSjS"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Merge the instances of the best pair in the vocabulary"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pcborzqIXQFS"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def merge_pair_in_vocabulary(pair, vocab_in):\n",
|
||||||
|
" vocab_out = {}\n",
|
||||||
|
" bigram = re.escape(' '.join(pair))\n",
|
||||||
|
" p = re.compile(r'(?<!\\S)' + bigram + r'(?!\\S)')\n",
|
||||||
|
" for word in vocab_in:\n",
|
||||||
|
" word_out = p.sub(''.join(pair), word)\n",
|
||||||
|
" vocab_out[word_out] = vocab_in[word]\n",
|
||||||
|
" return vocab_out"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "xQI6NALdWQZX"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"vocab = merge_pair_in_vocabulary(most_frequent_pair, vocab)\n",
|
||||||
|
"print('Vocabulary: {}'.format(vocab))\n",
|
||||||
|
"print('Size of vocabulary: {}'.format(len(vocab)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "TRYeBZI3ZULu"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Update the tokens, which now include the best token 'se'"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "bkhUx3GeXwba"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"tokens = get_tokens_and_frequencies(vocab)\n",
|
||||||
|
"print('Tokens: {}'.format(tokens))\n",
|
||||||
|
"print('Number of tokens: {}'.format(len(tokens)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Fqj-vQWeXxQi"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's write the full tokenization routine"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "K_hKp2kSXXS1"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- write this routine by filling in this missing parts,\n",
|
||||||
|
"# calling the above routines\n",
|
||||||
|
"def tokenize(text, num_merges):\n",
|
||||||
|
" # Initialize the vocabulary from the input text\n",
|
||||||
|
" # vocab = (your code here)\n",
|
||||||
|
"\n",
|
||||||
|
" for i in range(num_merges):\n",
|
||||||
|
" # Find the tokens and how often they occur in the vocabulary\n",
|
||||||
|
" # tokens = (your code here)\n",
|
||||||
|
"\n",
|
||||||
|
" # Find the pairs of adjacent tokens and their counts\n",
|
||||||
|
" # pairs = (your code here)\n",
|
||||||
|
"\n",
|
||||||
|
" # Find the most frequent pair\n",
|
||||||
|
" # most_frequent_pair = (your code here)\n",
|
||||||
|
" print('Most frequent pair: {}'.format(most_frequent_pair))\n",
|
||||||
|
"\n",
|
||||||
|
" # Merge the most frequent pair in the vocabulary\n",
|
||||||
|
" # vocab = (your code here)\n",
|
||||||
|
"\n",
|
||||||
|
" # Find the tokens and how often they occur in the vocabulary one last time\n",
|
||||||
|
" # tokens = (your code here)\n",
|
||||||
|
"\n",
|
||||||
|
" return tokens, vocab"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "U_1SkQRGQ8f3"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
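Pulling the helper routines above together, the tokenize TODO might be completed as in this sketch; the per-iteration prints are dropped for brevity, and a break when no pairs remain is my addition (it lets num_merges safely exceed the number of possible merges):

```python
import re, collections

def initialize_vocabulary(text):
    # Each word becomes space-separated letter tokens plus </w>
    vocab = collections.defaultdict(int)
    for word in text.strip().split():
        vocab[' '.join(list(word)) + ' </w>'] += 1
    return vocab

def get_tokens_and_frequencies(vocab):
    tokens = collections.defaultdict(int)
    for word, freq in vocab.items():
        for token in word.split():
            tokens[token] += freq
    return tokens

def get_pairs_and_counts(vocab):
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_pair_in_vocabulary(pair, vocab_in):
    # Replace every occurrence of the pair with the merged token
    bigram = re.escape(' '.join(pair))
    p = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {p.sub(''.join(pair), word): freq for word, freq in vocab_in.items()}

def tokenize(text, num_merges):
    vocab = initialize_vocabulary(text)
    for _ in range(num_merges):
        pairs = get_pairs_and_counts(vocab)
        if not pairs:
            break  # nothing left to merge
        most_frequent_pair = max(pairs, key=pairs.get)
        vocab = merge_pair_in_vocabulary(most_frequent_pair, vocab)
    return get_tokens_and_frequencies(vocab), vocab
```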
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"tokens, vocab = tokenize(text, num_merges=22)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "w0EkHTrER_-I"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"print('Tokens: {}'.format(tokens))\n",
|
||||||
|
"print('Number of tokens: {}'.format(len(tokens)))\n",
|
||||||
|
"print('Vocabulary: {}'.format(vocab))\n",
|
||||||
|
"print('Size of vocabulary: {}'.format(len(vocab)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "moqDtTzIb-NG"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"TODO - Consider the input text:\n",
|
||||||
|
"\n",
|
||||||
|
"\"How much wood could a woodchuck chuck if a woodchuck could chuck wood\"\n",
|
||||||
|
"\n",
|
||||||
|
"How many tokens will there be initially and what will they be?\n",
|
||||||
|
"How many tokens will there be if we run the tokenization routine for the maximum number of iterations (merges)?\n",
|
||||||
|
"\n",
|
||||||
|
"When you've made your predictions, run the code and see if you are correct."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "jOW_HJtMdAxd"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
648
Notebooks/Chap12/12_4_Decoding_Strategies.ipynb
Normal file
@@ -0,0 +1,648 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyNPrHfkLWjy3NfDHRhGG3IE",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap12/12_4_Decoding_Strategies.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 12.4: Decoding strategies**\n",
|
||||||
|
"\n",
|
||||||
|
"This practical investigates neural decoding from transformer models. \n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "RnIUiieJWu6e"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
"cell_type": "code",
"source": [
"!pip install transformers"
],
"metadata": {
"id": "7abjZ9pMVj3k"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from transformers import GPT2LMHeadModel, GPT2Tokenizer, set_seed\n",
"import torch\n",
"import torch.nn.functional as F\n",
"import numpy as np"
],
"metadata": {
"id": "sMOyD0zem2Ef"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Load model and tokenizer\n",
"model = GPT2LMHeadModel.from_pretrained('gpt2')\n",
"tokenizer = GPT2Tokenizer.from_pretrained('gpt2')"
],
"metadata": {
"id": "pZgfxbzKWNSR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Decoding from GPT2\n",
"\n",
"This tutorial investigates how to use GPT2 (the forerunner of GPT3) to generate text. There are a number of ways to do this that trade off the realism of the text against the amount of variation.\n",
"\n",
"At every stage, GPT2 takes an input string and returns a probability for each of the possible subsequent tokens. We can choose what to do with these probabilities. We could always *greedily choose* the most likely next token, or we could draw a *sample* randomly according to the probabilities. There are also intermediate strategies such as *top-k sampling* and *nucleus sampling*, which have some controlled randomness.\n",
"\n",
"We'll also investigate *beam search* -- the idea is that rather than greedily taking the next best token at each stage, we maintain a set of hypotheses (beams) as we add each subsequent token and return the most likely overall hypothesis. This is not necessarily the same result we get from greedily choosing the next token."
],
"metadata": {
"id": "TfhAGy0TXEvV"
}
},
{
"cell_type": "markdown",
"source": [
"First, let's investigate the tokens themselves. The code below prints out the vocabulary size and shows 20 random tokens."
],
"metadata": {
"id": "vsmO9ptzau3_"
}
},
{
"cell_type": "code",
"source": [
"np.random.seed(1)\n",
"print(\"Number of tokens in dictionary = %d\"%(tokenizer.vocab_size))\n",
"for i in range(20):\n",
"  index = np.random.randint(tokenizer.vocab_size)\n",
"  print(\"Token: %d \"%(index)+tokenizer.decode(torch.tensor(index), skip_special_tokens=True))\n"
],
"metadata": {
"id": "dmmBNS5GY_yk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Sampling\n",
"\n",
"Each time we run GPT2 it will take in a set of tokens, and return a probability over each of the possible next tokens. The simplest thing we could do is to just draw a sample from this probability distribution each time."
],
"metadata": {
"id": "MUM3kLEjbTso"
}
},
{
"cell_type": "code",
"source": [
"def sample_next_token(input_tokens, model, tokenizer):\n",
"  # Run model to get prediction over next output\n",
"  outputs = model(input_ids = input_tokens['input_ids'], attention_mask = input_tokens['attention_mask'])\n",
"  # Find prediction\n",
"  prob_over_tokens = F.softmax(outputs.logits, dim=-1).detach().numpy()[0,-1]\n",
"  # TODO Draw a random token according to the probabilities\n",
"  # next_token should be an array with a single integer in it (as below)\n",
"  # Use: https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html\n",
"  # Replace this line\n",
"  next_token = [5000]\n",
"\n",
"\n",
"  # Append token to sentence\n",
"  output_tokens = input_tokens\n",
"  output_tokens[\"input_ids\"] = torch.cat((output_tokens['input_ids'],torch.tensor([next_token])),dim=1)\n",
"  output_tokens['attention_mask'] = torch.cat((output_tokens['attention_mask'],torch.tensor([[1]])),dim=1)\n",
"  output_tokens['last_token_prob'] = prob_over_tokens[next_token]\n",
"\n",
"  return output_tokens"
],
"metadata": {
"id": "TIyNgg0FkJKO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Expected output:\n",
"# \"The best thing about Bath is that they don't even change or shrink anymore.\"\n",
"\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"for i in range(10):\n",
"  input_tokens = sample_next_token(input_tokens, model, tokenizer)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))\n",
"\n"
],
"metadata": {
"id": "BHs-IWaz9MNY"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO Modify the code below by changing the number of tokens generated and the initial sentence\n",
"# to get a feel for how well this works. Since I didn't reset the seed, it will give a different\n",
"# answer every time that you run it.\n",
"\n",
"# TODO Experiment with changing this line:\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"# TODO Experiment with changing this line:\n",
"for i in range(10):\n",
"  input_tokens = sample_next_token(input_tokens, model, tokenizer)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))"
],
"metadata": {
"id": "yN98_7WqbvIe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Greedy token selection\n",
"\n",
"You probably (correctly) got the impression that the text from pure sampling of the probability model can be kind of random. How about if we choose the most likely token at each step?\n"
],
"metadata": {
"id": "7eHFLCeZcmmg"
}
},
{
"cell_type": "code",
"source": [
"def get_best_next_token(input_tokens, model, tokenizer):\n",
"  # Run model to get prediction over next output\n",
"  outputs = model(input_ids = input_tokens['input_ids'], attention_mask = input_tokens['attention_mask'])\n",
"  # Find prediction\n",
"  prob_over_tokens = F.softmax(outputs.logits, dim=-1).detach().numpy()[0,-1]\n",
"\n",
"  # TODO -- find the token index with the maximum probability\n",
"  # It should be returned as a list (i.e., put square brackets around it)\n",
"  # Use https://numpy.org/doc/stable/reference/generated/numpy.argmax.html\n",
"  # Replace this line\n",
"  next_token = [5000]\n",
"\n",
"\n",
"  # Append token to sentence\n",
"  output_tokens = input_tokens\n",
"  output_tokens[\"input_ids\"] = torch.cat((output_tokens['input_ids'],torch.tensor([next_token])),dim=1)\n",
"  output_tokens['attention_mask'] = torch.cat((output_tokens['attention_mask'],torch.tensor([[1]])),dim=1)\n",
"  output_tokens['last_token_prob'] = prob_over_tokens[next_token]\n",
"  return output_tokens"
],
"metadata": {
"id": "OhRzynEjxpZF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Expected output:\n",
"# The best thing about Bath is that it's a place where you can go to\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"for i in range(10):\n",
"  input_tokens = get_best_next_token(input_tokens, model, tokenizer)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))"
],
"metadata": {
"id": "gKB1Mgndj-Hm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO Modify the code below by changing the number of tokens generated and the initial sentence\n",
"# to get a feel for how well this works.\n",
"\n",
"# TODO Experiment with changing this line:\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"# TODO Experiment with changing this line:\n",
"for i in range(10):\n",
"  input_tokens = get_best_next_token(input_tokens, model, tokenizer)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))"
],
"metadata": {
"id": "L1YHKaYFfC0M"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Top-K sampling\n",
"\n",
"You probably noticed that the greedy strategy produces quite realistic text, but it's kind of boring. It produces generic answers. Also, if this was a chatbot, then we wouldn't necessarily want it to produce the same answer to a question each time.\n",
"\n",
"Top-K sampling is a compromise strategy that samples randomly from the top K most probable tokens. We could just choose them with a uniform distribution, or (as here) we could sample them according to their original probabilities."
],
"metadata": {
"id": "1ORFXYX_gBDT"
}
},
{
"cell_type": "code",
"source": [
"def get_top_k_token(input_tokens, model, tokenizer, k=20):\n",
"  # Run model to get prediction over next output\n",
"  outputs = model(input_ids = input_tokens['input_ids'], attention_mask = input_tokens['attention_mask'])\n",
"  # Find prediction\n",
"  prob_over_tokens = F.softmax(outputs.logits, dim=-1).detach().numpy()[0,-1]\n",
"\n",
"  # Draw a sample from the top K most likely tokens.\n",
"  # Take a copy of the probabilities and sort from largest to smallest (use np.sort)\n",
"  # TODO -- replace this line\n",
"  sorted_prob_over_tokens = prob_over_tokens\n",
"\n",
"  # Find the probability at the k'th position\n",
"  # TODO -- replace this line\n",
"  kth_prob_value = 0.0\n",
"\n",
"  # Set all probabilities below this value to zero\n",
"  prob_over_tokens[prob_over_tokens<kth_prob_value] = 0\n",
"\n",
"  # Renormalize the probabilities so that they sum to one\n",
"  # TODO -- replace this line\n",
"  prob_over_tokens = prob_over_tokens\n",
"\n",
"\n",
"  # Draw random token\n",
"  next_token = np.random.choice(len(prob_over_tokens), 1, replace=False, p=prob_over_tokens)\n",
"\n",
"  # Append token to sentence\n",
"  output_tokens = input_tokens\n",
"  output_tokens[\"input_ids\"] = torch.cat((output_tokens['input_ids'],torch.tensor([next_token])),dim=1)\n",
"  output_tokens['attention_mask'] = torch.cat((output_tokens['attention_mask'],torch.tensor([[1]])),dim=1)\n",
"  output_tokens['last_token_prob'] = prob_over_tokens[next_token]\n",
"  return output_tokens"
],
"metadata": {
"id": "7RFbn6c-0Z4v"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Expected output:\n",
"# The best thing about Bath is that you get to see all the beautiful faces of\n",
"\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"for i in range(10):\n",
"  input_tokens = get_top_k_token(input_tokens, model, tokenizer, k=10)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))"
],
"metadata": {
"id": "G3w1GVED4HYv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# Experiment with different values of k\n",
"# If you set it to a lower number (say 3) the text will be less random\n",
"# If you set it to a higher number (say 5000) the text will be more random\n",
"\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"for i in range(10):\n",
"  input_tokens = get_top_k_token(input_tokens, model, tokenizer, k=10)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))"
],
"metadata": {
"id": "RySu2bzqpW9E"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Nucleus sampling\n",
"\n",
"Top-K sampling has the disadvantage that sometimes there are only a few plausible next tokens, and sometimes there are a lot. How do we adapt to this situation? One way is to sample from a fixed proportion of the probability mass. That is, we order the tokens in terms of probability and cut off the possibility of sampling when the cumulative sum is greater than a threshold.\n",
"\n",
"This way, we adapt the number of possible tokens that we can choose from."
],
"metadata": {
"id": "fOHak_QJfU-2"
}
},
{
"cell_type": "code",
"source": [
"def get_nucleus_sampling_token(input_tokens, model, tokenizer, thresh=0.25):\n",
"  # Run model to get prediction over next output\n",
"  outputs = model(input_ids = input_tokens['input_ids'], attention_mask = input_tokens['attention_mask'])\n",
"  # Find prediction\n",
"  prob_over_tokens = F.softmax(outputs.logits, dim=-1).detach().numpy()[0,-1]\n",
"\n",
"  # Find the most likely tokens that make up the first (thresh) of the probability\n",
"  # TODO -- sort the probabilities in decreasing order\n",
"  # Replace this line\n",
"  sorted_probs_decreasing = prob_over_tokens\n",
"  # TODO -- compute the cumulative sum of these probabilities\n",
"  # Replace this line\n",
"  cum_sum_probs = sorted_probs_decreasing\n",
"\n",
"\n",
"\n",
"  # Find the index where the cumulative sum is greater than the threshold\n",
"  thresh_index = np.argmax(cum_sum_probs>thresh)\n",
"  print(\"Choosing from %d tokens\"%(thresh_index))\n",
"  # TODO: Find the probability value to threshold\n",
"  # Replace this line:\n",
"  thresh_prob = sorted_probs_decreasing[thresh_index]\n",
"\n",
"\n",
"\n",
"  # Set any probabilities less than this to zero\n",
"  prob_over_tokens[prob_over_tokens<thresh_prob] = 0\n",
"  # Renormalize\n",
"  prob_over_tokens = prob_over_tokens / np.sum(prob_over_tokens)\n",
"  # Draw random token\n",
"  next_token = np.random.choice(len(prob_over_tokens), 1, replace=False, p=prob_over_tokens)\n",
"\n",
"  # Append token to sentence\n",
"  output_tokens = input_tokens\n",
"  output_tokens[\"input_ids\"] = torch.cat((output_tokens['input_ids'],torch.tensor([next_token])),dim=1)\n",
"  output_tokens['attention_mask'] = torch.cat((output_tokens['attention_mask'],torch.tensor([[1]])),dim=1)\n",
"  output_tokens['last_token_prob'] = prob_over_tokens[next_token]\n",
"  return output_tokens"
],
"metadata": {
"id": "PtxS4kNDyUcm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Expected output:\n",
"# The best thing about Bath is that it's not a city that has been around\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"for i in range(10):\n",
"  input_tokens = get_nucleus_sampling_token(input_tokens, model, tokenizer, thresh = 0.2)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))\n"
],
"metadata": {
"id": "K2Vk1Ly40S6c"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO -- experiment with setting the threshold probability to larger or smaller values\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"for i in range(10):\n",
"  input_tokens = get_nucleus_sampling_token(input_tokens, model, tokenizer, thresh = 0.2)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))"
],
"metadata": {
"id": "eQNNHe14wDvC"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Beam search\n",
"\n",
"All of the methods we've seen so far choose the tokens one by one. But this isn't necessarily sensible. Even greedily choosing the best token doesn't necessarily retrieve the sequence with the highest probability. It might be that the most likely token only has very unlikely tokens following it.\n",
"\n",
"Beam search maintains $K$ hypotheses about the best possible continuation. It starts with the top $K$ continuations. Then for each of those, it finds the top $K$ continuations, giving $K^2$ hypotheses. Then it retains just the top $K$ of these so that the number of hypotheses stays the same."
],
"metadata": {
"id": "WMMNeLixwlgM"
}
},
{
"cell_type": "code",
"source": [
"# This routine returns the k'th most likely next token.\n",
"# If k=0 then it returns the most likely token, if k=1 it returns the next most likely and so on\n",
"# We will need this for beam search\n",
"def get_kth_most_likely_token(input_tokens, model, tokenizer, k):\n",
"  # Run model to get prediction over next output\n",
"  outputs = model(input_ids = input_tokens['input_ids'], attention_mask = input_tokens['attention_mask'])\n",
"  # Find prediction\n",
"  prob_over_tokens = F.softmax(outputs.logits, dim=-1).detach().numpy()[0,-1]\n",
"\n",
"  # Find the k'th most likely token\n",
"  # TODO Sort the probabilities from largest to smallest\n",
"  # Replace this line:\n",
"  sorted_prob_over_tokens = prob_over_tokens\n",
"  # TODO Find the k'th sorted probability\n",
"  # Replace this line\n",
"  kth_prob_value = prob_over_tokens[0]\n",
"\n",
"\n",
"\n",
"  # Find position of this token.\n",
"  next_token = np.where(prob_over_tokens == kth_prob_value)[0]\n",
"\n",
"  # Append token to sentence\n",
"  output_tokens = input_tokens\n",
"  output_tokens[\"input_ids\"] = torch.cat((output_tokens['input_ids'],torch.tensor([next_token])),dim=1)\n",
"  output_tokens['attention_mask'] = torch.cat((output_tokens['attention_mask'],torch.tensor([[1]])),dim=1)\n",
"  output_tokens['last_token_prob'] = prob_over_tokens[next_token]\n",
"  output_tokens['log_prob'] = output_tokens['log_prob'] + np.log(prob_over_tokens[next_token])\n",
"  return output_tokens"
],
"metadata": {
"id": "sAI2bClXCe2F"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# We can test this code and see that if we choose the 2nd most likely (k=1) token each time\n",
"# then we get much better generation results than if we choose the 2001st most likely token\n",
"\n",
"# Expected output:\n",
"# The best thing about Bath is the way you get the most bang outta the\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"input_tokens['log_prob'] = 0.0\n",
"for i in range(10):\n",
"  input_tokens = get_kth_most_likely_token(input_tokens, model, tokenizer, k=1)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))\n",
"\n",
"# Expected output:\n",
"# The best thing about Bath is mixed profits partnerships» buy generic+ Honda throttlecont\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"input_tokens['log_prob'] = 0.0\n",
"for i in range(10):\n",
"  input_tokens = get_kth_most_likely_token(input_tokens, model, tokenizer, k=2000)\n",
"  print(tokenizer.decode(input_tokens[\"input_ids\"][0], skip_special_tokens=True))\n",
"\n",
"# TODO -- play around with different values of k"
],
"metadata": {
"id": "6kSc0WrTELMd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Print out each beam plus the log probability\n",
"def print_beams(beams):\n",
"  for index,beam in enumerate(beams):\n",
"    print(\"Beam %d, Prob %3.3f: \"%(index,beam['log_prob'])+tokenizer.decode(beam[\"input_ids\"][0], skip_special_tokens=True))\n",
"  print('---')\n",
"\n",
"\n",
"# TODO: Read this code carefully!\n",
"def do_beam_search(input_tokens_in, model, tokenizer, n_beam=5, beam_length=10):\n",
"  # Initialize the cumulative log probability\n",
"  input_tokens_in['log_prob'] = 0.0\n",
"\n",
"  # Initialize with n_beam most likely continuations\n",
"  beams = [None] * n_beam\n",
"  for c_k in range(n_beam):\n",
"    beams[c_k] = dict(input_tokens_in)\n",
"    beams[c_k] = get_kth_most_likely_token(beams[c_k], model, tokenizer, c_k)\n",
"\n",
"  print_beams(beams)\n",
"\n",
"  # For each token in the sequence we will add\n",
"  for c_pos in range(beam_length-1):\n",
"    # Now for each beam, we continue it in the most likely ways, making n_beam*n_beam hypotheses\n",
"    beams_all = [None] * (n_beam*n_beam)\n",
"    log_probs_all = np.zeros(n_beam*n_beam)\n",
"    # For each current hypothesis\n",
"    for c_beam in range(n_beam):\n",
"      # For each continuation\n",
"      for c_k in range(n_beam):\n",
"        # Store the continuation and the probability\n",
"        beams_all[c_beam * n_beam + c_k] = dict(get_kth_most_likely_token(beams[c_beam], model, tokenizer, c_k))\n",
"        log_probs_all[c_beam * n_beam + c_k] = beams_all[c_beam * n_beam + c_k]['log_prob']\n",
"\n",
"    # Keep the best n_beam sequences with the highest probabilities\n",
"    sorted_index = np.argsort(np.array(log_probs_all)*-1)\n",
"    for c_k in range(n_beam):\n",
"      beams[c_k] = dict(beams_all[sorted_index[c_k]])\n",
"\n",
"    # Print the beams\n",
"    print_beams(beams)\n",
"\n",
"  return beams[0]"
],
"metadata": {
"id": "Y4hFfwPFFxka"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Expected output:\n",
"# The best thing about Bath is that it's a place where you don't have to\n",
"\n",
"set_seed(0)\n",
"input_txt = \"The best thing about Bath is\"\n",
"input_tokens = tokenizer(input_txt, return_tensors='pt')\n",
"\n",
"# Now let's call the beam search\n",
"# It takes a while as it has to run the model multiple times to add a token\n",
"n_beams = 5\n",
"best_beam = do_beam_search(input_tokens, model, tokenizer, n_beam=n_beams)\n",
"print(\"Beam search result:\")\n",
"print(tokenizer.decode(best_beam[\"input_ids\"][0], skip_special_tokens=True))\n",
"\n",
"# You should see that the best answer is not the same as the greedy solution we found above\n"
],
"metadata": {
"id": "0YWKwZmz4NXb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can read about more decoding strategies in this blog (which uses a recurrent neural network, not a transformer, but the principles are the same).\n",
"\n",
"https://www.borealisai.com/research-blogs/tutorial-6-neural-natural-language-generation-decoding-algorithms/\n",
"\n",
"You can also look at other possible language models via Hugging Face:\n",
"\n",
"https://huggingface.co/docs/transformers/v4.25.1/en/model_summary#decoders-or-autoregressive-models\n"
],
"metadata": {
"id": "-SXpjZPYsMhv"
}
}
]
}
159
Notebooks/Chap13/13_1_Graph_Representation.ipynb
Normal file
@@ -0,0 +1,159 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMuzP1/oqTRTw4Xs/R4J/M3",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap13/13_1_Graph_Representation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 13.1: Graph representation**\n",
"\n",
"This notebook investigates representing graphs with matrices as illustrated in figure 13.4 from the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n"
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import networkx as nx"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Routine to draw graph structure\n",
"def draw_graph_structure(adjacency_matrix):\n",
"\n",
"  G = nx.Graph()\n",
"  n_node = adjacency_matrix.shape[0]\n",
"  for i in range(n_node):\n",
"    for j in range(i):\n",
"      if adjacency_matrix[i,j]:\n",
"        G.add_edge(i,j)\n",
"\n",
"  nx.draw(G, nx.spring_layout(G, seed = 0), with_labels=True)\n",
"  plt.show()"
],
"metadata": {
"id": "O1QMxC7X4vh9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a graph\n",
"# Note that the nodes are labelled from 0 rather than 1 as in the book\n",
"A = np.array([[0,1,0,1,0,0,0,0],\n",
"              [1,0,1,1,1,0,0,0],\n",
"              [0,1,0,0,1,0,0,0],\n",
"              [1,1,0,0,1,0,0,0],\n",
"              [0,1,1,1,0,1,0,1],\n",
"              [0,0,0,0,1,0,1,1],\n",
"              [0,0,0,0,0,1,0,0],\n",
"              [0,0,0,0,1,1,0,0]]);\n",
"print(A)\n",
"draw_graph_structure(A)"
],
"metadata": {
"id": "TIrihEw-7DRV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO -- find algorithmically how many walks of length three are between nodes 3 and 7\n",
"# Replace this line\n",
"print(\"Number of walks between nodes three and seven = ???\")"
],
"metadata": {
"id": "PzvfUpkV4zCj"
},
"execution_count": null,
"outputs": []
},
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- find algorithmically what the minimum path distance between nodes 0 and 6 is\n",
|
||||||
|
"# (i.e. what is the first walk length with non-zero count between 0 and 6)\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"print(\"Minimum distance = ???\")\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"# What is the worst case complexity of your method?"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "MhhJr6CgCRb5"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's represent node 0 as a vector\n",
|
||||||
|
"x = np.array([[1],[0],[0],[0],[0],[0],[0],[0]]);\n",
|
||||||
|
"print(x)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "lCQjXlatABGZ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO: Find algorithmically how many paths of length 3 are there between node 0 and every other node\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"print(np.zeros_like(x))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nizLdZgLDzL4"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
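The TODO cells above all reduce to powers of the adjacency matrix: entry (i, j) of A^k counts the walks of length k between nodes i and j. A minimal sketch of one possible solution for the graph defined in this notebook (not the notebook's official answer):

```python
import numpy as np

# Adjacency matrix from the notebook (nodes labelled from 0)
A = np.array([[0,1,0,1,0,0,0,0],
              [1,0,1,1,1,0,0,0],
              [0,1,0,0,1,0,0,0],
              [1,1,0,0,1,0,0,0],
              [0,1,1,1,0,1,0,1],
              [0,0,0,0,1,0,1,1],
              [0,0,0,0,0,1,0,0],
              [0,0,0,0,1,1,0,0]])

# Entry (i,j) of A^k counts walks of length k between nodes i and j
A3 = np.linalg.matrix_power(A, 3)
print("Number of walks between nodes three and seven =", A3[3, 7])

# Minimum path distance between 0 and 6: first power with a non-zero count.
# Worst case this needs n matrix multiplications, i.e. O(n^4) overall.
k = 1
Ak = A.copy()
while Ak[0, 6] == 0:
    Ak = Ak @ A
    k += 1
print("Minimum distance =", k)

# Walks of length 3 from node 0 to every other node: multiply A^3 by the
# indicator vector for node 0
x = np.array([[1], [0], [0], [0], [0], [0], [0], [0]])
print(A3 @ x)
```

For this particular graph the walk count `A3[3,7]` is 2 and the minimum distance is 4; the loop terminates because nodes 0 and 6 are connected.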
244
Notebooks/Chap13/13_2_Graph_Classification.ipynb
Normal file
@@ -0,0 +1,244 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOMSGUFWT+YN0fwYHpMmHJM",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap13/13_2_Graph_Classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 13.2: Graph classification**\n",
"\n",
"This notebook investigates representing graphs with matrices as illustrated in figure 13.4 from the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import networkx as nx"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's build a model that maps a chemical structure to a binary decision. This model might be used to predict whether a chemical is liquid at room temperature or not. We'll start by drawing the chemical structure."
],
"metadata": {
"id": "UNleESc7k5uB"
}
},
{
"cell_type": "code",
"source": [
"# Define a graph that represents the chemical structure of ethanol and draw it\n",
"# Each node is labelled with the node number and the element (carbon, hydrogen, oxygen)\n",
"G = nx.Graph()\n",
"G.add_edge('0:H','2:C')\n",
"G.add_edge('1:H','2:C')\n",
"G.add_edge('3:H','2:C')\n",
"G.add_edge('2:C','5:C')\n",
"G.add_edge('4:H','5:C')\n",
"G.add_edge('6:H','5:C')\n",
"G.add_edge('7:O','5:C')\n",
"G.add_edge('8:H','7:O')\n",
"nx.draw(G, nx.spring_layout(G, seed = 0), with_labels=True, node_size=600)\n",
"plt.show()"
],
"metadata": {
"id": "TIrihEw-7DRV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define adjacency matrix\n",
"# TODO -- Define the adjacency matrix for this chemical\n",
"# Replace this line\n",
"A = np.zeros((9,9)) ;\n",
"\n",
"\n",
"print(A)\n",
"\n",
"# TODO -- Define node matrix\n",
"# There will be 9 nodes and 118 possible chemical elements\n",
"# so we'll define a 118x9 matrix. Each column represents one\n",
"# node and is a one-hot vector (i.e. all zeros, except a single one at the\n",
"# chemical number of the element).\n",
"# Chemical numbers: Hydrogen-->1, Carbon-->6, Oxygen-->8\n",
"# Since the indices start at 0, we'll set element 0 to one for hydrogen, element 5\n",
"# to one for carbon, and element 7 to one for oxygen\n",
"# Replace this line:\n",
"X = np.zeros((118,9))\n",
"\n",
"\n",
"# Print the top 15 rows of the data matrix\n",
"print(X[0:15,:])"
],
"metadata": {
"id": "gKBD5JsPfrkA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's define a network with four layers that maps this graph to a binary value, using the formulation in equation 13.11."
],
"metadata": {
"id": "40FLjNIcpHa9"
}
},
{
"cell_type": "code",
"source": [
"# We'll need these helper functions\n",
"\n",
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
"  activation = preactivation.clip(0.0)\n",
"  return activation\n",
"\n",
"# Define the logistic sigmoid function\n",
"def sigmoid(x):\n",
"  return 1.0/(1.0+np.exp(-x))"
],
"metadata": {
"id": "52IFREpepHE4"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Our network will have K=3 hidden layers, and will use a dimension of D=200.\n",
"K = 3; D = 200\n",
"# Set seed so we always get the same random numbers\n",
"np.random.seed(1)\n",
"# Let's initialize the parameter matrices randomly with He initialization\n",
"Omega0 = np.random.normal(size=(D, 118)) * 2.0 / D\n",
"beta0 = np.random.normal(size=(D,1)) * 2.0 / D\n",
"Omega1 = np.random.normal(size=(D, D)) * 2.0 / D\n",
"beta1 = np.random.normal(size=(D,1)) * 2.0 / D\n",
"Omega2 = np.random.normal(size=(D, D)) * 2.0 / D\n",
"beta2 = np.random.normal(size=(D,1)) * 2.0 / D\n",
"omega3 = np.random.normal(size=(1, D))\n",
"beta3 = np.random.normal(size=(1,1))"
],
"metadata": {
"id": "ag0YdEgnpApK"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def graph_neural_network(A,X, Omega0, beta0, Omega1, beta1, Omega2, beta2, omega3, beta3):\n",
"  # Define this network according to equation 13.11 from the book\n",
"  # Replace this line\n",
"  f = np.ones((1,1))\n",
"\n",
"  return f;"
],
"metadata": {
"id": "RQuTMc2WrsU3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's test this network\n",
"f = graph_neural_network(A,X, Omega0, beta0, Omega1, beta1, Omega2, beta2, omega3, beta3)\n",
"print(\"Your value is %3f: \"%(f[0,0]), \"True value of f: 0.498010\")"
],
"metadata": {
"id": "X7gYgOu6uIAt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's check that permuting the indices of the graph doesn't change\n",
"# the output of the network\n",
"# Define a permutation matrix\n",
"P = np.array([[0,1,0,0,0,0,0,0,0],\n",
"              [0,0,0,0,1,0,0,0,0],\n",
"              [0,0,0,0,0,1,0,0,0],\n",
"              [0,0,0,0,0,0,0,0,1],\n",
"              [1,0,0,0,0,0,0,0,0],\n",
"              [0,0,1,0,0,0,0,0,0],\n",
"              [0,0,0,1,0,0,0,0,0],\n",
"              [0,0,0,0,0,0,0,1,0],\n",
"              [0,0,0,0,0,0,1,0,0]]);\n",
"\n",
"# TODO -- Use this matrix to permute the adjacency matrix A and node matrix X\n",
"# Replace these lines\n",
"A_permuted = np.copy(A)\n",
"X_permuted = np.copy(X)\n",
"\n",
"f = graph_neural_network(A_permuted,X_permuted, Omega0, beta0, Omega1, beta1, Omega2, beta2, omega3, beta3)\n",
"print(\"Your value is %3f: \"%(f[0,0]), \"True value of f: 0.498010\")"
],
"metadata": {
"id": "F0zc3U_UuR5K"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"TODO -- encode the adjacency matrix and node matrix for propanol and run the network again. Show that the network still runs even though the size of the input graph is different.\n",
"\n",
"Propanol structure can be found [here](https://upload.wikimedia.org/wikipedia/commons/b/b8/Propanol_flat_structure.png)."
],
"metadata": {
"id": "l44vHi50zGqY"
}
}
]
}
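The `graph_neural_network` TODO above can be sketched as follows. This assumes equation 13.11 takes the common form "aggregate each node over itself and its neighbours with (A + I), apply a linear map and ReLU, then mean-pool over nodes and pass through a sigmoid"; the exact form (and whether it reproduces the quoted true value 0.498010) should be checked against the book.

```python
import numpy as np

def ReLU(preactivation):
    return preactivation.clip(0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hedged sketch: columns of X are nodes; (A + I) mixes each node with its
# neighbours, so every layer is permutation-equivariant, and the final
# mean-pooling makes the scalar output permutation-invariant.
def graph_neural_network(A, X, Omega0, beta0, Omega1, beta1,
                         Omega2, beta2, omega3, beta3):
    n_nodes = A.shape[0]
    AI = A + np.eye(n_nodes)
    H1 = ReLU(beta0 + Omega0 @ X @ AI)   # beta broadcasts across node columns
    H2 = ReLU(beta1 + Omega1 @ H1 @ AI)
    H3 = ReLU(beta2 + Omega2 @ H2 @ AI)
    # Mean-pool over nodes, then map to a single probability
    f = sigmoid(beta3 + omega3 @ H3 @ np.ones((n_nodes, 1)) / n_nodes)
    return f
```

Under this formulation, permuting node indices with a permutation matrix P corresponds to `P @ A @ P.T` and `X @ P.T`, and leaves f unchanged because the node sum is order-independent.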
314
Notebooks/Chap13/13_3_Neighborhood_Sampling.ipynb
Normal file
@@ -0,0 +1,314 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNXqwmC4yEc1mGv9/74b0jY",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap13/13_3_Neighborhood_Sampling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 13.3: Neighborhood sampling**\n",
"\n",
"This notebook investigates neighborhood sampling of graphs as in figure 13.10 from the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import networkx as nx"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's construct the graph from figure 13.10, which has 23 nodes."
],
"metadata": {
"id": "UNleESc7k5uB"
}
},
{
"cell_type": "code",
"source": [
"# Define adjacency matrix\n",
"A = np.array([[0,1,1,1,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [1,0,1,0,0, 0,0,0,1,1, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [1,1,0,1,0, 0,0,0,0,1, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [1,0,1,0,1, 0,1,1,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,0,0,1,0, 1,0,1,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,0,0,0,1, 0,0,1,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,0,0,1,0, 0,0,1,0,1, 1,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,0,0,1,1, 1,1,0,0,0, 1,0,0,1,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,1,0,0,0, 0,0,0,0,1, 0,0,0,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,1,1,0,0, 0,1,0,1,0, 0,1,1,0,0, 0,1,0,0,0, 0,0,0],\n",
"              [0,0,0,0,0, 0,1,1,0,0, 0,0,1,0,0, 0,0,0,0,0, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,1, 0,0,0,0,1, 1,1,0,0,0, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,1, 1,0,0,1,0, 0,1,1,0,0, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,1,0,0, 0,0,1,0,0, 0,0,1,1,0, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,1,0,0,0, 1,0,0,0,1, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,1,0,0,1, 0,1,0,0,1, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,1, 0,1,1,0,0, 1,0,1,0,1, 0,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,0,1,1,0, 0,1,0,1,0, 1,1,1],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,0,0,1,0, 0,0,1,0,0, 0,0,1],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,0,0,0,1, 1,1,0,0,0, 1,0,0],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,1,0,1, 0,1,0],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,1,0,0, 1,0,1],\n",
"              [0,0,0,0,0, 0,0,0,0,0, 0,0,0,0,0, 0,0,1,1,0, 0,1,0]]);\n",
"print(A)"
],
"metadata": {
"id": "fHgH5hdG_W1h"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Routine to draw graph structure, highlighting original node (brown in fig 13.10)\n",
"# and neighborhood nodes (orange in figure 13.10)\n",
"def draw_graph_structure(adjacency_matrix, original_node, neighborhood_nodes=None):\n",
"\n",
"  G = nx.Graph()\n",
"  n_node = adjacency_matrix.shape[0]\n",
"  for i in range(n_node):\n",
"    for j in range(i):\n",
"      if adjacency_matrix[i,j]:\n",
"        G.add_edge(i,j)\n",
"\n",
"  color_map = []\n",
"\n",
"  for node in G:\n",
"    if original_node[node]:\n",
"      color_map.append('brown')\n",
"    else:\n",
"      if neighborhood_nodes[node]:\n",
"        color_map.append('orange')\n",
"      else:\n",
"        color_map.append('white')\n",
"\n",
"  nx.draw(G, nx.spring_layout(G, seed = 7), with_labels=True,node_color=color_map)\n",
"  plt.show()"
],
"metadata": {
"id": "TIrihEw-7DRV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"n_nodes = A.shape[0]\n",
"\n",
"# Define a single output layer node\n",
"output_layer_nodes=np.zeros((n_nodes,1)); output_layer_nodes[16]=1\n",
"# Define the neighboring nodes to draw (none)\n",
"neighbor_nodes = np.zeros((n_nodes,1))\n",
"print(\"Output layer:\")\n",
"draw_graph_structure(A, output_layer_nodes, neighbor_nodes)"
],
"metadata": {
"id": "gKBD5JsPfrkA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's imagine that we want to form a batch for a node labelling task that consists of just node 16 in the output layer (highlighted). The network consists of the input, hidden layer 1, hidden layer 2, and the output layer."
],
"metadata": {
"id": "JaH3g_-O-0no"
}
},
{
"cell_type": "code",
"source": [
"# TODO Find the nodes in hidden layer 2 that connect to node 16 in the output layer\n",
"# using the adjacency matrix\n",
"# Replace this line:\n",
"hidden_layer2_nodes = np.zeros((n_nodes,1));\n",
"\n",
"print(\"Hidden layer 2:\")\n",
"draw_graph_structure(A, output_layer_nodes, hidden_layer2_nodes)"
],
"metadata": {
"id": "9oSiuP3B3HNS"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO - Find the nodes in hidden layer 1 that connect to node 16 in the output layer\n",
"# using the adjacency matrix\n",
"# Replace this line:\n",
"hidden_layer1_nodes = np.zeros((n_nodes,1));\n",
"\n",
"print(\"Hidden layer 1:\")\n",
"draw_graph_structure(A, output_layer_nodes, hidden_layer1_nodes)"
],
"metadata": {
"id": "zZFxw3m1_wWr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO Find the nodes in the input layer that connect to node 16 in the output layer\n",
"# using the adjacency matrix\n",
"# Replace this line:\n",
"input_layer_nodes = np.zeros((n_nodes,1));\n",
"\n",
"print(\"Input layer:\")\n",
"draw_graph_structure(A, output_layer_nodes, input_layer_nodes)"
],
"metadata": {
"id": "EL3N8BXyCu0F"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"This is bad news. This is a fairly sparsely connected graph (i.e. the adjacency matrix is mostly zeros) and there are only two hidden layers. Nonetheless, we have to involve almost all the nodes in the graph to compute the loss at this output.\n",
"\n",
"To resolve this problem, we'll use neighborhood sampling. We'll start again with a single node in the output layer."
],
"metadata": {
"id": "CE0WqytvC7zr"
}
},
{
"cell_type": "code",
"source": [
"print(\"Output layer:\")\n",
"draw_graph_structure(A, output_layer_nodes, neighbor_nodes)"
],
"metadata": {
"id": "59WNys3KC5y6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define number of neighbors to sample\n",
"n_sample = 3"
],
"metadata": {
"id": "uCoJwpcTNFdI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO Find the nodes in hidden layer 2 that connect to node 16 in the output layer\n",
"# using the adjacency matrix. Then sample n_sample of these nodes randomly without\n",
"# replacement.\n",
"\n",
"# Replace this line:\n",
"hidden_layer2_nodes = np.zeros((n_nodes,1));\n",
"\n",
"draw_graph_structure(A, output_layer_nodes, hidden_layer2_nodes)"
],
"metadata": {
"id": "_WEop6lYGNhJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO Find the nodes in hidden layer 1 that connect to the nodes in hidden layer 2\n",
"# using the adjacency matrix. Then sample n_sample of these nodes randomly without\n",
"# replacement. Make sure not to sample nodes that were already included in hidden layer 2 or the output layer.\n",
"# The nodes at hidden layer 1 are the union of these nodes and the nodes in hidden layer 2\n",
"\n",
"# Replace this line:\n",
"hidden_layer1_nodes = np.zeros((n_nodes,1));\n",
"\n",
"draw_graph_structure(A, output_layer_nodes, hidden_layer1_nodes)\n"
],
"metadata": {
"id": "k90qW_LDLpNk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO Find the nodes in the input layer that connect to the nodes in hidden layer 1\n",
"# using the adjacency matrix. Then sample n_sample of these nodes randomly without\n",
"# replacement. Make sure not to sample nodes that were already included in hidden layers 1, 2, or the output layer.\n",
"# The nodes at the input layer are the union of these nodes and the nodes in hidden layers 1 and 2\n",
"\n",
"# Replace this line:\n",
"input_layer_nodes = np.zeros((n_nodes,1));\n",
"\n",
"draw_graph_structure(A, output_layer_nodes, input_layer_nodes)"
],
"metadata": {
"id": "NDEYUty_O3Zr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"If you did this correctly, there should be 9 orange nodes in the figure. The \"receptive field\" of node 16 in the output layer increases much more slowly as we move back through the layers of the network."
],
"metadata": {
"id": "vu4eJURmVkc5"
}
}
]
}
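The sampling TODOs above all follow one pattern: multiply the adjacency matrix by the current layer's indicator vector to get candidate neighbours, exclude nodes already in the batch, and keep at most n_sample of them. A hypothetical helper illustrating this (`sample_layer` is not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch of one neighborhood-sampling step: find the neighbours of the
# current layer's nodes via the adjacency matrix, then keep only n_sample of
# them, never re-sampling nodes already included in the batch.
def sample_layer(A, layer_mask, already_sampled, n_sample, rng):
    # Candidate nodes: adjacent to the current layer and not yet in the batch
    candidates = np.where((A @ layer_mask > 0)[:, 0]
                          & (already_sampled[:, 0] == 0))[0]
    chosen = rng.choice(candidates,
                        size=min(n_sample, len(candidates)),
                        replace=False)
    new_mask = np.zeros_like(layer_mask)
    new_mask[chosen] = 1
    return new_mask

# Tiny example on a path graph 0-1-2-3, starting from node 1
A = np.array([[0,1,0,0], [1,0,1,0], [0,1,0,1], [0,0,1,0]], dtype=float)
layer = np.zeros((4, 1)); layer[1] = 1
sampled = layer.copy()
new_nodes = sample_layer(A, layer, sampled, n_sample=1, rng=rng)
print(new_nodes.T)
```

Applied layer by layer (output → hidden 2 → hidden 1 → input), the running union of masks gives the slowly growing receptive field the notebook describes.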
213
Notebooks/Chap13/13_4_Graph_Attention_Networks.ipynb
Normal file
@@ -0,0 +1,213 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOdSkjfQnSZXnffGsZVM7r5",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap13/13_4_Graph_Attention_Networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 13.4: Graph attention networks**\n",
"\n",
"This notebook builds a graph attention mechanism from scratch, as discussed in section 13.8.6 of the book and illustrated in figure 13.12c.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n"
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The self-attention mechanism maps $N$ inputs $\\mathbf{x}_{n}\\in\\mathbb{R}^{D}$ and returns $N$ outputs $\\mathbf{x}'_{n}\\in \\mathbb{R}^{D}$.\n",
"\n"
],
"metadata": {
"id": "9OJkkoNqCVK2"
}
},
{
"cell_type": "code",
"source": [
"# Set seed so we get the same random numbers\n",
"np.random.seed(1)\n",
"# Number of nodes in the graph\n",
"N = 8\n",
"# Number of dimensions of each input\n",
"D = 4\n",
"\n",
"# Define a graph\n",
"A = np.array([[0,1,0,1,0,0,0,0],\n",
"              [1,0,1,1,1,0,0,0],\n",
"              [0,1,0,0,1,0,0,0],\n",
"              [1,1,0,0,1,0,0,0],\n",
"              [0,1,1,1,0,1,0,1],\n",
"              [0,0,0,0,1,0,1,1],\n",
"              [0,0,0,0,0,1,0,0],\n",
"              [0,0,0,0,1,1,0,0]]);\n",
"print(A)\n",
"\n",
"# Let's also define some random data\n",
"X = np.random.normal(size=(D,N))"
],
"metadata": {
"id": "oAygJwLiCSri"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We'll also need the weights and biases for the keys, queries, and values (equations 12.2 and 12.4)."
],
"metadata": {
"id": "W2iHFbtKMaDp"
}
},
{
"cell_type": "code",
"source": [
"# Choose random values for the parameters\n",
"omega = np.random.normal(size=(D,D))\n",
"beta = np.random.normal(size=(D,1))\n",
"phi = np.random.normal(size=(1,2*D))"
],
"metadata": {
"id": "79TSK7oLMobe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We'll need a softmax operation that operates on the columns of the matrix, and a ReLU function as well."
],
"metadata": {
"id": "iYPf6c4MhCgq"
}
},
{
"cell_type": "code",
"source": [
"# Define softmax operation that works independently on each column\n",
"def softmax_cols(data_in):\n",
"  # Exponentiate all of the values\n",
"  exp_values = np.exp(data_in) ;\n",
"  # Sum over columns\n",
"  denom = np.sum(exp_values, axis = 0);\n",
"  # Replicate denominator to N rows\n",
"  denom = np.matmul(np.ones((data_in.shape[0],1)), denom[np.newaxis,:])\n",
"  # Compute softmax\n",
"  softmax = exp_values / denom\n",
"  # return the answer\n",
"  return softmax\n",
"\n",
"\n",
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
"  activation = preactivation.clip(0.0)\n",
"  return activation\n"
],
"metadata": {
"id": "obaQBdUAMXXv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's compute self attention in matrix form\n",
"def graph_attention(X,omega, beta, phi, A):\n",
"\n",
"  # TODO -- Write this function (see figure 13.12c)\n",
"  # 1. Compute X_prime\n",
"  # 2. Compute S\n",
"  # 3. To apply the mask, set S to a very large negative number (e.g. -1e20) everywhere where A+I is zero\n",
"  # 4. Run the softmax function to compute the attention values\n",
"  # 5. Postmultiply X' by the attention values\n",
"  # 6. Apply the ReLU function\n",
"  # Replace this line:\n",
"  output = np.ones_like(X) ;\n",
"\n",
"  return output;"
],
"metadata": {
"id": "gb2WvQ3SiH8r"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Test out the graph attention mechanism\n",
"np.set_printoptions(precision=3)\n",
"output = graph_attention(X, omega, beta, phi, A);\n",
"print(\"Correct answer is:\")\n",
"print(\"[[1.796 1.346 0.569 1.703 1.298 1.224 1.24 1.234]\")\n",
"print(\" [0.768 0.672 0. 0.529 3.841 4.749 5.376 4.761]\")\n",
"print(\" [0.305 0.129 0. 0.341 0.785 1.014 1.113 1.024]\")\n",
"print(\" [0. 0. 0. 0. 0.35 0.864 1.098 0.871]]\")\n",
"\n",
"\n",
"print(\"Your answer is:\")\n",
"print(output)"
],
"metadata": {
"id": "d4p6HyHXmDh5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"TODO -- Try to construct a dot-product self-attention mechanism as in practical 12.1 that respects the geometry of the graph and has zero attention between non-neighboring nodes by combining figures 13.12a and 13.12b.\n"
],
"metadata": {
"id": "QDEkIrcgrql-"
}
}
]
}
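The six comment steps in the `graph_attention` TODO can be sketched as below. This is a hedged reading of figure 13.12c: the score for each node pair comes from the shared vector phi applied to the two transformed nodes concatenated together, and the concatenation order is an assumption here, so the notebook's printed "correct answer" values are not guaranteed to be reproduced by this sketch.

```python
import numpy as np

# Numerically stable column-wise softmax (subtracting the column max does
# not change the result)
def softmax_cols(data_in):
    exp_values = np.exp(data_in - data_in.max(axis=0, keepdims=True))
    return exp_values / exp_values.sum(axis=0, keepdims=True)

def ReLU(preactivation):
    return preactivation.clip(0.0)

def graph_attention(X, omega, beta, phi, A):
    D, N = X.shape
    # 1. Transform every node
    X_prime = beta + omega @ X
    # 2. Score each pair (m, n) from the concatenated transformed nodes
    S = np.zeros((N, N))
    for m in range(N):
        for n in range(N):
            concat = np.concatenate([X_prime[:, m], X_prime[:, n]])[:, None]
            S[m, n] = (phi @ concat)[0, 0]
    # 3. Mask out non-neighbours; A + I keeps each node's self-attention
    S = np.where((A + np.eye(N)) > 0, S, -1e20)
    # 4. Attention values, normalized over each column
    attention = softmax_cols(S)
    # 5. and 6. Postmultiply X' by the attention values, then apply ReLU
    return ReLU(X_prime @ attention)
```

Because masked entries become exp(-1e20) = 0 inside the softmax, non-neighbouring nodes contribute nothing to each output column.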
419
Notebooks/Chap15/15_1_GAN_Toy_Example.ipynb
Normal file
@@ -0,0 +1,419 @@
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyM0StKV3FIZ3MZqfflqC0Rv",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap15/15_1_GAN_Toy_Example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 15.1: GAN Toy example**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the GAN toy example as illustrated in figure 15.1 in the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Get a batch of real data. Our goal is to make data that looks like this.\n",
|
||||||
|
"def get_real_data_batch(n_sample):\n",
|
||||||
|
" np.random.seed(0)\n",
|
||||||
|
" x_true = np.random.normal(size=(1,n_sample)) + 7.5\n",
|
||||||
|
" return x_true"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "y_OkVWmam4Qx"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Define our generator. This takes a standard normally-distributed latent variable $z$ and adds a scalar $\\theta$ to this, where $\\theta$ is the single parameter of this generative model according to:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"x_i = z_i + \\theta.\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"Obviously this model can generate the family of Gaussian distributions with unit variance, but different means."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "RFpL0uCXoTpV"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# This is our generator -- takes the single parameter theta\n",
|
||||||
|
"# of the generative model and generates n samples\n",
|
||||||
|
"def generator(z, theta):\n",
|
||||||
|
" x_gen = z + theta\n",
|
||||||
|
" return x_gen"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OtLQvf3Enfyw"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now we define our discriminator. This is a simple logistic regression model (a 1D linear model passed through a sigmoid) that returns the probability that the data is real."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Xrzd8aehYAYR"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define our discriminative model\n",
|
||||||
|
"\n",
|
||||||
|
"# Logistic sigmoid, maps from [-infty,infty] to [0,1]\n",
|
||||||
|
"def sig(data_in):\n",
|
||||||
|
" return 1.0 / (1.0+np.exp(-data_in))\n",
|
||||||
|
"\n",
|
||||||
|
"# Discriminator computes y\n",
|
||||||
|
"def discriminator(x, phi0, phi1):\n",
|
||||||
|
" return sig(phi0 + phi1 * x)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "vHBgAFZMsnaC"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Draws a figure like Figure 15.1a\n",
|
||||||
|
"def draw_data_model(x_real, x_syn, phi0=None, phi1=None):\n",
|
||||||
|
" fig, ax = plt.subplots();\n",
|
||||||
|
"\n",
|
||||||
|
" for x in x_syn:\n",
|
||||||
|
" ax.plot([x,x],[0,0.33],color='#f47a60')\n",
|
||||||
|
" for x in x_real:\n",
|
||||||
|
" ax.plot([x,x],[0,0.33],color='#7fe7dc')\n",
|
||||||
|
"\n",
|
||||||
|
" if phi0 is not None:\n",
|
||||||
|
" x_model = np.arange(0,10,0.01)\n",
|
||||||
|
" y_model = discriminator(x_model, phi0, phi1)\n",
|
||||||
|
" ax.plot(x_model, y_model,color='#dddddd')\n",
|
||||||
|
" ax.set_xlim([0,10])\n",
|
||||||
|
" ax.set_ylim([0,1])\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "V1FiDBhepcQJ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Get data batch\n",
|
||||||
|
"x_real = get_real_data_batch(10)\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize generator and synthesize a batch of examples\n",
|
||||||
|
"theta = 3.0\n",
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"z = np.random.normal(size=(1,10))\n",
|
||||||
|
"x_syn = generator(z, theta)\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize discriminator model\n",
|
||||||
|
"phi0 = -2\n",
|
||||||
|
"phi1 = 1\n",
|
||||||
|
"\n",
|
||||||
|
"draw_data_model(x_real, x_syn, phi0, phi1)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "U8pFb497x36n"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can see that the synthesized (orange) samples don't look much like the real (cyan) ones, and the initial model to discriminate them (gray line represents probability of being real) is pretty bad as well.\n",
|
||||||
|
"\n",
|
||||||
|
"Let's deal with the discriminator first by defining its loss."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "SNDV1G5PYhcQ"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Discriminator loss\n",
|
||||||
|
"def compute_discriminator_loss(x_real, x_syn, phi0, phi1):\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO -- compute the loss for the discriminator\n",
|
||||||
|
" # Run the real data and the synthetic data through the discriminator\n",
|
||||||
|
" # Then use the standard binary cross entropy loss with the y=1 for the real samples\n",
|
||||||
|
" # and y=0 for the synthesized ones.\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" loss = 0.0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return loss"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Bc3VwCabYcfg"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Test the loss\n",
|
||||||
|
"loss = compute_discriminator_loss(x_real, x_syn, phi0, phi1)\n",
|
||||||
|
"print(\"True Loss = 13.814757170851447, Your loss=\", loss )"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "MiqM3GXSbn0z"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Gradient of loss (cheating, using finite differences)\n",
|
||||||
|
"def compute_discriminator_gradient(x_real, x_syn, phi0, phi1):\n",
|
||||||
|
" delta = 0.0001;\n",
|
||||||
|
" loss1 = compute_discriminator_loss(x_real, x_syn, phi0, phi1)\n",
|
||||||
|
" loss2 = compute_discriminator_loss(x_real, x_syn, phi0+delta, phi1)\n",
|
||||||
|
" loss3 = compute_discriminator_loss(x_real, x_syn, phi0, phi1+delta)\n",
|
||||||
|
" dl_dphi0 = (loss2-loss1) / delta\n",
|
||||||
|
" dl_dphi1 = (loss3-loss1) / delta\n",
|
||||||
|
"\n",
|
||||||
|
" return dl_dphi0, dl_dphi1\n",
|
||||||
|
"\n",
|
||||||
|
"# This routine performs gradient descent with the discriminator\n",
|
||||||
|
"def update_discriminator(x_real, x_syn, n_iter, phi0, phi1):\n",
|
||||||
|
"\n",
|
||||||
|
" # Define learning rate\n",
|
||||||
|
" alpha = 0.01\n",
|
||||||
|
"\n",
|
||||||
|
" # Get derivatives\n",
|
||||||
|
" print(\"Initial discriminator loss = \", compute_discriminator_loss(x_real, x_syn, phi0, phi1))\n",
|
||||||
|
" for iter in range(n_iter):\n",
|
||||||
|
" # Get gradient\n",
|
||||||
|
" dl_dphi0, dl_dphi1 = compute_discriminator_gradient(x_real, x_syn, phi0, phi1)\n",
|
||||||
|
" # Take a gradient step downhill\n",
|
||||||
|
" phi0 = phi0 - alpha * dl_dphi0 ;\n",
|
||||||
|
" phi1 = phi1 - alpha * dl_dphi1 ;\n",
|
||||||
|
"\n",
|
||||||
|
" print(\"Final Discriminator Loss= \", compute_discriminator_loss(x_real, x_syn, phi0, phi1))\n",
|
||||||
|
"\n",
|
||||||
|
" return phi0, phi1"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "zAxUPo3p0CIW"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's update the discriminator (sigmoid curve)\n",
|
||||||
|
"n_iter = 100\n",
|
||||||
|
"print(\"Initial parameters (phi0,phi1)\", phi0, phi1)\n",
|
||||||
|
"phi0, phi1 = update_discriminator(x_real, x_syn, n_iter, phi0, phi1)\n",
|
||||||
|
"print(\"Final parameters (phi0,phi1)\", phi0, phi1)\n",
|
||||||
|
"draw_data_model(x_real, x_syn, phi0, phi1)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FE_DeweeAbMc"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's update the generator"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pRv9myh0d3Xm"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_generator_loss(z, theta, phi0, phi1):\n",
|
||||||
|
" # TODO -- Run the generator on the latent variables z with the parameters theta\n",
|
||||||
|
" # to generate new data x_syn\n",
|
||||||
|
" # Then run the discriminator on the new data to get the probability of being real\n",
|
||||||
|
" # The loss is the total negative log probability of being synthesized (i.e. of not being real)\n",
|
||||||
|
" # Replace this code\n",
|
||||||
|
" loss = 1\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return loss"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "5uiLrFBvJFAr"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Test generator loss to check you have it correct\n",
|
||||||
|
"loss = compute_generator_loss(z, theta, -2, 1)\n",
|
||||||
|
"print(\"True Loss = 13.78437035945412, Your loss=\", loss )"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "cqnU3dGPd6NK"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_generator_gradient(z, theta, phi0, phi1):\n",
|
||||||
|
" delta = 0.0001\n",
|
||||||
|
" loss1 = compute_generator_loss(z,theta, phi0, phi1) ;\n",
|
||||||
|
" loss2 = compute_generator_loss(z,theta+delta, phi0, phi1) ;\n",
|
||||||
|
" dl_dtheta = (loss2-loss1)/ delta\n",
|
||||||
|
" return dl_dtheta\n",
|
||||||
|
"\n",
|
||||||
|
"def update_generator(z, theta, n_iter, phi0, phi1):\n",
|
||||||
|
" # Define learning rate\n",
|
||||||
|
" alpha = 0.02\n",
|
||||||
|
"\n",
|
||||||
|
" # Get derivatives\n",
|
||||||
|
" print(\"Initial generator loss = \", compute_generator_loss(z, theta, phi0, phi1))\n",
|
||||||
|
" for iter in range(n_iter):\n",
|
||||||
|
" # Get gradient\n",
|
||||||
|
" dl_dtheta = compute_generator_gradient(z, theta, phi0, phi1)\n",
|
||||||
|
" # Take a gradient step (uphill, since we are trying to make synthesized data less well classified by discriminator)\n",
|
||||||
|
" theta = theta + alpha * dl_dtheta ;\n",
|
||||||
|
"\n",
|
||||||
|
" print(\"Final generator loss = \", compute_generator_loss(z, theta, phi0, phi1))\n",
|
||||||
|
" return theta\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "P1Lqy922dqal"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"n_iter = 10\n",
|
||||||
|
"theta = 3.0\n",
|
||||||
|
"print(\"Theta before\", theta)\n",
|
||||||
|
"theta = update_generator(z, theta, n_iter, phi0, phi1)\n",
|
||||||
|
"print(\"Theta after\", theta)\n",
|
||||||
|
"\n",
|
||||||
|
"x_syn = generator(z,theta)\n",
|
||||||
|
"draw_data_model(x_real, x_syn, phi0, phi1)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Q6kUkMO1P8V0"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now let's define a full GAN loop\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize the parameters\n",
|
||||||
|
"theta = 3\n",
|
||||||
|
"phi0 = -2\n",
|
||||||
|
"phi1 = 1\n",
|
||||||
|
"\n",
|
||||||
|
"# Number of iterations for updating generator and discriminator\n",
|
||||||
|
"n_iter_discrim = 300\n",
|
||||||
|
"n_iter_gen = 3\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Initial parameters (phi0,phi1)\", phi0, phi1)\n",
|
||||||
|
"for c_gan_iter in range(5):\n",
|
||||||
|
"\n",
|
||||||
|
" # Run generator to produce synthesized data\n",
|
||||||
|
" x_syn = generator(z, theta)\n",
|
||||||
|
" draw_data_model(x_real, x_syn, phi0, phi1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Update the discriminator\n",
|
||||||
|
" print(\"Updating discriminator\")\n",
|
||||||
|
" phi0, phi1 = update_discriminator(x_real, x_syn, n_iter_discrim, phi0, phi1)\n",
|
||||||
|
" draw_data_model(x_real, x_syn, phi0, phi1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Update the generator\n",
|
||||||
|
" print(\"Updating generator\")\n",
|
||||||
|
" theta = update_generator(z, theta, n_iter_gen, phi0, phi1)\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pcbdK2agTO-y"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can see that the synthesized data (orange) is becoming closer to the true data (cyan). However, this is extremely unstable -- as you will find if you mess around with the number of iterations of each optimization and the total iterations overall."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "loMx0TQUgBs7"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
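The two loss TODOs in the notebook above both reduce to binary cross-entropy terms on the discriminator output. A sketch of one possible completion follows; it is an assumption rather than an official solution, but it reproduces the test values the notebook prints.

```python
import numpy as np

def sig(a):
    # Logistic sigmoid, maps [-inf, inf] to [0, 1]
    return 1.0 / (1.0 + np.exp(-a))

def discriminator(x, phi0, phi1):
    return sig(phi0 + phi1 * x)

def compute_discriminator_loss(x_real, x_syn, phi0, phi1):
    # Binary cross-entropy with y=1 for the real samples, y=0 for the synthesized ones
    return (-np.sum(np.log(discriminator(x_real, phi0, phi1)))
            - np.sum(np.log(1.0 - discriminator(x_syn, phi0, phi1))))

def compute_generator_loss(z, theta, phi0, phi1):
    # Total negative log probability of the synthesized data being classed as NOT real
    x_syn = z + theta  # the notebook's generator
    return -np.sum(np.log(1.0 - discriminator(x_syn, phi0, phi1)))
```

Note that the generator loss is the synthesized-data term of the discriminator loss, which is why the generator update in the notebook steps uphill on it.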
246
Notebooks/Chap15/15_2_Wasserstein_Distance.ipynb
Normal file
@@ -0,0 +1,246 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyNyLnpoXgKN+RGCuTUszCAZ",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap15/15_2_Wasserstein_Distance.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 15.2: Wasserstein Distance**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the Wasserstein (earth mover's) distance between two discrete probability distributions, as illustrated in figure 15.8 in the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"from matplotlib import cm\n",
|
||||||
|
"from matplotlib.colors import ListedColormap\n",
|
||||||
|
"from scipy.optimize import linprog"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define two probability distributions\n",
|
||||||
|
"p = np.array([5, 3, 2, 1, 8, 7, 5, 9, 2, 1])\n",
|
||||||
|
"q = np.array([4, 10,1, 1, 4, 6, 3, 2, 0, 1])\n",
|
||||||
|
"p = p/np.sum(p);\n",
|
||||||
|
"q = q/np.sum(q);\n",
|
||||||
|
"\n",
|
||||||
|
"# Draw those distributions\n",
|
||||||
|
"fig, ax =plt.subplots(2,1);\n",
|
||||||
|
"x = np.arange(0,p.size,1)\n",
|
||||||
|
"ax[0].bar(x,p, color=\"#cccccc\")\n",
|
||||||
|
"ax[0].set_ylim([0,0.35])\n",
|
||||||
|
"ax[0].set_ylabel(\"p(x=i)\")\n",
|
||||||
|
"\n",
|
||||||
|
"ax[1].bar(x,q,color=\"#f47a60\")\n",
|
||||||
|
"ax[1].set_ylim([0,0.35])\n",
|
||||||
|
"ax[1].set_ylabel(\"q(x=j)\")\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ZIfQwhd-AV6L"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO Define the distance matrix from figure 15.8d\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"dist_mat = np.zeros((10,10))\n",
|
||||||
|
"\n",
|
||||||
|
"# vectorize the distance matrix\n",
|
||||||
|
"c = dist_mat.flatten()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EZSlZQzWBKTm"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define pretty colormap\n",
|
||||||
|
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
|
||||||
|
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
|
||||||
|
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
|
||||||
|
"g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
|
||||||
|
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
|
||||||
|
"my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
|
||||||
|
"\n",
|
||||||
|
"def draw_2D_heatmap(data, title, my_colormap):\n",
|
||||||
|
|
||||||
|
" fig,ax = plt.subplots()\n",
|
||||||
|
" fig.set_size_inches(4,4)\n",
|
||||||
|
" plt.imshow(data, cmap=my_colormap)\n",
|
||||||
|
" ax.set_title(title)\n",
|
||||||
|
" ax.set_xlabel('$q$'); ax.set_ylabel('$p$')\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ABRANmp6F8iQ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"draw_2D_heatmap(dist_mat,'Distance $|i-j|$', my_colormap)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "G0HFPBXyHT6V"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define b to be the vertical concatenation of p and q\n",
|
||||||
|
"b = np.hstack((p,q))[np.newaxis].transpose()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "SfqeT3KlHWrt"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO: Now construct the matrix A that has the initial distribution constraints\n",
|
||||||
|
"# so that Ap=b, where p is the transport plan P vectorized row-first, i.e., p = P.flatten()\n",
|
||||||
|
"# Replace this line:\n",
|
||||||
|
"A = np.zeros((20,100))\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "7KrybL96IuNW"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now we have everything we need: the vectorized distance matrix $\\mathbf{c}$, the constraint matrix $\\mathbf{A}$, and the concatenated distributions $\\mathbf{b}$. We can run the linear programming optimization."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "zEuEtU33S8Ly"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# We don't need the constraint that p>0 as this is the default\n",
|
||||||
|
"opt = linprog(c, A_eq=A, b_eq=b)\n",
|
||||||
|
"print(opt)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "wCfsOVbeSmF5"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Extract the answer and display it"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "vpkkOOI2agyl"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"P = np.array(opt.x).reshape(10,10)\n",
|
||||||
|
"draw_2D_heatmap(P,'Transport plan $\\mathbf{P}$', my_colormap)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nZGfkrbRV_D0"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Compute the Wasserstein distance\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ZEiRYRVgalsJ"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"was = np.sum(P * dist_mat)\n",
|
||||||
|
"print(\"Wasserstein distance = \", was)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "yiQ_8j-Raq3c"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"TODO -- Compute the\n",
|
||||||
|
"\n",
|
||||||
|
"* Forward KL divergence $D_{KL}[p,q]$ between these distributions\n",
|
||||||
|
"* Reverse KL divergence $D_{KL}[q,p]$ between these distributions\n",
|
||||||
|
"* Jensen-Shannon divergence $D_{JS}[p,q]$ between these distributions\n",
|
||||||
|
"\n",
|
||||||
|
"What do you conclude?"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "zf8yTusua71s"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
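For reference, one way to fill in the notebook's two TODOs (the $|i-j|$ distance matrix and the constraint matrix $\mathbf{A}$) is sketched below. This is an assumed completion, not an official solution; it can be cross-checked against the closed-form 1D Wasserstein distance, which is the sum of absolute CDF differences.

```python
import numpy as np
from scipy.optimize import linprog

# The notebook's two distributions, normalized
p = np.array([5, 3, 2, 1, 8, 7, 5, 9, 2, 1], dtype=float); p /= p.sum()
q = np.array([4, 10, 1, 1, 4, 6, 3, 2, 0, 1], dtype=float); q /= q.sum()
n = p.size

# Cost c[i*n + j] = |i - j|, matching a transport plan P flattened row-first
dist_mat = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
c = dist_mat.flatten()

# Constraint matrix A: row sums of P must equal p, column sums must equal q
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0   # sum_j P[i, j] = p[i]
    A[n + i, i::n] = 1.0            # sum_i P[i, j] = q[i]
b = np.hstack((p, q))

# Non-negativity of the plan is linprog's default bound
opt = linprog(c, A_eq=A, b_eq=b)
P = opt.x.reshape(n, n)
wasserstein = np.sum(P * dist_mat)
```

One row of $\mathbf{A}$ is redundant (both marginals sum to one), but the default HiGHS solver handles the consistent system without special treatment.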
235
Notebooks/Chap16/16_1_1D_Normalizing_Flows.ipynb
Normal file
@@ -0,0 +1,235 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMJLViYIpiivB2A7YIuZmzU",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap16/16_1_1D_Normalizing_Flows.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 16.1: 1D normalizing flows**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates a 1D normalizing flows example similar to that illustrated in figures 16.1 to 16.3 in the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"First we start with a base probability density function"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "IyVn-Gi-p7wf"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define the base pdf\n",
|
||||||
|
"def gauss_pdf(z, mu, sigma):\n",
|
||||||
|
" pr_z = np.exp(-0.5 * (z-mu) * (z-mu) / (sigma * sigma)) / (np.sqrt(2*np.pi) * sigma)\n",
|
||||||
|
" return pr_z"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ZIfQwhd-AV6L"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"z = np.arange(-3,3,0.01)\n",
|
||||||
|
"pr_z = gauss_pdf(z, 0, 1)\n",
|
||||||
|
"\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(z, pr_z)\n",
|
||||||
|
"ax.set_xlim([-3,3])\n",
|
||||||
|
"ax.set_xlabel('$z$')\n",
|
||||||
|
"ax.set_ylabel('$Pr(z)$')\n",
|
||||||
|
"plt.show();"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "gGh8RHmFp_Ls"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's define a nonlinear function that maps from the latent space $z$ to the observed data $x$."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "wVXi5qIfrL9T"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define a function that maps from the base pdf over z to the observed space x\n",
|
||||||
|
"def f(z):\n",
|
||||||
|
" x1 = 6/(1+np.exp(-(z-0.25)*1.5))-3\n",
|
||||||
|
" x2 = z\n",
|
||||||
|
" p = z * z/9\n",
|
||||||
|
" x = (1-p) * x1 + p * x2\n",
|
||||||
|
" return x\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute gradient of that function using finite differences\n",
|
||||||
|
"def df_dz(z):\n",
|
||||||
|
" return (f(z+0.0001)-f(z-0.0001))/0.0002"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "shHdgZHjp52w"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"x = f(z)\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(z,x)\n",
|
||||||
|
"ax.set_xlim(-3,3)\n",
|
||||||
|
"ax.set_ylim(-3,3)\n",
|
||||||
|
"ax.set_xlabel('Latent variable, $z$')\n",
|
||||||
|
"ax.set_ylabel('Observed variable, $x$')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "sz7bnCLUq3Qs"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's evaluate the density in the observed space using equation 16.1"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "rmI0BbuQyXoc"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- plot the density in the observed space\n",
|
||||||
|
"# Replace these lines\n",
|
||||||
|
"x = np.ones_like(z)\n",
|
||||||
|
"pr_x = np.ones_like(pr_z)\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "iPdiT_5gyNOD"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Plot the density in the observed space\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(x, pr_x)\n",
|
||||||
|
"ax.set_xlim([-3,3])\n",
|
||||||
|
"ax.set_ylim([0, 0.5])\n",
|
||||||
|
"ax.set_xlabel('$x$')\n",
|
||||||
|
"ax.set_ylabel('$Pr(x)$')\n",
|
||||||
|
"plt.show();"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Jlks8MW3zulA"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's draw some samples from the new distribution (see section 16.1)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "1c5rO0HHz-FV"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"np.random.seed(1)\n",
|
||||||
|
"n_sample = 20\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Draw samples from the modeled density\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"x_samples = np.ones((n_sample, 1))\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "LIlTRfpZz2k_"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Draw the samples\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(x, pr_x)\n",
|
||||||
|
"for x_sample in x_samples:\n",
|
||||||
|
" ax.plot([x_sample, x_sample], [0,0.1], 'r-')\n",
|
||||||
|
"\n",
|
||||||
|
"ax.set_xlim([-3,3])\n",
|
||||||
|
"ax.set_ylim([0, 0.5])\n",
|
||||||
|
"ax.set_xlabel('$x$')\n",
|
||||||
|
"ax.set_ylabel('$Pr(x)$')\n",
|
||||||
|
"plt.show();"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "JS__QPNv0vUA"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
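The TODO cells in the notebook above (density via equation 16.1, then sampling) both follow from the change-of-variables rule: if $x = f(z)$ with $f$ invertible, then $Pr(x) = Pr(z)/|df/dz|$ evaluated at the matching $z$, and samples of $x$ are obtained by pushing base samples of $z$ through $f$. A minimal self-contained sketch, using `np.tanh` as an assumed stand-in for the notebook's own invertible `f`:

```python
import numpy as np

# Stand-in invertible 1D mapping (an assumption; the notebook defines its own f)
def f(z):
    return np.tanh(z)

def df_dz(z):
    # Finite-difference derivative, as in the notebook cell
    return (f(z + 0.0001) - f(z - 0.0001)) / 0.0002

z = np.arange(-3.0, 3.0, 0.01)
pr_z = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)  # standard normal base density

# Equation 16.1: Pr(x) = Pr(z) / |df/dz|, evaluated at the z that maps to each x
x = f(z)
pr_x = pr_z / np.abs(df_dz(z))

# Sampling: draw z from the base density, then transform through f
np.random.seed(1)
x_samples = f(np.random.normal(size=(20, 1)))
```

Because $Pr(x)\,dx = Pr(z)\,dz$, the transformed density carries the same probability mass over the plotted range as the base density.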
307
Notebooks/Chap16/16_2_Autoregressive_Flows.ipynb
Normal file
@@ -0,0 +1,307 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMe8jb5kLJqkNSE/AwExTpa",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap16/16_2_Autoregressive_Flows.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 16.2: 1D autoregressive flows**\n",
"\n",
"This notebook investigates a 1D normalizing flows example similar to that illustrated in figure 16.7 in the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"First we'll define an invertible one-dimensional function as in figure 16.5"
],
"metadata": {
"id": "jTK456TUd2FV"
}
},
{
"cell_type": "code",
"source": [
"# First let's make the 1D piecewise linear mapping as illustrated in figure 16.5\n",
"def g(h, phi):\n",
"  # TODO -- write this function (equation 16.12)\n",
"  # Note: If you have the first printing of the book, there is a mistake in equation 16.12\n",
"  # Check the errata for the correct equation (or figure it out yourself!)\n",
"  # Replace this line:\n",
"  h_prime = 1\n",
"\n",
"  return h_prime"
],
"metadata": {
"id": "zceww_9qFi00"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's test this out. If you managed to vectorize the routine above, then good for you\n",
"# but I'll assume you didn't and so we'll use a loop\n",
"\n",
"# Define the parameters\n",
"phi = np.array([0.2, 0.1, 0.4, 0.05, 0.25])\n",
"\n",
"# Run the function on an array\n",
"h = np.arange(0,1,0.01)\n",
"h_prime = np.zeros_like(h)\n",
"for i in range(len(h)):\n",
"  h_prime[i] = g(h[i], phi)\n",
"\n",
"# Draw the function\n",
"fig, ax = plt.subplots()\n",
"ax.plot(h,h_prime, 'b-')\n",
"ax.set_xlim([0,1])\n",
"ax.set_ylim([0,1])\n",
"ax.set_xlabel('Input, $h$')\n",
"ax.set_ylabel('Output, $h^\\prime$')\n",
"plt.show()\n"
],
"metadata": {
"id": "CLXhYl9ZIuRN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We will also need the inverse of this function"
],
"metadata": {
"id": "zOCMYC0leOyZ"
}
},
{
"cell_type": "code",
"source": [
"# Define the inverse function\n",
"def g_inverse(h_prime, phi):\n",
"  # Lots of ways to do this, but we'll just do it by bracketing\n",
"  h_low = 0\n",
"  h_mid = 0.5\n",
"  h_high = 0.999\n",
"\n",
"  thresh = 0.0001\n",
"  c_iter = 0\n",
"  while(c_iter < 20 and h_high - h_low > thresh):\n",
"    h_prime_low = g(h_low, phi)\n",
"    h_prime_mid = g(h_mid, phi)\n",
"    h_prime_high = g(h_high, phi)\n",
"    if h_prime_mid < h_prime:\n",
"      h_low = h_mid\n",
"    else:\n",
"      h_high = h_mid\n",
"\n",
"    h_mid = h_low+(h_high-h_low)/2\n",
"    c_iter+=1\n",
"\n",
"  return h_mid"
],
"metadata": {
"id": "OIqFAgobeSM8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's define an autoregressive flow. Let's switch to looking at figure 16.7. We'll assume that our piecewise function will use five parameters phi1, phi2, phi3, phi4, phi5"
],
"metadata": {
"id": "t8XPxipfd7hz"
}
},
{
"cell_type": "code",
"source": [
"\n",
"def ReLU(preactivation):\n",
"  activation = preactivation.clip(0.0)\n",
"  return activation\n",
"\n",
"def softmax(x):\n",
"  x = np.exp(x)\n",
"  x = x / np.sum(x)\n",
"  return x\n",
"\n",
"# Return value of phi that doesn't depend on any of the inputs\n",
"def get_phi():\n",
"  return np.array([0.2, 0.1, 0.4, 0.05, 0.25])\n",
"\n",
"# Compute values of phi that depend on h1\n",
"def shallow_network_phi_h1(h1, n_hidden=10):\n",
"  # For neatness of code, we'll just define the parameters in the network definition itself\n",
"  n_input = 1\n",
"  np.random.seed(n_input)\n",
"  beta0 = np.random.normal(size=(n_hidden,1))\n",
"  Omega0 = np.random.normal(size=(n_hidden, n_input))\n",
"  beta1 = np.random.normal(size=(5,1))\n",
"  Omega1 = np.random.normal(size=(5, n_hidden))\n",
"  return softmax(beta1 + Omega1 @ ReLU(beta0 + Omega0 @ np.array([[h1]])))\n",
"\n",
"# Compute values of phi that depend on h1 and h2\n",
"def shallow_network_phi_h1h2(h1,h2,n_hidden=10):\n",
"  # For neatness of code, we'll just define the parameters in the network definition itself\n",
"  n_input = 2\n",
"  np.random.seed(n_input)\n",
"  beta0 = np.random.normal(size=(n_hidden,1))\n",
"  Omega0 = np.random.normal(size=(n_hidden, n_input))\n",
"  beta1 = np.random.normal(size=(5,1))\n",
"  Omega1 = np.random.normal(size=(5, n_hidden))\n",
"  return softmax(beta1 + Omega1 @ ReLU(beta0 + Omega0 @ np.array([[h1],[h2]])))\n",
"\n",
"# Compute values of phi that depend on h1, h2, and h3\n",
"def shallow_network_phi_h1h2h3(h1,h2,h3, n_hidden=10):\n",
"  # For neatness of code, we'll just define the parameters in the network definition itself\n",
"  n_input = 3\n",
"  np.random.seed(n_input)\n",
"  beta0 = np.random.normal(size=(n_hidden,1))\n",
"  Omega0 = np.random.normal(size=(n_hidden, n_input))\n",
"  beta1 = np.random.normal(size=(5,1))\n",
"  Omega1 = np.random.normal(size=(5, n_hidden))\n",
"  return softmax(beta1 + Omega1 @ ReLU(beta0 + Omega0 @ np.array([[h1],[h2],[h3]])))"
],
"metadata": {
"id": "PnHGlZtcNEAI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The forward mapping as shown in figure 16.7a"
],
"metadata": {
"id": "8fXeG4V44GVH"
}
},
{
"cell_type": "code",
"source": [
"def forward_mapping(h1,h2,h3,h4):\n",
"  # TODO implement the forward mapping\n",
"  # Replace this line:\n",
"  h_prime1 = 0; h_prime2 = 0; h_prime3 = 0; h_prime4 = 0\n",
"\n",
"  return h_prime1, h_prime2, h_prime3, h_prime4"
],
"metadata": {
"id": "N1zjnIoX0TRP"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The backward mapping as shown in figure 16.7b"
],
"metadata": {
"id": "H8vQfFwI4L7r"
}
},
{
"cell_type": "code",
"source": [
"def backward_mapping(h1_prime,h2_prime,h3_prime,h4_prime):\n",
"  # TODO implement the backward mapping\n",
"  # Replace this line:\n",
"  h1 = 0; h2 = 0; h3 = 0; h4 = 0\n",
"\n",
"  return h1,h2,h3,h4"
],
"metadata": {
"id": "HNcQTiVE4DMJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Finally, let's make sure that the network really can be inverted"
],
"metadata": {
"id": "W2IxFkuyZJyn"
}
},
{
"cell_type": "code",
"source": [
"# Test the network to see if it does invert correctly\n",
"h1 = 0.22; h2 = 0.41; h3 = 0.83; h4 = 0.53\n",
"print(\"Original h values %3.3f,%3.3f,%3.3f,%3.3f\"%(h1,h2,h3,h4))\n",
"h1_prime, h2_prime, h3_prime, h4_prime = forward_mapping(h1,h2,h3,h4)\n",
"print(\"h_prime values %3.3f,%3.3f,%3.3f,%3.3f\"%(h1_prime,h2_prime,h3_prime,h4_prime))\n",
"h1,h2,h3,h4 = backward_mapping(h1_prime,h2_prime,h3_prime,h4_prime)\n",
"print(\"Reconstructed h values %3.3f,%3.3f,%3.3f,%3.3f\"%(h1,h2,h3,h4))"
],
"metadata": {
"id": "RT7qvEFp700I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "sDknSPMLZmzh"
},
"execution_count": null,
"outputs": []
}
]
}
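The `g` TODO in the notebook above can be filled in with the usual piecewise-linear construction: split $[0,1]$ into $K$ equal bins and give bin $b$ slope $K\cdot\phi_b$, so each bin contributes $\phi_b$ to the output range. The sketch below is one plausible reading of equation 16.12, not necessarily the book's corrected formula (the cell's comment notes there is an erratum), so check it against the errata before relying on it:

```python
import numpy as np

def g(h, phi):
    # Piecewise-linear map on [0,1]: bin b covers [b/K, (b+1)/K) and
    # contributes phi[b] to the output, so the slope inside it is K*phi[b].
    K = len(phi)
    b = min(int(h * K), K - 1)          # index of the active bin
    return np.sum(phi[:b]) + (h * K - b) * phi[b]

# Parameters from the notebook; softmax outputs would also sum to one
phi = np.array([0.2, 0.1, 0.4, 0.05, 0.25])
```

Because `phi` sums to one and is positive, this `g` is a monotone map from $[0,1]$ onto $[0,1]$, which is what the bracketing `g_inverse` above assumes.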
294
Notebooks/Chap16/16_3_Contraction_Mappings.ipynb
Normal file
@@ -0,0 +1,294 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNeCWINUqqUGKMcxsqPFTAh",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap16/16_3_Contraction_Mappings.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 16.3: Contraction mappings**\n",
"\n",
"This notebook investigates a 1D normalizing flows example similar to that illustrated in figure 16.9 in the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a function that is a contraction mapping\n",
"def f(z):\n",
"  return 0.3 + 0.5 *z + 0.02 * np.sin(z*15)"
],
"metadata": {
"id": "4Pfz2KSghdVI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def draw_function(f, fixed_point=None):\n",
"  z = np.arange(0,1,0.01)\n",
"  z_prime = f(z)\n",
"\n",
"  # Draw this function\n",
"  fig, ax = plt.subplots()\n",
"  ax.plot(z, z_prime,'c-')\n",
"  ax.plot([0,1],[0,1],'k--')\n",
"  if fixed_point is not None:\n",
"    ax.plot(fixed_point, fixed_point, 'ro')\n",
"  ax.set_xlim(0,1)\n",
"  ax.set_ylim(0,1)\n",
"  ax.set_xlabel('Input, $z$')\n",
"  ax.set_ylabel('Output, f$[z]$')\n",
"  plt.show()"
],
"metadata": {
"id": "zEwCbIx0hpAI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"draw_function(f)"
],
"metadata": {
"id": "k4e5Yu0fl8bz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's find where $\\mbox{f}[z]=z$ using fixed point iteration"
],
"metadata": {
"id": "DfgKrpCAjnol"
}
},
{
"cell_type": "code",
"source": [
"# Takes a function f and a starting point z\n",
"def fixed_point_iteration(f, z0):\n",
"  # TODO -- write this function\n",
"  # Print out the iterations as you go, so you can see the progress\n",
"  # Set the maximum number of iterations to 20\n",
"  # Replace this line\n",
"  z_out = 0.5\n",
"\n",
"  return z_out"
],
"metadata": {
"id": "bAOBvZT-j3lv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's test that and plot the solution"
],
"metadata": {
"id": "CAS0lgIomAa0"
}
},
{
"cell_type": "code",
"source": [
"# Now let's test that\n",
"z = fixed_point_iteration(f, 0.2)\n",
"draw_function(f, z)"
],
"metadata": {
"id": "EYQZJdNPk8Lg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's define another function\n",
"def f2(z):\n",
"  return 0.7 - 0.6 *z + 0.03 * np.sin(z*15)\n",
"draw_function(f2)"
],
"metadata": {
"id": "4DipPiqVlnwJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's test that\n",
"# TODO Before running this code, predict what you think will happen\n",
"z = fixed_point_iteration(f2, 0.9)\n",
"draw_function(f2, z)"
],
"metadata": {
"id": "tYOdbWcomdEE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's define another function\n",
"# Is this one a contraction mapping?\n",
"def f3(z):\n",
"  return -0.2 + 1.5 *z + 0.1 * np.sin(z*15)\n",
"draw_function(f3)"
],
"metadata": {
"id": "Mni37RUpmrIu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's test that\n",
"# TODO Before running this code, predict what you think will happen\n",
"z = fixed_point_iteration(f3, 0.7)\n",
"draw_function(f3, z)"
],
"metadata": {
"id": "agt5mfJrnM1O"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Finally, let's invert a problem of the form $y = z+ f[z]$ for a given value of $y$. What is the $z$ that maps to it?"
],
"metadata": {
"id": "n6GI46-ZoQz6"
}
},
{
"cell_type": "code",
"source": [
"def f4(z):\n",
"  return -0.3 + 0.5 *z + 0.02 * np.sin(z*15)"
],
"metadata": {
"id": "dy6r3jr9rjPf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def fixed_point_iteration_z_plus_f(f, y, z0):\n",
"  # TODO -- write this function\n",
"  # Replace this line\n",
"  z_out = 1\n",
"\n",
"  return z_out"
],
"metadata": {
"id": "GMX64Iz0nl-B"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def draw_function2(f, y, fixed_point=None):\n",
"  z = np.arange(0,1,0.01)\n",
"  z_prime = z+f(z)\n",
"\n",
"  # Draw this function\n",
"  fig, ax = plt.subplots()\n",
"  ax.plot(z, z_prime,'c-')\n",
"  ax.plot(z, y-f(z),'r-')\n",
"  ax.plot([0,1],[0,1],'k--')\n",
"  if fixed_point is not None:\n",
"    ax.plot(fixed_point, y, 'ro')\n",
"  ax.set_xlim(0,1)\n",
"  ax.set_ylim(0,1)\n",
"  ax.set_xlabel('Input, $z$')\n",
"  ax.set_ylabel('Output, z+f$[z]$')\n",
"  plt.show()"
],
"metadata": {
"id": "uXxKHad5qT8Y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Test this out and draw\n",
"y = 0.8\n",
"z = fixed_point_iteration_z_plus_f(f4,y,0.2)\n",
"draw_function2(f4,y,z)\n",
"# If you have done this correctly, the red dot should be\n",
"# where the cyan curve has a y value of 0.8"
],
"metadata": {
"id": "mNEBXC3Aqd_1"
},
"execution_count": null,
"outputs": []
}
]
}
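The `fixed_point_iteration` TODO in the notebook above only needs a few lines: repeatedly feed the output back in as input, printing progress, and stop after at most 20 iterations (the limit the comment asks for) or when the iterate stops moving. A sketch under those assumptions, using the notebook's own contraction mapping `f`:

```python
import numpy as np

def f(z):
    # The contraction mapping from the notebook
    return 0.3 + 0.5 * z + 0.02 * np.sin(z * 15)

def fixed_point_iteration(f, z0, max_iter=20, tol=1e-6):
    # Iterate z <- f(z); for a contraction this converges to the fixed point
    z = z0
    for i in range(max_iter):
        z_next = f(z)
        print(f"Iteration {i}: z = {z_next:.6f}")
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```

For `f2` (negative slope, still a contraction) the iterates oscillate toward the fixed point; for `f3` (slope greater than one, not a contraction) they diverge from it, which is the prediction the TODO cells invite.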
396
Notebooks/Chap17/17_1_Latent_Variable_Models.ipynb
Normal file
@@ -0,0 +1,396 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMBYNsjj1iTgHUYhAXqUYJd",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_1_Latent_Variable_Models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 17.1: Latent variable models**\n",
"\n",
"This notebook investigates a non-linear latent variable model similar to that in figures 17.2 and 17.3 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import scipy.stats\n",
"from matplotlib.colors import ListedColormap\n",
"from matplotlib import cm"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We'll assume that our base distribution over the latent variables is a 1D standard normal so that\n",
"\n",
"\\begin{equation}\n",
"Pr(z) = \\mbox{Norm}_{z}[0,1]\n",
"\\end{equation}\n",
"\n",
"As in figure 17.2, we'll assume that the output is two dimensional, so we need to define a function that maps from the 1D latent variable to two dimensions. Usually, we would use a neural network, but in this case, we'll just define an arbitrary relationship.\n",
"\n",
"\\begin{eqnarray}\n",
"x_{1} &=& 0.5\\cdot\\exp\\Bigl[\\sin\\bigl[2+ 3.675 z \\bigr]\\Bigr]\\\\\n",
"x_{2} &=& \\cos\\bigl[2+ 2.85 z \\bigr]\n",
"\\end{eqnarray}"
],
"metadata": {
"id": "IyVn-Gi-p7wf"
}
},
{
"cell_type": "code",
"source": [
"# The function that maps z to x1 and x2\n",
"def f(z):\n",
"  x_1 = np.exp(np.sin(2+z*3.675)) * 0.5\n",
"  x_2 = np.cos(2+z*2.85)\n",
"  return x_1, x_2"
],
"metadata": {
"id": "ZIfQwhd-AV6L"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's plot the 3D relation between the two observed variables $x_{1}$ and $x_{2}$ and the latent variable $z$ as in figure 17.2 of the book. We'll use the opacity to represent the prior probability $Pr(z)$."
],
"metadata": {
"id": "KB9FU34onW1j"
}
},
{
"cell_type": "code",
"source": [
"def draw_3d_projection(z,pr_z, x1,x2):\n",
"  alpha = pr_z / np.max(pr_z)\n",
"  ax = plt.axes(projection='3d')\n",
"  fig = plt.gcf()\n",
"  fig.set_size_inches(18.5, 10.5)\n",
"  for i in range(len(z)-1):\n",
"    ax.plot([z[i],z[i+1]],[x1[i],x1[i+1]],[x2[i],x2[i+1]],'r-', alpha=alpha[i])\n",
"  ax.set_xlabel('$z$')\n",
"  ax.set_ylabel('$x_1$')\n",
"  ax.set_zlabel('$x_2$')\n",
"  ax.set_xlim(-3,3)\n",
"  ax.set_ylim(0,2)\n",
"  ax.set_zlim(-1,1)\n",
"  ax.set_box_aspect((3,1,1))\n",
"  plt.show()"
],
"metadata": {
"id": "lW08xqAgnP4q"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Compute the prior\n",
"def get_prior(z):\n",
"  return scipy.stats.multivariate_normal.pdf(z)"
],
"metadata": {
"id": "9DUTauMi6tPk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the latent variable values\n",
"z = np.arange(-3.0,3.0,0.01)\n",
"# Find the probability distribution over z\n",
"pr_z = get_prior(z)\n",
"# Compute x1 and x2 for each z\n",
"x1,x2 = f(z)\n",
"# Plot the function\n",
"draw_3d_projection(z,pr_z, x1,x2)"
],
"metadata": {
"id": "PAzHq461VqvF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The likelihood is defined as:\n",
"\\begin{eqnarray}\n",
" Pr(x_1,x_2|z) &=& \\mbox{Norm}_{[x_1,x_2]}\\Bigl[\\mathbf{f}[z],\\sigma^{2}\\mathbf{I}\\Bigr]\n",
"\\end{eqnarray}\n",
"\n",
"so we will also need to define the noise level $\\sigma^2$"
],
"metadata": {
"id": "sQg2gKR5zMrF"
}
},
{
"cell_type": "code",
"source": [
"sigma_sq = 0.04"
],
"metadata": {
"id": "In_Vg4_0nva3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Draws a heatmap to represent a probability distribution, possibly with samples overlaid\n",
"def plot_heatmap(x1_mesh,x2_mesh,y_mesh, x1_samples=None, x2_samples=None, title=None):\n",
"  # Define pretty colormap\n",
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
|
||||||
|
" my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
|
||||||
|
" r = np.floor(my_colormap_vals_dec/(256*256))\n",
|
||||||
|
" g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
|
||||||
|
" b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
|
||||||
|
" my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
|
||||||
|
"\n",
|
||||||
|
" fig,ax = plt.subplots()\n",
|
||||||
|
" fig.set_size_inches(8,8)\n",
|
||||||
|
" ax.contourf(x1_mesh,x2_mesh,y_mesh,256,cmap=my_colormap)\n",
|
||||||
|
" ax.contour(x1_mesh,x2_mesh,y_mesh,8,colors=['#80808080'])\n",
|
||||||
|
" if title is not None:\n",
|
||||||
|
" ax.set_title(title);\n",
|
||||||
|
" if x1_samples is not None:\n",
|
||||||
|
" ax.plot(x1_samples, x2_samples, 'c.')\n",
|
||||||
|
" ax.set_xlim([-0.5,2.5])\n",
|
||||||
|
" ax.set_ylim([-1.5,1.5])\n",
|
||||||
|
" ax.set_xlabel('$x_1$'); ax.set_ylabel('$x_2$')\n",
|
||||||
|
" plt.show()\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "6P6d-AgAqxXZ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
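The colormap cell above decodes each six-digit hex colour by treating it as a 24-bit integer and peeling off the red, green, and blue bytes with integer division. A minimal standalone sketch of that decoding (the function name `hex_to_rgb` is ours, not the notebook's):

```python
def hex_to_rgb(hex_str):
    # Treat 'rrggbb' as one 24-bit integer, then split out the three bytes
    v = int(hex_str, base=16)
    r = v // (256 * 256)
    g = (v - r * 256 * 256) // 256
    b = v - r * 256 * 256 - g * 256
    return r, g, b
```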
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Returns the likelihood\n",
|
||||||
|
"def get_likelihood(x1_mesh, x2_mesh, z_val):\n",
|
||||||
|
" # Find the corresponding x1 and x2 values\n",
|
||||||
|
" x1,x2 = f(z_val)\n",
|
||||||
|
"\n",
|
||||||
|
" # Calculate the probability for a mesh of x1,x2 values.\n",
|
||||||
|
" mn = scipy.stats.multivariate_normal([x1, x2], [[sigma_sq, 0], [0, sigma_sq]])\n",
|
||||||
|
" pr_x1_x2_given_z_val = mn.pdf(np.dstack((x1_mesh, x2_mesh)))\n",
|
||||||
|
" return pr_x1_x2_given_z_val"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "diYKb7_ZgjlJ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
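`get_likelihood` above uses scipy's multivariate normal; since the covariance is isotropic ($\sigma^2\mathbf{I}$), the density factorizes into two 1-D Gaussians and can be checked with plain NumPy. A sketch under that assumption (names are ours):

```python
import numpy as np

def iso_gauss_pdf(x1, x2, mu1, mu2, sigma_sq):
    # Norm([x1,x2]; [mu1,mu2], sigma_sq * I) = product of two 1-D Gaussians
    norm_const = 1.0 / (2.0 * np.pi * sigma_sq)
    sq_dist = (x1 - mu1) ** 2 + (x2 - mu2) ** 2
    return norm_const * np.exp(-sq_dist / (2.0 * sigma_sq))
```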
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's plot the likelihood $Pr(x_1,x_2|z)$ as in fig 17.3b in the book."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "0X4NwixzqxtZ"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Choose some z value\n",
|
||||||
|
"z_val = 1.8\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the conditional distribution on a grid\n",
|
||||||
|
"x1_mesh, x2_mesh = np.meshgrid(np.arange(-0.5,2.5,0.01), np.arange(-1.5,1.5,0.01))\n",
|
||||||
|
"pr_x1_x2_given_z_val = get_likelihood(x1_mesh,x2_mesh, z_val)\n",
|
||||||
|
"\n",
|
||||||
|
"# Plot the result\n",
|
||||||
|
"plot_heatmap(x1_mesh, x2_mesh, pr_x1_x2_given_z_val, title=\"Conditional distribution $Pr(x1,x2|z)$\")\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- Experiment with different values of z and make sure that you understand the what is happening."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "hWfqK-Oz5_DT"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The data density is found by marginalizing over the latent variables $z$:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
" Pr(x_1,x_2) &=& \\int Pr(x_1,x_2, z) dz \\nonumber \\\\\n",
|
||||||
|
" &=& \\int Pr(x_1,x_2 | z) \\cdot Pr(z)dz\\nonumber \\\\\n",
|
||||||
|
" &=& \\int \\mbox{Norm}_{[x_1,x_2]}\\Bigl[\\mathbf{f}[z],\\sigma^{2}\\mathbf{I}\\Bigr]\\cdot \\mbox{Norm}_{z}\\left[\\mathbf{0},\\mathbf{I}\\right]dz.\n",
|
||||||
|
"\\end{eqnarray}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "25xqXnmFo-PH"
|
||||||
|
}
|
||||||
|
},
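The marginalization integral above has no closed form for a nonlinear $\mathbf{f}[z]$, but it can be approximated by a Riemann sum over a grid of $z$ values. A toy 1-D sketch of the idea (here $f[z]=z$ and $\sigma=0.1$ are our assumptions, chosen so the true marginal is known in closed form):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def marginal(x, dz=0.01, sigma=0.1):
    # Pr(x) = integral of Pr(x|z) Pr(z) dz, approximated as (sum over grid) * dz
    z = np.arange(-5.0, 5.0, dz)
    return np.sum(gauss_pdf(x, z, sigma) * gauss_pdf(z, 0.0, 1.0)) * dz
```

With $f[z]=z$ the marginal is exactly $\mbox{Norm}_x[0, 1+\sigma^2]$, which the sum reproduces closely.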
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO Compute the data density\n",
|
||||||
|
"# We can't integrate this function in closed form\n",
|
||||||
|
"# So let's approximate it as a sum over the z values (z = np.arange(-3,3,0.01))\n",
|
||||||
|
"# You will need the functions get_likelihood() and get_prior()\n",
|
||||||
|
"# To make this a valid probability distribution, you need to divide\n",
|
||||||
|
"# By the z-increment (0.01)\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"pr_x1_x2 = np.zeros_like(x1_mesh)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"# Plot the result\n",
|
||||||
|
"plot_heatmap(x1_mesh, x2_mesh, pr_x1_x2, title=\"Data density $Pr(x1,x2)$\")\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "H0Ijce9VzeCO"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's draw some samples from the model."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "W264N7By_h9y"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def draw_samples(n_sample):\n",
|
||||||
|
" # TODO Write this routine to draw n_sample samples from the model\n",
|
||||||
|
" # First draw a random value of z from the prior (a standard normal distribution)\n",
|
||||||
|
" # Then draw a sample from Pr(x1,x2|z)\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" x1_samples=0; x2_samples = 0;\n",
|
||||||
|
"\n",
|
||||||
|
" return x1_samples, x2_samples"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Li3mK_I48k0k"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
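`draw_samples` above is an instance of ancestral sampling: draw $z$ from the prior, then $x_1,x_2$ from $Pr(x_1,x_2|z)$. A standalone sketch with a made-up mapping $f[z]=(z, z^2)$ and $\sigma=0.1$ (our assumptions, not the notebook's $\mathbf{f}$):

```python
import numpy as np

def ancestral_samples(n_sample, sigma=0.1, seed=1):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_sample)                    # z ~ Pr(z) = Norm[0, 1]
    x1 = z + sigma * rng.standard_normal(n_sample)       # x1 | z
    x2 = z ** 2 + sigma * rng.standard_normal(n_sample)  # x2 | z
    return x1, x2
```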
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's plot those samples on top of the heat map."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "D7N7oqLe-eJO"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"x1_samples, x2_samples = draw_samples(500)\n",
|
||||||
|
"# Plot the result\n",
|
||||||
|
"plot_heatmap(x1_mesh, x2_mesh, pr_x1_x2, x1_samples, x2_samples, title=\"Data density $Pr(x1,x2)$\")\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "XRmWv99B-BWO"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Return the posterior distribution\n",
|
||||||
|
"def get_posterior(x1,x2):\n",
|
||||||
|
" z = np.arange(-3,3, 0.01)\n",
|
||||||
|
" # TODO -- write this function\n",
|
||||||
|
" # Again, we can't integrate, but we can sum\n",
|
||||||
|
" # We don't know the constant in the denominator of equation 17.19, but we can just normalize\n",
|
||||||
|
" # by the sum of the numerators for all values of z\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" pr_z_given_x1_x2 = np.ones_like(z)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return z, pr_z_given_x1_x2"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "PwOjzPD5_1OF"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
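The computation `get_posterior` asks for is discretized Bayes' rule: posterior $\propto$ likelihood $\times$ prior, normalized by the sum over the grid of $z$ values. A generic sketch of that normalization step (function name ours):

```python
import numpy as np

def grid_posterior(likelihood, prior):
    # Unnormalized posterior on the grid, then normalize so it sums to one
    unnorm = likelihood * prior
    return unnorm / np.sum(unnorm)
```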
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"x1 = 0.9; x2 = -0.9\n",
|
||||||
|
"z, pr_z_given_x1_x2 = get_posterior(x1,x2)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(z, pr_z_given_x1_x2, 'r-')\n",
|
||||||
|
"ax.set_xlabel(\"Latent variable $z$\")\n",
|
||||||
|
"ax.set_ylabel(\"Posterior probability $Pr(z|x_{1},x_{2})$\")\n",
|
||||||
|
"ax.set_xlim([-3,3])\n",
|
||||||
|
"ax.set_ylim([0,1.5 * np.max(pr_z_given_x1_x2)])\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "PKFUY42K-Tp7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
423
Notebooks/Chap17/17_2_Reparameterization_Trick.ipynb
Normal file
@@ -0,0 +1,423 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyOxO2/0DTH4n4zhC97qbagY",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_2_Reparameterization_Trick.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 17.2: Reparameterization trick**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the reparameterization trick as described in section 17.7 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The reparameterization trick computes the derivative of an expectation of a function $\\mbox{f}[x]$:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\mbox{f}[x]\\bigr],\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"with respect to the parameters $\\boldsymbol\\phi$ of the distribution $Pr(x|\\boldsymbol\\phi)$ that the expectation is over.\n",
|
||||||
|
"\n",
|
||||||
|
"Let's consider a simple concrete example, where:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"Pr(x|\\phi) = \\mbox{Norm}_{x}\\Bigl[\\mu, \\sigma^2\\Bigr]=\\mbox{Norm}_{x}\\Bigl[\\phi^3,(\\exp[\\phi])^2\\Bigr]\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"and\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\mbox{f}[x] = x^2+\\sin[x]\n",
|
||||||
|
"\\end{equation}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "paLz5RukZP1J"
|
||||||
|
}
|
||||||
|
},
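The steps the next cell asks for amount to ordinary Monte Carlo integration. A compact sketch for this model ($\mu=\phi^3$, $\sigma=\exp[\phi]$, $f[x]=x^2+\sin[x]$), using an explicit seeded generator:

```python
import numpy as np

def mc_expectation(phi, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = phi ** 3, np.exp(phi)
    x = rng.normal(mu, sigma, n_samples)  # x_i ~ Pr(x|phi)
    return np.mean(x ** 2 + np.sin(x))    # average of f[x_i]
```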
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's approximate this expecctation for a particular value of phi\n",
|
||||||
|
"def compute_expectation(phi, n_samples):\n",
|
||||||
|
" # TODO complete this function\n",
|
||||||
|
" # 1. Compute the mean of the normal distribution, mu\n",
|
||||||
|
" # 2. Compute the standard deviation of the normal distribution, sigma\n",
|
||||||
|
" # 3. Draw n_samples samples using np.random.normal(mu, sigma, size=(n_samples, 1))\n",
|
||||||
|
" # 4. Compute f[x] for each of these samples\n",
|
||||||
|
" # 4. Approximate the expectation by taking the average of the values of f[x]\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" expected_f_given_phi = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return expected_f_given_phi"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FdEbMnDBY0i9"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the seed so the random numbers are all the same\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the expectation for two values of phi\n",
|
||||||
|
"phi1 = 0.5\n",
|
||||||
|
"n_samples = 10000000\n",
|
||||||
|
"expected_f_given_phi1 = compute_expectation(phi1, n_samples)\n",
|
||||||
|
"print(\"Your value: \", expected_f_given_phi1, \", True value: 2.7650801613563116\")\n",
|
||||||
|
"\n",
|
||||||
|
"phi2 = -0.1\n",
|
||||||
|
"n_samples = 10000000\n",
|
||||||
|
"expected_f_given_phi2 = compute_expectation(phi2, n_samples)\n",
|
||||||
|
"print(\"Your value: \", expected_f_given_phi2, \", True value: 0.8176793102849222\")"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FTh7LJ0llNJZ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Le't plot this expectation as a function of phi"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "r5Hl2QkimWx9"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"phi_vals = np.arange(-1.5,1.5, 0.05)\n",
|
||||||
|
"expected_vals = np.zeros_like(phi_vals)\n",
|
||||||
|
"n_samples = 1000000\n",
|
||||||
|
"for i in range(len(phi_vals)):\n",
|
||||||
|
" expected_vals[i] = compute_expectation(phi_vals[i], n_samples)\n",
|
||||||
|
"\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(phi_vals, expected_vals,'r-')\n",
|
||||||
|
"ax.set_xlabel('Parameter $\\phi$')\n",
|
||||||
|
"ax.set_ylabel('$\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "05XxVLJxmkER"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"It's this curve that we want to find the derivative of (so for example, we could run gradient descent and find the minimum.\n",
|
||||||
|
"\n",
|
||||||
|
"This is tricky though -- if you look at the computation that you performed, then there is a sampling step in the procedure (step 3). How do we compute the derivative of this?\n",
|
||||||
|
"\n",
|
||||||
|
"The answer is the reparameterization trick. We note that:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\mbox{Norm}_{x}\\Bigl[\\mu, \\sigma^2\\Bigr]=\\mbox{Norm}_{x}\\Bigl[0, 1\\Bigr] \\times \\sigma + \\mu\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"and so:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\mbox{Norm}_{x}\\Bigl[\\phi^3,(\\exp[\\phi])^2\\Bigr] = \\mbox{Norm}_{x}\\Bigl[0, 1\\Bigr] \\times \\exp[\\phi]+ \\phi^3\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"So, if we draw a sample $\\epsilon^*$ from $\\mbox{Norm}_{\\epsilon}[0, 1]$, then we can compute a sample $x^*$ as:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray*}\n",
|
||||||
|
"x^* &=& \\epsilon^* \\times \\sigma + \\mu \\\\\n",
|
||||||
|
"&=& \\epsilon^* \\times \\exp[\\phi]+ \\phi^3\n",
|
||||||
|
"\\end{eqnarray*}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "zTCykVeWqj_O"
|
||||||
|
}
|
||||||
|
},
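Putting the pieces together, the reparameterized gradient averages $(df/dx^*)(dx^*/d\phi)$ over samples $\epsilon^*$. A standalone sketch for this model, where $df/dx = 2x+\cos[x]$ and $dx^*/d\phi = \epsilon^*\exp[\phi] + 3\phi^2$:

```python
import numpy as np

def reparam_grad(phi, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)   # eps* ~ Norm[0, 1]
    x = eps * np.exp(phi) + phi ** 3       # x* = eps* * sigma + mu
    df_dx = 2.0 * x + np.cos(x)            # derivative of f[x] = x^2 + sin[x]
    dx_dphi = eps * np.exp(phi) + 3.0 * phi ** 2
    return np.mean(df_dx * dx_dphi)
```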
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_df_dx_star(x_star):\n",
|
||||||
|
" # TODO Compute this derivative (function defined at the top)\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" deriv = 0;\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return deriv\n",
|
||||||
|
"\n",
|
||||||
|
"def compute_dx_star_dphi(epsilon_star, phi):\n",
|
||||||
|
" # TODO Compute this derivative\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" deriv = 0;\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return deriv\n",
|
||||||
|
"\n",
|
||||||
|
"def compute_derivative_of_expectation(phi, n_samples):\n",
|
||||||
|
" # Generate the random values of epsilon\n",
|
||||||
|
" epsilon_star= np.random.normal(size=(n_samples,1))\n",
|
||||||
|
" # TODO -- write\n",
|
||||||
|
" # 1. Compute dx*/dphi using the function defined above\n",
|
||||||
|
" # 2. Compute x*\n",
|
||||||
|
" # 3. Compute df/dx* using the function you wrote above\n",
|
||||||
|
" # 4. Compute df/dphi = df/x* * dx*dphi\n",
|
||||||
|
" # 5. Average the samples of df/dphi to get the expectation.\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" df_dphi = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return df_dphi"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "w13HVpi9q8nF"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the seed so the random numbers are all the same\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the expectation for two values of phi\n",
|
||||||
|
"phi1 = 0.5\n",
|
||||||
|
"n_samples = 10000000\n",
|
||||||
|
"\n",
|
||||||
|
"deriv = compute_derivative_of_expectation(phi1, n_samples)\n",
|
||||||
|
"print(\"Your value: \", deriv, \", True value: 5.726338035051403\")"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ntQT4An79kAl"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"phi_vals = np.arange(-1.5,1.5, 0.05)\n",
|
||||||
|
"deriv_vals = np.zeros_like(phi_vals)\n",
|
||||||
|
"n_samples = 1000000\n",
|
||||||
|
"for i in range(len(phi_vals)):\n",
|
||||||
|
" deriv_vals[i] = compute_derivative_of_expectation(phi_vals[i], n_samples)\n",
|
||||||
|
"\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(phi_vals, deriv_vals,'r-')\n",
|
||||||
|
"ax.set_xlabel('Parameter $\\phi$')\n",
|
||||||
|
"ax.set_ylabel('$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t0Jqd_IN_lMU"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"This should look plausibly like the derivative of the function we plotted above!"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ASu4yKSwAEYI"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The reparameterization trick computes the derivative of an expectation of a function $\\mbox{f}[x]$:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\mbox{f}[x]\\bigr],\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"with respect to the parameters $\\boldsymbol\\phi$ of the distribution $Pr(x|\\boldsymbol\\phi)$ that the expectation is over. This derivative can also be computed as:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\mbox{f}[x]\\bigr] &=& \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\left[\\mbox{f}[x]\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\log\\bigl[ Pr(x|\\boldsymbol\\phi)\\bigr]\\right]\\nonumber \\\\\n",
|
||||||
|
"&\\approx & \\frac{1}{I}\\sum_{i=1}^{I}\\mbox{f}[x_i]\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\log\\bigl[ Pr(x_i|\\boldsymbol\\phi)\\bigr].\n",
|
||||||
|
"\\end{eqnarray}\n",
|
||||||
|
"\n",
|
||||||
|
"This method is known as the REINFORCE algorithm or score function estimator. Problem 17.5 asks you to prove this relation. Let's use this method to compute the gradient and compare.\n",
|
||||||
|
"\n",
|
||||||
|
"Recall that the expression for a univariate Gaussian is:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
" Pr(x|\\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^{2}}}\\exp\\left[-\\frac{(x-\\mu)^{2}}{2\\sigma^{2}}\\right].\n",
|
||||||
|
"\\end{equation}\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "xoFR1wifc8-b"
|
||||||
|
}
|
||||||
|
},
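For this model ($\mu=\phi^3$, $\sigma=\exp[\phi]$) the score works out to $\partial \log Pr(x|\phi)/\partial\phi = 3\phi^2(x-\mu)/\sigma^2 + (x-\mu)^2/\sigma^2 - 1$, since $d\sigma/d\phi = \sigma$. A sketch of the resulting score-function estimator (names ours):

```python
import numpy as np

def score_grad(phi, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = phi ** 3, np.exp(phi)
    x = rng.normal(mu, sigma, n_samples)
    f = x ** 2 + np.sin(x)
    # d/dphi log Norm[x; mu, sigma^2], using dmu/dphi = 3 phi^2, dsigma/dphi = sigma
    score = 3.0 * phi ** 2 * (x - mu) / sigma ** 2 + (x - mu) ** 2 / sigma ** 2 - 1.0
    return np.mean(f * score)
```

The estimator is unbiased for the same gradient as the reparameterization trick, but (as the variance comparison below shows) typically much noisier.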
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def d_log_pr_x_given_phi(x,phi):\n",
|
||||||
|
" # TODO -- fill in this function\n",
|
||||||
|
" # Compute the derivative of log[Pr(x|phi)]\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" deriv =0;\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return deriv\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"def compute_derivative_of_expectation_score_function(phi, n_samples):\n",
|
||||||
|
" # TODO -- Compute this function\n",
|
||||||
|
" # 1. Calculate mu from phi\n",
|
||||||
|
" # 2. Calculate sigma from phi\n",
|
||||||
|
" # 3. Generate n_sample random samples of x using np.random.normal\n",
|
||||||
|
" # 4. Calculate f[x] for all of the samples\n",
|
||||||
|
" # 5. Multiply f[x] by d_log_pr_x_given_phi\n",
|
||||||
|
" # 6. Take the average of the samples\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" deriv = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return deriv"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4TUaxiWvASla"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the seed so the random numbers are all the same\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the expectation for two values of phi\n",
|
||||||
|
"phi1 = 0.5\n",
|
||||||
|
"n_samples = 100000000\n",
|
||||||
|
"\n",
|
||||||
|
"deriv = compute_derivative_of_expectation_score_function(phi1, n_samples)\n",
|
||||||
|
"print(\"Your value: \", deriv, \", True value: 5.724609927313369\")"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "0RSN32Rna_C_"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"phi_vals = np.arange(-1.5,1.5, 0.05)\n",
|
||||||
|
"deriv_vals = np.zeros_like(phi_vals)\n",
|
||||||
|
"n_samples = 1000000\n",
|
||||||
|
"for i in range(len(phi_vals)):\n",
|
||||||
|
" deriv_vals[i] = compute_derivative_of_expectation_score_function(phi_vals[i], n_samples)\n",
|
||||||
|
"\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(phi_vals, deriv_vals,'r-')\n",
|
||||||
|
"ax.set_xlabel('Parameter $\\phi$')\n",
|
||||||
|
"ax.set_ylabel('$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EM_i5zoyElHR"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"This should look the same as the derivative that we computed with the reparameterization trick. So, is there any advantage to one way or the other? Let's compare the variances of the estimates\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "1TWBiUC7bQSw"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"n_estimate = 100\n",
|
||||||
|
"n_sample = 1000\n",
|
||||||
|
"phi = 0.3\n",
|
||||||
|
"reparam_estimates = np.zeros((n_estimate,1))\n",
|
||||||
|
"score_function_estimates = np.zeros((n_estimate,1))\n",
|
||||||
|
"for i in range(n_estimate):\n",
|
||||||
|
" reparam_estimates[i]= compute_derivative_of_expectation(phi, n_samples)\n",
|
||||||
|
" score_function_estimates[i] = compute_derivative_of_expectation_score_function(phi, n_samples)\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Variance of reparameterization estimator\", np.var(reparam_estimates))\n",
|
||||||
|
"print(\"Variance of score function estimator\", np.var(score_function_estimates))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "vV_Jx5bCbQGs"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"The variance of the reparameterization estimator should be quite a bit lower than the score function estimator which is why it is preferred in this situation."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "d-0tntSYdKPR"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
496
Notebooks/Chap17/17_3_Importance_Sampling.ipynb
Normal file
@@ -0,0 +1,496 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMvae+1cigwg2Htl6vt1Who",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_3_Importance_Sampling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 17.3: Importance sampling**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates importance sampling as described in section 17.8.1 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's approximate the expectation\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\mathbb{E}_{y}\\Bigl[\\exp\\bigl[- (y-1)^4\\bigr]\\Bigr] = \\int \\exp\\bigl[- (y-1)^4\\bigr] Pr(y) dy,\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"where\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"Pr(y)=\\mbox{Norm}_y[0,1]\n",
|
||||||
|
"\\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"by drawing $I$ samples $y_i$ and using the formula:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{equation}\n",
|
||||||
|
"\\mathbb{E}_{y}\\Bigl[\\exp\\bigl[- (y-1)^4\\bigr]\\Bigr] \\approx \\frac{1}{I} \\sum_{i=1}^I \\exp\\bigl[-(y-1)^4 \\bigr]\n",
|
||||||
|
"\\end{equation}"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "f7a6xqKjkmvT"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def f(y):\n",
|
||||||
|
" return np.exp(-(y-1) *(y-1) *(y-1) * (y-1))\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"def pr_y(y):\n",
|
||||||
|
" return (1/np.sqrt(2*np.pi)) * np.exp(-0.5 * y * y)\n",
|
||||||
|
"\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"y = np.arange(-10,10,0.01)\n",
|
||||||
|
"ax.plot(y, f(y), 'r-', label='f$[y]$');\n",
|
||||||
|
"ax.plot(y, pr_y(y),'b-',label='$Pr(y)$')\n",
|
||||||
|
"ax.set_xlabel(\"$y$\")\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "VjkzRr8o2ksg"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_expectation(n_samples):\n",
|
||||||
|
" # TODO -- compute this expectation\n",
|
||||||
|
" # 1. Generate samples y_i using np.random.normal\n",
|
||||||
|
" # 2. Approximate the expectation of f[y]\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" expectation = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return expectation"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "LGAKHjUJnWmy"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the seed so the random numbers are all the same\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the expectation with a very large number of samples (good estimate)\n",
|
||||||
|
"n_samples = 100000000\n",
|
||||||
|
"expected_f= compute_expectation(n_samples)\n",
|
||||||
|
"print(\"Your value: \", expected_f, \", True value: 0.43160702267383166\")"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "nmvixMqgodIP"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's investigate how the variance of this approximation decreases as we increase the number of samples $N$.\n",
|
||||||
|
"\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Jr4UPcqmnXCS"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_mean_variance(n_sample):\n",
|
||||||
|
" n_estimate = 10000\n",
|
||||||
|
" estimates = np.zeros((n_estimate,1))\n",
|
||||||
|
" for i in range(n_estimate):\n",
|
||||||
|
" estimates[i] = compute_expectation(n_sample.astype(int))\n",
|
||||||
|
" return np.mean(estimates), np.var(estimates)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "yrDp1ILUo08j"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Compute the mean and variance for 1,2,... 20 samples\n",
|
||||||
|
"n_sample_all = np.array([1.,2,3,4,5,6,7,8,9,10,15,20,25,30,45,50,60,70,80,90,100,150,200,250,300,350,400,450,500])\n",
|
||||||
|
"mean_all = np.zeros_like(n_sample_all)\n",
|
||||||
|
"variance_all = np.zeros_like(n_sample_all)\n",
|
||||||
|
"for i in range(len(n_sample_all)):\n",
|
||||||
|
" print(\"Computing mean and variance for expectation with %d samples\"%(n_sample_all[i]))\n",
|
||||||
|
" mean_all[i],variance_all[i] = compute_mean_variance(n_sample_all[i])"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "BcUVsodtqdey"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.semilogx(n_sample_all, mean_all,'r-',label='mean estimate')\n",
|
||||||
|
"ax.fill_between(n_sample_all, mean_all-2*np.sqrt(variance_all), mean_all+2*np.sqrt(variance_all))\n",
|
||||||
|
"ax.set_xlabel(\"Number of samples\")\n",
|
||||||
|
"ax.set_ylabel(\"Mean of estimate\")\n",
|
||||||
|
"ax.plot([0,500],[0.43160702267383166,0.43160702267383166],'k--',label='true value')\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "feXmyk0krpUi"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"As you might expect, the more samples that we use to compute the approximate estimate, the lower the variance of the estimate."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "XTUpxFlSuOl7"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
" Now consider the function\n",
|
||||||
|
" \\begin{equation}\n",
|
||||||
|
" \\mbox{f}[y]= 20.446\\exp\\left[-(y-3)^4\\right],\n",
|
||||||
|
" \\end{equation}\n",
|
||||||
|
"\n",
|
||||||
|
"which decreases rapidly as we move away from the position $y=4$."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "6hxsl3Pxo1TT"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def f2(y):\n",
|
||||||
|
"  return 20.446*np.exp(-(y-3)**4)\n",
|
||||||
|
"\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"y = np.arange(-10,10,0.01)\n",
|
||||||
|
"ax.plot(y, f2(y), 'r-', label='f$[y]$');\n",
|
||||||
|
"ax.plot(y, pr_y(y),'b-',label='$Pr(y)$')\n",
|
||||||
|
"ax.set_xlabel(\"$y$\")\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "znydVPW7sL4P"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's again compute the expectation:\n",
|
||||||
|
"\n",
|
||||||
|
"\\begin{eqnarray}\n",
|
||||||
|
"\\mathbb{E}_{y}\\left[\\mbox{f}[y]\\right] &=& \\int \\mbox{f}[y] Pr(y) dy\\\\\n",
|
||||||
|
"&\\approx& \\frac{1}{I}\\sum_{i=1}^{I} \\mbox{f}[y_{i}]\n",
|
||||||
|
"\\end{eqnarray}\n",
|
||||||
|
"\n",
|
||||||
|
"where $Pr(y)=\\mbox{Norm}_y[0,1]$, by approximating the integral with samples $y_{i}$ drawn from $Pr(y)$.\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "G9Xxo0OJsIqD"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_expectation2(n_samples):\n",
|
||||||
|
" y = np.random.normal(size=(n_samples,1))\n",
|
||||||
|
" expectation = np.mean(f2(y))\n",
|
||||||
|
"\n",
|
||||||
|
" return expectation"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "l8ZtmkA2vH4y"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the seed so the random numbers are all the same\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the expectation with a very large number of samples (good estimate)\n",
|
||||||
|
"n_samples = 100000000\n",
|
||||||
|
"expected_f2= compute_expectation2(n_samples)\n",
|
||||||
|
"print(\"Expected value: \", expected_f2)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "dfUQyJ-svZ6F"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"I deliberately chose this function because its expectation is roughly the same as that of the previous function.\n",
|
||||||
|
"\n",
|
||||||
|
"Again, let's look at the mean and the variance of the estimates."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "2sVDqP0BvxqM"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_mean_variance2(n_sample):\n",
|
||||||
|
" n_estimate = 10000\n",
|
||||||
|
" estimates = np.zeros((n_estimate,1))\n",
|
||||||
|
" for i in range(n_estimate):\n",
|
||||||
|
" estimates[i] = compute_expectation2(n_sample.astype(int))\n",
|
||||||
|
" return np.mean(estimates), np.var(estimates)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the variance for 1,2,... 20 samples\n",
|
||||||
|
"mean_all2 = np.zeros_like(n_sample_all)\n",
|
||||||
|
"variance_all2 = np.zeros_like(n_sample_all)\n",
|
||||||
|
"for i in range(len(n_sample_all)):\n",
|
||||||
|
" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
|
||||||
|
" mean_all2[i], variance_all2[i] = compute_mean_variance2(n_sample_all[i])"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "mHnILRkOv0Ir"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"fig,ax1 = plt.subplots()\n",
|
||||||
|
"ax1.semilogx(n_sample_all, mean_all,'r-',label='mean estimate')\n",
|
||||||
|
"ax1.fill_between(n_sample_all, mean_all-2*np.sqrt(variance_all), mean_all+2*np.sqrt(variance_all))\n",
|
||||||
|
"ax1.set_xlabel(\"Number of samples\")\n",
|
||||||
|
"ax1.set_ylabel(\"Mean of estimate\")\n",
|
||||||
|
"ax1.plot([1,500],[0.43160702267383166,0.43160702267383166],'k--',label='true value')\n",
|
||||||
|
"ax1.set_ylim(-5,6)\n",
|
||||||
|
"ax1.set_title(\"First function\")\n",
|
||||||
|
"ax1.legend()\n",
|
||||||
|
"\n",
|
||||||
|
"fig2,ax2 = plt.subplots()\n",
|
||||||
|
"ax2.semilogx(n_sample_all, mean_all2,'r-',label='mean estimate')\n",
|
||||||
|
"ax2.fill_between(n_sample_all, mean_all2-2*np.sqrt(variance_all2), mean_all2+2*np.sqrt(variance_all2))\n",
|
||||||
|
"ax2.set_xlabel(\"Number of samples\")\n",
|
||||||
|
"ax2.set_ylabel(\"Mean of estimate\")\n",
|
||||||
|
"ax2.plot([1,500],[0.43160428638892556,0.43160428638892556],'k--',label='true value')\n",
|
||||||
|
"ax2.set_ylim(-5,6)\n",
|
||||||
|
"ax2.set_title(\"Second function\")\n",
|
||||||
|
"ax2.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FkCX-hxxAnsw"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can see that the variance of the estimate for the second function is considerably worse than that for the first function.\n",
|
||||||
|
"\n",
|
||||||
|
"TODO: Think about why this is."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "EtBP6NeLwZqz"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
" Now let's repeat this experiment with the second function, but this time use importance sampling with the auxiliary distribution:\n",
|
||||||
|
"\n",
|
||||||
|
" \\begin{equation}\n",
|
||||||
|
" q(y)=\\mbox{Norm}_{y}[3,1]\n",
|
||||||
|
" \\end{equation}\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_wuF-NoQu1--"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def q_y(y):\n",
|
||||||
|
" return (1/np.sqrt(2*np.pi)) * np.exp(-0.5 * (y-3) * (y-3))\n",
|
||||||
|
"\n",
|
||||||
|
"def compute_expectation2b(n_samples):\n",
|
||||||
|
" # TODO -- complete this function\n",
|
||||||
|
" # 1. Draw n_samples from auxiliary distribution\n",
|
||||||
|
" # 2. Compute f[y] for those samples\n",
|
||||||
|
" # 3. Scale the results by pr_y / q_y\n",
|
||||||
|
" # 4. Compute the mean of these weighted samples\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" expectation = 0\n",
|
||||||
|
"\n",
|
||||||
|
" return expectation"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "jPm0AVYVIDnn"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
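For reference, here is one way the importance-sampling TODO could be completed. This is a sketch, not the official solution; `f2`, `pr_y`, and `q_y` are redefined to match the notebook's definitions so the snippet runs standalone:

```python
import numpy as np

def f2(y):
    # f[y] = 20.446 * exp(-(y-3)^4), as defined earlier in the notebook
    return 20.446 * np.exp(-(y - 3.0) ** 4)

def pr_y(y):
    # Target density Pr(y) = Norm_y[0,1]
    return np.exp(-0.5 * y * y) / np.sqrt(2.0 * np.pi)

def q_y(y):
    # Auxiliary density q(y) = Norm_y[3,1]
    return np.exp(-0.5 * (y - 3.0) ** 2) / np.sqrt(2.0 * np.pi)

def compute_expectation2b_sketch(n_samples):
    # 1. Draw n_samples from the auxiliary distribution q(y)
    y = np.random.normal(loc=3.0, scale=1.0, size=n_samples)
    # 2./3. Compute f[y] and scale each sample by the importance ratio Pr(y)/q(y)
    weighted = f2(y) * pr_y(y) / q_y(y)
    # 4. The estimate is the mean of the weighted samples
    return np.mean(weighted)
```

Because q(y) concentrates samples where f[y] is large, the weighted average converges with far fewer samples than sampling from Pr(y) directly.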
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set the seed so the random numbers are all the same\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the expectation with a very large number of samples (good estimate)\n",
|
||||||
|
"n_samples = 100000000\n",
|
||||||
|
"expected_f2= compute_expectation2b(n_samples)\n",
|
||||||
|
"print(\"Your value: \", expected_f2,\", True value: 0.43163734204459125 \")"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "No2ByVvOM2yQ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def compute_mean_variance2b(n_sample):\n",
|
||||||
|
" n_estimate = 10000\n",
|
||||||
|
" estimates = np.zeros((n_estimate,1))\n",
|
||||||
|
" for i in range(n_estimate):\n",
|
||||||
|
" estimates[i] = compute_expectation2b(n_sample.astype(int))\n",
|
||||||
|
" return np.mean(estimates), np.var(estimates)\n",
|
||||||
|
"\n",
|
||||||
|
"# Compute the variance for 1,2,... 20 samples\n",
|
||||||
|
"mean_all2b = np.zeros_like(n_sample_all)\n",
|
||||||
|
"variance_all2b = np.zeros_like(n_sample_all)\n",
|
||||||
|
"for i in range(len(n_sample_all)):\n",
|
||||||
|
" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
|
||||||
|
" mean_all2b[i], variance_all2b[i] = compute_mean_variance2b(n_sample_all[i])"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "6v8Jc7z4M3Mk"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"fig,ax1 = plt.subplots()\n",
|
||||||
|
"ax1.semilogx(n_sample_all, mean_all,'r-',label='mean estimate')\n",
|
||||||
|
"ax1.fill_between(n_sample_all, mean_all-2*np.sqrt(variance_all), mean_all+2*np.sqrt(variance_all))\n",
|
||||||
|
"ax1.set_xlabel(\"Number of samples\")\n",
|
||||||
|
"ax1.set_ylabel(\"Mean of estimate\")\n",
|
||||||
|
"ax1.plot([1,500],[0.43160702267383166,0.43160702267383166],'k--',label='true value')\n",
|
||||||
|
"ax1.set_ylim(-5,6)\n",
|
||||||
|
"ax1.set_title(\"First function\")\n",
|
||||||
|
"ax1.legend()\n",
|
||||||
|
"\n",
|
||||||
|
"fig2,ax2 = plt.subplots()\n",
|
||||||
|
"ax2.semilogx(n_sample_all, mean_all2,'r-',label='mean estimate')\n",
|
||||||
|
"ax2.fill_between(n_sample_all, mean_all2-2*np.sqrt(variance_all2), mean_all2+2*np.sqrt(variance_all2))\n",
|
||||||
|
"ax2.set_xlabel(\"Number of samples\")\n",
|
||||||
|
"ax2.set_ylabel(\"Mean of estimate\")\n",
|
||||||
|
"ax2.plot([1,500],[0.43160428638892556,0.43160428638892556],'k--',label='true value')\n",
|
||||||
|
"ax2.set_ylim(-5,6)\n",
|
||||||
|
"ax2.set_title(\"Second function\")\n",
|
||||||
|
"ax2.legend()\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"fig2,ax2 = plt.subplots()\n",
|
||||||
|
"ax2.semilogx(n_sample_all, mean_all2b,'r-',label='mean estimate')\n",
|
||||||
|
"ax2.fill_between(n_sample_all, mean_all2b-2*np.sqrt(variance_all2b), mean_all2b+2*np.sqrt(variance_all2b))\n",
|
||||||
|
"ax2.set_xlabel(\"Number of samples\")\n",
|
||||||
|
"ax2.set_ylabel(\"Mean of estimate\")\n",
|
||||||
|
"ax2.plot([1,500],[ 0.43163734204459125, 0.43163734204459125],'k--',label='true value')\n",
|
||||||
|
"ax2.set_ylim(-5,6)\n",
|
||||||
|
"ax2.set_title(\"Second function with importance sampling\")\n",
|
||||||
|
"ax2.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "C0beD4sNNM3L"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can see that the importance sampling technique has reduced the variance of the estimate for any given number of samples."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "y8rgge9MNiOc"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
471
Notebooks/Chap18/18_1_Diffusion_Encoder.ipynb
Normal file
@@ -0,0 +1,471 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyMpC8kgLnXx0XQBtwNAQ4jJ",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_1_Diffusion_Encoder.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 18.1: Diffusion Encoder**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the diffusion encoder as described in section 18.2 of the book.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"from matplotlib.colors import ListedColormap\n",
|
||||||
|
"from operator import itemgetter"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"#Create pretty colormap as in book\n",
|
||||||
|
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
|
||||||
|
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
|
||||||
|
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
|
||||||
|
"g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
|
||||||
|
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
|
||||||
|
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
|
||||||
|
"my_colormap = ListedColormap(my_colormap_vals)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4PM8bf6lO0VE"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Probability distribution for normal\n",
|
||||||
|
"def norm_pdf(x, mu, sigma):\n",
|
||||||
|
" return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ONGRaQscfIOo"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# True distribution is a mixture of four Gaussians\n",
|
||||||
|
"class TrueDataDistribution:\n",
|
||||||
|
" # Constructor initializes parameters\n",
|
||||||
|
" def __init__(self):\n",
|
||||||
|
" self.mu = [1.5, -0.216, 0.45, -1.875]\n",
|
||||||
|
" self.sigma = [0.3, 0.15, 0.525, 0.075]\n",
|
||||||
|
" self.w = [0.2, 0.3, 0.35, 0.15]\n",
|
||||||
|
"\n",
|
||||||
|
" # Return PDF\n",
|
||||||
|
" def pdf(self, x):\n",
|
||||||
|
" return(self.w[0] *norm_pdf(x,self.mu[0],self.sigma[0]) + self.w[1] *norm_pdf(x,self.mu[1],self.sigma[1]) + self.w[2] *norm_pdf(x,self.mu[2],self.sigma[2]) + self.w[3] *norm_pdf(x,self.mu[3],self.sigma[3]))\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw samples\n",
|
||||||
|
" def sample(self, n):\n",
|
||||||
|
" hidden = np.random.choice(4, n, p=self.w)\n",
|
||||||
|
" epsilon = np.random.normal(size=(n))\n",
|
||||||
|
" mu_list = list(itemgetter(*hidden)(self.mu))\n",
|
||||||
|
" sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
|
||||||
|
" return mu_list + sigma_list * epsilon"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "gZvG0MKhfY8Y"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define ground truth probability distribution that we will model\n",
|
||||||
|
"true_dist = TrueDataDistribution()\n",
|
||||||
|
"# Let's visualize this\n",
|
||||||
|
"x_vals = np.arange(-3,3,0.01)\n",
|
||||||
|
"pr_x_true = true_dist.pdf(x_vals)\n",
|
||||||
|
"fig,ax = plt.subplots()\n",
|
||||||
|
"ax.plot(x_vals, pr_x_true, 'r-')\n",
|
||||||
|
"ax.set_xlabel(\"$x$\")\n",
|
||||||
|
"ax.set_ylabel(\"$Pr(x)$\")\n",
|
||||||
|
"ax.set_ylim(0,1.0)\n",
|
||||||
|
"ax.set_xlim(-3,3)\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qXmej3TUuQyp"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's first implement the forward process"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "XHdtfRP47YLy"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Do one step of diffusion (equation 18.1)\n",
|
||||||
|
"def diffuse_one_step(z_t_minus_1, beta_t):\n",
|
||||||
|
" # TODO -- Implement this function\n",
|
||||||
|
" # Use np.random.normal to generate the samples epsilon\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" z_t = np.zeros_like(z_t_minus_1)\n",
|
||||||
|
"\n",
|
||||||
|
" return z_t"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "hkApJ2VJlQuk"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
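One possible completion of the forward step, as a sketch: it assumes equation 18.1 takes the standard form $z_t = \sqrt{1-\beta_t}\,z_{t-1} + \sqrt{\beta_t}\,\epsilon$ with $\epsilon \sim \mbox{Norm}[0, I]$:

```python
import numpy as np

def diffuse_one_step_sketch(z_t_minus_1, beta_t):
    # z_t = sqrt(1 - beta_t) * z_{t-1} + sqrt(beta_t) * epsilon
    epsilon = np.random.normal(size=z_t_minus_1.shape)
    return np.sqrt(1.0 - beta_t) * z_t_minus_1 + np.sqrt(beta_t) * epsilon
```

Note that attenuating the signal by $\sqrt{1-\beta_t}$ while adding noise with variance $\beta_t$ keeps the marginal variance at one if the input already has unit variance.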
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's run the diffusion process for a whole bunch of samples"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ECAUfHNi9NVW"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Generate some samples\n",
|
||||||
|
"n_sample = 10000\n",
|
||||||
|
"np.random.seed(6)\n",
|
||||||
|
"x = true_dist.sample(n_sample)\n",
|
||||||
|
"\n",
|
||||||
|
"# Number of time steps\n",
|
||||||
|
"T = 100\n",
|
||||||
|
"# Noise schedule has same value at every time step\n",
|
||||||
|
"beta = 0.01511\n",
|
||||||
|
"\n",
|
||||||
|
"# We'll store the diffused samples in an array\n",
|
||||||
|
"samples = np.zeros((T+1, n_sample))\n",
|
||||||
|
"samples[0,:] = x\n",
|
||||||
|
"\n",
|
||||||
|
"for t in range(T):\n",
|
||||||
|
" samples[t+1,:] = diffuse_one_step(samples[t,:], beta)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "M-TY5w9Q8LYW"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Let's plot the evolution of a few paths as in figure 18.2."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "jYrAW6tN-gJ4"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"t_vals = np.arange(0,101,1)\n",
|
||||||
|
"ax.plot(samples[:,0],t_vals,'r-')\n",
|
||||||
|
"ax.plot(samples[:,1],t_vals,'g-')\n",
|
||||||
|
"ax.plot(samples[:,2],t_vals,'b-')\n",
|
||||||
|
"ax.plot(samples[:,3],t_vals,'c-')\n",
|
||||||
|
"ax.plot(samples[:,4],t_vals,'m-')\n",
|
||||||
|
"ax.set_xlim([-3,3])\n",
|
||||||
|
"ax.set_ylim([101, 0])\n",
|
||||||
|
"ax.set_xlabel('$z_{t}$')\n",
|
||||||
|
"ax.set_ylabel('time step, $t$')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4XU6CDZC_kFo"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Notice that the samples have a tendency to move toward the center. Now let's look at the histogram of the samples at each stage."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "SGTYGGevAktz"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def draw_hist(z_t,title=''):\n",
|
||||||
|
" fig, ax = plt.subplots()\n",
|
||||||
|
" fig.set_size_inches(8,2.5)\n",
|
||||||
|
" plt.hist(z_t , bins=np.arange(-3,3, 0.1), density = True)\n",
|
||||||
|
" ax.set_xlim([-3,3])\n",
|
||||||
|
" ax.set_ylim([0,1.0])\n",
|
||||||
|
"  ax.set_title(title)\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "bn5E5NzL-evM"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"draw_hist(samples[0,:],'Original data')\n",
|
||||||
|
"draw_hist(samples[5,:],'Time step 5')\n",
|
||||||
|
"draw_hist(samples[10,:],'Time step 10')\n",
|
||||||
|
"draw_hist(samples[20,:],'Time step 20')\n",
|
||||||
|
"draw_hist(samples[40,:],'Time step 40')\n",
|
||||||
|
"draw_hist(samples[80,:],'Time step 80')\n",
|
||||||
|
"draw_hist(samples[100,:],'Time step 100')"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "pn_XD-EhBlwk"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can clearly see that as the diffusion process continues, the data becomes more Gaussian."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "skuLfGl5Czf4"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's investigate the diffusion kernel as in figure 18.3 of the book.\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "s37CBSzzK7wh"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def diffusion_kernel(x, t, beta):\n",
|
||||||
|
" # TODO -- write this function\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" dk_mean = 0.0 ; dk_std = 1.0\n",
|
||||||
|
"\n",
|
||||||
|
" return dk_mean, dk_std"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "vL62Iym0LEtY"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
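As a sketch of the diffusion-kernel TODO: for a constant noise schedule, the accumulated signal coefficient is $\alpha_t = (1-\beta)^t$, giving a closed-form kernel $q(z_t|x) = \mbox{Norm}_{z_t}[\sqrt{\alpha_t}\,x,\; 1-\alpha_t]$:

```python
import numpy as np

def diffusion_kernel_sketch(x, t, beta):
    # With beta constant at every step, alpha_t = (1 - beta)^t
    alpha_t = (1.0 - beta) ** t
    dk_mean = np.sqrt(alpha_t) * x     # mean shrinks toward zero as t grows
    dk_std = np.sqrt(1.0 - alpha_t)    # std grows toward one as t grows
    return dk_mean, dk_std
```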
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def draw_prob_dist(x_plot_vals, prob_dist, title=''):\n",
|
||||||
|
" fig, ax = plt.subplots()\n",
|
||||||
|
" fig.set_size_inches(8,2.5)\n",
|
||||||
|
" ax.plot(x_plot_vals, prob_dist, 'b-')\n",
|
||||||
|
" ax.set_xlim([-3,3])\n",
|
||||||
|
" ax.set_ylim([0,1.0])\n",
|
||||||
|
" ax.set_title(title)\n",
|
||||||
|
" plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"def compute_and_plot_diffusion_kernels(x, T, beta, my_colormap):\n",
|
||||||
|
" x_plot_vals = np.arange(-3,3,0.01)\n",
|
||||||
|
" diffusion_kernels = np.zeros((T+1,len(x_plot_vals)))\n",
|
||||||
|
" dk_mean_all = np.ones((T+1,1))*x\n",
|
||||||
|
" dk_std_all = np.zeros((T+1,1))\n",
|
||||||
|
" for t in range(T):\n",
|
||||||
|
" dk_mean_all[t+1], dk_std_all[t+1] = diffusion_kernel(x,t+1,beta)\n",
|
||||||
|
" diffusion_kernels[t+1,:] = norm_pdf(x_plot_vals, dk_mean_all[t+1], dk_std_all[t+1])\n",
|
||||||
|
"\n",
|
||||||
|
" samples = np.ones((T+1, 5))\n",
|
||||||
|
" samples[0,:] = x\n",
|
||||||
|
"\n",
|
||||||
|
" for t in range(T):\n",
|
||||||
|
" samples[t+1,:] = diffuse_one_step(samples[t,:], beta)\n",
|
||||||
|
"\n",
|
||||||
|
" fig, ax = plt.subplots()\n",
|
||||||
|
" fig.set_size_inches(6,6)\n",
|
||||||
|
"\n",
|
||||||
|
" # Plot the image containing all the kernels\n",
|
||||||
|
" plt.imshow(diffusion_kernels, cmap=my_colormap, interpolation='nearest')\n",
|
||||||
|
"\n",
|
||||||
|
" # Plot +/- 2 standard deviations\n",
|
||||||
|
" ax.plot((dk_mean_all -2 * dk_std_all)/0.01 + len(x_plot_vals)/2, np.arange(0,101,1),'y--')\n",
|
||||||
|
" ax.plot((dk_mean_all +2 * dk_std_all)/0.01 + len(x_plot_vals)/2, np.arange(0,101,1),'y--')\n",
|
||||||
|
"\n",
|
||||||
|
" # Plot a few trajectories\n",
|
||||||
|
"  ax.plot(samples[:,0]/0.01 + len(x_plot_vals)/2, np.arange(0,101,1), 'r-')\n",
|
||||||
|
"  ax.plot(samples[:,1]/0.01 + len(x_plot_vals)/2, np.arange(0,101,1), 'g-')\n",
|
||||||
|
"  ax.plot(samples[:,2]/0.01 + len(x_plot_vals)/2, np.arange(0,101,1), 'b-')\n",
|
||||||
|
"  ax.plot(samples[:,3]/0.01 + len(x_plot_vals)/2, np.arange(0,101,1), 'c-')\n",
|
||||||
|
"  ax.plot(samples[:,4]/0.01 + len(x_plot_vals)/2, np.arange(0,101,1), 'm-')\n",
|
||||||
|
"\n",
|
||||||
|
" # Tidy up and plot\n",
|
||||||
|
" ax.set_ylabel(\"$Pr(z_{t}|x)$\")\n",
|
||||||
|
" ax.get_xaxis().set_visible(False)\n",
|
||||||
|
" ax.set_xlim([0,601])\n",
|
||||||
|
" ax.set_aspect(601/T)\n",
|
||||||
|
" plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" draw_prob_dist(x_plot_vals, diffusion_kernels[20,:],'$q(z_{20}|x)$')\n",
|
||||||
|
" draw_prob_dist(x_plot_vals, diffusion_kernels[40,:],'$q(z_{40}|x)$')\n",
|
||||||
|
" draw_prob_dist(x_plot_vals, diffusion_kernels[80,:],'$q(z_{80}|x)$')"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "KtP1KF8wMh8o"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"x = -2\n",
|
||||||
|
"compute_and_plot_diffusion_kernels(x, T, beta, my_colormap)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "g8TcI5wtRQsx"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"TODO -- Run this for different values of $x$ and check that you understand how the graphs change."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "-RuN2lR28-hK"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Finally, let's estimate the marginal distributions empirically and visualize them as in figure 18.4 of the book. This is only tractable because the data is in one dimension and we know the original distribution.\n",
|
||||||
|
"\n",
|
||||||
|
"The marginal distribution at time t is the sum of the diffusion kernels for each position x, weighted by the probability of seeing that value of x in the true distribution."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "n-x6Whz2J_zy"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def diffusion_marginal(x_plot_vals, pr_x_true, t, beta):\n",
|
||||||
|
" # If time is zero then marginal is just original distribution\n",
|
||||||
|
" if t == 0:\n",
|
||||||
|
" return pr_x_true\n",
|
||||||
|
"\n",
|
||||||
|
" # The thing we are computing\n",
|
||||||
|
" marginal_at_time_t = np.zeros_like(pr_x_true);\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"  # TODO Write this function\n",
|
||||||
|
" # 1. For each x (value in x_plot_vals):\n",
|
||||||
|
" # 2. Compute the mean and variance of the diffusion kernel at time t\n",
|
||||||
|
" # 3. Compute pdf of this Gaussian at every x_plot_val\n",
|
||||||
|
"  # 4. Weight the Gaussian by the probability of seeing this value of x under the true distribution\n",
|
||||||
|
" # 5. Accumulate weighted Gaussian in marginal at time t.\n",
|
||||||
|
" # 6. Multiply result by 0.01 to compensate for bin size\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" marginal_at_time_t = marginal_at_time_t\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return marginal_at_time_t"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YzN5duYpg7C-"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
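A sketch of how the marginal TODO could be completed, as a discrete Riemann sum over the diffusion kernels with a single bin-width factor of 0.01. The kernel formula assumed here matches the constant-schedule form above ($\alpha_t = (1-\beta)^t$), and `norm_pdf` is repeated so the snippet is self-contained:

```python
import numpy as np

def norm_pdf(x, mu, sigma):
    # Gaussian density, as defined earlier in the notebook
    return np.exp(-0.5 * (x - mu) ** 2 / sigma ** 2) / np.sqrt(2 * np.pi * sigma ** 2)

def diffusion_marginal_sketch(x_plot_vals, pr_x_true, t, beta, bin_width=0.01):
    # If time is zero then the marginal is just the original distribution
    if t == 0:
        return pr_x_true
    alpha_t = (1.0 - beta) ** t
    marginal = np.zeros_like(pr_x_true)
    for i, x in enumerate(x_plot_vals):
        # Diffusion kernel q(z_t|x) for this starting position x
        dk_mean = np.sqrt(alpha_t) * x
        dk_std = np.sqrt(1.0 - alpha_t)
        # Accumulate the kernel weighted by the probability of x
        marginal += pr_x_true[i] * norm_pdf(x_plot_vals, dk_mean, dk_std)
    # One factor of the bin width turns the sum over x into an integral
    return marginal * bin_width
```

A quick sanity check: diffusing a standard normal leaves it a standard normal, since $\alpha_t \cdot 1 + (1-\alpha_t) = 1$, so the computed marginal should still look like $\mbox{Norm}[0,1]$ at every $t$.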
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"x_plot_vals = np.arange(-3,3,0.01)\n",
|
||||||
|
"marginal_distributions = np.zeros((T+1,len(x_plot_vals)))\n",
|
||||||
|
"\n",
|
||||||
|
"for t in range(T+1):\n",
|
||||||
|
" marginal_distributions[t,:] = diffusion_marginal(x_plot_vals, pr_x_true , t, beta)\n",
|
||||||
|
"\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"fig.set_size_inches(6,6)\n",
|
||||||
|
"\n",
|
||||||
|
"# Plot the image containing all the marginal distributions\n",
|
||||||
|
"plt.imshow(marginal_distributions, cmap=my_colormap, interpolation='nearest')\n",
|
||||||
|
"\n",
|
||||||
|
"# Tidy up and plot\n",
|
||||||
|
"ax.set_ylabel(\"$Pr(z_{t})$\")\n",
|
||||||
|
"ax.get_xaxis().set_visible(False)\n",
|
||||||
|
"ax.set_xlim([0,601])\n",
|
||||||
|
"ax.set_aspect(601/T)\n",
|
||||||
|
"plt.show()\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"draw_prob_dist(x_plot_vals, marginal_distributions[0,:],'$q(z_{0})$')\n",
|
||||||
|
"draw_prob_dist(x_plot_vals, marginal_distributions[20,:],'$q(z_{20})$')\n",
|
||||||
|
"draw_prob_dist(x_plot_vals, marginal_distributions[60,:],'$q(z_{60})$')"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OgEU9sxjRaeO"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
380
Notebooks/Chap18/18_2_1D_Diffusion_Model.ipynb
Normal file
@@ -0,0 +1,380 @@
|
|||||||
|
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyM4DdZDGoP1xGst+Nn+rwvt",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_2_1D_Diffusion_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 18.2: 1D Diffusion Model**\n",
"\n",
"This notebook investigates the diffusion encoder as described in sections 18.3 and 18.4 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.colors import ListedColormap\n",
"from operator import itemgetter\n",
"from scipy import stats\n",
"from IPython.display import display, clear_output"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#Create pretty colormap as in book\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
"g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
"my_colormap = ListedColormap(my_colormap_vals)"
],
"metadata": {
"id": "4PM8bf6lO0VE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Probability distribution for normal\n",
"def norm_pdf(x, mu, sigma):\n",
" return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
],
"metadata": {
"id": "ONGRaQscfIOo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# True distribution is a mixture of four Gaussians\n",
"class TrueDataDistribution:\n",
" # Constructor initializes parameters\n",
" def __init__(self):\n",
" self.mu = [1.5, -0.216, 0.45, -1.875]\n",
" self.sigma = [0.3, 0.15, 0.525, 0.075]\n",
" self.w = [0.2, 0.3, 0.35, 0.15]\n",
"\n",
" # Return PDF\n",
" def pdf(self, x):\n",
" return(self.w[0] *norm_pdf(x,self.mu[0],self.sigma[0]) + self.w[1] *norm_pdf(x,self.mu[1],self.sigma[1]) + self.w[2] *norm_pdf(x,self.mu[2],self.sigma[2]) + self.w[3] *norm_pdf(x,self.mu[3],self.sigma[3]))\n",
"\n",
" # Draw samples\n",
" def sample(self, n):\n",
" hidden = np.random.choice(4, n, p=self.w)\n",
" epsilon = np.random.normal(size=(n))\n",
" mu_list = list(itemgetter(*hidden)(self.mu))\n",
" sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
" return mu_list + sigma_list * epsilon"
],
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define ground truth probability distribution that we will model\n",
"true_dist = TrueDataDistribution()\n",
"# Let's visualize this\n",
"x_vals = np.arange(-3,3,0.01)\n",
"pr_x_true = true_dist.pdf(x_vals)\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.plot(x_vals, pr_x_true, 'r-')\n",
"ax.set_xlabel(\"$x$\")\n",
"ax.set_ylabel(\"$Pr(x)$\")\n",
"ax.set_ylim(0,1.0)\n",
"ax.set_xlim(-3,3)\n",
"plt.show()"
],
"metadata": {
"id": "iJu_uBiaeUVv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"To train the model to describe this distribution, we'll need to generate pairs of samples drawn from $Pr(z_t|x)$ (diffusion kernel) and $q(z_{t-1}|z_{t},x)$ (equation 18.15).\n",
"\n"
],
"metadata": {
"id": "DRHUG_41i4t_"
}
},
{
"cell_type": "code",
"source": [
"# The diffusion kernel returns the parameters of Pr(z_{t}|x)\n",
"def diffusion_kernel(x, t, beta):\n",
" alpha = np.power(1-beta,t)\n",
" dk_mean = x * np.sqrt(alpha)\n",
" dk_std = np.sqrt(1-alpha)\n",
" return dk_mean, dk_std\n",
"\n",
"# Compute mean and variance q(z_{t-1}|z_{t},x)\n",
"def conditional_diffusion_distribution(x,z_t,t,beta):\n",
" # TODO -- Implement this function\n",
" # Replace this line\n",
" cd_mean = 0; cd_std = 1\n",
"\n",
" return cd_mean, cd_std\n",
"\n",
"def get_data_pairs(x_train,t,beta):\n",
" # Find diffusion kernel for every x_train and draw samples\n",
" dk_mean, dk_std = diffusion_kernel(x_train, t, beta)\n",
" z_t = np.random.normal(size=x_train.shape) * dk_std + dk_mean\n",
" # Find conditional diffusion distribution for each x_train, z pair and draw samples\n",
" cd_mean, cd_std = conditional_diffusion_distribution(x_train,z_t,t,beta)\n",
" if t == 1:\n",
" z_tminus1 = x_train\n",
" else:\n",
" z_tminus1 = np.random.normal(size=x_train.shape) * cd_std + cd_mean\n",
"\n",
" return z_t, z_tminus1"
],
"metadata": {
"id": "x6B8t72Ukscd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We also need models $\\mbox{f}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the mean of the distribution over $z_{t-1}$. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
],
"metadata": {
"id": "aSG_4uA8_zZ-"
}
},
{
"cell_type": "code",
"source": [
"# This code is really ugly! Don't look too closely at it!\n",
"# All you need to know is that it is a model that trains from pairs zt, zt_minus1\n",
"# And can then predict zt_minus1\n",
"class NonParametricModel():\n",
" # Constructor initializes parameters\n",
" def __init__(self):\n",
"\n",
" self.inc = 0.01\n",
" self.max_val = 3.0\n",
" self.model = []\n",
"\n",
" # Learns a model that predicts z_t_minus1 given z_t\n",
" def train(self, zt, zt_minus1):\n",
" zt = np.clip(zt,-self.max_val,self.max_val)\n",
" zt_minus1 = np.clip(zt_minus1,-self.max_val,self.max_val)\n",
" bins = np.arange(-self.max_val,self.max_val+self.inc,self.inc)\n",
" numerator, *_ = stats.binned_statistic(zt, zt_minus1-zt, statistic='sum',bins=bins)\n",
" denominator, *_ = stats.binned_statistic(zt, zt_minus1-zt, statistic='count',bins=bins)\n",
" self.model = numerator / (denominator + 1)\n",
"\n",
" def predict(self, zt):\n",
" bin_index = np.floor((zt+self.max_val)/self.inc)\n",
" bin_index = np.clip(bin_index,0, len(self.model)-1).astype('uint32')\n",
" return zt + self.model[bin_index]"
],
"metadata": {
"id": "ZHViC0pL_yy5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Sample data from distribution (this would usually be our collected training set)\n",
"n_sample = 100000\n",
"x_train = true_dist.sample(n_sample)\n",
"\n",
"# Define model parameters\n",
"T = 100\n",
"beta = 0.01511\n",
"\n",
"all_models = []\n",
"for t in range(0,T):\n",
" clear_output(wait=True)\n",
" display(\"Training timestep %d\"%(t))\n",
" zt,zt_minus1 = get_data_pairs(x_train,t+1,beta)\n",
" all_models.append(NonParametricModel())\n",
" # The model at index t maps data from z_{t+1} to z_{t}\n",
" all_models[t].train(zt,zt_minus1)"
],
"metadata": {
"id": "CzVFybWoBygu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$, and so on, until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories. See equation 18.16."
],
"metadata": {
"id": "ZPc9SEvtl14U"
}
},
{
"cell_type": "code",
"source": [
"def sample(model, T, sigma_t, n_samples):\n",
" # Create the output array\n",
" # Each row represents a time step, first row will be sampled data\n",
" # Each column represents a different sample\n",
" samples = np.zeros((T+1,n_samples))\n",
"\n",
" # TODO -- Initialize the samples z_{T} at samples[T,:] from standard normal distribution\n",
" # Replace this line\n",
" samples[T,:] = np.zeros((1,n_samples))\n",
"\n",
"\n",
" # For t=100...99..98... ...0\n",
" for t in range(T,0,-1):\n",
" clear_output(wait=True)\n",
" display(\"Predicting z_{%d} from z_{%d}\"%(t-1,t))\n",
" # TODO Predict samples[t-1,:] from samples[t,:] using the appropriate model\n",
" # Replace this line:\n",
" samples[t-1,:] = np.zeros((1,n_samples))\n",
"\n",
"\n",
" # If not the last time step\n",
" if t>0:\n",
" # TODO Add noise to the samples at z_t-1 we just generated with mean zero, standard deviation sigma_t\n",
" # Replace this line\n",
" samples[t-1,:] = samples[t-1,:]\n",
"\n",
" return samples"
],
"metadata": {
"id": "A-ZMFOvACIOw"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's run the diffusion process for a whole bunch of samples."
],
"metadata": {
"id": "ECAUfHNi9NVW"
}
},
{
"cell_type": "code",
"source": [
"sigma_t = 0.12288\n",
"n_samples = 100000\n",
"samples = sample(all_models, T, sigma_t, n_samples)\n",
"\n",
"\n",
"# Plot the data\n",
"sampled_data = samples[0,:]\n",
"bins = np.arange(-3,3.05,0.05)\n",
"\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.set_xlim([-3,3])\n",
"plt.hist(sampled_data, bins=bins, density=True)\n",
"ax.set_ylim(0, 0.8)\n",
"plt.show()"
],
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's plot the evolution of a few of the paths, as in figure 18.7 (paths run from bottom to top here)."
],
"metadata": {
"id": "jYrAW6tN-gJ4"
}
},
{
"cell_type": "code",
"source": [
"fig, ax = plt.subplots()\n",
"t_vals = np.arange(0,101,1)\n",
"ax.plot(samples[:,0],t_vals,'r-')\n",
"ax.plot(samples[:,1],t_vals,'g-')\n",
"ax.plot(samples[:,2],t_vals,'b-')\n",
"ax.plot(samples[:,3],t_vals,'c-')\n",
"ax.plot(samples[:,4],t_vals,'m-')\n",
"ax.set_xlim([-3,3])\n",
"ax.set_ylim([101, 0])\n",
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "4XU6CDZC_kFo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Notice that the samples have a tendency to move from positions near the center at time 100 to positions that have high probability under the true distribution at time 0."
],
"metadata": {
"id": "SGTYGGevAktz"
}
}
]
}
362 Notebooks/Chap18/18_3_Reparameterized_Model.ipynb Normal file
@@ -0,0 +1,362 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNd+D0/IVWXtU2GKsofyk2d",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_3_Reparameterized_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 18.3: 1D Reparameterized model**\n",
"\n",
"This notebook investigates the reparameterized model as described in section 18.5 of the book and implements algorithms 18.1 and 18.2.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.colors import ListedColormap\n",
"from operator import itemgetter\n",
"from scipy import stats\n",
"from IPython.display import display, clear_output"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#Create pretty colormap as in book\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
"g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
"my_colormap = ListedColormap(my_colormap_vals)"
],
"metadata": {
"id": "4PM8bf6lO0VE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Probability distribution for normal\n",
"def norm_pdf(x, mu, sigma):\n",
" return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
],
"metadata": {
"id": "ONGRaQscfIOo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# True distribution is a mixture of four Gaussians\n",
"class TrueDataDistribution:\n",
" # Constructor initializes parameters\n",
" def __init__(self):\n",
" self.mu = [1.5, -0.216, 0.45, -1.875]\n",
" self.sigma = [0.3, 0.15, 0.525, 0.075]\n",
" self.w = [0.2, 0.3, 0.35, 0.15]\n",
"\n",
" # Return PDF\n",
" def pdf(self, x):\n",
" return(self.w[0] *norm_pdf(x,self.mu[0],self.sigma[0]) + self.w[1] *norm_pdf(x,self.mu[1],self.sigma[1]) + self.w[2] *norm_pdf(x,self.mu[2],self.sigma[2]) + self.w[3] *norm_pdf(x,self.mu[3],self.sigma[3]))\n",
"\n",
" # Draw samples\n",
" def sample(self, n):\n",
" hidden = np.random.choice(4, n, p=self.w)\n",
" epsilon = np.random.normal(size=(n))\n",
" mu_list = list(itemgetter(*hidden)(self.mu))\n",
" sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
" return mu_list + sigma_list * epsilon"
],
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define ground truth probability distribution that we will model\n",
"true_dist = TrueDataDistribution()\n",
"# Let's visualize this\n",
"x_vals = np.arange(-3,3,0.01)\n",
"pr_x_true = true_dist.pdf(x_vals)\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.plot(x_vals, pr_x_true, 'r-')\n",
"ax.set_xlabel(\"$x$\")\n",
"ax.set_ylabel(\"$Pr(x)$\")\n",
"ax.set_ylim(0,1.0)\n",
"ax.set_xlim(-3,3)\n",
"plt.show()"
],
"metadata": {
"id": "iJu_uBiaeUVv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"To train the model to describe this distribution, we'll need to generate pairs of samples drawn from $Pr(z_t|x)$ (diffusion kernel) and $q(z_{t-1}|z_{t},x)$ (equation 18.15).\n",
"\n"
],
"metadata": {
"id": "DRHUG_41i4t_"
}
},
{
"cell_type": "code",
"source": [
"# Return z_t (the argument of g_{t}[] in the loss function in algorithm 18.1) and epsilon\n",
"def get_data_pairs(x_train,t,beta):\n",
" # TODO -- write this function\n",
" # Replace these lines\n",
" epsilon = np.ones_like(x_train)\n",
" z_t = np.ones_like(x_train)\n",
"\n",
" return z_t, epsilon"
],
"metadata": {
"id": "x6B8t72Ukscd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We also need models $\\mbox{g}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the noise $\\epsilon$ that was added. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
],
"metadata": {
"id": "aSG_4uA8_zZ-"
}
},
{
"cell_type": "code",
"source": [
"# This code is really ugly! Don't look too closely at it!\n",
"# All you need to know is that it is a model that trains from pairs zt, epsilon\n",
"# And can then predict epsilon\n",
"class NonParametricModel():\n",
" # Constructor initializes parameters\n",
" def __init__(self):\n",
"\n",
" self.inc = 0.01\n",
" self.max_val = 3.0\n",
" self.model = []\n",
"\n",
" # Learns a model that predicts epsilon given z_t\n",
" def train(self, zt, epsilon):\n",
" zt = np.clip(zt,-self.max_val,self.max_val)\n",
" epsilon = np.clip(epsilon,-self.max_val,self.max_val)\n",
" bins = np.arange(-self.max_val,self.max_val+self.inc,self.inc)\n",
" numerator, *_ = stats.binned_statistic(zt, epsilon, statistic='sum',bins=bins)\n",
" denominator, *_ = stats.binned_statistic(zt, epsilon, statistic='count',bins=bins)\n",
" self.model = numerator / (denominator + 1)\n",
"\n",
" def predict(self, zt):\n",
" bin_index = np.floor((zt+self.max_val)/self.inc)\n",
" bin_index = np.clip(bin_index,0, len(self.model)-1).astype('uint32')\n",
" return self.model[bin_index]"
],
"metadata": {
"id": "ZHViC0pL_yy5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Sample data from distribution (this would usually be our collected training set)\n",
"n_sample = 100000\n",
"x_train = true_dist.sample(n_sample)\n",
"\n",
"# Define model parameters\n",
"T = 100\n",
"beta = 0.01511\n",
"\n",
"all_models = []\n",
"for t in range(0,T):\n",
" clear_output(wait=True)\n",
" display(\"Training timestep %d\"%(t))\n",
" zt, epsilon = get_data_pairs(x_train,t,beta)\n",
" all_models.append(NonParametricModel())\n",
" # The model at index t maps data from z_{t+1} to epsilon\n",
" all_models[t].train(zt,epsilon)"
],
"metadata": {
"id": "CzVFybWoBygu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$, and so on, until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories. See algorithm 18.2."
],
"metadata": {
"id": "ZPc9SEvtl14U"
}
},
{
"cell_type": "code",
"source": [
"def sample(model, T, sigma_t, n_samples):\n",
" # Create the output array\n",
" # Each row represents a time step, first row will be sampled data\n",
" # Each column represents a different sample\n",
" samples = np.zeros((T+1,n_samples))\n",
"\n",
" # TODO -- Initialize the samples z_{T} at samples[T,:] from standard normal distribution\n",
" # Replace this line\n",
" samples[T,:] = np.zeros((1,n_samples))\n",
"\n",
"\n",
" # For t=100...99..98... ...0\n",
" for t in range(T,0,-1):\n",
" clear_output(wait=True)\n",
" display(\"Predicting z_{%d} from z_{%d}\"%(t-1,t))\n",
" # TODO Predict samples[t-1,:] from samples[t,:] using the appropriate model\n",
" # Replace this line:\n",
" samples[t-1,:] = np.zeros((1,n_samples))\n",
"\n",
"\n",
" # If not the last time step\n",
" if t>0:\n",
" # TODO Add noise to the samples at z_t-1 we just generated with mean zero, standard deviation sigma_t\n",
" # Replace this line\n",
" samples[t-1,:] = samples[t-1,:]\n",
"\n",
" return samples"
],
"metadata": {
"id": "A-ZMFOvACIOw"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's run the diffusion process for a whole bunch of samples."
],
"metadata": {
"id": "ECAUfHNi9NVW"
}
},
{
"cell_type": "code",
"source": [
"sigma_t = 0.12288\n",
"n_samples = 100000\n",
"samples = sample(all_models, T, sigma_t, n_samples)\n",
"\n",
"\n",
"# Plot the data\n",
"sampled_data = samples[0,:]\n",
"bins = np.arange(-3,3.05,0.05)\n",
"\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.set_xlim([-3,3])\n",
"plt.hist(sampled_data, bins=bins, density=True)\n",
"ax.set_ylim(0, 0.8)\n",
"plt.show()"
],
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's plot the evolution of a few of the paths, as in figure 18.7 (paths run from bottom to top here)."
],
"metadata": {
"id": "jYrAW6tN-gJ4"
}
},
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"t_vals = np.arange(0,101,1)\n",
|
||||||
|
"ax.plot(samples[:,0],t_vals,'r-')\n",
|
||||||
|
"ax.plot(samples[:,1],t_vals,'g-')\n",
|
||||||
|
"ax.plot(samples[:,2],t_vals,'b-')\n",
|
||||||
|
"ax.plot(samples[:,3],t_vals,'c-')\n",
|
||||||
|
"ax.plot(samples[:,4],t_vals,'m-')\n",
|
||||||
|
"ax.set_xlim([-3,3])\n",
|
||||||
|
"ax.set_ylim([101, 0])\n",
|
||||||
|
"ax.set_xlabel('value')\n",
|
||||||
|
"ax.set_ylabel('z_{t}')\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4XU6CDZC_kFo"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Notice that the samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "SGTYGGevAktz"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
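The TODO in the sampling loop above asks for the update that predicts $z_{t-1}$ from $z_t$ using a model that estimates the added noise. As a hedged sketch only (this is not the notebook's official solution; `ddpm_step`, its argument names, and the index convention for `alpha_t` are illustrative assumptions), the standard DDPM ancestral update with a constant noise schedule $\beta$ looks like this:

```python
import numpy as np

beta = 0.01511  # per-step noise level, matching the value used in the notebook


def ddpm_step(z_t, eps_hat, t, sigma_t, rng=None):
    """One ancestral (DDPM) update: form the mean of z_{t-1} from z_t and the
    predicted noise eps_hat = g_t[z_t], then optionally add sigma_t noise.
    Index conventions here are illustrative and may differ from the notebook's
    model indexing."""
    alpha_t = (1.0 - beta) ** t
    mean = (z_t - beta / np.sqrt(1.0 - alpha_t) * eps_hat) / np.sqrt(1.0 - beta)
    if rng is None:
        return mean  # last step: return the mean without extra noise
    return mean + sigma_t * rng.standard_normal(z_t.shape)
```

With a zero noise estimate the update just rescales by $1/\sqrt{1-\beta}$, which is a quick sanity check on the formula.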
Notebooks/Chap18/18_4_Families_of_Diffusion_Models.ipynb (new file, 484 lines)
@@ -0,0 +1,484 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNFSvISBXo/Z1l+onknF2Gw",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_4_Families_of_Diffusion_Models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 18.4: Families of diffusion models**\n",
"\n",
"This notebook investigates the reparameterized model as described in section 18.5 of the book and computes the results shown in figure 18.10c-f. These models are based on the paper \"Denoising diffusion implicit models\", which can be found [here](https://arxiv.org/pdf/2010.02502.pdf).\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib.colors import ListedColormap\n",
"from operator import itemgetter\n",
"from scipy import stats\n",
"from IPython.display import display, clear_output"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Create pretty colormap as in book\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"r = np.floor(my_colormap_vals_dec/(256*256))\n",
"g = np.floor((my_colormap_vals_dec - r * 256 * 256)/256)\n",
"b = np.floor(my_colormap_vals_dec - r * 256 * 256 - g * 256)\n",
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
"my_colormap = ListedColormap(my_colormap_vals)"
],
"metadata": {
"id": "4PM8bf6lO0VE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Probability distribution for normal\n",
"def norm_pdf(x, mu, sigma):\n",
"    return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
],
"metadata": {
"id": "ONGRaQscfIOo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# True distribution is a mixture of four Gaussians\n",
"class TrueDataDistribution:\n",
"    # Constructor initializes parameters\n",
"    def __init__(self):\n",
"        self.mu = [1.5, -0.216, 0.45, -1.875]\n",
"        self.sigma = [0.3, 0.15, 0.525, 0.075]\n",
"        self.w = [0.2, 0.3, 0.35, 0.15]\n",
"\n",
"    # Return PDF\n",
"    def pdf(self, x):\n",
"        return(self.w[0] * norm_pdf(x,self.mu[0],self.sigma[0]) + self.w[1] * norm_pdf(x,self.mu[1],self.sigma[1]) + self.w[2] * norm_pdf(x,self.mu[2],self.sigma[2]) + self.w[3] * norm_pdf(x,self.mu[3],self.sigma[3]))\n",
"\n",
"    # Draw samples\n",
"    def sample(self, n):\n",
"        hidden = np.random.choice(4, n, p=self.w)\n",
"        epsilon = np.random.normal(size=(n))\n",
"        mu_list = list(itemgetter(*hidden)(self.mu))\n",
"        sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
"        return mu_list + sigma_list * epsilon"
],
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define ground truth probability distribution that we will model\n",
"true_dist = TrueDataDistribution()\n",
"# Let's visualize this\n",
"x_vals = np.arange(-3,3,0.01)\n",
"pr_x_true = true_dist.pdf(x_vals)\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.plot(x_vals, pr_x_true, 'r-')\n",
"ax.set_xlabel(\"$x$\")\n",
"ax.set_ylabel(\"$Pr(x)$\")\n",
"ax.set_ylim(0,1.0)\n",
"ax.set_xlim(-3,3)\n",
"plt.show()"
],
"metadata": {
"id": "iJu_uBiaeUVv"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"To train the model to describe this distribution, we'll need to generate pairs of samples drawn from $Pr(z_t|x)$ (diffusion kernel) and $q(z_{t-1}|z_{t},x)$ (equation 18.15).\n",
"\n"
],
"metadata": {
"id": "DRHUG_41i4t_"
}
},
{
"cell_type": "code",
"source": [
"# Return z_t (the argument of g_{t}[] in the loss function in algorithm 18.1) and epsilon\n",
"def get_data_pairs(x_train,t,beta):\n",
"\n",
"    epsilon = np.random.standard_normal(x_train.shape)\n",
"    alpha_t = np.power(1-beta,t)\n",
"    z_t = x_train * np.sqrt(alpha_t) + np.sqrt(1-alpha_t) * epsilon\n",
"\n",
"    return z_t, epsilon"
],
"metadata": {
"id": "x6B8t72Ukscd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We also need models $\\mbox{g}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the noise $\\epsilon$ that was added. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
],
"metadata": {
"id": "aSG_4uA8_zZ-"
}
},
{
"cell_type": "code",
"source": [
"# This code is really ugly! Don't look too closely at it!\n",
"# All you need to know is that it is a model that trains from pairs zt, zt_minus1\n",
"# And can then predict zt\n",
"class NonParametricModel():\n",
"    # Constructor initializes parameters\n",
"    def __init__(self):\n",
"\n",
"        self.inc = 0.01\n",
"        self.max_val = 3.0\n",
"        self.model = []\n",
"\n",
"    # Learns a model that predicts epsilon given z_t\n",
"    def train(self, zt, epsilon):\n",
"        zt = np.clip(zt,-self.max_val,self.max_val)\n",
"        epsilon = np.clip(epsilon,-self.max_val,self.max_val)\n",
"        bins = np.arange(-self.max_val,self.max_val+self.inc,self.inc)\n",
"        numerator, *_ = stats.binned_statistic(zt, epsilon, statistic='sum', bins=bins)\n",
"        denominator, *_ = stats.binned_statistic(zt, epsilon, statistic='count', bins=bins)\n",
"        self.model = numerator / (denominator + 1)\n",
"\n",
"    def predict(self, zt):\n",
"        bin_index = np.floor((zt+self.max_val)/self.inc)\n",
"        bin_index = np.clip(bin_index, 0, len(self.model)-1).astype('uint32')\n",
"        return self.model[bin_index]"
],
"metadata": {
"id": "ZHViC0pL_yy5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Sample data from distribution (this would usually be our collected training set)\n",
"n_sample = 100000\n",
"x_train = true_dist.sample(n_sample)\n",
"\n",
"# Define model parameters\n",
"T = 100\n",
"beta = 0.01511\n",
"\n",
"all_models = []\n",
"for t in range(0,T):\n",
"    clear_output(wait=True)\n",
"    display(\"Training timestep %d\"%(t))\n",
"    zt, epsilon = get_data_pairs(x_train,t,beta)\n",
"    all_models.append(NonParametricModel())\n",
"    # The model at index t maps data from z_{t+1} to epsilon\n",
"    all_models[t].train(zt,epsilon)"
],
"metadata": {
"id": "CzVFybWoBygu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$ and so on until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories.\n",
"\n",
"This is the same model we learned last time. The whole point of this is that it is compatible with any forward process with the same diffusion kernel.\n",
"\n",
"One such model is the denoising diffusion implicit model, which has a sampling step:\n",
"\n",
"\\begin{equation}\n",
"\\mathbf{z}_{t-1} = \\sqrt{\\alpha_{t-1}}\\left(\\frac{\\mathbf{z}_{t}-\\sqrt{1-\\alpha_{t}}\\mbox{g}_t[\\mathbf{z}_{t},\\boldsymbol\\phi]}{\\sqrt{\\alpha_{t}}}\\right) + \\sqrt{1-\\alpha_{t-1}-\\sigma^2}\\mbox{g}_t[\\mathbf{z}_{t},\\boldsymbol\\phi]+\\sigma\\epsilon\n",
"\\end{equation}\n",
"\n",
"(see equation 12 of the [denoising diffusion implicit models paper](https://arxiv.org/pdf/2010.02502.pdf)).\n"
],
"metadata": {
"id": "ZPc9SEvtl14U"
}
},
{
"cell_type": "code",
"source": [
"def sample_ddim(model, T, sigma_t, n_samples):\n",
"    # Create the output array\n",
"    # Each row represents a time step, first row will be sampled data\n",
"    # Each column represents a different sample\n",
"    samples = np.zeros((T+1,n_samples))\n",
"    samples[T,:] = np.random.standard_normal(n_samples)\n",
"\n",
"    # For t=100...99..98... ...0\n",
"    for t in range(T,0,-1):\n",
"        clear_output(wait=True)\n",
"        display(\"Predicting z_{%d} from z_{%d}\"%(t-1,t))\n",
"\n",
"        alpha_t = np.power(1-beta,t+1)\n",
"        alpha_t_minus1 = np.power(1-beta,t)\n",
"\n",
"        # TODO -- implement the DDIM sampling step\n",
"        # Note the final noise term is already added in the \"if\" statement below\n",
"        # Replace this line:\n",
"        samples[t-1,:] = samples[t-1,:]\n",
"\n",
"        # If not the last time step\n",
"        if t>0:\n",
"            samples[t-1,:] = samples[t-1,:] + np.random.standard_normal(n_samples) * sigma_t\n",
"    return samples"
],
"metadata": {
"id": "A-ZMFOvACIOw"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's run the diffusion process for a whole bunch of samples."
],
"metadata": {
"id": "ECAUfHNi9NVW"
}
},
{
"cell_type": "code",
"source": [
"# Now we'll set the noise to a MUCH smaller level\n",
"sigma_t = 0.001\n",
"n_samples = 100000\n",
"samples_low_noise = sample_ddim(all_models, T, sigma_t, n_samples)\n",
"\n",
"\n",
"# Plot the data\n",
"sampled_data = samples_low_noise[0,:]\n",
"bins = np.arange(-3,3.05,0.05)\n",
"\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.set_xlim([-3,3])\n",
"plt.hist(sampled_data, bins=bins, density=True)\n",
"ax.set_ylim(0, 0.8)\n",
"plt.show()"
],
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's plot the evolution of a few of the paths as in figure 18.7 (paths run from bottom to top now)."
],
"metadata": {
"id": "jYrAW6tN-gJ4"
}
},
{
"cell_type": "code",
"source": [
"fig, ax = plt.subplots()\n",
"t_vals = np.arange(0,101,1)\n",
"ax.plot(samples_low_noise[:,0],t_vals,'r-')\n",
"ax.plot(samples_low_noise[:,1],t_vals,'g-')\n",
"ax.plot(samples_low_noise[:,2],t_vals,'b-')\n",
"ax.plot(samples_low_noise[:,3],t_vals,'c-')\n",
"ax.plot(samples_low_noise[:,4],t_vals,'m-')\n",
"ax.set_xlim([-3,3])\n",
"ax.set_ylim([101, 0])\n",
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "4XU6CDZC_kFo"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The samples have a tendency to move from positions near the center at time 100 to positions that are high in the true probability distribution at time 0."
],
"metadata": {
"id": "SGTYGGevAktz"
}
},
{
"cell_type": "markdown",
"source": [
"Let's now sample from the accelerated model, which requires fewer models. Again, we don't need to learn anything new -- this is just the reverse process that corresponds to a different forward process that is compatible with the same diffusion kernel.\n",
"\n",
"There's nothing to do here except read the code. It uses the same DDIM model as you just implemented in the previous step, but it jumps timesteps five at a time."
],
"metadata": {
"id": "Z-LZp_fMXxRt"
}
},
{
"cell_type": "code",
"source": [
"def sample_accelerated(model, T, sigma_t, n_steps, n_samples):\n",
"    # Create the output array\n",
"    # Each row represents a sampling step (i.e. fewer than the time steps), first row will be sampled data\n",
"    # Each column represents a different sample\n",
"    samples = np.zeros((n_steps+1,n_samples))\n",
"    samples[n_steps,:] = np.random.standard_normal(n_samples)\n",
"\n",
"    # For each sampling step\n",
"    for c_step in range(n_steps,0,-1):\n",
"        # Find the corresponding time step and previous time step\n",
"        t = int(T * c_step/n_steps)\n",
"        tminus1 = int(T * (c_step-1)/n_steps)\n",
"        display(\"Predicting z_{%d} from z_{%d}\"%(tminus1,t))\n",
"\n",
"        alpha_t = np.power(1-beta,t+1)\n",
"        alpha_t_minus1 = np.power(1-beta,tminus1+1)\n",
"        epsilon_est = model[t-1].predict(samples[c_step,:])\n",
"\n",
"        samples[c_step-1,:] = np.sqrt(alpha_t_minus1)*(samples[c_step,:]-np.sqrt(1-alpha_t) * epsilon_est)/np.sqrt(alpha_t) \\\n",
"                              + np.sqrt(1-alpha_t_minus1 - sigma_t*sigma_t) * epsilon_est\n",
"        # If not the last time step\n",
"        if t>0:\n",
"            samples[c_step-1,:] = samples[c_step-1,:] + np.random.standard_normal(n_samples) * sigma_t\n",
"    return samples"
],
"metadata": {
"id": "3Z0erjGbYj1u"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's draw a bunch of samples from the model."
],
"metadata": {
"id": "D3Sm_WYrcuED"
}
},
{
"cell_type": "code",
"source": [
"sigma_t = 0.11\n",
"n_samples = 100000\n",
"n_steps = 20  # i.e. sample 5 times as fast as before -- should be a divisor of 100\n",
"samples_accelerated = sample_accelerated(all_models, T, sigma_t, n_steps, n_samples)\n",
"\n",
"\n",
"# Plot the data\n",
"sampled_data = samples_accelerated[0,:]\n",
"bins = np.arange(-3,3.05,0.05)\n",
"\n",
"fig,ax = plt.subplots()\n",
"fig.set_size_inches(8,2.5)\n",
"ax.set_xlim([-3,3])\n",
"plt.hist(sampled_data, bins=bins, density=True)\n",
"ax.set_ylim(0, 0.9)\n",
"plt.show()"
],
"metadata": {
"id": "UB45c7VMcGy-"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"fig, ax = plt.subplots()\n",
"step_increment = 100 / n_steps\n",
"t_vals = np.arange(0,101,5)\n",
"\n",
"for i in range(len(t_vals)-1):\n",
"    ax.plot( (samples_accelerated[i,0],samples_accelerated[i+1,0]), (t_vals[i], t_vals[i+1]),'r.-')\n",
"    ax.plot( (samples_accelerated[i,1],samples_accelerated[i+1,1]), (t_vals[i], t_vals[i+1]),'g.-')\n",
"    ax.plot( (samples_accelerated[i,2],samples_accelerated[i+1,2]), (t_vals[i], t_vals[i+1]),'b.-')\n",
"    ax.plot( (samples_accelerated[i,3],samples_accelerated[i+1,3]), (t_vals[i], t_vals[i+1]),'c.-')\n",
"    ax.plot( (samples_accelerated[i,4],samples_accelerated[i+1,4]), (t_vals[i], t_vals[i+1]),'m.-')\n",
"\n",
"ax.set_xlim([-3,3])\n",
"ax.set_ylim([101, 0])\n",
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "Luv-6w84c_qO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "LSJi72f0kw_e"
},
"execution_count": null,
"outputs": []
}
]
}
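The DDIM sampling step quoted in the notebook above (equation 12 of the denoising diffusion implicit models paper) can be sketched as a standalone function. This is an illustrative sketch, not the notebook's code: `ddim_step` and its argument names are assumptions, and the caller must supply $\alpha_t$, $\alpha_{t-1}$ and a noise estimate such that $1-\alpha_{t-1}-\sigma^2 \ge 0$.

```python
import numpy as np


def ddim_step(z_t, eps_hat, alpha_t, alpha_t_minus1, sigma, eps=None):
    """One DDIM update:
    z_{t-1} = sqrt(alpha_{t-1}) * (z_t - sqrt(1-alpha_t)*eps_hat) / sqrt(alpha_t)
              + sqrt(1 - alpha_{t-1} - sigma^2) * eps_hat + sigma * eps
    With sigma = 0 the update is fully deterministic."""
    # Estimate of the clean data x implied by z_t and the predicted noise
    x0_est = (z_t - np.sqrt(1.0 - alpha_t) * eps_hat) / np.sqrt(alpha_t)
    # Re-noise the estimate to the (t-1) level, reusing the predicted direction
    z_prev = np.sqrt(alpha_t_minus1) * x0_est \
             + np.sqrt(1.0 - alpha_t_minus1 - sigma**2) * eps_hat
    if eps is not None:
        z_prev = z_prev + sigma * eps
    return z_prev
```

Setting `eps_hat` to zero and `sigma` to zero reduces the update to a rescaling by $\sqrt{\alpha_{t-1}/\alpha_t}$, which is an easy way to check an implementation against the equation.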
Notebooks/Chap19/19_1_Markov_Decision_Processes.ipynb (new file, 736 lines)
@@ -0,0 +1,736 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPg3umHnqmIXX6jGe809Nxf",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap19/19_1_Markov_Decision_Processes.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 19.1: Markov Decision Processes**\n",
"\n",
"This notebook investigates Markov decision processes as described in section 19.1 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from PIL import Image"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Get local copies of components of images\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Empty.png\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Hole.png\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Fish.png\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Penguin.png"
],
"metadata": {
"id": "ZsvrUszPLyEG"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Ugly class that takes care of drawing pictures like in the book.\n",
"# You can totally ignore this code!\n",
"class DrawMDP:\n",
"    # Constructor initializes parameters\n",
"    def __init__(self, n_row, n_col):\n",
"        self.empty_image = np.asarray(Image.open('Empty.png'))\n",
"        self.hole_image = np.asarray(Image.open('Hole.png'))\n",
"        self.fish_image = np.asarray(Image.open('Fish.png'))\n",
"        self.penguin_image = np.asarray(Image.open('Penguin.png'))\n",
"        self.fig, self.ax = plt.subplots()\n",
"        self.n_row = n_row\n",
"        self.n_col = n_col\n",
"\n",
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
"        my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
"        r = np.floor(my_colormap_vals_dec/(256*256))\n",
"        g = np.floor((my_colormap_vals_dec - r * 256 * 256)/256)\n",
"        b = np.floor(my_colormap_vals_dec - r * 256 * 256 - g * 256)\n",
"        self.colormap = np.vstack((r,g,b)).transpose()/255.0\n",
"\n",
"\n",
"    def draw_text(self, text, row, col, position, color):\n",
"        if position == 'bc':\n",
"            self.ax.text( 83*col+41, 83*(row+1)-10, text, horizontalalignment=\"center\", color=color, fontweight='bold')\n",
"        if position == 'tl':\n",
"            self.ax.text( 83*col+5, 83*row+5, text, verticalalignment='top', horizontalalignment=\"left\", color=color, fontweight='bold')\n",
"\n",
"    # Draws a set of states\n",
"    def draw_path(self, path, color1, color2):\n",
"        for i in range(len(path)-1):\n",
"            row_start = np.floor(path[i]/self.n_col)\n",
"            row_end = np.floor(path[i+1]/self.n_col)\n",
"            col_start = path[i] - row_start * self.n_col\n",
"            col_end = path[i+1] - row_end * self.n_col\n",
"\n",
"            color_index = int(np.floor(255 * i/(len(path)-1.)))\n",
"            self.ax.plot([col_start * 83+41 + i, col_end * 83+41 + i], [row_start * 83+41 + i, row_end * 83+41 + i], color=(self.colormap[color_index,0], self.colormap[color_index,1], self.colormap[color_index,2]))\n",
"\n",
"\n",
"    # Draw deterministic policy\n",
"    def draw_deterministic_policy(self, i, action):\n",
"        row = np.floor(i/self.n_col)\n",
"        col = i - row * self.n_col\n",
"        center_x = 83 * col + 41\n",
"        center_y = 83 * row + 41\n",
"        arrow_base_width = 10\n",
"        arrow_height = 15\n",
"        # Draw arrow pointing upward\n",
"        if action == 0:\n",
"            triangle_indices = np.array([[center_x, center_y-arrow_height/2],\n",
"                                         [center_x - arrow_base_width/2, center_y+arrow_height/2],\n",
"                                         [center_x + arrow_base_width/2, center_y+arrow_height/2]])\n",
"        # Draw arrow pointing right\n",
"        if action == 1:\n",
"            triangle_indices = np.array([[center_x + arrow_height/2, center_y],\n",
"                                         [center_x - arrow_height/2, center_y-arrow_base_width/2],\n",
"                                         [center_x - arrow_height/2, center_y+arrow_base_width/2]])\n",
"        # Draw arrow pointing downward\n",
"        if action == 2:\n",
"            triangle_indices = np.array([[center_x, center_y+arrow_height/2],\n",
"                                         [center_x - arrow_base_width/2, center_y-arrow_height/2],\n",
"                                         [center_x + arrow_base_width/2, center_y-arrow_height/2]])\n",
"        # Draw arrow pointing left\n",
"        if action == 3:\n",
"            triangle_indices = np.array([[center_x - arrow_height/2, center_y],\n",
"                                         [center_x + arrow_height/2, center_y-arrow_base_width/2],\n",
"                                         [center_x + arrow_height/2, center_y+arrow_base_width/2]])\n",
"        self.ax.fill(triangle_indices[:,0], triangle_indices[:,1], facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
"\n",
"    # Draw stochastic policy\n",
"    def draw_stochastic_policy(self, i, action_probs):\n",
"        row = np.floor(i/self.n_col)\n",
"        col = i - row * self.n_col\n",
"        offset = 20\n",
"        # Draw arrow pointing upward\n",
"        center_x = 83 * col + 41\n",
"        center_y = 83 * row + 41 - offset\n",
"        arrow_base_width = 15 * action_probs[0]\n",
"        arrow_height = 20 * action_probs[0]\n",
"        triangle_indices = np.array([[center_x, center_y-arrow_height/2],\n",
"                                     [center_x - arrow_base_width/2, center_y+arrow_height/2],\n",
"                                     [center_x + arrow_base_width/2, center_y+arrow_height/2]])\n",
"        self.ax.fill(triangle_indices[:,0], triangle_indices[:,1], facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw arrow pointing right\n",
|
||||||
|
" center_x = 83 * col + 41 + offset\n",
|
||||||
|
" center_y = 83 * row + 41\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[1]\n",
|
||||||
|
" arrow_height = 20 * action_probs[1]\n",
|
||||||
|
" triangle_indices = np.array([[center_x + arrow_height/2, center_y],\n",
|
||||||
|
" [center_x - arrow_height/2, center_y-arrow_base_width/2],\n",
|
||||||
|
" [center_x - arrow_height/2, center_y+arrow_base_width/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw arrow pointing downward\n",
|
||||||
|
" center_x = 83 * col + 41\n",
|
||||||
|
" center_y = 83 * row + 41 +offset\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[2]\n",
|
||||||
|
" arrow_height = 20 * action_probs[2]\n",
|
||||||
|
" triangle_indices = np.array([[center_x, center_y+arrow_height/2],\n",
|
||||||
|
" [center_x - arrow_base_width/2, center_y-arrow_height/2],\n",
|
||||||
|
" [center_x + arrow_base_width/2, center_y-arrow_height/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw arrow pointing left\n",
|
||||||
|
" center_x = 83 * col + 41 -offset\n",
|
||||||
|
" center_y = 83 * row + 41\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[3]\n",
|
||||||
|
" arrow_height = 20 * action_probs[3]\n",
|
||||||
|
" triangle_indices = np.array([[center_x - arrow_height/2, center_y],\n",
|
||||||
|
" [center_x + arrow_height/2, center_y-arrow_base_width/2],\n",
|
||||||
|
" [center_x + arrow_height/2, center_y+arrow_base_width/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" def draw(self, layout, state, draw_state_index= False, rewards=None, policy=None, state_values=None, action_values=None,path1=None, path2 = None):\n",
|
||||||
|
" # Construct the image\n",
|
||||||
|
" image_out = np.zeros((self.n_row * 83, self.n_col * 83, 4),dtype='uint8')\n",
|
||||||
|
" for c_row in range (self.n_row):\n",
|
||||||
|
" for c_col in range(self.n_col):\n",
|
||||||
|
" if layout[c_row * self.n_col + c_col]==0:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.empty_image\n",
|
||||||
|
" elif layout[c_row * self.n_col + c_col]==1:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.hole_image\n",
|
||||||
|
" else:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.fish_image\n",
|
||||||
|
" if state == c_row * self.n_col + c_col:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.penguin_image\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw the image\n",
|
||||||
|
" plt.imshow(image_out)\n",
|
||||||
|
" self.ax.get_xaxis().set_visible(False)\n",
|
||||||
|
" self.ax.get_yaxis().set_visible(False)\n",
|
||||||
|
" self.ax.spines['top'].set_visible(False)\n",
|
||||||
|
" self.ax.spines['right'].set_visible(False)\n",
|
||||||
|
" self.ax.spines['bottom'].set_visible(False)\n",
|
||||||
|
" self.ax.spines['left'].set_visible(False)\n",
|
||||||
|
"\n",
|
||||||
|
" if draw_state_index:\n",
|
||||||
|
" for c_cell in range(layout.size):\n",
|
||||||
|
" self.draw_text(\"%d\"%(c_cell), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'tl','k')\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw the policy as triangles\n",
|
||||||
|
" if policy is not None:\n",
|
||||||
|
" # If the policy is deterministic\n",
|
||||||
|
" if len(policy) == len(layout):\n",
|
||||||
|
" for i in range(len(layout)):\n",
|
||||||
|
" self.draw_deterministic_policy(i, policy[i])\n",
|
||||||
|
" # Else it is stochastic\n",
|
||||||
|
" else:\n",
|
||||||
|
" for i in range(len(layout)):\n",
|
||||||
|
" self.draw_stochastic_policy(i,policy[:,i])\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" if path1 is not None:\n",
|
||||||
|
" # self.draw_path(path1, np.array([0.81, 0.51, 0.38]), np.array([1.0, 0.2, 0.5]))\n",
|
||||||
|
" self.draw_path(path1, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0]))\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Gq1HfJsHN3SB"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Let's draw the initial situation with the penguin in top right\n",
|
||||||
|
"n_rows = 4; n_cols = 4\n",
|
||||||
|
"layout = np.zeros(n_rows * n_cols)\n",
|
||||||
|
"initial_state = 0\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = initial_state, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "eBQ7lTpJQBSe"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Note that the states are indexed from 0 rather than 1 as in the book to make\n",
|
||||||
|
"the code neater."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "P7P40UyMunKb"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Define the state probabilities\n",
|
||||||
|
"transition_probabilities = np.array( \\\n",
|
||||||
|
"[[0.00 , 0.33, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.33, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.33, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.34, 0.00, 0.00, 0.33, 0.00, 0.25, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.34, 0.00, 0.00, 0.25, 0.00, 0.33, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.33, 0.00, 0.25, 0.00, 0.00, 0.33, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.25, 0.00, 0.33, 0.00, 0.00, 0.33, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.34, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.34, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"initial_state = 0"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "wgFcIi4YQJWI"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Define a step from the Markov process"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "axllRDDuDDLS"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def markov_process_step(state, transition_probabilities):\n",
|
||||||
|
" # TODO -- update the state according to the appropriate transition probabilities\n",
|
||||||
|
" # One way to do this is to use np.random.choice\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
" new_state = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return new_state"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FrSZrS67sdbN"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
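As a hint for the exercise above (one approach, not the only one): `np.random.choice` can sample an index directly from a probability vector. The sketch below uses a made-up 3-state chain whose columns, like the matrix above, each sum to one; column `s` holds the distribution over next states given current state `s`.

```python
import numpy as np

# Hypothetical 3-state chain: column s holds Pr(next state | current state s),
# so each column sums to one (same convention as transition_probabilities above).
P = np.array([[0.1, 0.5, 0.0],
              [0.9, 0.0, 1.0],
              [0.0, 0.5, 0.0]])

def sample_next_state(state, P):
    # Draw a next-state index with probabilities given by column `state`
    return np.random.choice(P.shape[0], p=P[:, state])

np.random.seed(0)
print(sample_next_state(0, P))  # 0 or 1, since column 0 is [0.1, 0.9, 0.0]
```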
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Run the Markov process for 10 steps and visualise the results"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "uTj7rN6LDFXd"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"T = 10\n",
|
||||||
|
"states = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"states[0] = 0\n",
|
||||||
|
"for t in range(T-1):\n",
|
||||||
|
" states[t+1] = markov_process_step(states[t], transition_probabilities)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Your States:\", states)\n",
|
||||||
|
"print(\"True States: [ 0 4 8 9 10 9 10 9 13 14]\")\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = states[0], path1=states, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "lRIdjagCwP62"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Define one step of a Markov reward process."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "QLyjyBjjDMin"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def markov_reward_process_step(state, transition_probabilities, reward_structure):\n",
|
||||||
|
"\n",
|
||||||
|
" # TODO -- write this function\n",
|
||||||
|
" # Update the state. Return a reward of +1 if the Penguin lands on the fish\n",
|
||||||
|
" # or zero otherwise.\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" new_state = 0; reward = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return new_state, reward"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YPHSJRKx-pgO"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Run the Markov reward process for 10 steps and visualise the results"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "AIz8QEiRFoCm"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set up the reward structure so it matches figure 19.2\n",
|
||||||
|
"reward_structure = np.zeros((16,1))\n",
|
||||||
|
"reward_structure[3] = 1; reward_structure[8] = 1; reward_structure[10] = 1\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize random numbers\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"T = 10\n",
|
||||||
|
"# Set up the states, so the fish are in the same positions as figure 19.2\n",
|
||||||
|
"states = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"rewards = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"\n",
|
||||||
|
"states[0] = 0\n",
|
||||||
|
"for t in range(T-1):\n",
|
||||||
|
" states[t+1],rewards[t+1] = markov_reward_process_step(states[t], transition_probabilities, reward_structure)\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Your States:\", states)\n",
|
||||||
|
"print(\"Your Rewards:\", rewards)\n",
|
||||||
|
"print(\"True Rewards: [0 0 1 0 1 0 1 0 0 0]\")\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"# Draw the figure\n",
|
||||||
|
"layout = np.zeros(n_rows * n_cols)\n",
|
||||||
|
"layout[3] = 2; layout[8] = 2 ; layout[10] = 2\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = states[0], path1=states, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "0p1gCpGoFn4M"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's calculate the return -- the sum of discounted future rewards"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "lyz47NWrITfj"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def calculate_return(rewards, gamma):\n",
|
||||||
|
" # TODO -- you write this function\n",
|
||||||
|
" # It should compute one return for the start of the sequence (i.e. G_1)\n",
|
||||||
|
" # Replace this line\n",
|
||||||
|
" return_val = 0.0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return return_val"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4fEuBRPnFm_N"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
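For reference, the return is the discounted sum $G = \sum_k \gamma^k r_k$. A minimal sketch of the discounting (with made-up rewards, not the penguin's trajectory):

```python
import numpy as np

def discounted_return(rewards, gamma):
    # G = r_0 + gamma * r_1 + gamma^2 * r_2 + ...
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Illustrative: rewards [0, 0, 1] with gamma = 0.9 give 0.9**2 = 0.81
print(discounted_return([0, 0, 1], gamma=0.9))
```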
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"gamma = 0.9\n",
|
||||||
|
"for t in range(len(states)):\n",
|
||||||
|
" print(\"Return at time %d = %3.3f\"%(t, calculate_return(rewards[t:],gamma)))\n",
|
||||||
|
"\n",
|
||||||
|
"# Reality check!\n",
|
||||||
|
"print(\"True return at time 0: 1.998\")"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "o19lQgM3JrOz"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's define the state transition function $Pr(s_{t+1}|s_{t},a)$ in full, where $a$ is the action. Here $a=0$ means try to go upward, $a=1$ right, $a=2$ down, and $a=3$ left. However, the ice is slippery, so the penguin doesn't always move in the direction it intends.\n",
|
||||||
|
"\n",
|
||||||
|
"Note that, as for the states, we've indexed the actions from zero (unlike in the book), so that they map directly to array indices."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Fhc6DzZNOjiC"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"transition_probabilities_given_action1 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.33, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.33, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.33, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.34, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.34, 0.00, 0.00, 0.17, 0.00, 0.25, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.75, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.17, 0.00, 0.25, 0.00, 0.00, 0.50, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.75 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.25 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"transition_probabilities_given_action2 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.25, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.75 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.50, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.25 , 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.25, 0.00, 0.00, 0.50, 0.00, 0.17, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.25, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.75, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.50, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"transition_probabilities_given_action3 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.25, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.25 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.25, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.75 , 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.50, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.50, 0.00, 0.00, 0.16, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.75, 0.00, 0.00, 0.16, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.33, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.16, 0.00, 0.25, 0.00, 0.00, 0.33, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.16, 0.00, 0.00, 0.00, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.34, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.34, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"transition_probabilities_given_action4 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.25, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.50, 0.00, 0.75, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.25, 0.00, 0.00, 0.33, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.33, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.17, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.25 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.50, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.75 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"# Store all of these in a three-dimensional array\n",
|
||||||
|
"# Pr(s_{t+1}=2|s_{t}=1, a_{t}=3) is stored at position [2,1,3]\n",
|
||||||
|
"transition_probabilities_given_action = np.concatenate((np.expand_dims(transition_probabilities_given_action1,2),\n",
|
||||||
|
" np.expand_dims(transition_probabilities_given_action2,2),\n",
|
||||||
|
" np.expand_dims(transition_probabilities_given_action3,2),\n",
|
||||||
|
" np.expand_dims(transition_probabilities_given_action4,2)),axis=2)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "l7rT78BbOgTi"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
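The storage convention in the comment above (axis 0 = next state, axis 1 = current state, axis 2 = action) can be seen on a toy 2-state, 2-action example built the same way with `np.expand_dims`/`np.concatenate` (all numbers here are made up for illustration):

```python
import numpy as np

# Two hypothetical per-action transition matrices (columns sum to one)
P_a0 = np.array([[0.8, 0.3],
                 [0.2, 0.7]])
P_a1 = np.array([[0.1, 0.6],
                 [0.9, 0.4]])

# Stack along a new third axis: P[s_next, s_now, a] = Pr(s_next | s_now, a)
P = np.concatenate((np.expand_dims(P_a0, 2),
                    np.expand_dims(P_a1, 2)), axis=2)

print(P.shape)     # (2, 2, 2)
print(P[1, 0, 1])  # Pr(next=1 | current=0, action=1) = 0.9
```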
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Now we need a policy. Let's start with the deterministic policy in figure 19.5a:\n",
|
||||||
|
"policy = [2,2,1,1, 2,1,1,1, 1,1,0,2, 1,0,1,1]\n",
|
||||||
|
"\n",
|
||||||
|
"# Let's draw the policy first\n",
|
||||||
|
"layout = np.zeros(n_rows * n_cols)\n",
|
||||||
|
"layout[15] = 2\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = states[0], policy = policy, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "8jWhDlkaKj7Q"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def markov_decision_process_step_deterministic(state, transition_probabilities_given_action, reward_structure, policy):\n",
|
||||||
|
" # TODO -- complete this function.\n",
|
||||||
|
"    # For each state, there is a corresponding action.\n",
|
||||||
|
" # Draw the next state based on the current state and that action\n",
|
||||||
|
" # and calculate the reward\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
"    new_state = 0; reward = 0\n",
|
||||||
|
"\n",
|
||||||
|
" return new_state, reward\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "dueNbS2SUVUK"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
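One way the pieces fit together, sketched on a toy 2-state problem (all names and numbers below are illustrative, not the exercise solution): look up the action from the policy, sample the next state from the corresponding column of the transition array, then read off the reward for the state the penguin lands in.

```python
import numpy as np

# Toy 2-state, 2-action dynamics: P[s_next, s_now, a]
P_a0 = np.array([[0.8, 0.3],
                 [0.2, 0.7]])
P_a1 = np.array([[0.1, 0.6],
                 [0.9, 0.4]])
P = np.stack((P_a0, P_a1), axis=2)

reward_for_state = np.array([0.0, 1.0])  # reward received on arrival
policy = [1, 0]                          # deterministic action per state

def mdp_step_deterministic(state):
    action = policy[state]
    new_state = np.random.choice(2, p=P[:, state, action])
    return new_state, reward_for_state[new_state]

np.random.seed(0)
print(mdp_step_deterministic(0))
```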
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set up the reward structure so it matches figure 19.2\n",
|
||||||
|
"reward_structure = np.zeros((16,1))\n",
|
||||||
|
"reward_structure[15] = 1\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize random number seed\n",
|
||||||
|
"np.random.seed(3)\n",
|
||||||
|
"T = 10\n",
|
||||||
|
"# Set up the states, so the fish are in the same positions as figure 19.5\n",
|
||||||
|
"states = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"rewards = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"\n",
|
||||||
|
"states[0] = 0\n",
|
||||||
|
"for t in range(T-1):\n",
|
||||||
|
" states[t+1],rewards[t+1] = markov_decision_process_step_deterministic(states[t], transition_probabilities_given_action, reward_structure, policy)\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Your States:\", states)\n",
|
||||||
|
"print(\"True States: [ 0 4 8 9 13 14 15 11 7 3]\")\n",
|
||||||
|
"print(\"Your Rewards:\", rewards)\n",
|
||||||
|
"print(\"True Rewards: [0 0 0 0 0 0 1 0 0 0]\")\n",
|
||||||
|
"\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = states[0], path1=states, policy = policy, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4Du5aUfd2Lci"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"You can see that the penguin usually follows the policy (i.e., heads in the direction of the cyan arrows) when it can. But sometimes the penguin \"slips\" to a different neighboring state.\n",
|
||||||
|
"\n",
|
||||||
|
"Now let's investigate a stochastic policy"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "bLEd8xug33b-"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"# Let's now choose a random policy. We'll generate a set of random numbers and pass\n",
|
||||||
|
"# them through a softmax function\n",
|
||||||
|
"stochastic_policy = np.random.normal(size=(4,n_rows*n_cols))\n",
|
||||||
|
"stochastic_policy = np.exp(stochastic_policy) / (np.ones((4,1))@ np.expand_dims(np.sum(np.exp(stochastic_policy), axis=0),0))\n",
|
||||||
|
"np.set_printoptions(precision=2)\n",
|
||||||
|
"print(stochastic_policy)\n",
|
||||||
|
"\n",
|
||||||
|
"# Let's draw the policy first\n",
|
||||||
|
"layout = np.zeros(n_rows * n_cols)\n",
|
||||||
|
"layout[15] = 2\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = states[0], path1=states, policy = stochastic_policy, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "o7T0b3tyilDc"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
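The normalisation line in the cell above is a column-wise softmax. With broadcasting and `keepdims`, the same computation can be written more compactly (toy sizes, for illustration only):

```python
import numpy as np

np.random.seed(0)
logits = np.random.normal(size=(4, 3))  # 4 actions, 3 states (toy sizes)

# Column-wise softmax: each column becomes a distribution over the 4 actions
policy = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

print(policy.sum(axis=0))  # each column sums to one
```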
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def markov_decision_process_step_stochastic(state, transition_probabilities_given_action, reward_structure, stochastic_policy):\n",
|
||||||
|
" # TODO -- complete this function.\n",
|
||||||
|
"    # For each state, there is a corresponding distribution over actions\n",
|
||||||
|
" # Draw a sample from that distribution to get the action\n",
|
||||||
|
" # Draw the next state based on the current state and that action\n",
|
||||||
|
" # and calculate the reward\n",
|
||||||
|
" # Replace this line:\n",
|
||||||
|
"    new_state = 0; reward = 0; action = 0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" return new_state, reward, action"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "T68mTZSe6A3w"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
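The only new ingredient versus the deterministic case is sampling the action itself. A hedged sketch of that half (the action distribution below is made up, standing in for one column of `stochastic_policy`):

```python
import numpy as np

# Hypothetical distribution over the 4 actions for one state
action_probs = np.array([0.1, 0.6, 0.2, 0.1])

np.random.seed(0)
action = np.random.choice(4, p=action_probs)  # sample the action index
# The next state would then be drawn from the column
# transition_probabilities_given_action[:, state, action]
print(action)
```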
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Set up the reward structure so it matches figure 19.2\n",
|
||||||
|
"reward_structure = np.zeros((16,1))\n",
|
||||||
|
"reward_structure[15] = 1\n",
|
||||||
|
"\n",
|
||||||
|
"# Initialize random number seed\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"T = 10\n",
|
||||||
|
"# Set up the states, so the fish are in the same positions as figure 19.5\n",
|
||||||
|
"states = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"rewards = np.zeros(T, dtype='uint8')\n",
|
||||||
|
"actions = np.zeros(T-1, dtype='uint8')\n",
|
||||||
|
"\n",
|
||||||
|
"states[0] = 0\n",
|
||||||
|
"for t in range(T-1):\n",
|
||||||
|
" states[t+1],rewards[t+1],actions[t] = markov_decision_process_step_stochastic(states[t], transition_probabilities_given_action, reward_structure, stochastic_policy)\n",
|
||||||
|
"\n",
|
||||||
|
"print(\"Actions\", actions)\n",
|
||||||
|
"print(\"Your States:\", states)\n",
|
||||||
|
"print(\"Your Rewards:\", rewards)\n",
|
||||||
|
"\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, state = states[0], path1=states, policy = stochastic_policy, draw_state_index = True)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "hMRVYX2HtqMg"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
769
Notebooks/Chap19/19_2_Dynamic_Programming.ipynb
Normal file
File diff suppressed because one or more lines are too long
826
Notebooks/Chap19/19_3_Monte_Carlo_Methods.ipynb
Normal file
File diff suppressed because one or more lines are too long
521
Notebooks/Chap19/19_4_Temporal_Difference_Methods.ipynb
Normal file
@@ -0,0 +1,521 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNEAhORON7DFN1dZMhDK/PO",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap19/19_4_Temporal_Difference_Methods.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 19.4: Temporal difference methods**\n",
"\n",
"This notebook investigates temporal difference methods for tabular reinforcement learning, as described in section 19.3.3 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from PIL import Image"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Get local copies of components of images\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Empty.png\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Hole.png\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Fish.png\n",
"!wget https://raw.githubusercontent.com/udlbook/udlbook/main/Notebooks/Chap19/Penguin.png"
],
"metadata": {
"id": "ZsvrUszPLyEG"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Ugly class that takes care of drawing pictures like in the book.\n",
|
||||||
|
"# You can totally ignore this code!\n",
|
||||||
|
"class DrawMDP:\n",
|
||||||
|
" # Constructor initializes parameters\n",
|
||||||
|
" def __init__(self, n_row, n_col):\n",
|
||||||
|
" self.empty_image = np.asarray(Image.open('Empty.png'))\n",
|
||||||
|
" self.hole_image = np.asarray(Image.open('Hole.png'))\n",
|
||||||
|
" self.fish_image = np.asarray(Image.open('Fish.png'))\n",
|
||||||
|
" self.penguin_image = np.asarray(Image.open('Penguin.png'))\n",
|
||||||
|
" self.fig,self.ax = plt.subplots()\n",
|
||||||
|
" self.n_row = n_row\n",
|
||||||
|
" self.n_col = n_col\n",
|
||||||
|
"\n",
|
||||||
|
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 
'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
|
||||||
|
" my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
|
||||||
|
" r = np.floor(my_colormap_vals_dec/(256*256))\n",
|
||||||
|
" g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
|
||||||
|
" b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
|
||||||
|
" self.colormap = np.vstack((r,g,b)).transpose()/255.0\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" def draw_text(self, text, row, col, position, color):\n",
|
||||||
|
" if position == 'bc':\n",
|
||||||
|
" self.ax.text( 83*col+41,83 * (row+1) -5, text, horizontalalignment=\"center\", color=color, fontweight='bold')\n",
|
||||||
|
" if position == 'tc':\n",
|
||||||
|
" self.ax.text( 83*col+41,83 * (row) +10, text, horizontalalignment=\"center\", color=color, fontweight='bold')\n",
|
||||||
|
" if position == 'lc':\n",
|
||||||
|
" self.ax.text( 83*col+2,83 * (row) +41, text, verticalalignment=\"center\", color=color, fontweight='bold', rotation=90)\n",
|
||||||
|
" if position == 'rc':\n",
|
||||||
|
" self.ax.text( 83*(col+1)-5,83 * (row) +41, text, horizontalalignment=\"right\", verticalalignment=\"center\", color=color, fontweight='bold', rotation=-90)\n",
|
||||||
|
" if position == 'tl':\n",
|
||||||
|
" self.ax.text( 83*col+5,83 * row +5, text, verticalalignment = 'top', horizontalalignment=\"left\", color=color, fontweight='bold')\n",
|
||||||
|
" if position == 'tr':\n",
|
||||||
|
" self.ax.text( 83*(col+1)-5, 83 * row +5, text, verticalalignment = 'top', horizontalalignment=\"right\", color=color, fontweight='bold')\n",
|
||||||
|
"\n",
|
||||||
|
" # Draws a set of states\n",
|
||||||
|
" def draw_path(self, path, color1, color2):\n",
|
||||||
|
" for i in range(len(path)-1):\n",
|
||||||
|
" row_start = np.floor(path[i]/self.n_col)\n",
|
||||||
|
" row_end = np.floor(path[i+1]/self.n_col)\n",
|
||||||
|
" col_start = path[i] - row_start * self.n_col\n",
|
||||||
|
" col_end = path[i+1] - row_end * self.n_col\n",
|
||||||
|
"\n",
|
||||||
|
" color_index = int(np.floor(255 * i/(len(path)-1.)))\n",
|
||||||
|
" self.ax.plot([col_start * 83+41 + i, col_end * 83+41 + i ],[row_start * 83+41 + i, row_end * 83+41 + i ], color=(self.colormap[color_index,0],self.colormap[color_index,1],self.colormap[color_index,2]))\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw deterministic policy\n",
|
||||||
|
" def draw_deterministic_policy(self,i, action):\n",
|
||||||
|
" row = np.floor(i/self.n_col)\n",
|
||||||
|
" col = i - row * self.n_col\n",
|
||||||
|
" center_x = 83 * col + 41\n",
|
||||||
|
" center_y = 83 * row + 41\n",
|
||||||
|
" arrow_base_width = 10\n",
|
||||||
|
" arrow_height = 15\n",
|
||||||
|
" # Draw arrow pointing upward\n",
|
||||||
|
" if action ==0:\n",
|
||||||
|
" triangle_indices = np.array([[center_x, center_y-arrow_height/2],\n",
|
||||||
|
" [center_x - arrow_base_width/2, center_y+arrow_height/2],\n",
|
||||||
|
" [center_x + arrow_base_width/2, center_y+arrow_height/2]])\n",
|
||||||
|
" # Draw arrow pointing right\n",
|
||||||
|
" if action ==1:\n",
|
||||||
|
" triangle_indices = np.array([[center_x + arrow_height/2, center_y],\n",
|
||||||
|
" [center_x - arrow_height/2, center_y-arrow_base_width/2],\n",
|
||||||
|
" [center_x - arrow_height/2, center_y+arrow_base_width/2]])\n",
|
||||||
|
" # Draw arrow pointing downward\n",
|
||||||
|
" if action ==2:\n",
|
||||||
|
" triangle_indices = np.array([[center_x, center_y+arrow_height/2],\n",
|
||||||
|
" [center_x - arrow_base_width/2, center_y-arrow_height/2],\n",
|
||||||
|
" [center_x + arrow_base_width/2, center_y-arrow_height/2]])\n",
|
||||||
|
" # Draw arrow pointing left\n",
|
||||||
|
" if action ==3:\n",
|
||||||
|
" triangle_indices = np.array([[center_x - arrow_height/2, center_y],\n",
|
||||||
|
" [center_x + arrow_height/2, center_y-arrow_base_width/2],\n",
|
||||||
|
" [center_x + arrow_height/2, center_y+arrow_base_width/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw stochastic policy\n",
|
||||||
|
" def draw_stochastic_policy(self,i, action_probs):\n",
|
||||||
|
" row = np.floor(i/self.n_col)\n",
|
||||||
|
" col = i - row * self.n_col\n",
|
||||||
|
" offset = 20\n",
|
||||||
|
" # Draw arrow pointing upward\n",
|
||||||
|
" center_x = 83 * col + 41\n",
|
||||||
|
" center_y = 83 * row + 41 - offset\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[0]\n",
|
||||||
|
" arrow_height = 20 * action_probs[0]\n",
|
||||||
|
" triangle_indices = np.array([[center_x, center_y-arrow_height/2],\n",
|
||||||
|
" [center_x - arrow_base_width/2, center_y+arrow_height/2],\n",
|
||||||
|
" [center_x + arrow_base_width/2, center_y+arrow_height/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw arrow pointing right\n",
|
||||||
|
" center_x = 83 * col + 41 + offset\n",
|
||||||
|
" center_y = 83 * row + 41\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[1]\n",
|
||||||
|
" arrow_height = 20 * action_probs[1]\n",
|
||||||
|
" triangle_indices = np.array([[center_x + arrow_height/2, center_y],\n",
|
||||||
|
" [center_x - arrow_height/2, center_y-arrow_base_width/2],\n",
|
||||||
|
" [center_x - arrow_height/2, center_y+arrow_base_width/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw arrow pointing downward\n",
|
||||||
|
" center_x = 83 * col + 41\n",
|
||||||
|
" center_y = 83 * row + 41 +offset\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[2]\n",
|
||||||
|
" arrow_height = 20 * action_probs[2]\n",
|
||||||
|
" triangle_indices = np.array([[center_x, center_y+arrow_height/2],\n",
|
||||||
|
" [center_x - arrow_base_width/2, center_y-arrow_height/2],\n",
|
||||||
|
" [center_x + arrow_base_width/2, center_y-arrow_height/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw arrow pointing left\n",
|
||||||
|
" center_x = 83 * col + 41 -offset\n",
|
||||||
|
" center_y = 83 * row + 41\n",
|
||||||
|
" arrow_base_width = 15 * action_probs[3]\n",
|
||||||
|
" arrow_height = 20 * action_probs[3]\n",
|
||||||
|
" triangle_indices = np.array([[center_x - arrow_height/2, center_y],\n",
|
||||||
|
" [center_x + arrow_height/2, center_y-arrow_base_width/2],\n",
|
||||||
|
" [center_x + arrow_height/2, center_y+arrow_base_width/2]])\n",
|
||||||
|
" self.ax.fill(triangle_indices[:,0], triangle_indices[:,1],facecolor='cyan', edgecolor='darkcyan', linewidth=1)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" def draw(self, layout, state=None, draw_state_index= False, rewards=None, policy=None, state_values=None, state_action_values=None,path1=None, path2 = None):\n",
|
||||||
|
" # Construct the image\n",
|
||||||
|
" image_out = np.zeros((self.n_row * 83, self.n_col * 83, 4),dtype='uint8')\n",
|
||||||
|
" for c_row in range (self.n_row):\n",
|
||||||
|
" for c_col in range(self.n_col):\n",
|
||||||
|
" if layout[c_row * self.n_col + c_col]==0:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.empty_image\n",
|
||||||
|
" elif layout[c_row * self.n_col + c_col]==1:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.hole_image\n",
|
||||||
|
" else:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.fish_image\n",
|
||||||
|
" if state is not None and state == c_row * self.n_col + c_col:\n",
|
||||||
|
" image_out[c_row*83:c_row*83+83, c_col*83:c_col*83+83,:] = self.penguin_image\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw the image\n",
|
||||||
|
" plt.imshow(image_out)\n",
|
||||||
|
" self.ax.get_xaxis().set_visible(False)\n",
|
||||||
|
" self.ax.get_yaxis().set_visible(False)\n",
|
||||||
|
" self.ax.spines['top'].set_visible(False)\n",
|
||||||
|
" self.ax.spines['right'].set_visible(False)\n",
|
||||||
|
" self.ax.spines['bottom'].set_visible(False)\n",
|
||||||
|
" self.ax.spines['left'].set_visible(False)\n",
|
||||||
|
"\n",
|
||||||
|
" if draw_state_index:\n",
|
||||||
|
" for c_cell in range(layout.size):\n",
|
||||||
|
" self.draw_text(\"%d\"%(c_cell), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'tl','k')\n",
|
||||||
|
"\n",
|
||||||
|
" # Draw the policy as triangles\n",
|
||||||
|
" if policy is not None:\n",
|
||||||
|
" # If the policy is deterministic\n",
|
||||||
|
" if len(policy) == len(layout):\n",
|
||||||
|
" for i in range(len(layout)):\n",
|
||||||
|
" self.draw_deterministic_policy(i, policy[i])\n",
|
||||||
|
" # Else it is stochastic\n",
|
||||||
|
" else:\n",
|
||||||
|
" for i in range(len(layout)):\n",
|
||||||
|
" self.draw_stochastic_policy(i,policy[:,i])\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
" if path1 is not None:\n",
|
||||||
|
" self.draw_path(path1, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0]))\n",
|
||||||
|
"\n",
|
||||||
|
" if rewards is not None:\n",
|
||||||
|
" for c_cell in range(layout.size):\n",
|
||||||
|
" self.draw_text(\"%d\"%(rewards[c_cell]), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'tr','r')\n",
|
||||||
|
"\n",
|
||||||
|
" if state_values is not None:\n",
|
||||||
|
" for c_cell in range(layout.size):\n",
|
||||||
|
" self.draw_text(\"%2.2f\"%(state_values[c_cell]), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'bc','black')\n",
|
||||||
|
"\n",
|
||||||
|
" if state_action_values is not None:\n",
|
||||||
|
" for c_cell in range(layout.size):\n",
|
||||||
|
" self.draw_text(\"%2.2f\"%(state_action_values[0, c_cell]), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'tc','black')\n",
|
||||||
|
" self.draw_text(\"%2.2f\"%(state_action_values[1, c_cell]), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'rc','black')\n",
|
||||||
|
" self.draw_text(\"%2.2f\"%(state_action_values[2, c_cell]), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'bc','black')\n",
|
||||||
|
" self.draw_text(\"%2.2f\"%(state_action_values[3, c_cell]), np.floor(c_cell/self.n_col), c_cell-np.floor(c_cell/self.n_col)*self.n_col,'lc','black')\n",
|
||||||
|
"\n",
|
||||||
|
" plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Gq1HfJsHN3SB"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"# We're going to work on the problem depicted in figure 19.10a\n",
"n_rows = 4; n_cols = 4\n",
"layout = np.zeros(n_rows * n_cols)\n",
"reward_structure = np.zeros(n_rows * n_cols)\n",
"layout[9] = 1 ; reward_structure[9] = -2\n",
"layout[10] = 1; reward_structure[10] = -2\n",
"layout[14] = 1; reward_structure[14] = -2\n",
"layout[15] = 2; reward_structure[15] = 3\n",
"initial_state = 0\n",
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
"mdp_drawer.draw(layout, state = initial_state, rewards=reward_structure, draw_state_index = True)"
],
"metadata": {
"id": "eBQ7lTpJQBSe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"For clarity, the black numbers are the state number and the red numbers are the reward for being in that state. Note that the states are indexed from 0 rather than 1 as in the book to make the code neater."
],
"metadata": {
"id": "6Vku6v_se2IG"
}
},
{
"cell_type": "markdown",
"source": [
"Now let's define the state transition function $Pr(s_{t+1}|s_{t},a)$ in full, where $a$ is the action. Here $a=0$ means try to go up, $a=1$ right, $a=2$ down, and $a=3$ left. However, the ice is slippery, so we don't always move in the direction we intend.\n",
"\n",
"Note that, as for the states, we've indexed the actions from zero (unlike in the book) so that they map more neatly to array indices."
],
"metadata": {
"id": "Fhc6DzZNOjiC"
}
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"transition_probabilities_given_action0 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.33, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.33, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.33, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.34, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.34, 0.00, 0.00, 0.17, 0.00, 0.25, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.75, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.17, 0.00, 0.25, 0.00, 0.00, 0.50, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.75 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.25 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"transition_probabilities_given_action1 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.25, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.75 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.50, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.25 , 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.25, 0.00, 0.00, 0.50, 0.00, 0.17, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.25, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.75, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.50, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"transition_probabilities_given_action2 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.25, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.25 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.25, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.75 , 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.50, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.50, 0.00, 0.00, 0.16, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.75, 0.00, 0.00, 0.16, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.17, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00, 0.17, 0.00, 0.00, 0.33, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.16, 0.00, 0.25, 0.00, 0.00, 0.33, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.16, 0.00, 0.00, 0.00, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.33, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.33, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.34, 0.00, 0.50 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.34, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"transition_probabilities_given_action3 = np.array(\\\n",
|
||||||
|
"[[0.00 , 0.25, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.50, 0.00, 0.75, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.50, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.50 , 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.33, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.25, 0.00, 0.00, 0.33, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.50, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00, 0.50, 0.00, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.33, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.17, 0.00, 0.50, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.25 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.34, 0.00, 0.00, 0.00, 0.00, 0.50, 0.00, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.50, 0.00, 0.50, 0.00 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.16, 0.00, 0.00, 0.25, 0.00, 0.75 ],\n",
|
||||||
|
" [0.00 , 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.00, 0.00, 0.25, 0.00 ],\n",
|
||||||
|
"])\n",
|
||||||
|
"\n",
|
||||||
|
"# Store all of these in a three-dimensional array\n",
"# Pr(s_{t+1}=2 | s_{t}=1, a_{t}=3) is stored at position [2,1,3]\n",
|
||||||
|
"transition_probabilities_given_action = np.concatenate((np.expand_dims(transition_probabilities_given_action0,2),\n",
|
||||||
|
" np.expand_dims(transition_probabilities_given_action1,2),\n",
|
||||||
|
" np.expand_dims(transition_probabilities_given_action2,2),\n",
|
||||||
|
" np.expand_dims(transition_probabilities_given_action3,2)),axis=2)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "l7rT78BbOgTi"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
{
"cell_type": "code",
"source": [
"def q_learning_step(state_action_values, reward, state, new_state, action, gamma, alpha = 0.1):\n",
"  # TODO -- write this function\n",
"  # Replace this line\n",
"  state_action_values_after = np.copy(state_action_values)\n",
"\n",
"  return state_action_values_after"
],
"metadata": {
"id": "5pO6-9ACWhiV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Takes a single step in the MDP under a completely random policy\n",
"def markov_decision_process_step(state, transition_probabilities_given_action, reward_structure):\n",
"  # Pick action\n",
"  action = np.random.randint(4)\n",
"  # Update the state\n",
"  new_state = np.random.choice(a=np.arange(0,transition_probabilities_given_action.shape[0]),p = transition_probabilities_given_action[:,state,action])\n",
"  # Return the reward -- here the reward is for leaving the state\n",
"  reward = reward_structure[state]\n",
"\n",
"  return new_state, reward, action"
],
"metadata": {
"id": "akjrncMF-FkU"
},
"execution_count": null,
"outputs": []
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Initialize the state-action values to random numbers\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"n_state = transition_probabilities_given_action.shape[0]\n",
|
||||||
|
"n_action = transition_probabilities_given_action.shape[2]\n",
|
||||||
|
"state_action_values = np.random.normal(size=(n_action, n_state))\n",
|
||||||
|
"gamma = 0.9\n",
|
||||||
|
"\n",
|
||||||
|
"policy = np.argmax(state_action_values, axis=0).astype(int)\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, policy = policy, state_action_values = state_action_values, rewards = reward_structure)\n",
|
||||||
|
"\n",
|
||||||
|
"# Now let's simulate a single Q-learning step\n",
|
||||||
|
"initial_state = 9\n",
|
||||||
|
"print(\"Initial state = \", initial_state)\n",
|
||||||
|
"new_state, reward, action = markov_decision_process_step(initial_state, transition_probabilities_given_action, reward_structure)\n",
|
||||||
|
"print(\"Action = \", action)\n",
|
||||||
|
"print(\"New state = \", new_state)\n",
|
||||||
|
"print(\"Reward = \", reward)\n",
|
||||||
|
"\n",
|
||||||
|
"state_action_values_after = q_learning_step(state_action_values, reward, initial_state, new_state, action, gamma)\n",
|
||||||
|
"print(\"Your value:\",state_action_values_after[action, initial_state])\n",
|
||||||
|
"print(\"True value: 0.27650262412468796\")\n",
|
||||||
|
"\n",
|
||||||
|
"policy = np.argmax(state_action_values, axis=0).astype(int)\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, policy = policy, state_action_values = state_action_values_after, rewards = reward_structure)\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Fu5_VjvbSwfJ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
{
"cell_type": "markdown",
"source": [
"Now let's run this for a while and watch the policy improve."
],
"metadata": {
"id": "Ogh0qucmb68J"
}
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Initialize the state-action values to random numbers\n",
|
||||||
|
"np.random.seed(0)\n",
|
||||||
|
"n_state = transition_probabilities_given_action.shape[0]\n",
|
||||||
|
"n_action = transition_probabilities_given_action.shape[2]\n",
|
||||||
|
"state_action_values = np.random.normal(size=(n_action, n_state))\n",
|
||||||
|
"# Hard code termination state of finding fish\n",
|
||||||
|
"state_action_values[:,n_state-1] = 3.0\n",
|
||||||
|
"gamma = 0.9\n",
|
||||||
|
"\n",
|
||||||
|
"# Draw the initial setup\n",
|
||||||
|
"policy = np.argmax(state_action_values, axis=0).astype(int)\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, policy = policy, state_action_values = state_action_values, rewards = reward_structure)\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"state= np.random.randint(n_state-1)\n",
|
||||||
|
"\n",
|
||||||
|
"# Run for a number of iterations\n",
|
||||||
|
"for c_iter in range(10000):\n",
|
||||||
|
" new_state, reward, action = markov_decision_process_step(state, transition_probabilities_given_action, reward_structure)\n",
|
||||||
|
" state_action_values_after = q_learning_step(state_action_values, reward, state, new_state, action, gamma)\n",
|
||||||
|
" # If in termination state, reset state randomly\n",
|
||||||
|
"  if new_state == n_state-1:\n",
|
||||||
|
"    state = np.random.randint(n_state-1)\n",
|
||||||
|
" else:\n",
|
||||||
|
" state = new_state\n",
|
||||||
|
" # Update the policy\n",
|
||||||
|
" state_action_values = np.copy(state_action_values_after)\n",
|
||||||
|
" policy = np.argmax(state_action_values, axis=0).astype(int)\n",
|
||||||
|
"\n",
|
||||||
|
"# Draw the final situation\n",
|
||||||
|
"mdp_drawer = DrawMDP(n_rows, n_cols)\n",
|
||||||
|
"mdp_drawer.draw(layout, policy = policy, state_action_values = state_action_values, rewards = reward_structure)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "qQFhwVqPcCFH"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
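For reference, the `q_learning_step` called in the training loop above can be sketched as a standard tabular Q-learning update. The signature is taken from the call site; the learning rate `alpha` and its value are assumptions — the notebook defines its own version in an earlier cell.

```python
import numpy as np

def q_learning_step(state_action_values, reward, state, new_state, action, gamma, alpha=0.1):
    # Copy so the caller's table is not modified in place
    q = np.copy(state_action_values)
    # TD target: immediate reward plus discounted value of the best
    # action available in the new state
    td_target = reward + gamma * np.max(q[:, new_state])
    # Move the current estimate a fraction alpha towards the target
    q[action, state] += alpha * (td_target - q[action, state])
    return q

# Tiny usage example: 2 actions, 3 states, all values initially zero
q = np.zeros((2, 3))
q_new = q_learning_step(q, reward=1.0, state=0, new_state=2, action=1, gamma=0.9)
```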
|
||||||
170
Notebooks/Chap19/19_5_Control_Variates.ipynb
Normal file
@@ -0,0 +1,170 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyO6CLgMIO5bUVAMkzPT3z4y",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap19/19_5_Control_Variates.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 19.5: Control variates**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates the method of control variates as described in figure 19.16.\n",
|
||||||
|
"\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import matplotlib.pyplot as plt"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "OLComQyvCIJ7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Generate samples from our two variables, $a$ and $b$. We are interested in estimating the mean of $a$, but we can use $b$ to improve our estimates if the two are correlated."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "uwmhcAZBzTRO"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Sample from two variables with mean zero, standard deviation one, and a given correlation coefficient\n",
|
||||||
|
"def get_samples(n_samples, correlation_coeff=0.8):\n",
|
||||||
|
" a = np.random.normal(size=(1,n_samples))\n",
|
||||||
|
" temp = np.random.normal(size=(1, n_samples))\n",
|
||||||
|
" b = correlation_coeff * a + np.sqrt(1-correlation_coeff * correlation_coeff) * temp\n",
|
||||||
|
" return a, b"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "bC8MBXPawQJ3"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
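As an aside (not part of the notebook), the construction in `get_samples` can be checked numerically: with $b = \rho a + \sqrt{1-\rho^2}\,\epsilon$ for independent standard normals $a$ and $\epsilon$, the measured correlation should approach $\rho$.

```python
import numpy as np

rho = 0.8
n = 200000
rng = np.random.default_rng(0)
a = rng.standard_normal(n)
temp = rng.standard_normal(n)
# Same construction as get_samples
b = rho * a + np.sqrt(1 - rho * rho) * temp

# Sample correlation should be close to the target rho
measured_rho = np.corrcoef(a, b)[0, 1]
```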
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"N = 10000000\n",
|
||||||
|
"a,b, = get_samples(N)\n",
|
||||||
|
"\n",
|
||||||
|
"# Verify that these two variables have zero mean and unit standard deviation\n",
|
||||||
|
"print(\"Mean of a = %3.3f, Std of a = %3.3f\"%(np.mean(a),np.std(a)))\n",
|
||||||
|
"print(\"Mean of b = %3.3f, Std of b = %3.3f\"%(np.mean(b),np.std(b)))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "1cT66nbRyW34"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's sample $N=5$ examples from $a$ and estimate their mean $\\mathbb{E}[a]$. We'll do this 1,000,000 times and then compute the variance of those estimates."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "PWoYRpjS0Nlf"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"n_estimate = 1000000\n",
|
||||||
|
"\n",
|
||||||
|
"N = 5\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- sample N examples of variable $a$\n",
|
||||||
|
"# Compute the mean of each\n",
|
||||||
|
"# Compute the mean and variance of these estimates of the mean\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"mean_of_estimator_1 = -1; variance_of_estimator_1 = -1\n",
"\n",
"print(\"Standard estimator mean = %3.3f, Standard estimator variance = %3.3f\"%(mean_of_estimator_1, variance_of_estimator_1))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "n6Uem2aYzBp7"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
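One way to complete the TODO above, as a sketch: it samples $a$ directly from a standard normal rather than calling `get_samples`, uses fewer repeats for speed, and the variable names are illustrative rather than the notebook's own.

```python
import numpy as np

np.random.seed(0)
n_estimate = 100000  # fewer repeats than the notebook's 1,000,000, for speed
N = 5

# Each row holds one N-sample estimate of E[a]
a = np.random.normal(size=(n_estimate, N))
estimates = np.mean(a, axis=1)

# The estimator itself has mean ~0 and variance ~1/N = 0.2
mean_of_estimator = np.mean(estimates)
variance_of_estimator = np.var(estimates)
```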
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Now let's estimate the mean $\\mathbb{E}[a]$ of $a$ by computing the mean of $a-b$, where $b$ is correlated with $a$ and has mean zero."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "F-af86z13TFc"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"n_estimate = 1000000\n",
|
||||||
|
"\n",
|
||||||
|
"N = 5\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO -- sample N examples of variables $a$ and $b$\n",
|
||||||
|
"# Compute $c=a-b$ for each and then compute the mean of $c$\n",
|
||||||
|
"# Compute the mean and variance of these estimates of the mean of $c$\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"mean_of_estimator_2 = -1; variance_of_estimator_2 = -1\n",
"\n",
"print(\"Control variate estimator mean = %3.3f, Control variate estimator variance = %3.3f\"%(mean_of_estimator_2, variance_of_estimator_2))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "MrEVDggY0IGU"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
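Correspondingly, a sketch of the control variate estimator (again with illustrative names; it regenerates correlated pairs inline instead of calling `get_samples`):

```python
import numpy as np

np.random.seed(1)
n_estimate = 100000
N = 5
rho = 0.8  # correlation coefficient, matching get_samples' default

# Correlated pairs, constructed as in get_samples
a = np.random.normal(size=(n_estimate, N))
temp = np.random.normal(size=(n_estimate, N))
b = rho * a + np.sqrt(1 - rho * rho) * temp

# E[b] = 0, so the mean of c = a - b is still an unbiased estimate of E[a];
# Var[a - b] = 2 - 2*rho = 0.4, so the estimator variance is ~0.4/N = 0.08
estimates = np.mean(a - b, axis=1)
mean_of_estimator = np.mean(estimates)
variance_of_estimator = np.var(estimates)
```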
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Note that both estimators have a very similar mean, but the second has a lower variance.\n",
"\n",
"TODO -- Experiment with different sample sizes $N$ and correlation coefficients."
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "Jklzkca14ofS"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
BIN
Notebooks/Chap19/Empty.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 12 KiB |
BIN
Notebooks/Chap19/Fish.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 12 KiB |
BIN
Notebooks/Chap19/Hole.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 9.2 KiB |
BIN
Notebooks/Chap19/Penguin.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 13 KiB |
305
Notebooks/Chap20/20_1_Random_Data.ipynb
Normal file
@@ -0,0 +1,305 @@
|
|||||||
|
{
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0,
|
||||||
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"provenance": [],
|
||||||
|
"authorship_tag": "ABX9TyPkSYbEjOcEmLt8tU6HxNuR",
|
||||||
|
"include_colab_link": true
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"name": "python3",
|
||||||
|
"display_name": "Python 3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"cells": [
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "view-in-github",
|
||||||
|
"colab_type": "text"
|
||||||
|
},
|
||||||
|
"source": [
|
||||||
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap20/20_1_Random_Data.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"# **Notebook 20.1: Random Data**\n",
|
||||||
|
"\n",
|
||||||
|
"This notebook investigates training the network with random data, as illustrated in figure 20.1.\n",
|
||||||
|
"\n",
|
||||||
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
|
"\n"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
|
||||||
|
"!git clone https://github.com/greydanus/mnist1d"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "D5yLObtZCi9J"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"import numpy as np\n",
|
||||||
|
"import os\n",
|
||||||
|
"import torch, torch.nn as nn\n",
|
||||||
|
"from torch.utils.data import TensorDataset, DataLoader\n",
|
||||||
|
"from torch.optim.lr_scheduler import StepLR\n",
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"import mnist1d\n",
|
||||||
|
"import random\n",
|
||||||
|
"from IPython.display import display, clear_output"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YrXWAH7sUWvU"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"args = mnist1d.data.get_dataset_args()\n",
|
||||||
|
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
|
||||||
|
"\n",
|
||||||
|
"# The training and test input and outputs are in\n",
|
||||||
|
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
|
||||||
|
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
|
||||||
|
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
|
||||||
|
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "twI72ZCrCt5z"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "markdown",
|
||||||
|
"source": [
|
||||||
|
"Define the network"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "_sFvRDGrl4qe"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"D_i = 40 # Input dimensions\n",
|
||||||
|
"D_k = 300 # Hidden dimensions\n",
|
||||||
|
"D_o = 10 # Output dimensions\n",
|
||||||
|
"\n",
|
||||||
|
"model = nn.Sequential(\n",
|
||||||
|
"nn.Linear(D_i, D_k),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(D_k, D_k),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(D_k, D_k),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(D_k, D_k),\n",
|
||||||
|
"nn.ReLU(),\n",
|
||||||
|
"nn.Linear(D_k, D_o))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "FslroPJJffrh"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# He initialization of weights\n",
|
||||||
|
"def weights_init(layer_in):\n",
|
||||||
|
" if isinstance(layer_in, nn.Linear):\n",
|
||||||
|
" nn.init.kaiming_uniform_(layer_in.weight)\n",
|
||||||
|
" layer_in.bias.data.fill_(0.0)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "YgLaex1pfhqz"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def train_model(train_data_x, train_data_y, n_epoch):\n",
|
||||||
|
" # choose cross entropy loss function (equation 5.24 in the loss notes)\n",
|
||||||
|
" loss_function = nn.CrossEntropyLoss()\n",
|
||||||
|
" # construct SGD optimizer and initialize learning rate and momentum\n",
|
||||||
|
" optimizer = torch.optim.SGD(model.parameters(), lr = 0.02, momentum=0.9)\n",
|
||||||
|
" # object that decreases learning rate by half every 20 epochs\n",
|
||||||
|
" scheduler = StepLR(optimizer, step_size=20, gamma=0.5)\n",
|
||||||
|
"  # convert the training data to tensors\n",
|
||||||
|
" x_train = torch.tensor(train_data_x.transpose().astype('float32'))\n",
|
||||||
|
"  y_train = torch.tensor(train_data_y.astype('int64'))\n",
|
||||||
|
"\n",
|
||||||
|
" # load the data into a class that creates the batches\n",
|
||||||
|
" data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
|
||||||
|
"\n",
|
||||||
|
" # Initialize model weights\n",
|
||||||
|
" model.apply(weights_init)\n",
|
||||||
|
"\n",
|
||||||
|
" # store the loss and the % correct at each epoch\n",
|
||||||
|
" losses_train = np.zeros((n_epoch))\n",
|
||||||
|
"\n",
|
||||||
|
" for epoch in range(n_epoch):\n",
|
||||||
|
" # loop over batches\n",
|
||||||
|
" for i, data in enumerate(data_loader):\n",
|
||||||
|
" # retrieve inputs and labels for this batch\n",
|
||||||
|
" x_batch, y_batch = data\n",
|
||||||
|
" # zero the parameter gradients\n",
|
||||||
|
" optimizer.zero_grad()\n",
|
||||||
|
" # forward pass -- calculate model output\n",
|
||||||
|
" pred = model(x_batch)\n",
|
||||||
|
" # compute the loss\n",
|
||||||
|
" loss = loss_function(pred, y_batch)\n",
|
||||||
|
" # backward pass\n",
|
||||||
|
" loss.backward()\n",
|
||||||
|
" # SGD update\n",
|
||||||
|
" optimizer.step()\n",
|
||||||
|
"\n",
|
||||||
|
" # Run whole dataset to get statistics -- normally wouldn't do this\n",
|
||||||
|
" pred_train = model(x_train)\n",
|
||||||
|
" _, predicted_train_class = torch.max(pred_train.data, 1)\n",
|
||||||
|
" losses_train[epoch] = loss_function(pred_train, y_train).item()\n",
|
||||||
|
" if epoch % 5 == 0:\n",
|
||||||
|
" clear_output(wait=True)\n",
|
||||||
|
" display(\"Epoch %d, train loss %3.3f\"%(epoch, losses_train[epoch]))\n",
|
||||||
|
"\n",
|
||||||
|
" # tell scheduler to consider updating learning rate\n",
|
||||||
|
" scheduler.step()\n",
|
||||||
|
"\n",
|
||||||
|
" return losses_train"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "NYw8I_3mmX5c"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Load in the data\n",
|
||||||
|
"train_data_x = data['x'].transpose()\n",
|
||||||
|
"train_data_y = data['y']\n",
|
||||||
|
"# Print out sizes\n",
|
||||||
|
"print(\"Train data: %d examples (columns), each of which has %d dimensions (rows)\"%((train_data_x.shape[1],train_data_x.shape[0])))"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "4FE3HQ_vedXO"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Compute loss for proper data and plot\n",
|
||||||
|
"n_epoch = 60\n",
|
||||||
|
"loss_true_labels = train_model(train_data_x, train_data_y, n_epoch)\n",
|
||||||
|
"# Plot the results\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(loss_true_labels,'r-',label='true_labels')\n",
|
||||||
|
"# ax.set_ylim(0,0.7); ax.set_xlim(0,n_epoch)\n",
|
||||||
|
"ax.set_xlabel('Epoch'); ax.set_ylabel('Loss')\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "b56wdODqemF1"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- Randomize the input data (train_data_x), but retain overall mean and variance\n",
|
||||||
|
"# Replace this line\n",
|
||||||
|
"train_data_x_randomized = np.copy(train_data_x)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "SbPCiiUKgTLw"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
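One possible reading of the TODO above, as a sketch: replace the inputs with Gaussian noise rescaled to the overall mean and standard deviation of the data. The function name and the choice of Gaussian noise are illustrative, not the notebook's own solution.

```python
import numpy as np

def randomize_keep_stats(x, seed=1):
    # Gaussian noise rescaled to the overall mean and std of x
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(x.shape)
    return np.mean(x) + np.std(x) * noise

# Usage example on stand-in data the same shape as the MNIST-1D inputs
x = np.random.default_rng(0).normal(loc=2.0, scale=3.0, size=(40, 4000))
x_rand = randomize_keep_stats(x)
```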
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Compute loss for randomized data and plot\n",
|
||||||
|
"n_epoch = 60\n",
|
||||||
|
"loss_randomized_data = train_model(train_data_x_randomized, train_data_y, n_epoch)\n",
|
||||||
|
"# Plot the results\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(loss_true_labels,'r-',label='true_labels')\n",
|
||||||
|
"ax.plot(loss_randomized_data,'b-',label='random_data')\n",
|
||||||
|
"# ax.set_ylim(0,0.7); ax.set_xlim(0,n_epoch)\n",
|
||||||
|
"ax.set_xlabel('Epoch'); ax.set_ylabel('Loss')\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "y7CcCJvvjLnn"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# TODO -- Permute the labels\n",
|
||||||
|
"# Replace this line:\n",
|
||||||
|
"train_data_y_permuted = np.copy(train_data_y)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "ojaMTrzKj_74"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
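A sketch of one way to complete the TODO above (the helper name is illustrative): permuting the labels breaks the input-label pairing while leaving the label distribution unchanged.

```python
import numpy as np

def permute_labels(y, seed=1):
    # Randomly reorder the labels so each input is paired with a
    # (probably) wrong label, keeping the same multiset of labels
    rng = np.random.default_rng(seed)
    return y[rng.permutation(len(y))]

# Usage example
y = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y_perm = permute_labels(y)
```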
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"# Compute loss for permuted labels and plot\n",
|
||||||
|
"n_epoch = 60\n",
|
||||||
|
"loss_permuted_labels = train_model(train_data_x, train_data_y_permuted, n_epoch)\n",
|
||||||
|
"# Plot the results\n",
|
||||||
|
"fig, ax = plt.subplots()\n",
|
||||||
|
"ax.plot(loss_true_labels,'r-',label='true_labels')\n",
|
||||||
|
"ax.plot(loss_randomized_data,'b-',label='random_data')\n",
|
||||||
|
"ax.plot(loss_permuted_labels,'g-',label='random_labels')\n",
|
||||||
|
"# ax.set_ylim(0,0.7); ax.set_xlim(0,n_epoch)\n",
|
||||||
|
"ax.set_xlabel('Epoch'); ax.set_ylabel('Loss')\n",
|
||||||
|
"ax.legend()\n",
|
||||||
|
"plt.show()"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "LaYCSjyMo9LQ"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
437
index.html
@@ -1,24 +1,31 @@
|
|||||||
<h1>Understanding Deep Learning</h1>
|
<!DOCTYPE html>
|
||||||
by Simon J.D. Prince
|
<html lang="en">
|
||||||
<br>
|
<head>
|
||||||
To be published by MIT Press Dec 5th 2023.<br>
|
<meta charset="UTF-8">
|
||||||
|
<title>udlbook</title>
|
||||||
|
<link rel="stylesheet" href="style.css">
|
||||||
|
</head>
|
||||||
|
|
||||||
<img src="https://raw.githubusercontent.com/udlbook/udlbook/main/UDLCoverSmall.jpg" alt="front cover">
|
<body>
|
||||||
|
<div id="head">
|
||||||
<h2> Download draft PDF </h2>
|
<div>
|
||||||
|
<h1 style="margin: 0; font-size: 36px">Understanding Deep Learning</h1>
|
||||||
<a href="https://github.com/udlbook/udlbook/releases/download/v1.1/UnderstandingDeepLearning_23_07_23_C.pdf">Draft PDF Chapters 1-21</a><br> 2023-07-23. CC-BY-NC-ND license
|
by Simon J.D. Prince
|
||||||
<br>
|
<br>To be published by MIT Press Dec 5th 2023.<br>
|
||||||
|
<ul>
|
||||||
|
<li>
|
||||||
|
<p style="font-size: larger; margin-bottom: 0">Download draft PDF Chapters 1-21 <a
|
||||||
|
href="https://github.com/udlbook/udlbook/releases/download/v1.14/UnderstandingDeepLearning_13_10_23_C.pdf">here</a>
|
||||||
|
</p>2023-10-13. CC-BY-NC-ND license<br>
|
||||||
<img src="https://img.shields.io/github/downloads/udlbook/udlbook/total" alt="download stats shield">
|
<img src="https://img.shields.io/github/downloads/udlbook/udlbook/total" alt="download stats shield">
|
||||||
<br>
|
</li>
|
||||||
<ul>
|
<li> Report errata via <a href="https://github.com/udlbook/udlbook/issues">github</a>
|
||||||
<li> Appendices and notebooks coming soon
|
or contact me directly at udlbookmail@gmail.com
|
||||||
<li> Report errata via <a href="https://github.com/udlbook/udlbook/issues">github</a> or contact me directly at udlbookmail@gmail.com
|
<li> Follow me on <a href="https://twitter.com/SimonPrinceAI">Twitter</a> or <a
|
||||||
<li> Follow me on <a href="https://twitter.com/SimonPrinceAI">Twitter</a> or <a href="https://www.linkedin.com/in/simon-prince-615bb9165/">LinkedIn</a> for updates.
|
href="https://www.linkedin.com/in/simon-prince-615bb9165/">LinkedIn</a> for updates.
|
||||||
</ul>
|
</ul>
|
||||||
|
<h2>Table of contents</h2>
|
||||||
<h2>Table of contents</h2>
|
<ul>
|
||||||
<ul>
|
|
||||||
<li> Chapter 1 - Introduction
|
<li> Chapter 1 - Introduction
|
||||||
<li> Chapter 2 - Supervised learning
|
<li> Chapter 2 - Supervised learning
|
||||||
<li> Chapter 3 - Shallow neural networks
|
<li> Chapter 3 - Shallow neural networks
|
||||||
@@ -40,74 +47,330 @@ To be published by MIT Press Dec 5th 2023.<br>
|
|||||||
<li> Chapter 19 - Deep reinforcement learning
|
<li> Chapter 19 - Deep reinforcement learning
|
||||||
<li> Chapter 20 - Why does deep learning work?
|
<li> Chapter 20 - Why does deep learning work?
|
||||||
<li> Chapter 21 - Deep learning and ethics
|
<li> Chapter 21 - Deep learning and ethics
|
||||||
</ul>
|
</ul>
|
||||||
|
</div>
|
||||||
|
<div id="cover">
|
||||||
|
<img src="https://raw.githubusercontent.com/udlbook/udlbook/main/UDLCoverSmall.jpg"
|
||||||
|
alt="front cover">
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<div id="body">
|
||||||
|
<h2>Resources for instructors </h2>
|
||||||
|
<p>Instructor answer booklet available with proof of credentials via <a
|
||||||
|
href="https://mitpress.mit.edu/9780262048644/understanding-deep-learning"> MIT Press</a>.</p>
|
||||||
|
<p>Figures in PDF (vector) / SVG (vector) / Powerpoint (images):
|
||||||
|
<ul>
|
||||||
|
<li> Chapter 1 - Introduction: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap1PDF.zip">PDF
|
||||||
|
Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1udnl5pUOAc8DcAQ7HQwyzP9pwL95ynnv">
|
||||||
|
SVG
|
||||||
|
Figures</a> / <a
|
||||||
|
href="https://docs.google.com/presentation/d/1IjTqIUvWCJc71b5vEJYte-Dwujcp7rvG/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 2 - Supervised learning: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap2PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1VSxcU5y1qNFlmd3Lb3uOWyzILuOj1Dla"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1Br7R01ROtRWPlNhC_KOommeHAWMBpWtz/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 3 - Shallow neural networks: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap3PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=19kZFWlXhzN82Zx02ByMmSZOO4T41fmqI"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1e9M3jB5I9qZ4dCBY90Q3Hwft_i068QVQ/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 4 - Deep neural networks: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap4PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1ojr0ebsOhzvS04ItAflX2cVmYqHQHZUa"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1LTSsmY4mMrJbqXVvoTOCkQwHrRKoYnJj/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 5 - Loss functions: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap5PDF.zip">PDF
|
||||||
|
Figures</a> / <a href="https://drive.google.com/uc?export=download&id=17MJO7fiMpFZVqKeqXTbQ36AMpmR4GizZ">
|
||||||
|
SVG
|
||||||
|
Figures</a> / <a
|
||||||
|
href="https://docs.google.com/presentation/d/1gcpC_3z9oRp87eMkoco-kdLD-MM54Puk/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 6 - Training models: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap6PDF.zip">PDF
|
||||||
|
Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1VPdhFRnCr9_idTrX0UdHKGAw2shUuwhK">
|
||||||
|
SVG
|
||||||
|
Figures</a> / <a
|
||||||
|
href="https://docs.google.com/presentation/d/1AKoeggAFBl9yLC7X5tushAGzCCxmB7EY/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 7 - Gradients and initialization: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap7PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1TTl4gvrTvNbegnml4CoGoKOOd6O8-PGs"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/11zhB6PI-Dp6Ogmr4IcI6fbvbqNqLyYcz/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 8 - Measuring performance: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap8PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=19eQOnygd_l0DzgtJxXuYnWa4z7QKJrJx"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1SHRmJscDLUuQrG7tmysnScb3ZUAqVMZo/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 9 - Regularization: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap9PDF.zip">PDF
|
||||||
|
Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1LprgnUGL7xAM9-jlGZC9LhMPeefjY0r0">
|
||||||
|
SVG
|
||||||
|
Figures</a> / <a
|
||||||
|
href="https://docs.google.com/presentation/d/1VwIfvjpdfTny6sEfu4ZETwCnw6m8Eg-5/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 10 - Convolutional networks: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap10PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1-Wb3VzaSvVeRzoUzJbI2JjZE0uwqupM9"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1MtfKBC4Y9hWwGqeP6DVwUNbi1j5ncQCg/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 11 - Residual networks: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap11PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1Mr58jzEVseUAfNYbGWCQyDtEDwvfHRi1"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1saY8Faz0KTKAAifUrbkQdLA2qkyEjOPI/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 12 - Transformers: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap12PDF.zip">PDF
|
||||||
|
Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1txzOVNf8-jH4UfJ6SLnrtOfPd1Q3ebzd">
|
||||||
|
SVG
|
||||||
|
Figures</a> / <a
|
||||||
|
href="https://docs.google.com/presentation/d/1GVNvYWa0WJA6oKg89qZre-UZEhABfm0l/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 13 - Graph neural networks: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap13PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1lQIV6nRp6LVfaMgpGFhuwEXG-lTEaAwe"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1YwF3U82c1mQ74c1WqHVTzLZ0j7GgKaWP/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint
|
||||||
|
Figures</a>
|
||||||
|
<li> Chapter 14 - Unsupervised learning: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap14PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1aMbI6iCuUvOywqk5pBOmppJu1L1anqsM"> SVG Figures</a>
|
||||||
|
/
|
||||||
|
<a href="https://docs.google.com/presentation/d/1A-lBGv3NHl4L32NvfFgy1EKeSwY-0UeB/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">
|
||||||
|
PowerPoint Figures</a>
|
||||||
|
<li> Chapter 15 - Generative adversarial networks: <a
|
||||||
|
href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap15PDF.zip">PDF Figures</a> / <a
|
||||||
|
href="https://drive.google.com/uc?export=download&id=1EErnlZCOlXc3HK7m83T2Jh_0NzIUHvtL">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/10Ernk41ShOTf4IYkMD-l4dJfKATkXH4w/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Chapter 16 - Normalizing flows: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap16PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1B9bxtmdugwtg-b7Y4AdQKAIEVWxjx8l3">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1nLLzqb9pdfF_h6i1HUDSyp7kSMIkSUUA/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Chapter 17 - Variational autoencoders: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap17PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1SNtNIY7khlHQYMtaOH-FosSH3kWwL4b7">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1lQE4Bu7-LgvV2VlJOt_4dQT-kusYl7Vo/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Chapter 18 - Diffusion models: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap18PDF.zip">PDF Figures</a> / <a href="https://docs.google.com/presentation/d/1x_ufIBtVPzWUvRieKMkpw5SdRjXWwdfR/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Chapter 19 - Deep reinforcement learning: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap19PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1a5WUoF7jeSgwC_PVdckJi1Gny46fCqh0">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1TnYmVbFNhmMFetbjyfXGmkxp1EHauMqr/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Chapter 20 - Why does deep learning work?: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap20PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1M2d0DHEgddAQoIedKSDTTt7m1ZdmBLQ3">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1coxF4IsrCzDTLrNjRagHvqB_FBy10miA/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Chapter 21 - Deep learning and ethics: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap21PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1jixmFfwmZkW_UVYzcxmDcMsdFFtnZ0bU">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1EtfzanZYILvi9_-Idm28zD94I_6OrN9R/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
<li> Appendices - <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLAppendixPDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1k2j7hMN40ISPSg9skFYWFL3oZT7r8v-l">SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1_2cJHRnsoQQHst0rwZssv-XH4o5SEHks/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
</ul>
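Each "PDF Figures" link above is a zip archive of per-figure PDFs. As a convenience, a small Python sketch for unpacking one of these archives is below; the helper name `unpack_figures` is ours, not part of the book's materials, and the commented download step assumes network access.

```python
import io
import zipfile


def unpack_figures(zip_bytes, dest=None):
    """Given the raw bytes of a figures zip, optionally extract it to dest.

    Returns the list of member file names in the archive.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        if dest is not None:
            zf.extractall(dest)
        return zf.namelist()


# To fetch and unpack one chapter (network required), e.g. Chapter 16 as linked above:
#   from urllib.request import urlopen
#   data = urlopen("https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap16PDF.zip").read()
#   print(unpack_figures(data, "UDLChap16Figures"))
```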

<p>Instructions for editing figures / equations can be found <a href="https://drive.google.com/file/d/1T_MXXVR4AfyMnlEFI-UVDh--FXI5deAp/view?usp=sharing">here</a>.</p>

<h2>Resources for students</h2>

<p>Answers to selected questions: <a href="https://github.com/udlbook/udlbook/raw/main/UDL_Answer_Booklet_Students.pdf">PDF</a></p>

<p>Python notebooks: (Early ones more thoroughly tested than later ones!)</p>

<ul>
<li> Notebook 1.1 - Background mathematics: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap01/1_1_BackgroundMathematics.ipynb">ipynb/colab</a></li>
<li> Notebook 2.1 - Supervised learning: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap02/2_1_Supervised_Learning.ipynb">ipynb/colab</a></li>
<li> Notebook 3.1 - Shallow networks I: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_1_Shallow_Networks_I.ipynb">ipynb/colab</a></li>
<li> Notebook 3.2 - Shallow networks II: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_2_Shallow_Networks_II.ipynb">ipynb/colab</a></li>
<li> Notebook 3.3 - Shallow network regions: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_3_Shallow_Network_Regions.ipynb">ipynb/colab</a></li>
<li> Notebook 3.4 - Activation functions: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_4_Activation_Functions.ipynb">ipynb/colab</a></li>
<li> Notebook 4.1 - Composing networks: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap04/4_1_Composing_Networks.ipynb">ipynb/colab</a></li>
<li> Notebook 4.2 - Clipping functions: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap04/4_2_Clipping_functions.ipynb">ipynb/colab</a></li>
<li> Notebook 4.3 - Deep networks: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap04/4_3_Deep_Networks.ipynb">ipynb/colab</a></li>
<li> Notebook 5.1 - Least squares loss: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap05/5_1_Least_Squares_Loss.ipynb">ipynb/colab</a></li>
<li> Notebook 5.2 - Binary cross-entropy loss: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap05/5_2_Binary_Cross_Entropy_Loss.ipynb">ipynb/colab</a></li>
<li> Notebook 5.3 - Multiclass cross-entropy loss: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb">ipynb/colab</a></li>
<li> Notebook 6.1 - Line search: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap06/6_1_Line_Search.ipynb">ipynb/colab</a></li>
<li> Notebook 6.2 - Gradient descent: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap06/6_2_Gradient_Descent.ipynb">ipynb/colab</a></li>
<li> Notebook 6.3 - Stochastic gradient descent: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap06/6_3_Stochastic_Gradient_Descent.ipynb">ipynb/colab</a></li>
<li> Notebook 6.4 - Momentum: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap06/6_4_Momentum.ipynb">ipynb/colab</a></li>
<li> Notebook 6.5 - Adam: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap06/6_5_Adam.ipynb">ipynb/colab</a></li>
<li> Notebook 7.1 - Backpropagation in toy model: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap07/7_1_Backpropagation_in_Toy_Model.ipynb">ipynb/colab</a></li>
<li> Notebook 7.2 - Backpropagation: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap07/7_2_Backpropagation.ipynb">ipynb/colab</a></li>
<li> Notebook 7.3 - Initialization: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap07/7_3_Initialization.ipynb">ipynb/colab</a></li>
<li> Notebook 8.1 - MNIST-1D performance: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap08/8_1_MNIST_1D_Performance.ipynb">ipynb/colab</a></li>
<li> Notebook 8.2 - Bias-variance trade-off: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap08/8_2_Bias_Variance_Trade_Off.ipynb">ipynb/colab</a></li>
<li> Notebook 8.3 - Double descent: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap08/8_3_Double_Descent.ipynb">ipynb/colab</a></li>
<li> Notebook 8.4 - High-dimensional spaces: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb">ipynb/colab</a></li>
<li> Notebook 9.1 - L2 regularization: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap09/9_1_L2_Regularization.ipynb">ipynb/colab</a></li>
<li> Notebook 9.2 - Implicit regularization: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap09/9_2_Implicit_Regularization.ipynb">ipynb/colab</a></li>
<li> Notebook 9.3 - Ensembling: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap09/9_3_Ensembling.ipynb">ipynb/colab</a></li>
<li> Notebook 9.4 - Bayesian approach: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap09/9_4_Bayesian_Approach.ipynb">ipynb/colab</a></li>
<li> Notebook 9.5 - Augmentation: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap09/9_5_Augmentation.ipynb">ipynb/colab</a></li>
<li> Notebook 10.1 - 1D convolution: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap10/10_1_1D_Convolution.ipynb">ipynb/colab</a></li>
<li> Notebook 10.2 - Convolution for MNIST-1D: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap10/10_2_Convolution_for_MNIST_1D.ipynb">ipynb/colab</a></li>
<li> Notebook 10.3 - 2D convolution: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap10/10_3_2D_Convolution.ipynb">ipynb/colab</a></li>
<li> Notebook 10.4 - Downsampling &amp; upsampling: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap10/10_4_Downsampling_and_Upsampling.ipynb">ipynb/colab</a></li>
<li> Notebook 10.5 - Convolution for MNIST: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap10/10_5_Convolution_For_MNIST.ipynb">ipynb/colab</a></li>
<li> Notebook 11.1 - Shattered gradients: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap11/11_1_Shattered_Gradients.ipynb">ipynb/colab</a></li>
<li> Notebook 11.2 - Residual networks: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap11/11_2_Residual_Networks.ipynb">ipynb/colab</a></li>
<li> Notebook 11.3 - Batch normalization: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap11/11_3_Batch_Normalization.ipynb">ipynb/colab</a></li>
<li> Notebook 12.1 - Self-attention: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap12/12_1_Self_Attention.ipynb">ipynb/colab</a></li>
<li> Notebook 12.2 - Multi-head self-attention: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap12/12_2_Multihead_Self_Attention.ipynb">ipynb/colab</a></li>
<li> Notebook 12.3 - Tokenization: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap12/12_3_Tokenization.ipynb">ipynb/colab</a></li>
<li> Notebook 12.4 - Decoding strategies: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap12/12_4_Decoding_Strategies.ipynb">ipynb/colab</a></li>
<li> Notebook 13.1 - Encoding graphs: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap13/13_1_Graph_Representation.ipynb">ipynb/colab</a></li>
<li> Notebook 13.2 - Graph classification: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap13/13_2_Graph_Classification.ipynb">ipynb/colab</a></li>
<li> Notebook 13.3 - Neighborhood sampling: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap13/13_3_Neighborhood_Sampling.ipynb">ipynb/colab</a></li>
<li> Notebook 13.4 - Graph attention: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap13/13_4_Graph_Attention_Networks.ipynb">ipynb/colab</a></li>
<li> Notebook 15.1 - GAN toy example: (coming soon)</li>
<li> Notebook 15.2 - Wasserstein distance: (coming soon)</li>
<li> Notebook 16.1 - 1D normalizing flows: (coming soon)</li>
<li> Notebook 16.2 - Autoregressive flows: (coming soon)</li>
<li> Notebook 16.3 - Contraction mappings: (coming soon)</li>
<li> Notebook 17.1 - Latent variable models: (coming soon)</li>
<li> Notebook 17.2 - Reparameterization trick: (coming soon)</li>
<li> Notebook 17.3 - Importance sampling: (coming soon)</li>
<li> Notebook 18.1 - Diffusion encoder: (coming soon)</li>
<li> Notebook 18.2 - 1D diffusion model: (coming soon)</li>
<li> Notebook 18.3 - Reparameterized model: (coming soon)</li>
<li> Notebook 18.4 - Families of diffusion models: (coming soon)</li>
<li> Notebook 19.1 - Markov decision processes: (coming soon)</li>
<li> Notebook 19.2 - Dynamic programming: (coming soon)</li>
<li> Notebook 19.3 - Monte-Carlo methods: (coming soon)</li>
<li> Notebook 19.4 - Temporal difference methods: (coming soon)</li>
<li> Notebook 19.5 - Control variates: (coming soon)</li>
<li> Notebook 20.1 - Random data: (coming soon)</li>
<li> Notebook 20.2 - Full-batch gradient descent: (coming soon)</li>
<li> Notebook 20.3 - Lottery tickets: (coming soon)</li>
<li> Notebook 20.4 - Adversarial attacks: (coming soon)</li>
<li> Notebook 21.1 - Bias mitigation: (coming soon)</li>
<li> Notebook 21.2 - Explainability: (coming soon)</li>
</ul>
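The notebook links above point at GitHub blob pages (which also open in Colab). For working locally with Jupyter instead, the blob URL can be rewritten to the raw-file form; a minimal sketch follows. The blob-to-raw rewrite is a general GitHub convention rather than anything this page documents, and `count_code_cells` is a hypothetical helper showing that a downloaded `.ipynb` file is plain JSON.

```python
import json


def blob_to_raw(blob_url):
    """Rewrite a github.com .../blob/... URL to its raw.githubusercontent.com equivalent."""
    return blob_url.replace(
        "https://github.com/", "https://raw.githubusercontent.com/"
    ).replace("/blob/", "/")


def count_code_cells(notebook_json):
    """A .ipynb file is JSON; count the code cells in its 'cells' list."""
    nb = json.loads(notebook_json)
    return sum(1 for cell in nb.get("cells", []) if cell.get("cell_type") == "code")


# Example: raw URL for Notebook 1.1, suitable for `wget`/`urllib` then `jupyter notebook`.
print(blob_to_raw(
    "https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap01/1_1_BackgroundMathematics.ipynb"
))
```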

<h2>Resources for instructors</h2>

<p>Instructor answer booklet available with proof of credentials via <a href="https://mitpress.mit.edu/9780262048644/understanding-deep-learning">MIT Press</a></p>

<h2>Citation</h2>

<pre><code>
@book{prince2023understanding,
    author = "Simon J.D. Prince",
    title = "Understanding Deep Learning",
    year = 2023,
    url = "http://udlbook.com"
}
</code></pre>
</div>
</body>

style.css (new file, 23 lines)

body {
    font-size: 17px;
    margin: 2% 10%;
}

#head {
    display: flex;
    flex-direction: row;
    flex-wrap: wrap-reverse;
    justify-content: space-between;
    width: 100%;
}

#cover {
    justify-content: center;
    display: flex;
    width: 30%;
}

#cover img {
    width: 100%;
    height: min-content;
}