Compare commits

..

50 Commits

Author  SHA1  Message  Date
udlbook  9f2449fcde  Add files via upload  2024-02-15 16:51:27 -05:00
udlbook  025b677457  Merge pull request #150 from yrahal/main (Fix minor typos in Chapter 6 notebooks)  2024-02-12 13:11:23 -05:00
Youcef Rahal  435971e3e2  Fix typos in 6_5_Adam.ipynb  2024-02-09 03:55:11 -05:00
Youcef Rahal  6e76cb9b96  Fix typos in 6_4_Momentum.ipynb  2024-02-07 20:17:49 -05:00
Youcef Rahal  732fc6f0b7  Fix issues typos in 6_3_Stochastic_Gradient_Descent.ipynb  2024-02-06 20:48:25 -05:00
udlbook  f2a3fab832  Created using Colaboratory  2024-02-06 18:45:05 -05:00
Youcef Rahal  8e3008673d  Fix minor typos in 6_1_Line_Search.ipynb and 6_2_Gradient_Descent.ipynb  2024-02-04 11:03:14 -05:00
udlbook  07bcc98a85  Created using Colaboratory  2024-02-01 20:19:34 +00:00
udlbook  f4fa3e8397  Created using Colaboratory  2024-02-01 20:13:01 +00:00
udlbook  21cff37c72  Update index.html  2024-01-28 18:19:44 +00:00
udlbook  187c6a7352  Add files via upload  2024-01-28 10:01:17 +00:00
udlbook  8e4a0d4daf  Add files via upload  2024-01-26 14:37:08 +00:00
udlbook  23b5affab3  Update 5_1_Least_Squares_Loss.ipynb  2024-01-25 16:01:23 +00:00
udlbook  4fb8ffe622  Merge pull request #144 from yrahal/main (Fix some typos in Notebooks/Chap05/5_1_Least_Squares_Loss.ipynb)  2024-01-25 15:59:23 +00:00
Youcef Rahal  2adc1da566  Fix some typpos in Notebooks/Chap05/5_1_Least_Squares_Loss.ipynb  2024-01-25 10:16:46 -05:00
Youcef Rahal  6e4551a69f  Fix some typpos in Notebooks/Chap05/5_1_Least_Squares_Loss.ipynb  2024-01-25 10:14:01 -05:00
udlbook  65c685706a  Update 9_2_Implicit_Regularization.ipynb  2024-01-25 09:46:01 +00:00
udlbook  934f5f7748  Created using Colaboratory  2024-01-24 10:56:22 -05:00
udlbook  365cb41bba  Update index.html  2024-01-23 10:54:43 +00:00
udlbook  4855761fb2  Update index.html  2024-01-19 15:04:48 -05:00
udlbook  37b4a76130  Update index.html  2024-01-16 18:16:41 -05:00
udlbook  ef791e5195  Add files via upload  2024-01-16 18:12:15 -05:00
udlbook  49945ff1c7  Update index.html  2024-01-16 16:37:43 -05:00
udlbook  cd8f08b2f3  Update index.html  2024-01-16 16:26:31 -05:00
udlbook  8e85e9111c  Update 5_2_Binary_Cross_Entropy_Loss.ipynb  2024-01-16 15:41:24 -05:00
udlbook  be6a0a07fe  Created using Colaboratory  2024-01-10 16:29:08 -05:00
udlbook  762ac337ae  Created using Colaboratory  2024-01-10 16:14:09 -05:00
udlbook  e692fdd226  Update 4_3_Deep_Networks.ipynb  2024-01-10 16:04:22 -05:00
udlbook  a7d363fcf1  Created using Colaboratory  2024-01-10 16:03:48 -05:00
udlbook  69dffd8c79  Update 4_1_Composing_Networks.ipynb  2024-01-10 16:00:15 -05:00
udlbook  c81296d080  Add files via upload  2024-01-05 15:43:29 -05:00
udlbook  7ca2790c65  Add files via upload  2024-01-04 16:54:06 -05:00
udlbook  73d1a4d28e  Add files via upload  2024-01-04 16:14:06 -05:00
udlbook  3f268ab9b9  Add files via upload  2024-01-02 15:10:13 -05:00
udlbook  a371b98529  Add files via upload  2024-01-02 13:16:51 -05:00
udlbook  9a683c502f  Created using Colaboratory  2024-01-02 13:14:19 -05:00
udlbook  9a22703818  Add files via upload  2024-01-02 13:12:03 -05:00
udlbook  c19e2411c5  Add files via upload  2024-01-02 13:09:22 -05:00
udlbook  db836826f6  Add files via upload  2024-01-02 13:01:51 -05:00
udlbook  6f775910fe  Add files via upload  2024-01-02 12:56:29 -05:00
udlbook  c11d57f313  Add files via upload  2024-01-02 12:51:50 -05:00
udlbook  67f102dd65  Add files via upload  2024-01-02 12:29:17 -05:00
udlbook  351199ec7e  Add files via upload  2024-01-02 12:23:29 -05:00
udlbook  9409fbb447  Add files via upload  2024-01-02 12:12:20 -05:00
udlbook  707f93daae  Add files via upload  2024-01-02 12:06:41 -05:00
udlbook  d2f885db37  Add files via upload  2024-01-02 11:58:12 -05:00
udlbook  8400d98b76  Created using Colaboratory  2024-01-02 09:45:25 -05:00
udlbook  adbf4322b8  Created using Colaboratory  2023-12-28 15:35:22 -05:00
udlbook  c87294176f  Update 3_1_Shallow_Networks_I.ipynb  2023-12-24 17:50:50 -05:00
udlbook  b6b121cb1d  Update index.html  2023-12-24 13:41:21 -05:00
35 changed files with 2698 additions and 2372 deletions

View File

@@ -0,0 +1,401 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyO6cFY1oR4CmbHL2QywgTXm",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Blogs/BorealisGradientFlow.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# Gradient flow\n",
"\n",
"This notebook replicates some of the results in the the Borealis AI blog on gradient flow. \n"
],
"metadata": {
"id": "ucrRRJ4dq8_d"
}
},
{
"cell_type": "code",
"source": [
"# Import relevant libraries\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from scipy.linalg import expm\n",
"from matplotlib import cm\n",
"from matplotlib.colors import ListedColormap"
],
"metadata": {
"id": "_IQFHZEMZE8T"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Create the three data points that are used to train the linear model in the blog. Each input point is a column in $\\mathbf{X}$ and consists of the $x$ position in the plot and the value 1, which is used to allow the model to fit bias terms neatly."
],
"metadata": {
"id": "NwgUP3MSriiJ"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cJNZ2VIcYsD8"
},
"outputs": [],
"source": [
"X = np.array([[0.2, 0.4, 0.8],[1,1,1]])\n",
"y = np.array([[-0.1],[0.15],[0.3]])\n",
"D = X.shape[0]\n",
"I = X.shape[1]\n",
"\n",
"print(\"X=\\n\",X)\n",
"print(\"y=\\n\",y)"
]
},
{
"cell_type": "code",
"source": [
"# Draw the three data points\n",
"fig, ax = plt.subplots()\n",
"ax.plot(X[0:1,:],y.T,'ro')\n",
"ax.set_xlim([0,1]); ax.set_ylim([-0.5,0.5])\n",
"ax.set_xlabel('x'); ax.set_ylabel('y')\n",
"plt.show()"
],
"metadata": {
"id": "FpFlD4nUZDRt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Compute the evolution of the residuals, loss, and parameters as a function of time."
],
"metadata": {
"id": "H2LBR1DasQej"
}
},
{
"cell_type": "code",
"source": [
"# Discretized time to evaluate quantities at\n",
"t_all = np.arange(0,20,0.01)\n",
"nT = t_all.shape[0]\n",
"\n",
"# Initial parameters, and initial function output at training points\n",
"phi_0 = np.array([[-0.05],[-0.4]])\n",
"f_0 = X.T @ phi_0\n",
"\n",
"# Precompute pseudoinverse term (not a very sensible numerical implementation, but it works...)\n",
"XXTInvX = np.linalg.inv(X@X.T)@X\n",
"\n",
"# Create arrays to hold function at data points over time, residual over time, parameters over time\n",
"f_all = np.zeros((I,nT))\n",
"f_minus_y_all = np.zeros((I,nT))\n",
"phi_t_all = np.zeros((D,nT))\n",
"\n",
"# For each time, compute function, residual, and parameters at each time.\n",
"for t in range(len(t_all)):\n",
" f = y + expm(-X.T@X * t_all[t]) @ (f_0-y)\n",
" f_all[:,t:t+1] = f\n",
" f_minus_y_all[:,t:t+1] = f-y\n",
" phi_t_all[:,t:t+1] = phi_0 - XXTInvX @ (np.identity(3)-expm(-X.T@X * t_all[t])) @ (f_0-y)"
],
"metadata": {
"id": "wfF_oTS5Z4Wi"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Plot the results that were calculated in the previous cell"
],
"metadata": {
"id": "9jSjOOFutJUE"
}
},
{
"cell_type": "code",
"source": [
"# Plot function at data points\n",
"fig, ax = plt.subplots()\n",
"ax.plot(t_all,np.squeeze(f_all[0,:]),'r-', label='$f[x_{0},\\phi]$')\n",
"ax.plot(t_all,np.squeeze(f_all[1,:]),'g-', label='$f[x_{1},\\phi]$')\n",
"ax.plot(t_all,np.squeeze(f_all[2,:]),'b-', label='$f[x_{2},\\phi]$')\n",
"ax.set_xlim([0,np.max(t_all)]); ax.set_ylim([-0.5,0.5])\n",
"ax.set_xlabel('t'); ax.set_ylabel('f')\n",
"plt.legend(loc=\"lower right\")\n",
"plt.show()\n",
"\n",
"# Plot residual\n",
"fig, ax = plt.subplots()\n",
"ax.plot(t_all,np.squeeze(f_minus_y_all[0,:]),'r-', label='$f[x_{0},\\phi]-y_{0}$')\n",
"ax.plot(t_all,np.squeeze(f_minus_y_all[1,:]),'g-', label='$f[x_{1},\\phi]-y_{1}$')\n",
"ax.plot(t_all,np.squeeze(f_minus_y_all[2,:]),'b-', label='$f[x_{2},\\phi]-y_{2}$')\n",
"ax.set_xlim([0,np.max(t_all)]); ax.set_ylim([-0.5,0.5])\n",
"ax.set_xlabel('t'); ax.set_ylabel('f-y')\n",
"plt.legend(loc=\"lower right\")\n",
"plt.show()\n",
"\n",
"# Plot loss (sum of residuals)\n",
"fig, ax = plt.subplots()\n",
"square_error = 0.5 * np.sum(f_minus_y_all * f_minus_y_all, axis=0)\n",
"ax.plot(t_all, square_error,'k-')\n",
"ax.set_xlim([0,np.max(t_all)]); ax.set_ylim([-0.0,0.25])\n",
"ax.set_xlabel('t'); ax.set_ylabel('Loss')\n",
"plt.show()\n",
"\n",
"# Plot parameters\n",
"fig, ax = plt.subplots()\n",
"ax.plot(t_all, np.squeeze(phi_t_all[0,:]),'c-',label='$\\phi_{0}$')\n",
"ax.plot(t_all, np.squeeze(phi_t_all[1,:]),'m-',label='$\\phi_{1}$')\n",
"ax.set_xlim([0,np.max(t_all)]); ax.set_ylim([-1,1])\n",
"ax.set_xlabel('t'); ax.set_ylabel('$\\phi$')\n",
"plt.legend(loc=\"lower right\")\n",
"plt.show()"
],
"metadata": {
"id": "G9IwgwKltHz5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Define the model and the loss function"
],
"metadata": {
"id": "N6VaUq2swa8D"
}
},
{
"cell_type": "code",
"source": [
"# Model is just a straight line with intercept phi[0] and slope phi[1]\n",
"def model(phi,x):\n",
" y_pred = phi[0]+phi[1] * x\n",
" return y_pred\n",
"\n",
"# Loss function is 0.5 times sum of squares of residuals for training data\n",
"def compute_loss(data_x, data_y, model, phi):\n",
" pred_y = model(phi, data_x)\n",
" loss = 0.5 * np.sum((pred_y-data_y)*(pred_y-data_y))\n",
" return loss"
],
"metadata": {
"id": "LGHEVUWWiB4f"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Draw the loss function"
],
"metadata": {
"id": "hr3hs7pKwo0g"
}
},
{
"cell_type": "code",
"source": [
"def draw_loss_function(compute_loss, X, y, model, phi_iters):\n",
" # Define pretty colormap\n",
" my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
" my_colormap_vals_dec = np.array([int(element,base=16) for element in my_colormap_vals_hex])\n",
" r = np.floor(my_colormap_vals_dec/(256*256))\n",
" g = np.floor((my_colormap_vals_dec - r *256 *256)/256)\n",
" b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
" my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
"\n",
" # Make grid of intercept/slope values to plot\n",
" intercepts_mesh, slopes_mesh = np.meshgrid(np.arange(-1.0,1.0,0.005), np.arange(-1.0,1.0,0.005))\n",
" loss_mesh = np.zeros_like(slopes_mesh)\n",
" # Compute loss for every set of parameters\n",
" for idslope, slope in np.ndenumerate(slopes_mesh):\n",
" loss_mesh[idslope] = compute_loss(X, y, model, np.array([[intercepts_mesh[idslope]], [slope]]))\n",
"\n",
" fig,ax = plt.subplots()\n",
" fig.set_size_inches(8,8)\n",
" ax.contourf(intercepts_mesh,slopes_mesh,loss_mesh,256,cmap=my_colormap)\n",
" ax.contour(intercepts_mesh,slopes_mesh,loss_mesh,40,colors=['#80808080'])\n",
" ax.set_ylim([1,-1]); ax.set_xlim([-1,1])\n",
"\n",
" ax.plot(phi_iters[1,:], phi_iters[0,:],'g-')\n",
" ax.set_xlabel('Intercept'); ax.set_ylabel('Slope')\n",
" plt.show()"
],
"metadata": {
"id": "UCxa3tZ8a9kz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"draw_loss_function(compute_loss, X[0:1,:], y.T, model, phi_t_all)"
],
"metadata": {
"id": "pXLLBaSaiI2A"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Draw the evolution of the function"
],
"metadata": {
"id": "ZsremHW-xFi5"
}
},
{
"cell_type": "code",
"source": [
"fig, ax = plt.subplots()\n",
"ax.plot(X[0:1,:],y.T,'ro')\n",
"x_vals = np.arange(0,1,0.001)\n",
"ax.plot(x_vals, phi_t_all[0,0]*x_vals + phi_t_all[1,0],'r-', label='t=0.00')\n",
"ax.plot(x_vals, phi_t_all[0,10]*x_vals + phi_t_all[1,10],'g-', label='t=0.10')\n",
"ax.plot(x_vals, phi_t_all[0,30]*x_vals + phi_t_all[1,30],'b-', label='t=0.30')\n",
"ax.plot(x_vals, phi_t_all[0,200]*x_vals + phi_t_all[1,200],'c-', label='t=2.00')\n",
"ax.plot(x_vals, phi_t_all[0,1999]*x_vals + phi_t_all[1,1999],'y-', label='t=20.0')\n",
"ax.set_xlim([0,1]); ax.set_ylim([-0.5,0.5])\n",
"ax.set_xlabel('x'); ax.set_ylabel('y')\n",
"plt.legend(loc=\"upper left\")\n",
"plt.show()"
],
"metadata": {
"id": "cv9ZrUoRkuhI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Compute MAP and ML solutions\n",
"MLParams = np.linalg.inv(X@X.T)@X@y\n",
"sigma_sq_p = 3.0\n",
"sigma_sq = 0.05\n",
"MAPParams = np.linalg.inv(X@X.T+np.identity(X.shape[0])*sigma_sq/sigma_sq_p)@X@y"
],
"metadata": {
"id": "OU9oegSOof-o"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Finally, we predict both the mean and the uncertainty in the fitted model as a function of time"
],
"metadata": {
"id": "Ul__XvOgyYSA"
}
},
{
"cell_type": "code",
"source": [
"# Define x positions to make predictions (appending a 1 to each column)\n",
"x_predict = np.arange(0,1,0.01)[None,:]\n",
"x_predict = np.concatenate((x_predict,np.ones_like(x_predict)))\n",
"nX = x_predict.shape[1]\n",
"\n",
"# Create variables to store evolution of mean and variance of prediction over time\n",
"predict_mean_all = np.zeros((nT,nX))\n",
"predict_var_all = np.zeros((nT,nX))\n",
"\n",
"# Initial covariance\n",
"sigma_sq_p = 2.0\n",
"cov_init = sigma_sq_p * np.identity(2)\n",
"\n",
"# Run through each time computing a and b and hence mean and variance of prediction\n",
"for t in range(len(t_all)):\n",
" a = x_predict.T @(XXTInvX @ (np.identity(3)-expm(-X.T@X * t_all[t])) @ y)\n",
" b = x_predict.T -x_predict.T@XXTInvX @ (np.identity(3)-expm(-X.T@X * t_all[t])) @ X.T\n",
" predict_mean_all[t:t+1,:] = a.T\n",
" predict_cov = b@ cov_init @b.T\n",
" # We just want the diagonal of the covariance to plot the uncertainty\n",
" predict_var_all[t:t+1,:] = np.reshape(np.diag(predict_cov),(1,nX))"
],
"metadata": {
"id": "aMPADCuByKWr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Plot the mean and variance at various times"
],
"metadata": {
"id": "PZTj93KK7QH6"
}
},
{
"cell_type": "code",
"source": [
"def plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t, sigma_sq = 0.00001):\n",
" fig, ax = plt.subplots()\n",
" ax.plot(X[0:1,:],y.T,'ro')\n",
" ax.plot(x_predict[0:1,:].T, predict_mean_all[this_t:this_t+1,:].T,'r-')\n",
" lower = np.squeeze(predict_mean_all[this_t:this_t+1,:].T-np.sqrt(predict_var_all[this_t:this_t+1,:].T+np.sqrt(sigma_sq)))\n",
" upper = np.squeeze(predict_mean_all[this_t:this_t+1,:].T+np.sqrt(predict_var_all[this_t:this_t+1,:].T+np.sqrt(sigma_sq)))\n",
" ax.fill_between(np.squeeze(x_predict[0:1,:]), lower, upper, color='lightgray')\n",
" ax.set_xlim([0,1]); ax.set_ylim([-0.5,0.5])\n",
" ax.set_xlabel('x'); ax.set_ylabel('y')\n",
" plt.show()\n",
"\n",
"plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t=0)\n",
"plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t=40)\n",
"plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t=80)\n",
"plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t=200)\n",
"plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t=500)\n",
"plot_mean_var(X,y,x_predict, predict_mean_all, predict_var_all, this_t=1000)"
],
"metadata": {
"id": "bYAFxgB880-v"
},
"execution_count": null,
"outputs": []
}
]
}
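
The closed-form expressions used in this notebook follow from the gradient-flow ODE dφ/dt = −∂L/∂φ for the least-squares loss, which gives f(t) = y + exp(−XᵀX t)(f₀ − y) at the training points. As a sanity check, here is a minimal sketch (not part of the repository; the horizon t and step size dt are arbitrary choices) comparing that matrix-exponential solution against explicit Euler integration:

```python
# Minimal sketch (not from the repository): verify the closed-form gradient-flow
# solution f(t) = y + expm(-X^T X t)(f_0 - y) against explicit Euler steps.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.2, 0.4, 0.8], [1, 1, 1]])    # training inputs as columns, with bias row
y = np.array([[-0.1], [0.15], [0.3]])
phi = np.array([[-0.05], [-0.4]])             # initial parameters, as in the notebook
f_0 = X.T @ phi                               # initial model output at the training points

t, dt = 5.0, 1e-3                             # arbitrary horizon and step size
f_closed = y + expm(-X.T @ X * t) @ (f_0 - y)

# Euler integration of the flow d(phi)/dt = -X (X^T phi - y)
for _ in range(int(t / dt)):
    phi = phi - dt * X @ (X.T @ phi - y)

print(np.max(np.abs(f_closed - X.T @ phi)))   # small; the discrepancy is O(dt)
```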

View File

@@ -1,16 +1,18 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap01/1_1_BackgroundMathematics.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "s5zzKSOusPOB"
@@ -39,6 +41,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "WV2Dl6owme2d"
@@ -46,11 +49,11 @@
"source": [
"**Linear functions**<br> We will be using the term *linear equation* to mean a weighted sum of inputs plus an offset. If there is just one input $x$, then this is a straight line:\n",
"\n",
"\\begin{equation}y=\\beta+\\omega x,\\end{equation} <br>\n",
"\\begin{equation}y=\\beta+\\omega x,\\end{equation} \n",
"\n",
"where $\\beta$ is the y-intercept of the linear and $\\omega$ is the slope of the line. When there are two inputs $x_{1}$ and $x_{2}$, then this becomes:\n",
"\n",
"\\begin{equation}y=\\beta+\\omega_1 x_1 + \\omega_2 x_2.\\end{equation} <br><br>\n",
"\\begin{equation}y=\\beta+\\omega_1 x_1 + \\omega_2 x_2.\\end{equation} \n",
"\n",
"Any other functions are by definition **non-linear**.\n",
"\n",
@@ -104,6 +107,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "AedfvD9dxShZ"
@@ -188,6 +192,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "i8tLwpls476R"
@@ -195,15 +200,15 @@
"source": [
"Often we will want to compute many linear functions at the same time. For example, we might have three inputs, $x_1$, $x_2$, and $x_3$ and want to compute two linear functions giving $y_1$ and $y_2$. Of course, we could do this by just running each equation separately,<br><br>\n",
"\n",
"\\begin{eqnarray}y_1 &=& \\beta_1 + \\omega_{11} x_1 + \\omega_{12} x_2 + \\omega_{13} x_3\\\\\n",
"\\begin{align}y_1 &=& \\beta_1 + \\omega_{11} x_1 + \\omega_{12} x_2 + \\omega_{13} x_3\\\\\n",
"y_2 &=& \\beta_2 + \\omega_{21} x_1 + \\omega_{22} x_2 + \\omega_{23} x_3.\n",
"\\end{eqnarray}<br>\n",
"\\end{align}\n",
"\n",
"However, we can write it more compactly with vectors and matrices:\n",
"\n",
"\\begin{equation}\n",
"\\begin{bmatrix} y_1\\\\ y_2 \\end{bmatrix} = \\begin{bmatrix}\\beta_{1}\\\\\\beta_{2}\\end{bmatrix}+ \\begin{bmatrix}\\omega_{11}&\\omega_{12}&\\omega_{13}\\\\\\omega_{21}&\\omega_{22}&\\omega_{23}\\end{bmatrix}\\begin{bmatrix}x_{1}\\\\x_{2}\\\\x_{3}\\end{bmatrix},\n",
"\\end{equation}<br>\n",
"\\end{equation}\n",
"or\n",
"\n",
"\\begin{equation}\n",
@@ -231,6 +236,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "fGzVJQ6N-mHJ"
@@ -273,6 +279,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "3LGRoTMLU8ZU"
@@ -286,6 +293,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "7Y5zdKtKZAB2"
@@ -295,7 +303,7 @@
"\n",
"Throughout the book, we'll be using some special functions (see Appendix B.1.3). The most important of these are the logarithm and exponential functions. Let's investigate their properties.\n",
"\n",
"We'll start with the exponential function $y=\\mbox{exp}[x]=e^x$ which maps the real line $[-\\infty,+\\infty]$ to non-negative numbers $[0,+\\infty]$."
"We'll start with the exponential function $y=\\exp[x]=e^x$ which maps the real line $[-\\infty,+\\infty]$ to non-negative numbers $[0,+\\infty]$."
]
},
{
@@ -321,6 +329,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "XyrT8257IWCu"
@@ -328,14 +337,15 @@
"source": [
"# Questions\n",
"\n",
"1. What is $\\mbox{exp}[0]$? \n",
"2. What is $\\mbox{exp}[1]$?\n",
"3. What is $\\mbox{exp}[-\\infty]$?\n",
"4. What is $\\mbox{exp}[+\\infty]$?\n",
"1. What is $\\exp[0]$? \n",
"2. What is $\\exp[1]$?\n",
"3. What is $\\exp[-\\infty]$?\n",
"4. What is $\\exp[+\\infty]$?\n",
"5. A function is convex if we can draw a straight line between any two points on the function, and this line always lies above the function. Similarly, a function is concave if a straight line between any two points always lies below the function. Is the exponential function convex or concave or neither?\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "R6A4e5IxIWCu"
@@ -367,6 +377,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "yYWrL5AXIWCv"
@@ -374,20 +385,20 @@
"source": [
"# Questions\n",
"\n",
"1. What is $\\mbox{log}[0]$? \n",
"2. What is $\\mbox{log}[1]$?\n",
"3. What is $\\mbox{log}[e]$?\n",
"4. What is $\\mbox{log}[\\exp[3]]$?\n",
"5. What is $\\mbox{exp}[\\log[4]]$?\n",
"6. What is $\\mbox{log}[-1]$?\n",
"1. What is $\\log[0]$? \n",
"2. What is $\\log[1]$?\n",
"3. What is $\\log[e]$?\n",
"4. What is $\\log[\\exp[3]]$?\n",
"5. What is $\\exp[\\log[4]]$?\n",
"6. What is $\\log[-1]$?\n",
"7. Is the logarithm function concave or convex?\n"
]
}
],
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",

File diff suppressed because one or more lines are too long

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNk2dAhwwRxGpfVSC3b2Owv",
"include_colab_link": true
},
"kernelspec": {
@@ -182,7 +181,7 @@
{
"cell_type": "markdown",
"source": [
"Now we'll extend this model to have two outputs $y_1$ and $y_2$, each of which can be visualized with a separate heatmap. You will now have sets of parameters $\\phi_{10}, \\phi_{11}, \\phi_{12}$, $\\phi_{13} and $\\phi_{20}, \\phi_{21}, \\phi_{22}$, \\phi_{23}$ that correspond to each of these outputs."
"Now we'll extend this model to have two outputs $y_1$ and $y_2$, each of which can be visualized with a separate heatmap. You will now have sets of parameters $\\phi_{10}, \\phi_{11}, \\phi_{12}, \\phi_{13}$ and $\\phi_{20}, \\phi_{21}, \\phi_{22}, \\phi_{23}$ that correspond to each of these outputs."
],
"metadata": {
"id": "Xl6LcrUyM7Lh"

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOmxhh3ymYWX+1HdZ91I6zU",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap03/3_4_Activation_Functions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Mn0F56yY8ohX"
},
"source": [
"# **Notebook 3.4 -- Activation functions**\n",
"\n",
@@ -36,10 +25,7 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "Mn0F56yY8ohX"
}
]
},
{
"cell_type": "code",
@@ -57,6 +43,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AeHzflFt9Tgn"
},
"outputs": [],
"source": [
"# Plot the shallow neural network. We'll assume input in is range [0,1] and output [-1,1]\n",
"# If the plot_all flag is set to true, then we'll plot all the intermediate stages as in Figure 3.3\n",
@@ -94,15 +85,15 @@
" for i in range(len(x_data)):\n",
" ax.plot(x_data[i], y_data[i],)\n",
" plt.show()"
],
"metadata": {
"id": "AeHzflFt9Tgn"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7qeIUrh19AkH"
},
"outputs": [],
"source": [
"# Define a shallow neural network with, one input, one output, and three hidden units\n",
"def shallow_1_1_3(x, activation_fn, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31):\n",
@@ -123,38 +114,39 @@
"\n",
" # Return everything we have calculated\n",
" return y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3"
],
"metadata": {
"id": "7qeIUrh19AkH"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "cwTp__Fk9YUx"
},
"outputs": [],
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation"
],
"metadata": {
"id": "cwTp__Fk9YUx"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"First, let's run the network with a ReLU functions"
],
"metadata": {
"id": "INQkRzyn9kVC"
}
},
"source": [
"First, let's run the network with a ReLU functions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jT9QuKou9i0_"
},
"outputs": [],
"source": [
"# Now lets define some parameters and run the neural network\n",
"theta_10 = 0.3 ; theta_11 = -1.0\n",
@@ -170,15 +162,14 @@
" shallow_1_1_3(x, ReLU, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "jT9QuKou9i0_"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "-I8N7r1o9HYf"
},
"source": [
"# Sigmoid activation function\n",
"\n",
@@ -189,13 +180,15 @@
"\\end{equation}\n",
"\n",
"(Note that the factor of 10 is not standard -- but it allow us to plot on the same axes as the ReLU examples)"
],
"metadata": {
"id": "-I8N7r1o9HYf"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hgkioNyr975Y"
},
"outputs": [],
"source": [
"# Define the sigmoid function\n",
"def sigmoid(preactivation):\n",
@@ -204,15 +197,15 @@
" activation = np.zeros_like(preactivation);\n",
"\n",
" return activation"
],
"metadata": {
"id": "hgkioNyr975Y"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "94HIXKJH97ve"
},
"outputs": [],
"source": [
"# Make an array of inputs\n",
"z = np.arange(-1,1,0.01)\n",
@@ -224,24 +217,25 @@
"ax.set_xlim([-1,1]);ax.set_ylim([0,1])\n",
"ax.set_xlabel('z'); ax.set_ylabel('sig[z]')\n",
"plt.show()"
],
"metadata": {
"id": "94HIXKJH97ve"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's see what happens when we use this activation function in a neural network"
],
"metadata": {
"id": "p3zQNXhj-J-o"
}
},
"source": [
"Let's see what happens when we use this activation function in a neural network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "C1dASr9L-GNt"
},
"outputs": [],
"source": [
"theta_10 = 0.3 ; theta_11 = -1.0\n",
"theta_20 = -1.0 ; theta_21 = 2.0\n",
@@ -256,39 +250,41 @@
" shallow_1_1_3(x, sigmoid, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "C1dASr9L-GNt"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"You probably notice that this gives nice smooth curves. So why don't we use this? Aha... it's not obvious right now, but we will get to it when we learn to fit models."
],
"metadata": {
"id": "Uuam_DewA9fH"
}
},
"source": [
"You probably notice that this gives nice smooth curves. So why don't we use this? Aha... it's not obvious right now, but we will get to it when we learn to fit models."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "C9WKkcMUABze"
},
"source": [
"# Heaviside activation function\n",
"\n",
"The Heaviside function is defined as:\n",
"\n",
"\\begin{equation}\n",
"\\mbox{heaviside}[z] = \\begin{cases} 0 & \\quad z <0 \\\\ 1 & \\quad z\\geq 0\\end{cases}\n",
"\\text{heaviside}[z] = \\begin{cases} 0 & \\quad z <0 \\\\ 1 & \\quad z\\geq 0\\end{cases}\n",
"\\end{equation}"
],
"metadata": {
"id": "C9WKkcMUABze"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-1qFkdOL-NPc"
},
"outputs": [],
"source": [
"# Define the heaviside function\n",
"def heaviside(preactivation):\n",
@@ -299,15 +295,15 @@
"\n",
"\n",
" return activation"
],
"metadata": {
"id": "-1qFkdOL-NPc"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mSPyp7iA-44H"
},
"outputs": [],
"source": [
"# Make an array of inputs\n",
"z = np.arange(-1,1,0.01)\n",
@@ -319,15 +315,15 @@
"ax.set_xlim([-1,1]);ax.set_ylim([-2,2])\n",
"ax.set_xlabel('z'); ax.set_ylabel('heaviside[z]')\n",
"plt.show()"
],
"metadata": {
"id": "mSPyp7iA-44H"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "t99K2lSl--Mq"
},
"outputs": [],
"source": [
"theta_10 = 0.3 ; theta_11 = -1.0\n",
"theta_20 = -1.0 ; theta_21 = 2.0\n",
@@ -342,39 +338,41 @@
" shallow_1_1_3(x, heaviside, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "t99K2lSl--Mq"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"This can approximate any function, but the output is discontinuous, and there are also reasons not to use it that we will discover when we learn more about model fitting."
],
"metadata": {
"id": "T65MRtM-BCQA"
}
},
"source": [
"This can approximate any function, but the output is discontinuous, and there are also reasons not to use it that we will discover when we learn more about model fitting."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RkB-XZMLBTaR"
},
"source": [
"# Linear activation functions\n",
"\n",
"Neural networks don't work if the activation function is linear. For example, consider what would happen if the activation function was:\n",
"\n",
"\\begin{equation}\n",
"\\mbox{lin}[z] = a + bz\n",
"\\text{lin}[z] = a + bz\n",
"\\end{equation}"
],
"metadata": {
"id": "RkB-XZMLBTaR"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Q59v3saj_jq1"
},
"outputs": [],
"source": [
"# Define the linear activation function\n",
"def lin(preactivation):\n",
@@ -384,15 +382,15 @@
" activation = a+b * preactivation\n",
" # Return\n",
" return activation"
],
"metadata": {
"id": "Q59v3saj_jq1"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IwodsBr0BkDn"
},
"outputs": [],
"source": [
"# TODO\n",
"# 1. The linear activation function above just returns the input: (0+1*z) = z\n",
@@ -415,12 +413,23 @@
" shallow_1_1_3(x, lin, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
]
}
],
"metadata": {
"id": "IwodsBr0BkDn"
"colab": {
"authorship_tag": "ABX9TyOmxhh3ymYWX+1HdZ91I6zU",
"include_colab_link": true,
"provenance": []
},
"execution_count": null,
"outputs": []
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
]
},
"nbformat": 4,
"nbformat_minor": 0
}
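
For reference, the activation functions this notebook compares can be written compactly. The ReLU matches the notebook's own definition; the sigmoid (with the factor of 10 noted in the markdown), Heaviside, and linear functions below are the standard forms the TODO cells ask for — a sketch, not the notebook's solutions:

```python
import numpy as np

def ReLU(z):
    return np.clip(z, 0.0, None)              # same as preactivation.clip(0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-10.0 * z))    # factor of 10 per the notebook's equation

def heaviside(z):
    return (z >= 0).astype(float)             # 0 where z < 0, 1 where z >= 0

def lin(z, a=0.5, b=0.4):                     # a, b illustrative
    return a + b * z

# With lin[] as the activation, the whole network stays linear:
# phi_0 + sum_j phi_j lin[theta_j0 + theta_j1 x] collapses to c_0 + c_1 x,
# which is why a linear activation makes the hidden layer pointless.
```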

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPEQEGetZqWnLRNn99Q2aaT",
"include_colab_link": true
},
"kernelspec": {
@@ -220,7 +219,7 @@
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network into the second one now that we have changed it. Draw the relationship between\n",
"# output of the first network into the modified second network. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
@@ -261,7 +260,7 @@
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network now we have changed it into the original second network. Draw the relationship between\n",
"# output of the modified first network into the original second network. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
@@ -302,7 +301,7 @@
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network into the original second network. Draw the relationship between\n",
"# output of the first network into the a copy of itself. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
@@ -350,7 +349,7 @@
"# network (blue curve above)\n",
"\n",
"# Take away conclusion: with very few parameters, we can make A LOT of linear regions, but\n",
"# they depend on one another in complex ways that quickly become to difficult to understand intuitively."
"# they depend on one another in complex ways that quickly become too difficult to understand intuitively."
],
"metadata": {
"id": "HqzePCLOVQK7"

View File

@@ -118,7 +118,7 @@
{
"cell_type": "markdown",
"source": [
"Let's define a networks. We'll just consider the inputs and outputs over the range [-1,1]. If you set the \"plot_all\" flat to True, you can see the details of how it was created."
"Let's define a network. We'll just consider the inputs and outputs over the range [-1,1]. If you set the \"plot_all\" flat to True, you can see the details of how it was created."
],
"metadata": {
"id": "LxBJCObC-NTY"

View File

@@ -185,7 +185,7 @@
{
"cell_type": "code",
"source": [
"# Return probability under normal distribution for input x\n",
"# Return probability under normal distribution\n",
"def normal_distribution(y, mu, sigma):\n",
" # TODO-- write in the equation for the normal distribution\n",
" # Equation 5.7 from the notes (you will need np.sqrt() and np.exp(), and math.pi)\n",
@@ -329,7 +329,7 @@
"mu_pred = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Set the standard deviation to something reasonable\n",
"sigma = 0.2\n",
"# Compute the log likelihood\n",
"# Compute the negative log likelihood\n",
"nll = compute_negative_log_likelihood(y_train, mu_pred, sigma)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(11.452419564,nll))"
@@ -388,7 +388,7 @@
{
"cell_type": "markdown",
"source": [
"Now let's investigate finding the maximum likelihood / minimum log likelihood / least squares solution. For simplicity, we'll assume that all the parameters are correct except one and look at how the likelihood, log likelihood, and sum of squares change as we manipulate the last parameter. We'll start with overall y offset, beta_1 (formerly phi_0)"
"Now let's investigate finding the maximum likelihood / minimum negative log likelihood / least squares solution. For simplicity, we'll assume that all the parameters are correct except one and look at how the likelihood, negative log likelihood, and sum of squares change as we manipulate the last parameter. We'll start with overall y offset, beta_1 (formerly phi_0)"
],
"metadata": {
"id": "OgcRojvPWh4V"
@@ -431,7 +431,7 @@
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function the value of the offset beta1\n",
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function of the value of the offset beta1\n",
"fig, ax = plt.subplots(1,2)\n",
"fig.set_size_inches(10.5, 5.5)\n",
"fig.tight_layout(pad=10.0)\n",
@@ -530,7 +530,7 @@
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function the value of the standard divation sigma\n",
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function of the value of the standard deviation sigma\n",
"fig, ax = plt.subplots(1,2)\n",
"fig.set_size_inches(10.5, 5.5)\n",
"fig.tight_layout(pad=10.0)\n",
@@ -581,7 +581,7 @@
{
"cell_type": "markdown",
"source": [
"Obviously, to fit the full neural model we would vary all of the 10 parameters of the network in $\\boldsymbol\\beta_{0},\\boldsymbol\\omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\omega_{1}$ (and maybe $\\sigma$) until we find the combination that have the maximum likelihood / minimum negative log likelihood / least squares.<br><br>\n",
"Obviously, to fit the full neural model we would vary all of the 10 parameters of the network in $\\boldsymbol\\beta_{0},\\boldsymbol\\Omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\Omega_{1}$ (and maybe $\\sigma$) until we find the combination that have the maximum likelihood / minimum negative log likelihood / least squares.<br><br>\n",
"\n",
"Here we just varied one at a time as it is easier to see what is going on. This is known as **coordinate descent**.\n"
],
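
The relationship this notebook keeps invoking — maximum likelihood under a Gaussian with fixed σ gives the same minimizer as least squares — can be verified directly, since the negative log likelihood is the least-squares criterion scaled by 1/σ² plus a constant. A sketch with made-up data; normal_pdf below is the standard density that the notebook's TODO (equation 5.7) asks for:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=10)                # made-up targets
mu = rng.normal(size=10)               # made-up model outputs
sigma = 0.2

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

nll = -np.sum(np.log(normal_pdf(y, mu, sigma)))
least_squares = 0.5 * np.sum((y - mu) ** 2)
const = y.size * np.log(sigma * np.sqrt(2 * np.pi))
print(np.isclose(nll, least_squares / sigma**2 + const))   # True: same minimizer over mu
```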

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOSb+W2AOFVQm8FZcHAb2Jq",
"include_colab_link": true
},
"kernelspec": {
@@ -66,7 +65,7 @@
" return activation\n",
"\n",
"# Define a shallow neural network\n",
"def shallow_nn(x, beta_0, omega_0, beta_1, omaga_1):\n",
"def shallow_nn(x, beta_0, omega_0, beta_1, omega_1):\n",
" # Make sure that input data is (1 x n_data) array\n",
" n_data = x.size\n",
" x = np.reshape(x,(1,n_data))\n",
@@ -199,7 +198,7 @@
{
"cell_type": "markdown",
"source": [
"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the the likelihood and the negative log likelihood."
"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the likelihood and the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"
@@ -208,7 +207,7 @@
{
"cell_type": "code",
"source": [
"# Return probability under Bernoulli distribution for input x\n",
"# Return probability under Bernoulli distribution for observed class y\n",
"def bernoulli_distribution(y, lambda_param):\n",
" # TODO-- write in the equation for the Bernoulli distribution\n",
" # Equation 5.17 from the notes (you will need np.power)\n",
@@ -269,7 +268,7 @@
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"# Use our neural network to predict the Bernoulli parameter lambda\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_train = sigmoid(model_out)\n",
"# Compute the likelihood\n",
@@ -336,7 +335,7 @@
{
"cell_type": "markdown",
"source": [
"Now let's investigate finding the maximum likelihood / minimum negative log likelihood solution. For simplicity, we'll assume that all the parameters are fixed except one and look at how the likelihood and log likelihood change as we manipulate the last parameter. We'll start with overall y_offset, beta_1 (formerly phi_0)"
"Now let's investigate finding the maximum likelihood / minimum negative log likelihood solution. For simplicity, we'll assume that all the parameters are fixed except one and look at how the likelihood and negative log likelihood change as we manipulate the last parameter. We'll start with overall y_offset, beta_1 (formerly phi_0)"
],
"metadata": {
"id": "OgcRojvPWh4V"
@@ -359,7 +358,7 @@
" # Run the network with new parameters\n",
" model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_train = sigmoid(model_out)\n",
" # Compute and store the three values\n",
" # Compute and store the two values\n",
" likelihoods[count] = compute_likelihood(y_train,lambda_train)\n",
" nlls[count] = compute_negative_log_likelihood(y_train, lambda_train)\n",
" # Draw the model for every 20th parameter setting\n",
@@ -378,7 +377,7 @@
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function the value of the offset beta1\n",
"# Now let's plot the likelihood and negative log likelihood as a function of the value of the offset beta1\n",
"fig, ax = plt.subplots()\n",
"fig.tight_layout(pad=5.0)\n",
"likelihood_color = 'tab:red'\n",
@@ -430,7 +429,7 @@
"source": [
"They both give the same answer. But you can see from the likelihood above that the likelihood is very small unless the parameters are almost correct. So in practice, we would work with the negative log likelihood.<br><br>\n",
"\n",
"Again, to fit the full neural model we would vary all of the 10 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\omega_{1}$ until we find the combination that have the maximum likelihood / minimum negative log likelihood.<br><br>\n",
"Again, to fit the full neural model we would vary all of the 10 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\Omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\Omega_{1}$ until we find the combination that have the maximum likelihood / minimum negative log likelihood.<br><br>\n",
"\n"
],
"metadata": {

View File

@@ -1,20 +1,4 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOPv/l+ToaApJV7Nz+8AtpV",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
@@ -28,6 +12,9 @@
},
{
"cell_type": "markdown",
"metadata": {
"id": "jSlFkICHwHQF"
},
"source": [
"# **Notebook 5.3 Multiclass Cross-Entropy Loss**\n",
"\n",
@@ -36,10 +23,7 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "jSlFkICHwHQF"
}
]
},
{
"cell_type": "code",
@@ -61,6 +45,11 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Fv7SZR3tv7mV"
},
"outputs": [],
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
@@ -77,15 +66,15 @@
" h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(omega_0,x))\n",
" model_out = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(omega_1,h1)\n",
" return model_out"
],
"metadata": {
"id": "Fv7SZR3tv7mV"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pUT9Ain_HRim"
},
"outputs": [],
"source": [
"# Get parameters for model -- we can call this function to easily reset them\n",
"def get_parameters():\n",
@@ -103,15 +92,15 @@
" omega_1[2,0] = 16.0; omega_1[2,1] = -8.0; omega_1[2,2] =-8\n",
"\n",
" return beta_0, omega_0, beta_1, omega_1"
],
"metadata": {
"id": "pUT9Ain_HRim"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NRR67ri_1TzN"
},
"outputs": [],
"source": [
"# Utility function for plotting data\n",
"def plot_multiclass_classification(x_model, out_model, lambda_model, x_data = None, y_data = None, title= None):\n",
@@ -148,26 +137,26 @@
" if y_data[i] ==2:\n",
" ax[1].plot(x_data[i],-0.05, 'b.')\n",
" plt.show()"
],
"metadata": {
"id": "NRR67ri_1TzN"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PsgLZwsPxauP"
},
"source": [
"# Multiclass classification\n",
"\n",
"For multiclass classification, the network must predict the probability of $K$ classes, using $K$ outputs. However, these probability must be non-negative and sum to one, and the network outputs can take arbitrary values. Hence, we pass the outputs through a softmax function which maps $K$ arbitrary values to $K$ non-negative values that sum to one."
],
"metadata": {
"id": "PsgLZwsPxauP"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uFb8h-9IXnIe"
},
"outputs": [],
"source": [
"# Softmax function that maps a vector of arbitrary values to a vector of values that are positive and sum to one.\n",
"def softmax(model_out):\n",
@@ -184,15 +173,15 @@
" softmax_model_out = np.ones_like(model_out)/ exp_model_out.shape[0]\n",
"\n",
" return softmax_model_out"
],
"metadata": {
"id": "uFb8h-9IXnIe"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VWzNOt1swFVd"
},
"outputs": [],
"source": [
"\n",
"# Let's create some 1D training data\n",
@@ -214,62 +203,62 @@
"model_out= shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_model = softmax(model_out)\n",
"plot_multiclass_classification(x_model, model_out, lambda_model, x_train, y_train)\n"
],
"metadata": {
"id": "VWzNOt1swFVd"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"source": [
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"
}
},
"source": [
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
]
},
{
"cell_type": "code",
"source": [
"# Return probability under Categorical distribution for input x\n",
"# Just take value from row k of lambda param where y =k,\n",
"def categorical_distribution(y, lambda_param):\n",
" return np.array([lambda_param[row, i] for i, row in enumerate (y)])"
],
"execution_count": null,
"metadata": {
"id": "YaLdRlEX0FkU"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"# Return probability under categorical distribution for observed class y\n",
"# Just take value from row k of lambda param where y =k,\n",
"def categorical_distribution(y, lambda_param):\n",
" return np.array([lambda_param[row, i] for i, row in enumerate (y)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4TSL14dqHHbV"
},
"outputs": [],
"source": [
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.2,categorical_distribution(np.array([[0]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.5,categorical_distribution(np.array([[1]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.3,categorical_distribution(np.array([[2]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"\n"
],
"metadata": {
"id": "4TSL14dqHHbV"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the likelihood using this function"
],
"metadata": {
"id": "R5z_0dzQMF35"
}
},
"source": [
"Now let's compute the likelihood using this function"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zpS7o6liCx7f"
},
"outputs": [],
"source": [
"# Return the likelihood of all of the data under the model\n",
"def compute_likelihood(y_train, lambda_param):\n",
@@ -280,93 +269,93 @@
" likelihood = 0\n",
"\n",
" return likelihood"
],
"metadata": {
"id": "zpS7o6liCx7f"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1hQxBLoVNlr2"
},
"outputs": [],
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"# Use our neural network to predict the parameters of the categorical distribution\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_train = softmax(model_out)\n",
"# Compute the likelihood\n",
"likelihood = compute_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(0.000000041,likelihood))"
],
"metadata": {
"id": "1hQxBLoVNlr2"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HzphKgPfOvlk"
},
"source": [
"You can see that this gives a very small answer, even for this small 1D dataset, and with the model fitting quite well. This is because it is the product of several probabilities, which are all quite small themselves.\n",
"This will get out of hand pretty quickly with real datasets -- the likelihood will get so small that we can't represent it with normal finite-precision math\n",
"\n",
"This is why we use negative log likelihood"
],
"metadata": {
"id": "HzphKgPfOvlk"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dsT0CWiKBmTV"
},
"outputs": [],
"source": [
"# Return the negative log likelihood of the data under the model\n",
"def compute_negative_log_likelihood(y_train, lambda_param):\n",
" # TODO -- compute the likelihood of the data -- don't use the likelihood function above -- compute the negative sum of the log probabilities\n",
" # TODO -- compute the negative log likelihood of the data -- don't use the likelihood function above -- compute the negative sum of the log probabilities\n",
" # You will need np.sum(), np.log()\n",
" # Replace the line below\n",
" nll = 0\n",
"\n",
" return nll"
],
"metadata": {
"id": "dsT0CWiKBmTV"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Pass the outputs through the softmax function\n",
"lambda_train = softmax(model_out)\n",
"# Compute the log likelihood\n",
"nll = compute_negative_log_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(17.015457867,nll))"
],
"execution_count": null,
"metadata": {
"id": "nVxUXg9rQmwI"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the parameters of the categorical distribution\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Pass the outputs through the softmax function\n",
"lambda_train = softmax(model_out)\n",
"# Compute the negative log likelihood\n",
"nll = compute_negative_log_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(17.015457867,nll))"
]
},
{
"cell_type": "markdown",
"source": [
"Now let's investigate finding the maximum likelihood / minimum log likelihood solution. For simplicity, we'll assume that all the parameters are fixed except one and look at how the likelihood and log likelihood change as we manipulate the last parameter. We'll start with overall y_offset, beta_1 (formerly phi_0)"
],
"metadata": {
"id": "OgcRojvPWh4V"
}
},
"source": [
"Now let's investigate finding the maximum likelihood / minimum negative log likelihood solution. For simplicity, we'll assume that all the parameters are fixed except one and look at how the likelihood and negative log likelihood change as we manipulate the last parameter. We'll start with overall y_offset, $\\beta_1$ (formerly $\\phi_0$)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pFKtDaAeVU4U"
},
"outputs": [],
"source": [
"# Define a range of values for the parameter\n",
"beta_1_vals = np.arange(-2,6.0,0.1)\n",
@@ -382,7 +371,7 @@
" # Run the network with new parameters\n",
" model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_train = softmax(model_out)\n",
" # Compute and store the three values\n",
" # Compute and store the two values\n",
" likelihoods[count] = compute_likelihood(y_train,lambda_train)\n",
" nlls[count] = compute_negative_log_likelihood(y_train, lambda_train)\n",
" # Draw the model for every 20th parameter setting\n",
@@ -391,17 +380,17 @@
" model_out = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_model = softmax(model_out)\n",
" plot_multiclass_classification(x_model, model_out, lambda_model, x_train, y_train, title=\"beta1[0,0]=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "pFKtDaAeVU4U"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UHXeTa9MagO6"
},
"outputs": [],
"source": [
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function the value of the offset beta1\n",
"# Now let's plot the likelihood and negative log likelihood as a function of the value of the offset beta1\n",
"fig, ax = plt.subplots()\n",
"fig.tight_layout(pad=5.0)\n",
"likelihood_color = 'tab:red'\n",
@@ -421,15 +410,15 @@
"plt.axvline(x = beta_1_vals[np.argmax(likelihoods)], linestyle='dotted')\n",
"\n",
"plt.show()"
],
"metadata": {
"id": "UHXeTa9MagO6"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aDEPhddNdN4u"
},
"outputs": [],
"source": [
"# Hopefully, you can see that the maximum of the likelihood fn is at the same position as the minimum negative log likelihood solution\n",
"# Let's check that:\n",
@@ -441,24 +430,34 @@
"model_out = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_model = softmax(model_out)\n",
"plot_multiclass_classification(x_model, model_out, lambda_model, x_train, y_train, title=\"beta1[0,0]=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "aDEPhddNdN4u"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "771G8N1Vk5A2"
},
"source": [
"They both give the same answer. But you can see from the likelihood above that the likelihood is very small unless the parameters are almost correct. So in practice, we would work with the negative log likelihood.<br><br>\n",
"\n",
"Again, to fit the full neural model we would vary all of the 16 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\omega_{1}$ until we find the combination that have the maximum likelihood / minimum negative log likelihood.<br><br>\n",
"Again, to fit the full neural model we would vary all of the 16 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\Omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\Omega_{1}$ until we find the combination that have the maximum likelihood / minimum negative log likelihood.<br><br>\n",
"\n"
]
}
],
"metadata": {
"id": "771G8N1Vk5A2"
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
}
]
},
"nbformat": 4,
"nbformat_minor": 0
}
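
The point above about tiny likelihoods is easy to verify numerically: with many training examples, a product of per-example probabilities underflows long before the sum of log probabilities misbehaves. A minimal illustration (hypothetical numbers, not taken from the notebook):

import numpy as np

# 500 plausible per-example likelihoods (hypothetical values)
rng = np.random.default_rng(0)
probs = rng.uniform(0.01, 0.2, size=500)

likelihood = np.prod(probs)                  # underflows float64 to 0.0
neg_log_likelihood = -np.sum(np.log(probs))  # stays finite and well-behaved

print(likelihood)           # 0.0 -- nothing left to maximize
print(neg_log_likelihood)   # roughly 1.3e3 -- easy to minimize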

View File

@@ -113,7 +113,7 @@
" b = 0.33\n",
" c = 0.66\n",
" d = 1.0\n",
" n_iter =0;\n",
" n_iter = 0\n",
"\n",
" # While we haven't found the minimum closely enough\n",
" while np.abs(b-c) > thresh and n_iter < max_iter:\n",
@@ -131,8 +131,7 @@
"\n",
" print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
"\n",
" # Rule #1 If the HEIGHT at point A is less the HEIGHT at points B, C, and D then halve values of B, C, and D\n",
" # i.e. bring them closer to the original point\n",
" # Rule #1 If the HEIGHT at point A is less than the HEIGHT at points B, C, and D then halve values of B, C, and D\n",
" # i.e. bring them closer to the original point\n",
" # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
" if (0):\n",
@@ -140,7 +139,7 @@
"\n",
"\n",
" # Rule #2 If the HEIGHT at point b is less than the HEIGHT at point c then\n",
" # then point d becomes point c, and\n",
" # point d becomes point c, and\n",
" # point b becomes 1/3 between a and new d\n",
" # point c becomes 2/3 between a and new d\n",
" # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
@@ -148,7 +147,7 @@
" continue;\n",
"\n",
" # Rule #3 If the HEIGHT at point c is less than the HEIGHT at point b then\n",
" # then point a becomes point b, and\n",
" # point a becomes point b, and\n",
" # point b becomes 1/3 between new a and d\n",
" # point c becomes 2/3 between new a and d\n",
" # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",

View File

@@ -1,32 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_2_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "el8l05WQEO46"
},
"source": [
"# **Notebook 6.2 Gradient descent**\n",
"\n",
@@ -36,10 +26,7 @@
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
"\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
]
},
{
"cell_type": "code",
@@ -58,34 +45,39 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"outputs": [],
"source": [
"# Let's create our training data 12 pairs {x_i, y_i}\n",
"# We'll try to fit the straight line model to these data\n",
"data = np.array([[0.03,0.19,0.34,0.46,0.78,0.81,1.08,1.18,1.39,1.60,1.65,1.90],\n",
" [0.67,0.85,1.05,1.00,1.40,1.50,1.30,1.54,1.55,1.68,1.73,1.60]])"
],
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WQUERmb2erAe"
},
"outputs": [],
"source": [
"# Let's define our model -- just a straight line with intercept phi[0] and slope phi[1]\n",
"def model(phi,x):\n",
" y_pred = phi[0]+phi[1] * x\n",
" return y_pred"
],
"metadata": {
"id": "WQUERmb2erAe"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qFRe9POHF2le"
},
"outputs": [],
"source": [
"# Draw model\n",
"def draw_model(data,model,phi,title=None):\n",
@@ -101,39 +93,40 @@
" if title is not None:\n",
" ax.set_title(title)\n",
" plt.show()"
],
"metadata": {
"id": "qFRe9POHF2le"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"outputs": [],
"source": [
"# Initialize the parameters to some arbitrary values and draw the model\n",
"phi = np.zeros((2,1))\n",
"phi[0] = 0.6 # Intercept\n",
"phi[1] = -0.2 # Slope\n",
"draw_model(data,model,phi, \"Initial parameters\")\n"
],
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now lets create compute the sum of squares loss for the training data"
],
"metadata": {
"id": "QU5mdGvpTtEG"
}
},
"source": [
"Now let's compute the sum of squares loss for the training data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"outputs": [],
"source": [
"def compute_loss(data_x, data_y, model, phi):\n",
" # TODO -- Write this function -- replace the line below\n",
@@ -144,45 +137,47 @@
" loss = 0\n",
"\n",
" return loss"
],
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"execution_count": null,
"outputs": []
]
},
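
One possible completion of the compute_loss TODO above (a sketch, assuming the notebook's straight-line model(phi, x); it should reproduce the correct loss of 12.367 in the test cell that follows):

import numpy as np

def compute_loss(data_x, data_y, model, phi):
    pred_y = model(phi, data_x)              # model predictions at each x_i
    return np.sum((pred_y - data_y) ** 2)    # sum-of-squares loss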
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's just test that we got that right"
],
"metadata": {
"id": "eB5DQvU5hYNx"
}
},
"source": [
"Let's just test that we got that right"
]
},
{
"cell_type": "code",
"source": [
"loss = compute_loss(data[0,:],data[1,:],model,np.array([[0.6],[-0.2]]))\n",
"print('Your loss = %3.3f, Correct loss = %3.3f'%(loss, 12.367))"
],
"execution_count": null,
"metadata": {
"id": "Ty05UtEEg9tc"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"loss = compute_loss(data[0,:],data[1,:],model,np.array([[0.6],[-0.2]]))\n",
"print('Your loss = %3.3f, Correct loss = %3.3f'%(loss, 12.367))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's plot the whole loss function"
],
"metadata": {
"id": "F3trnavPiHpH"
}
},
"source": [
"Now let's plot the whole loss function"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K-NTHpAAHlCl"
},
"outputs": [],
"source": [
"def draw_loss_function(compute_loss, data, model, phi_iters = None):\n",
" # Define pretty colormap\n",
@@ -209,39 +204,40 @@
" ax.set_ylim([1,-1])\n",
" ax.set_xlabel('Intercept $\\phi_{0}$'); ax.set_ylabel('Slope, $\\phi_{1}$')\n",
" plt.show()"
],
"metadata": {
"id": "K-NTHpAAHlCl"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"draw_loss_function(compute_loss, data, model)"
],
"execution_count": null,
"metadata": {
"id": "l8HbvIupnTME"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"draw_loss_function(compute_loss, data, model)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "s9Duf05WqqSC"
},
"source": [
"Now let's compute the gradient vector for a given set of parameters:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial L}{\\partial \\boldsymbol\\phi} = \\begin{bmatrix}\\frac{\\partial L}{\\partial \\phi_0} \\\\\\frac{\\partial L}{\\partial \\phi_1} \\end{bmatrix}.\n",
"\\end{equation}"
],
"metadata": {
"id": "s9Duf05WqqSC"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UpswmkL2qwBT"
},
"outputs": [],
"source": [
"# These are in the lecture slides and notes, but worth trying to calculate them yourself to\n",
"# check that you get them right. Write out the expression for the sum of squares loss and take the\n",
@@ -253,31 +249,32 @@
"\n",
" # Return the gradient\n",
" return np.array([[dl_dphi0],[dl_dphi1]])"
],
"metadata": {
"id": "UpswmkL2qwBT"
},
"execution_count": null,
"outputs": []
]
},
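
If you want to check your algebra for the TODO above, here is a sketch of the derivatives for the straight-line model with the sum-of-squares loss (my own derivation, so verify it against the finite-difference check below):

import numpy as np

def compute_gradient(data_x, data_y, phi):
    residual = phi[0] + phi[1] * data_x - data_y    # f[x_i, phi] - y_i
    dl_dphi0 = np.sum(2 * residual)                 # dL/dphi0
    dl_dphi1 = np.sum(2 * residual * data_x)        # dL/dphi1
    return np.array([[dl_dphi0], [dl_dphi1]])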
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RS1nEcYVuEAM"
},
"source": [
"We can check we got this right using a trick known as **finite differences**. If we evaluate the function and then change one of the parameters by a very small amount and normalize by that amount, we get an approximation to the gradient, so:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"\\frac{\\partial L}{\\partial \\phi_{0}}&\\approx & \\frac{L[\\phi_0+\\delta, \\phi_1]-L[\\phi_0, \\phi_1]}{\\delta}\\\\\n",
"\\frac{\\partial L}{\\partial \\phi_{1}}&\\approx & \\frac{L[\\phi_0, \\phi_1+\\delta]-L[\\phi_0, \\phi_1]}{\\delta}\n",
"\\end{eqnarray}\n",
"\\end{align}\n",
"\n",
"We can't do this when there are many parameters; for a million parameters, we would have to evaluate the loss function two million times, and usually computing the gradients directly is much more efficient."
],
"metadata": {
"id": "RS1nEcYVuEAM"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QuwAHN7yt-gi"
},
"outputs": [],
"source": [
"# Compute the gradient using your function\n",
"gradient = compute_gradient(data[0,:],data[1,:], phi)\n",
@@ -290,24 +287,25 @@
" compute_loss(data[0,:],data[1,:],model,phi))/delta\n",
"print(\"Approx gradients: (%3.3f,%3.3f)\"%(dl_dphi0_est,dl_dphi1_est))\n",
"# There might be small differences in the last significant figure because finite gradients is an approximation\n"
],
"metadata": {
"id": "QuwAHN7yt-gi"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now we are ready to perform gradient descent. We'll need to use our line search routine from notebook 6.1, which I've reproduced here plus the helper function loss_function_1D that maps the search along the negative gradient direction in 2D space to a 1D problem (distance along this direction)"
],
"metadata": {
"id": "5EIjMM9Fw2eT"
}
},
"source": [
"Now we are ready to perform gradient descent. We'll need to use our line search routine from notebook 6.1, which I've reproduced here plus the helper function loss_function_1D that maps the search along the negative gradient direction in 2D space to a 1D problem (distance along this direction)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XrJ2gQjfw1XP"
},
"outputs": [],
"source": [
"def loss_function_1D(dist_prop, data, model, phi_start, search_direction):\n",
" # Return the loss after moving this far\n",
@@ -319,7 +317,7 @@
" b = 0.33 * max_dist\n",
" c = 0.66 * max_dist\n",
" d = 1.0 * max_dist\n",
" n_iter =0;\n",
" n_iter = 0\n",
"\n",
" # While we haven't found the minimum closely enough\n",
" while np.abs(b-c) > thresh and n_iter < max_iter:\n",
@@ -343,7 +341,7 @@
" continue;\n",
"\n",
" # Rule #2 If point b is less than point c then\n",
" # then point d becomes point c, and\n",
" # point d becomes point c, and\n",
" # point b becomes 1/3 between a and new d\n",
" # point c becomes 2/3 between a and new d\n",
" if lossb < lossc:\n",
@@ -353,7 +351,7 @@
" continue\n",
"\n",
" # Rule #2 If point c is less than point b then\n",
" # then point a becomes point b, and\n",
" # point a becomes point b, and\n",
" # point b becomes 1/3 between new a and d\n",
" # point c becomes 2/3 between new a and d\n",
" a = b\n",
@@ -362,15 +360,15 @@
"\n",
" # Return average of two middle points\n",
" return (b+c)/2.0"
],
"metadata": {
"id": "XrJ2gQjfw1XP"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YVq6rmaWRD2M"
},
"outputs": [],
"source": [
"def gradient_descent_step(phi, data, model):\n",
" # TODO -- update Phi with the gradient descent step (equation 6.3)\n",
@@ -379,15 +377,15 @@
" # 3. Update the parameters phi based on the gradient and the step size alpha.\n",
"\n",
" return phi"
],
"metadata": {
"id": "YVq6rmaWRD2M"
},
"execution_count": null,
"outputs": []
]
},
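
The TODO above is completed later in this compare (the Notebook 6.3 diff shows the finished routine); the same three steps, as a sketch:

def gradient_descent_step(phi, data, model):
    # 1. Compute the gradient at the current parameters
    gradient = compute_gradient(data[0, :], data[1, :], phi)
    # 2. Find the best step size alpha along the downhill direction
    alpha = line_search(data, model, phi, gradient * -1, max_dist=2.0)
    # 3. Take the step
    phi = phi - alpha * gradient
    return phi

(This assumes the compute_gradient and line_search routines defined earlier in the notebook.)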
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tOLd0gtdRLLS"
},
"outputs": [],
"source": [
"# Initialize the parameters and draw the model\n",
"n_steps = 10\n",
@@ -409,12 +407,22 @@
"\n",
"# Draw the trajectory on the loss function\n",
"draw_loss_function(compute_loss, data, model,phi_all)\n"
]
}
],
"metadata": {
"id": "tOLd0gtdRLLS"
"colab": {
"include_colab_link": true,
"provenance": []
},
"execution_count": null,
"outputs": []
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
]
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNk5FN4qlw3pk8BwDVWw1jN",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_3_Stochastic_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "el8l05WQEO46"
},
"source": [
"# **Notebook 6.3: Stochastic gradient descent**\n",
"\n",
@@ -39,10 +28,7 @@
"\n",
"\n",
"\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
]
},
{
"cell_type": "code",
@@ -61,8 +47,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"outputs": [],
"source": [
"# Let's create our training data 30 pairs {x_i, y_i}\n",
"# Let's create our training data of 30 pairs {x_i, y_i}\n",
"# We'll try to fit the Gabor model to these data\n",
"data = np.array([[-1.920e+00,-1.422e+01,1.490e+00,-1.940e+00,-2.389e+00,-5.090e+00,\n",
" -8.861e+00,3.578e+00,-6.010e+00,-6.995e+00,3.634e+00,8.743e-01,\n",
@@ -74,15 +65,15 @@
" -2.365e-02,5.098e-01,-2.777e-01,3.367e-01,1.927e-01,-2.222e-01,\n",
" 6.352e-02,6.888e-03,3.224e-02,1.091e-02,-5.706e-01,-5.258e-02,\n",
" -3.666e-02,1.709e-01,-4.805e-02,2.008e-01,-1.904e-01,5.952e-01]])"
],
"metadata": {
"id": "4cRkrh9MZ58Z"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WQUERmb2erAe"
},
"outputs": [],
"source": [
"# Let's define our model\n",
"def model(phi,x):\n",
@@ -90,15 +81,15 @@
" gauss_component = np.exp(-(phi[0] + 0.06 * phi[1] * x) * (phi[0] + 0.06 * phi[1] * x) / 32)\n",
" y_pred= sin_component * gauss_component\n",
" return y_pred"
],
"metadata": {
"id": "WQUERmb2erAe"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qFRe9POHF2le"
},
"outputs": [],
"source": [
"# Draw model\n",
"def draw_model(data,model,phi,title=None):\n",
@@ -113,39 +104,40 @@
" if title is not None:\n",
" ax.set_title(title)\n",
" plt.show()"
],
"metadata": {
"id": "qFRe9POHF2le"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"outputs": [],
"source": [
"# Initialize the parameters and draw the model\n",
"phi = np.zeros((2,1))\n",
"phi[0] = -5 # Horizontal offset\n",
"phi[1] = 25 # Frequency\n",
"draw_model(data,model,phi, \"Initial parameters\")\n"
],
"metadata": {
"id": "TXx1Tpd1Tl-I"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now lets create compute the sum of squares loss for the training data"
],
"metadata": {
"id": "QU5mdGvpTtEG"
}
},
"source": [
"Now let's compute the sum of squares loss for the training data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"outputs": [],
"source": [
"def compute_loss(data_x, data_y, model, phi):\n",
" # TODO -- Write this function -- replace the line below\n",
@@ -155,45 +147,47 @@
" loss = 0\n",
"\n",
" return loss"
],
"metadata": {
"id": "I7dqTY2Gg7CR"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's just test that we got that right"
],
"metadata": {
"id": "eB5DQvU5hYNx"
}
},
"source": [
"Let's just test that we got that right"
]
},
{
"cell_type": "code",
"source": [
"loss = compute_loss(data[0,:],data[1,:],model,np.array([[0.6],[-0.2]]))\n",
"print('Your loss = %3.3f, Correct loss = %3.3f'%(loss, 16.419))"
],
"execution_count": null,
"metadata": {
"id": "Ty05UtEEg9tc"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"loss = compute_loss(data[0,:],data[1,:],model,np.array([[0.6],[-0.2]]))\n",
"print('Your loss = %3.3f, Correct loss = %3.3f'%(loss, 16.419))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's plot the whole loss function"
],
"metadata": {
"id": "F3trnavPiHpH"
}
},
"source": [
"Now let's plot the whole loss function"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K-NTHpAAHlCl"
},
"outputs": [],
"source": [
"def draw_loss_function(compute_loss, data, model, phi_iters = None):\n",
" # Define pretty colormap\n",
@@ -204,7 +198,7 @@
" b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
" my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
"\n",
" # Make grid of intercept/slope values to plot\n",
" # Make grid of offset/frequency values to plot\n",
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
" loss_mesh = np.zeros_like(freqs_mesh)\n",
" # Compute loss for every set of parameters\n",
@@ -220,39 +214,40 @@
" ax.set_ylim([2.5,22.5])\n",
" ax.set_xlabel('Offset $\\phi_{0}$'); ax.set_ylabel('Frequency, $\\phi_{1}$')\n",
" plt.show()"
],
"metadata": {
"id": "K-NTHpAAHlCl"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"draw_loss_function(compute_loss, data, model)"
],
"execution_count": null,
"metadata": {
"id": "l8HbvIupnTME"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"draw_loss_function(compute_loss, data, model)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "s9Duf05WqqSC"
},
"source": [
"Now let's compute the gradient vector for a given set of parameters:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial L}{\\partial \\boldsymbol\\phi} = \\begin{bmatrix}\\frac{\\partial L}{\\partial \\phi_0} \\\\\\frac{\\partial L}{\\partial \\phi_1} \\end{bmatrix}.\n",
"\\end{equation}"
],
"metadata": {
"id": "s9Duf05WqqSC"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UpswmkL2qwBT"
},
"outputs": [],
"source": [
"# These came from writing out the expression for the sum of squares loss and taking the\n",
"# derivative with respect to phi0 and phi1. It was a lot of hassle to get it right!\n",
@@ -281,31 +276,32 @@
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])\n",
" # Return the gradient\n",
" return np.array([[dl_dphi0],[dl_dphi1]])"
],
"metadata": {
"id": "UpswmkL2qwBT"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RS1nEcYVuEAM"
},
"source": [
"We can check we got this right using a trick known as **finite differences**. If we evaluate the function and then change one of the parameters by a very small amount and normalize by that amount, we get an approximation to the gradient, so:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"\\frac{\\partial L}{\\partial \\phi_{0}}&\\approx & \\frac{L[\\phi_0+\\delta, \\phi_1]-L[\\phi_0, \\phi_1]}{\\delta}\\\\\n",
"\\frac{\\partial L}{\\partial \\phi_{1}}&\\approx & \\frac{L[\\phi_0, \\phi_1+\\delta]-L[\\phi_0, \\phi_1]}{\\delta}\n",
"\\end{eqnarray}\n",
"\\end{align}\n",
"\n",
"We can't do this when there are many parameters; for a million parameters, we would have to evaluate the loss function two million times, and usually computing the gradients directly is much more efficient."
],
"metadata": {
"id": "RS1nEcYVuEAM"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QuwAHN7yt-gi"
},
"outputs": [],
"source": [
"# Compute the gradient using your function\n",
"gradient = compute_gradient(data[0,:],data[1,:], phi)\n",
@@ -317,24 +313,25 @@
"dl_dphi1_est = (compute_loss(data[0,:],data[1,:],model,phi+np.array([[0],[delta]])) - \\\n",
" compute_loss(data[0,:],data[1,:],model,phi))/delta\n",
"print(\"Approx gradients: (%3.3f,%3.3f)\"%(dl_dphi0_est,dl_dphi1_est))\n"
],
"metadata": {
"id": "QuwAHN7yt-gi"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now we are ready to perform gradient descent. We'll need to use our line search routine from Notebook 6.1, which I've reproduced here plus the helper function loss_function_1D that converts from a 2D problem to a 1D problem"
],
"metadata": {
"id": "5EIjMM9Fw2eT"
}
},
"source": [
"Now we are ready to perform gradient descent. We'll need to use our line search routine from Notebook 6.1, which I've reproduced here plus the helper function loss_function_1D that converts from a 2D problem to a 1D problem"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XrJ2gQjfw1XP"
},
"outputs": [],
"source": [
"def loss_function_1D(dist_prop, data, model, phi_start, gradient):\n",
" # Return the loss after moving this far\n",
@@ -346,7 +343,7 @@
" b = 0.33 * max_dist\n",
" c = 0.66 * max_dist\n",
" d = 1.0 * max_dist\n",
" n_iter =0;\n",
" n_iter = 0\n",
"\n",
" # While we haven't found the minimum closely enough\n",
" while np.abs(b-c) > thresh and n_iter < max_iter:\n",
@@ -370,7 +367,7 @@
" continue;\n",
"\n",
" # Rule #2 If point b is less than point c then\n",
" # then point d becomes point c, and\n",
" # point d becomes point c, and\n",
" # point b becomes 1/3 between a and new d\n",
" # point c becomes 2/3 between a and new d\n",
" if lossb < lossc:\n",
@@ -380,7 +377,7 @@
" continue\n",
"\n",
" # Rule #2 If point c is less than point b then\n",
" # then point a becomes point b, and\n",
" # point a becomes point b, and\n",
" # point b becomes 1/3 between new a and d\n",
" # point c becomes 2/3 between new a and d\n",
" a = b\n",
@@ -389,15 +386,15 @@
"\n",
" # Return average of two middle points\n",
" return (b+c)/2.0"
],
"metadata": {
"id": "XrJ2gQjfw1XP"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YVq6rmaWRD2M"
},
"outputs": [],
"source": [
"def gradient_descent_step(phi, data, model):\n",
" # Step 1: Compute the gradient\n",
@@ -406,15 +403,15 @@
" alpha = line_search(data, model, phi, gradient*-1, max_dist = 2.0)\n",
" phi = phi - alpha * gradient\n",
" return phi"
],
"metadata": {
"id": "YVq6rmaWRD2M"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tOLd0gtdRLLS"
},
"outputs": [],
"source": [
"# Initialize the parameters\n",
"n_steps = 21\n",
@@ -435,41 +432,41 @@
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
"\n",
"draw_loss_function(compute_loss, data, model,phi_all)\n"
],
"metadata": {
"id": "tOLd0gtdRLLS"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"# TODO Experiment with starting the optimization in the previous cell in different places\n",
"# and show that it heads to a local minimum if we don't start it in the right valley"
],
"execution_count": null,
"metadata": {
"id": "Oi8ZlH0ptLqA"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"# TODO Experiment with starting the optimization in the previous cell in different places\n",
"# and show that it heads to a local minimum if we don't start it in the right valley"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4l-ueLk-oAxV"
},
"outputs": [],
"source": [
"def gradient_descent_step_fixed_learning_rate(phi, data, alpha):\n",
" # TODO -- fill in this routine so that we take a fixed size step of size alpha without using line search\n",
"\n",
" return phi"
],
"metadata": {
"id": "4l-ueLk-oAxV"
},
"execution_count": null,
"outputs": []
]
},
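
One possible completion of the fixed-learning-rate TODO above (a sketch reusing the notebook's compute_gradient):

def gradient_descent_step_fixed_learning_rate(phi, data, alpha):
    gradient = compute_gradient(data[0, :], data[1, :], phi)
    phi = phi - alpha * gradient   # fixed step of size alpha, no line search
    return phi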
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "oi9MX_GRpM41"
},
"outputs": [],
"source": [
"# Initialize the parameters\n",
"n_steps = 21\n",
@@ -490,28 +487,28 @@
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
"\n",
"draw_loss_function(compute_loss, data, model,phi_all)\n"
],
"metadata": {
"id": "oi9MX_GRpM41"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "In6sQ5YCpMqn"
},
"outputs": [],
"source": [
"# TODO Experiment with the learning rate, alpha.\n",
"# What happens if you set it too large?\n",
"# What happens if you set it too small?"
],
"metadata": {
"id": "In6sQ5YCpMqn"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VKTC9-1Gpm3N"
},
"outputs": [],
"source": [
"def stochastic_gradient_descent_step(phi, data, alpha, batch_size):\n",
" # TODO -- fill in this routine so that we take a fixed size step of size alpha but only using a subset (batch) of the data\n",
@@ -522,15 +519,15 @@
"\n",
"\n",
" return phi"
],
"metadata": {
"id": "VKTC9-1Gpm3N"
},
"execution_count": null,
"outputs": []
]
},
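
A sketch of the stochastic step above: the only change from the fixed-learning-rate version is that the gradient is computed on a random batch rather than on all 30 points (sampling without replacement is my assumption):

import numpy as np

def stochastic_gradient_descent_step(phi, data, alpha, batch_size):
    n_data = data.shape[1]
    batch = np.random.choice(n_data, size=batch_size, replace=False)
    gradient = compute_gradient(data[0, batch], data[1, batch], phi)
    phi = phi - alpha * gradient
    return phi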
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "469OP_UHskJ4"
},
"outputs": [],
"source": [
"# Set the random number generator so you always get same numbers (disable if you don't want this)\n",
"np.random.seed(1)\n",
@@ -553,34 +550,45 @@
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
"\n",
"draw_loss_function(compute_loss, data, model,phi_all)"
],
"metadata": {
"id": "469OP_UHskJ4"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"# TODO -- Experiment with different learning rates, starting points, batch sizes, number of steps. Get a feel for this."
],
"execution_count": null,
"metadata": {
"id": "LxE2kTa3s29p"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"# TODO -- Experiment with different learning rates, starting points, batch sizes, number of steps. Get a feel for this."
]
},
{
"cell_type": "code",
"source": [
"# TODO -- Add a learning rate schedule. Reduce the learning rate by a factor of beta every M iterations"
],
"execution_count": null,
"metadata": {
"id": "lw4QPOaQTh5e"
},
"execution_count": null,
"outputs": []
}
"outputs": [],
"source": [
"# TODO -- Add a learning rate schedule. Reduce the learning rate by a factor of beta every M iterations"
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNk5FN4qlw3pk8BwDVWw1jN",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
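
For the learning-rate-schedule TODO in the diff above, a minimal step-decay sketch (alpha0, beta, M, and n_steps are illustrative values, not taken from the notebook):

alpha0, beta, M, n_steps = 1.0, 0.5, 5, 21
for c_step in range(n_steps):
    alpha = alpha0 * (beta ** (c_step // M))   # decay by a factor beta every M steps
    # phi = stochastic_gradient_descent_step(phi, data, alpha, batch_size)
    print(c_step, alpha)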

View File

@@ -61,7 +61,7 @@
{
"cell_type": "code",
"source": [
"# Let's create our training data 30 pairs {x_i, y_i}\n",
"# Let's create our training data of 30 pairs {x_i, y_i}\n",
"# We'll try to fit the Gabor model to these data\n",
"data = np.array([[-1.920e+00,-1.422e+01,1.490e+00,-1.940e+00,-2.389e+00,-5.090e+00,\n",
" -8.861e+00,3.578e+00,-6.010e+00,-6.995e+00,3.634e+00,8.743e-01,\n",
@@ -137,7 +137,7 @@
{
"cell_type": "markdown",
"source": [
"Now lets compute the sum of squares loss for the training data and plot the loss function"
"Now let's compute the sum of squares loss for the training data and plot the loss function"
],
"metadata": {
"id": "QU5mdGvpTtEG"
@@ -160,7 +160,7 @@
" b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
" my_colormap = ListedColormap(np.vstack((r,g,b)).transpose()/255.0)\n",
"\n",
" # Make grid of intercept/slope values to plot\n",
" # Make grid of offset/frequency values to plot\n",
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
" loss_mesh = np.zeros_like(freqs_mesh)\n",
" # Compute loss for every set of parameters\n",
@@ -365,7 +365,6 @@
"\n",
" # Update the parameters\n",
" phi_all[:,c_step+1:c_step+2] = phi_all[:,c_step:c_step+1] - alpha * momentum\n",
" # Measure loss and draw model every 8th step\n",
"\n",
"loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
"draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",

View File

@@ -110,7 +110,7 @@
" ax.plot(opt_path[0,:], opt_path[1,:],'-', color='#a0d9d3ff')\n",
" ax.plot(opt_path[0,:], opt_path[1,:],'.', color='#a0d9d3ff',markersize=10)\n",
" ax.set_xlabel(\"$\\phi_{0}$\")\n",
" ax.set_ylabel(\"$\\phi_1}$\")\n",
" ax.set_ylabel(\"$\\phi_{1}$\")\n",
" plt.show()"
],
"metadata": {
@@ -169,7 +169,7 @@
{
"cell_type": "markdown",
"source": [
"Because the function changes much faster in $\\phi_1$ than in $\\phi_0$, there is no great step size to choose. If we set the step size so that it makes sensible progress in the $\\phi_1$, then it takes many iterations to converge. If we set the step size tso that we make sensible progress in the $\\phi_{0}$ direction, then the path oscillates in the $\\phi_1$ direction. \n",
"Because the function changes much faster in $\\phi_1$ than in $\\phi_0$, there is no great step size to choose. If we set the step size so that it makes sensible progress in the $\\phi_1$ direction, then it takes many iterations to converge. If we set the step size so that we make sensible progress in the $\\phi_{0}$ direction, then the path oscillates in the $\\phi_1$ direction. \n",
"\n",
"This motivates Adam. At the core of Adam is the idea that we should just determine which way is downhill along each axis (i.e. left/right for $\\phi_0$ or up/down for $\\phi_1$) and move a fixed distance in that direction."
],
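
The fixed-distance-per-axis idea described above can be sketched as a sign-of-gradient update (illustrative only; the full Adam algorithm additionally maintains moment estimates):

import numpy as np

def signed_gradient_step(phi, gradient, alpha=0.05):
    # Move a fixed distance alpha along each axis, in the downhill direction
    return phi - alpha * np.sign(gradient)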

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOjXmTmoff61y15VqEB5sDW",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap07/7_1_Backpropagation_in_Toy_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "pOZ6Djz0dhoy"
},
"source": [
"# **Notebook 7.1: Backpropagation in Toy Model**\n",
"\n",
@@ -36,68 +25,67 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "pOZ6Djz0dhoy"
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "1DmMo2w63CmT"
},
"source": [
"We're going to investigate how to take the derivatives of functions where one operation is composed with another, which is composed with a third and so on. For example, consider the model:\n",
"\n",
"\\begin{equation}\n",
" \\mbox{f}[x,\\boldsymbol\\phi] = \\beta_3+\\omega_3\\cdot\\cos\\Bigl[\\beta_2+\\omega_2\\cdot\\exp\\bigl[\\beta_1+\\omega_1\\cdot\\sin[\\beta_0+\\omega_0x]\\bigr]\\Bigr],\n",
" \\text{f}[x,\\boldsymbol\\phi] = \\beta_3+\\omega_3\\cdot\\cos\\Bigl[\\beta_2+\\omega_2\\cdot\\exp\\bigl[\\beta_1+\\omega_1\\cdot\\sin[\\beta_0+\\omega_0x]\\bigr]\\Bigr],\n",
"\\end{equation}\n",
"\n",
"with parameters $\\boldsymbol\\phi=\\{\\beta_0,\\omega_0,\\beta_1,\\omega_1,\\beta_2,\\omega_2,\\beta_3,\\omega_3\\}$.<br>\n",
"\n",
"This is a composition of the functions $\\cos[\\bullet],\\exp[\\bullet],\\sin[\\bullet]$. I chose these just because you probably already know the derivatives of these functions:\n",
"\n",
"\\begin{eqnarray*}\n",
"\\begin{align}\n",
" \\frac{\\partial \\cos[z]}{\\partial z} = -\\sin[z] \\quad\\quad \\frac{\\partial \\exp[z]}{\\partial z} = \\exp[z] \\quad\\quad \\frac{\\partial \\sin[z]}{\\partial z} = \\cos[z].\n",
"\\end{eqnarray*}\n",
"\\end{align}\n",
"\n",
"Suppose that we have a least squares loss function:\n",
"\n",
"\\begin{equation*}\n",
"\\ell_i = (\\mbox{f}[x_i,\\boldsymbol\\phi]-y_i)^2,\n",
"\\ell_i = (\\text{f}[x_i,\\boldsymbol\\phi]-y_i)^2,\n",
"\\end{equation*}\n",
"\n",
"Assume that we know the current values of $\\beta_{0},\\beta_{1},\\beta_{2},\\beta_{3},\\omega_{0},\\omega_{1},\\omega_{2},\\omega_{3}$, $x_i$ and $y_i$. We could obviously calculate $\\ell_i$. But we also want to know how $\\ell_i$ changes when we make a small change to $\\beta_{0},\\beta_{1},\\beta_{2},\\beta_{3},\\omega_{0},\\omega_{1},\\omega_{2}$, or $\\omega_{3}$. In other words, we want to compute the eight derivatives:\n",
"\n",
"\\begin{eqnarray*}\n",
"\\frac{\\partial \\ell_i}{\\partial \\beta_{0}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\beta_{1}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\beta_{2}}, \\quad \\frac{\\partial \\ell_i }{\\partial \\beta_{3}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{0}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{1}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{2}}, \\quad\\mbox{and} \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{3}}.\n",
"\\end{eqnarray*}"
],
"metadata": {
"id": "1DmMo2w63CmT"
}
"\\begin{align}\n",
"\\frac{\\partial \\ell_i}{\\partial \\beta_{0}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\beta_{1}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\beta_{2}}, \\quad \\frac{\\partial \\ell_i }{\\partial \\beta_{3}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{0}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{1}}, \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{2}}, \\quad\\text{and} \\quad \\frac{\\partial \\ell_i}{\\partial \\omega_{3}}.\n",
"\\end{align}"
]
},
{
"cell_type": "code",
"source": [
"# import library\n",
"import numpy as np"
],
"execution_count": null,
"metadata": {
"id": "RIPaoVN834Lj"
},
"execution_count": 1,
"outputs": []
"outputs": [],
"source": [
"# import library\n",
"import numpy as np"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's first define the original function for $y$ and the loss term:"
],
"metadata": {
"id": "32-ufWhc3v2c"
}
},
"source": [
"Let's first define the original function for $y$ and the loss term:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {
"id": "AakK_qen3BpU"
},
@@ -112,121 +100,129 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now we'll choose some values for the betas and the omegas and x and compute the output of the function:"
],
"metadata": {
"id": "y7tf0ZMt5OXt"
}
},
"source": [
"Now we'll choose some values for the betas and the omegas and x and compute the output of the function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "pwvOcCxr41X_",
"outputId": "9541922c-dfc4-4b2e-dfa3-3298812155ce"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"l_i=0.139\n"
]
}
],
"source": [
"beta0 = 1.0; beta1 = 2.0; beta2 = -3.0; beta3 = 0.4\n",
"omega0 = 0.1; omega1 = -0.4; omega2 = 2.0; omega3 = 3.0\n",
"x = 2.3; y =2.0\n",
"l_i_func = loss(x,y,beta0,beta1,beta2,beta3,omega0,omega1,omega2,omega3)\n",
"print('l_i=%3.3f'%l_i_func)"
],
"metadata": {
"id": "pwvOcCxr41X_",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "9541922c-dfc4-4b2e-dfa3-3298812155ce"
},
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"l_i=0.139\n"
]
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "u5w69NeT64yV"
},
"source": [
"# Computing derivatives by hand\n",
"\n",
"We could compute expressions for the derivatives by hand and write code to compute them directly but some have very complex expressions, even for this relatively simple original equation. For example:\n",
"\n",
"\\begin{eqnarray*}\n",
"\\begin{align}\n",
"\\frac{\\partial \\ell_i}{\\partial \\omega_{0}} &=& -2 \\left( \\beta_3+\\omega_3\\cdot\\cos\\Bigl[\\beta_2+\\omega_2\\cdot\\exp\\bigl[\\beta_1+\\omega_1\\cdot\\sin[\\beta_0+\\omega_0\\cdot x_i]\\bigr]\\Bigr]-y_i\\right)\\nonumber \\\\\n",
"&&\\hspace{0.5cm}\\cdot \\omega_1\\omega_2\\omega_3\\cdot x_i\\cdot\\cos[\\beta_0+\\omega_0 \\cdot x_i]\\cdot\\exp\\Bigl[\\beta_1 + \\omega_1 \\cdot \\sin[\\beta_0+\\omega_0\\cdot x_i]\\Bigr]\\nonumber\\\\\n",
"&& \\hspace{1cm}\\cdot \\sin\\biggl[\\beta_2+\\omega_2\\cdot \\exp\\Bigl[\\beta_1 + \\omega_1 \\cdot \\sin[\\beta_0+\\omega_0\\cdot x_i]\\Bigr]\\biggr].\n",
"\\end{eqnarray*}"
],
"metadata": {
"id": "u5w69NeT64yV"
}
"\\end{align}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7t22hALp5zkq"
},
"outputs": [],
"source": [
"dldbeta3_func = 2 * (beta3 +omega3 * np.cos(beta2 + omega2 * np.exp(beta1+omega1 * np.sin(beta0+omega0 * x)))-y)\n",
"dldomega0_func = -2 *(beta3 +omega3 * np.cos(beta2 + omega2 * np.exp(beta1+omega1 * np.sin(beta0+omega0 * x)))-y) * \\\n",
" omega1 * omega2 * omega3 * x * np.cos(beta0 + omega0 * x) * np.exp(beta1 +omega1 * np.sin(beta0 + omega0 * x)) *\\\n",
" np.sin(beta2 + omega2 * np.exp(beta1+ omega1* np.sin(beta0+omega0 * x)))"
],
"metadata": {
"id": "7t22hALp5zkq"
},
"execution_count": 4,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's make sure this is correct using finite differences:"
],
"metadata": {
"id": "iRh4hnu3-H3n"
}
},
"source": [
"Let's make sure this is correct using finite differences:"
]
},
{
"cell_type": "code",
"source": [
"dldomega0_fd = (loss(x,y,beta0,beta1,beta2,beta3,omega0+0.00001,omega1,omega2,omega3)-loss(x,y,beta0,beta1,beta2,beta3,omega0,omega1,omega2,omega3))/0.00001\n",
"\n",
"print('dydomega0: Function value = %3.3f, Finite difference value = %3.3f'%(dldomega0_func,dldomega0_fd))"
],
"execution_count": null,
"metadata": {
"id": "1O3XmXMx-HlZ",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "1O3XmXMx-HlZ",
"outputId": "389ed78e-9d8d-4e8b-9e6b-5f20c21407e8"
},
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"output_type": "stream",
"text": [
"dydomega0: Function value = 5.246, Finite difference value = 5.246\n"
]
}
],
"source": [
"dldomega0_fd = (loss(x,y,beta0,beta1,beta2,beta3,omega0+0.00001,omega1,omega2,omega3)-loss(x,y,beta0,beta1,beta2,beta3,omega0,omega1,omega2,omega3))/0.00001\n",
"\n",
"print('dydomega0: Function value = %3.3f, Finite difference value = %3.3f'%(dldomega0_func,dldomega0_fd))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"The code to calculate $\\partial l_i/ \\partial \\omega_0$ is a bit of a nightmare. It's easy to make mistakes, and you can see that some parts of it are repeated (for example, the $\\sin[\\bullet]$ term), which suggests some kind of redundancy in the calculations. The goal of this practical is to compute the derivatives in a much simpler way. There will be three steps:"
],
"metadata": {
"id": "wS4IPjZAKWTN"
}
},
"source": [
"The code to calculate $\\partial l_i/ \\partial \\omega_0$ is a bit of a nightmare. It's easy to make mistakes, and you can see that some parts of it are repeated (for example, the $\\sin[\\bullet]$ term), which suggests some kind of redundancy in the calculations. The goal of this practical is to compute the derivatives in a much simpler way. There will be three steps:"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "8UWhvDeNDudz"
},
"source": [
"**Step 1:** Write the original equations as a series of intermediate calculations.\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"f_{0} &=& \\beta_{0} + \\omega_{0} x_i\\nonumber\\\\\n",
"h_{1} &=& \\sin[f_{0}]\\nonumber\\\\\n",
"f_{1} &=& \\beta_{1} + \\omega_{1}h_{1}\\nonumber\\\\\n",
@@ -235,16 +231,18 @@
"h_{3} &=& \\cos[f_{2}]\\nonumber\\\\\n",
"f_{3} &=& \\beta_{3} + \\omega_{3}h_{3}\\nonumber\\\\\n",
"l_i &=& (f_3-y_i)^2\n",
"\\end{eqnarray}\n",
"\\end{align}\n",
"\n",
"and compute and store the values of all of these intermediate values. We'll need them to compute the derivatives.<br> This is called the **forward pass**."
],
"metadata": {
"id": "8UWhvDeNDudz"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZWKAq6HC90qV"
},
"outputs": [],
"source": [
"# TODO compute all the f_k and h_k terms\n",
"# Replace the code below\n",
@@ -257,38 +255,22 @@
"h3 = 0\n",
"f3 = 0\n",
"l_i = 0\n"
],
"metadata": {
"id": "ZWKAq6HC90qV"
},
"execution_count": 6,
"outputs": []
]
},
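
One possible completion of the forward pass above (a sketch following the intermediate equations f_k and h_k; the parameter values are the ones set earlier in this notebook):

import numpy as np

beta0, beta1, beta2, beta3 = 1.0, 2.0, -3.0, 0.4
omega0, omega1, omega2, omega3 = 0.1, -0.4, 2.0, 3.0
x, y = 2.3, 2.0

f0 = beta0 + omega0 * x
h1 = np.sin(f0)
f1 = beta1 + omega1 * h1
h2 = np.exp(f1)
f2 = beta2 + omega2 * h2
h3 = np.cos(f2)
f3 = beta3 + omega3 * h3
l_i = (f3 - y) ** 2

print('f0=%3.3f, h1=%3.3f, f1=%3.3f, h2=%3.3f' % (f0, h1, f1, h2))
print('f2=%3.3f, h3=%3.3f, f3=%3.3f, l_i=%3.3f' % (f2, h3, f3, l_i))  # l_i ~ 0.139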
{
"cell_type": "code",
"source": [
"# Let's check we got that right:\n",
"print(\"f0: true value = %3.3f, your value = %3.3f\"%(1.230, f0))\n",
"print(\"h1: true value = %3.3f, your value = %3.3f\"%(0.942, h1))\n",
"print(\"f1: true value = %3.3f, your value = %3.3f\"%(1.623, f1))\n",
"print(\"h2: true value = %3.3f, your value = %3.3f\"%(5.068, h2))\n",
"print(\"f2: true value = %3.3f, your value = %3.3f\"%(7.137, f2))\n",
"print(\"h3: true value = %3.3f, your value = %3.3f\"%(0.657, h3))\n",
"print(\"f3: true value = %3.3f, your value = %3.3f\"%(2.372, f3))\n",
"print(\"like original = %3.3f, like from forward pass = %3.3f\"%(l_i_func, l_i))\n"
],
"execution_count": null,
"metadata": {
"id": "ibxXw7TUW4Sx",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ibxXw7TUW4Sx",
"outputId": "4575e3eb-2b16-4e0b-c84e-9c22b443c3ce"
},
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"output_type": "stream",
"text": [
"f0: true value = 1.230, your value = 0.000\n",
"h1: true value = 0.942, your value = 0.000\n",
@@ -300,17 +282,32 @@
"like original = 0.139, like from forward pass = 0.000\n"
]
}
],
"source": [
"# Let's check we got that right:\n",
"print(\"f0: true value = %3.3f, your value = %3.3f\"%(1.230, f0))\n",
"print(\"h1: true value = %3.3f, your value = %3.3f\"%(0.942, h1))\n",
"print(\"f1: true value = %3.3f, your value = %3.3f\"%(1.623, f1))\n",
"print(\"h2: true value = %3.3f, your value = %3.3f\"%(5.068, h2))\n",
"print(\"f2: true value = %3.3f, your value = %3.3f\"%(7.137, f2))\n",
"print(\"h3: true value = %3.3f, your value = %3.3f\"%(0.657, h3))\n",
"print(\"f3: true value = %3.3f, your value = %3.3f\"%(2.372, f3))\n",
"print(\"like original = %3.3f, like from forward pass = %3.3f\"%(l_i_func, l_i))\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "jay8NYWdFHuZ"
},
"source": [
"**Step 2:** Compute the derivatives of $\\ell_i$ with respect to the intermediate quantities that we just calculated, but in reverse order:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"\\quad \\frac{\\partial \\ell_i}{\\partial f_3}, \\quad \\frac{\\partial \\ell_i}{\\partial h_3}, \\quad \\frac{\\partial \\ell_i}{\\partial f_2}, \\quad\n",
"\\frac{\\partial \\ell_i}{\\partial h_2}, \\quad \\frac{\\partial \\ell_i}{\\partial f_1}, \\quad \\frac{\\partial \\ell_i}{\\partial h_1}, \\quad\\mbox{and} \\quad \\frac{\\partial \\ell_i}{\\partial f_0}.\n",
"\\end{eqnarray}\n",
"\\frac{\\partial \\ell_i}{\\partial h_2}, \\quad \\frac{\\partial \\ell_i}{\\partial f_1}, \\quad \\frac{\\partial \\ell_i}{\\partial h_1}, \\quad\\text{and} \\quad \\frac{\\partial \\ell_i}{\\partial f_0}.\n",
"\\end{align}\n",
"\n",
"The first of these derivatives is straightforward:\n",
"\n",
@@ -328,7 +325,7 @@
"\n",
"We can continue in this way, computing the derivatives of the output with respect to these intermediate quantities:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"\\frac{\\partial \\ell_i}{\\partial f_{2}} &=& \\frac{\\partial h_{3}}{\\partial f_{2}}\\left(\n",
"\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right)\n",
"\\nonumber \\\\\n",
@@ -336,16 +333,18 @@
"\\frac{\\partial \\ell_i}{\\partial f_{1}} &=& \\frac{\\partial h_{2}}{\\partial f_{1}}\\left( \\frac{\\partial f_{2}}{\\partial h_{2}}\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right)\\nonumber \\\\\n",
"\\frac{\\partial \\ell_i}{\\partial h_{1}} &=& \\frac{\\partial f_{1}}{\\partial h_{1}}\\left(\\frac{\\partial h_{2}}{\\partial f_{1}} \\frac{\\partial f_{2}}{\\partial h_{2}}\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right)\\nonumber \\\\\n",
"\\frac{\\partial \\ell_i}{\\partial f_{0}} &=& \\frac{\\partial h_{1}}{\\partial f_{0}}\\left(\\frac{\\partial f_{1}}{\\partial h_{1}}\\frac{\\partial h_{2}}{\\partial f_{1}} \\frac{\\partial f_{2}}{\\partial h_{2}}\\frac{\\partial h_{3}}{\\partial f_{2}}\\frac{\\partial f_{3}}{\\partial h_{3}}\\frac{\\partial \\ell_i}{\\partial f_{3}} \\right).\n",
"\\end{eqnarray}\n",
"\\end{align}\n",
"\n",
"In each case, we have already computed all of the terms except the last one in the previous step, and the last term is simple to evaluate. This is called the **backward pass**."
],
"metadata": {
"id": "jay8NYWdFHuZ"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gCQJeI--Egdl"
},
"outputs": [],
"source": [
"# TODO -- Compute the derivatives of the output with respect\n",
"# to the intermediate computations h_k and f_k (i.e, run the backward pass)\n",
@@ -358,37 +357,22 @@
"dldf1 = 1\n",
"dldh1 = 1\n",
"dldf0 = 1\n"
],
"metadata": {
"id": "gCQJeI--Egdl"
},
"execution_count": 8,
"outputs": []
]
},
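
One possible completion of the backward pass above (a sketch that continues from the forward-pass values f0..f3 and h1..h3; each line multiplies the running derivative by one local term, as in the chain-rule equations):

import numpy as np

dldf3 = 2 * (f3 - y)           # dl/df3 of (f3 - y)^2
dldh3 = omega3 * dldf3         # df3/dh3 = omega3
dldf2 = -np.sin(f2) * dldh3    # dh3/df2 = -sin(f2)
dldh2 = omega2 * dldf2         # df2/dh2 = omega2
dldf1 = np.exp(f1) * dldh2     # dh2/df1 = exp(f1)
dldh1 = omega1 * dldf1         # df1/dh1 = omega1
dldf0 = np.cos(f0) * dldh1     # dh1/df0 = cos(f0)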
{
"cell_type": "code",
"source": [
"# Let's check we got that right\n",
"print(\"dldf3: true value = %3.3f, your value = %3.3f\"%(0.745, dldf3))\n",
"print(\"dldh3: true value = %3.3f, your value = %3.3f\"%(2.234, dldh3))\n",
"print(\"dldf2: true value = %3.3f, your value = %3.3f\"%(-1.683, dldf2))\n",
"print(\"dldh2: true value = %3.3f, your value = %3.3f\"%(-3.366, dldh2))\n",
"print(\"dldf1: true value = %3.3f, your value = %3.3f\"%(-17.060, dldf1))\n",
"print(\"dldh1: true value = %3.3f, your value = %3.3f\"%(6.824, dldh1))\n",
"print(\"dldf0: true value = %3.3f, your value = %3.3f\"%(2.281, dldf0))"
],
"execution_count": null,
"metadata": {
"id": "dS1OrLtlaFr7",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "dS1OrLtlaFr7",
"outputId": "414f0862-ae36-4a0e-b68f-4758835b0e23"
},
"execution_count": 9,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"output_type": "stream",
"text": [
"dldf3: true value = 0.745, your value = -4.000\n",
"dldh3: true value = 2.234, your value = -12.000\n",
@@ -399,33 +383,25 @@
"dldf0: true value = 2.281, your value = 1.000\n"
]
}
],
"source": [
"# Let's check we got that right\n",
"print(\"dldf3: true value = %3.3f, your value = %3.3f\"%(0.745, dldf3))\n",
"print(\"dldh3: true value = %3.3f, your value = %3.3f\"%(2.234, dldh3))\n",
"print(\"dldf2: true value = %3.3f, your value = %3.3f\"%(-1.683, dldf2))\n",
"print(\"dldh2: true value = %3.3f, your value = %3.3f\"%(-3.366, dldh2))\n",
"print(\"dldf1: true value = %3.3f, your value = %3.3f\"%(-17.060, dldf1))\n",
"print(\"dldh1: true value = %3.3f, your value = %3.3f\"%(6.824, dldh1))\n",
"print(\"dldf0: true value = %3.3f, your value = %3.3f\"%(2.281, dldf0))"
]
},
{
"cell_type": "markdown",
"source": [
"**Step 3:** Finally, we consider how the loss~$\\ell_{i}$ changes when we change the parameters $\\beta_{\\bullet}$ and $\\omega_{\\bullet}$. Once more, we apply the chain rule:\n",
"\n",
"\n",
"\n",
"\n",
"\\begin{eqnarray}\n",
"\\frac{\\partial \\ell_i}{\\partial \\beta_{k}} &=& \\frac{\\partial f_{k}}{\\partial \\beta_{k}}\\frac{\\partial \\ell_i}{\\partial f_{k}}\\nonumber \\\\\n",
"\\frac{\\partial \\ell_i}{\\partial \\omega_{k}} &=& \\frac{\\partial f_{k}}{\\partial \\omega_{k}}\\frac{\\partial \\ell_i}{\\partial f_{k}}.\n",
"\\end{eqnarray}\n",
"\n",
"\\noindent In each case, the second term on the right-hand side was computed in step 2. When $k>0$, we have~$f_{k}=\\beta_{k}+\\omega_k \\cdot h_{k}$, so:\n",
"\n",
"\\begin{eqnarray}\n",
"\\frac{\\partial f_{k}}{\\partial \\beta_{k}} = 1 \\quad\\quad\\mbox{and}\\quad \\quad \\frac{\\partial f_{k}}{\\partial \\omega_{k}} &=& h_{k}.\n",
"\\end{eqnarray}"
],
"metadata": {
"id": "FlzlThQPGpkU"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1I2BhqZhGMK6"
},
"outputs": [],
"source": [
"# TODO -- Calculate the final derivatives with respect to the beta and omega terms\n",
"\n",
@@ -437,38 +413,22 @@
"dldomega1 = 1\n",
"dldbeta0 = 1\n",
"dldomega0 = 1\n"
],
"metadata": {
"id": "1I2BhqZhGMK6"
},
"execution_count": 10,
"outputs": []
]
},
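
And a matching sketch for step 3, using df_k/dbeta_k = 1 and df_k/domega_k = h_k (with the network input x playing the role of h_0); it continues from the two sketches above:

dldbeta3 = dldf3;   dldomega3 = h3 * dldf3
dldbeta2 = dldf2;   dldomega2 = h2 * dldf2
dldbeta1 = dldf1;   dldomega1 = h1 * dldf1
dldbeta0 = dldf0;   dldomega0 = x * dldf0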
{
"cell_type": "code",
"source": [
"# Let's check we got them right\n",
"print('dldbeta3: Your value = %3.3f, True value = %3.3f'%(dldbeta3, 0.745))\n",
"print('dldomega3: Your value = %3.3f, True value = %3.3f'%(dldomega3, 0.489))\n",
"print('dldbeta2: Your value = %3.3f, True value = %3.3f'%(dldbeta2, -1.683))\n",
"print('dldomega2: Your value = %3.3f, True value = %3.3f'%(dldomega2, -8.530))\n",
"print('dldbeta1: Your value = %3.3f, True value = %3.3f'%(dldbeta1, -17.060))\n",
"print('dldomega1: Your value = %3.3f, True value = %3.3f'%(dldomega1, -16.079))\n",
"print('dldbeta0: Your value = %3.3f, True value = %3.3f'%(dldbeta0, 2.281))\n",
"print('dldomega0: Your value = %3.3f, Function value = %3.3f, Finite difference value = %3.3f'%(dldomega0, dldomega0_func, dldomega0_fd))"
],
"execution_count": null,
"metadata": {
"id": "38eiOn2aHgHI",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "38eiOn2aHgHI",
"outputId": "1a67a636-e832-471e-e771-54824363158a"
},
"execution_count": 11,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"output_type": "stream",
"text": [
"dldbeta3: Your value = 1.000, True value = 0.745\n",
"dldomega3: Your value = 1.000, True value = 0.489\n",
@@ -480,16 +440,44 @@
"dldomega0: Your value = 1.000, Function value = 5.246, Finite difference value = 5.246\n"
]
}
],
"source": [
"# Let's check we got them right\n",
"print('dldbeta3: Your value = %3.3f, True value = %3.3f'%(dldbeta3, 0.745))\n",
"print('dldomega3: Your value = %3.3f, True value = %3.3f'%(dldomega3, 0.489))\n",
"print('dldbeta2: Your value = %3.3f, True value = %3.3f'%(dldbeta2, -1.683))\n",
"print('dldomega2: Your value = %3.3f, True value = %3.3f'%(dldomega2, -8.530))\n",
"print('dldbeta1: Your value = %3.3f, True value = %3.3f'%(dldbeta1, -17.060))\n",
"print('dldomega1: Your value = %3.3f, True value = %3.3f'%(dldomega1, -16.079))\n",
"print('dldbeta0: Your value = %3.3f, True value = %3.3f'%(dldbeta0, 2.281))\n",
"print('dldomega0: Your value = %3.3f, Function value = %3.3f, Finite difference value = %3.3f'%(dldomega0, dldomega0_func, dldomega0_fd))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Using this method, we can compute the derivatives quite easily without needing to compute very complicated expressions. In the next practical, we'll apply this same method to a deep neural network."
],
"metadata": {
"id": "N2ZhrR-2fNa1"
}
}
},
"source": [
"Using this method, we can compute the derivatives quite easily without needing to compute very complicated expressions. In the next practical, we'll apply this same method to a deep neural network."
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyN7JeDgslwtZcwRCOuGuPFt",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -4,7 +4,7 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOlKB4TrCJnt91TnHOrfRSJ",
"authorship_tag": "ABX9TyM2kkHLr00J4Jeypw41sTkQ",
"include_colab_link": true
},
"kernelspec": {
@@ -311,10 +311,16 @@
" network_output_2, *_ = compute_network_output(net_input, all_weights, all_biases)\n",
" dl_dbias[row] = (least_squares_loss(network_output_1, y) - least_squares_loss(network_output_2,y))/delta_fd\n",
" all_dl_dbiases_fd[layer] = np.array(dl_dbias)\n",
" print(\"-----------------------------------------------\")\n",
" print(\"Bias %d, derivatives from backprop:\"%(layer))\n",
" print(all_dl_dbiases[layer])\n",
" print(\"Bias %d, derivatives from finite differences\"%(layer))\n",
" print(all_dl_dbiases_fd[layer])\n",
" if np.allclose(all_dl_dbiases_fd[layer],all_dl_dbiases[layer],rtol=1e-05, atol=1e-08, equal_nan=False):\n",
" print(\"Success! Derivatives match.\")\n",
" else:\n",
" print(\"Failure! Derivatives different.\")\n",
"\n",
"\n",
"\n",
"# Test the derivatives of the weights matrices\n",
@@ -330,10 +336,15 @@
" network_output_2, *_ = compute_network_output(net_input, all_weights, all_biases)\n",
" dl_dweight[row][col] = (least_squares_loss(network_output_1, y) - least_squares_loss(network_output_2,y))/delta_fd\n",
" all_dl_dweights_fd[layer] = np.array(dl_dweight)\n",
" print(\"-----------------------------------------------\")\n",
" print(\"Weight %d, derivatives from backprop:\"%(layer))\n",
" print(all_dl_dweights[layer])\n",
" print(\"Weight %d, derivatives from finite differences\"%(layer))\n",
" print(all_dl_dweights_fd[layer])"
" print(all_dl_dweights_fd[layer])\n",
" if np.allclose(all_dl_dweights_fd[layer],all_dl_dweights[layer],rtol=1e-05, atol=1e-08, equal_nan=False):\n",
" print(\"Success! Derivatives match.\")\n",
" else:\n",
" print(\"Failure! Derivatives different.\")"
],
"metadata": {
"id": "PK-UtE3hreAK"

View File

@@ -5,7 +5,7 @@
"colab": {
"provenance": [],
"gpuType": "T4",
"authorship_tag": "ABX9TyNLj3HOpVB87nRu7oSLuBaU",
"authorship_tag": "ABX9TyOuKMUcKfOIhIL2qTX9jJCy",
"include_colab_link": true
},
"kernelspec": {
@@ -84,7 +84,7 @@
"cell_type": "code",
"source": [
"args = mnist1d.data.get_dataset_args()\n",
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
"data = mnist1d.data.get_dataset(args, path='./sample_data/mnist1d_data.pkl', download=False, regenerate=False)\n",
"\n",
"# The training and test input and outputs are in\n",
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",

View File

@@ -310,7 +310,7 @@
"grad_path_tiny_lr = None ;\n",
"\n",
"\n",
"# TODO: Run the gradient descent on the modified loss\n",
"# TODO: Run the gradient descent on the unmodified loss\n",
"# function with 100 steps and a very small learning rate alpha of 0.05\n",
"# Replace this line:\n",
"grad_path_typical_lr = None ;\n",

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMB8B4269DVmrcLoCWrhzKF",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap09/9_4_Bayesian_Approach.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "el8l05WQEO46"
},
"source": [
"# **Notebook 9.4: Bayesian approach**\n",
"\n",
@@ -36,10 +25,7 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "el8l05WQEO46"
}
]
},
{
"cell_type": "code",
@@ -58,20 +44,25 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3hpqmFyQNrbt"
},
"outputs": [],
"source": [
"# The true function that we are trying to estimate, defined on [0,1]\n",
"def true_function(x):\n",
" y = np.exp(np.sin(x*(2*3.1413)))\n",
" return y"
],
"metadata": {
"id": "3hpqmFyQNrbt"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "skZMM5TbNwq4"
},
"outputs": [],
"source": [
"# Generate some data points with or without noise\n",
"def generate_data(n_data, sigma_y=0.3):\n",
@@ -86,15 +77,15 @@
" y[i] = true_function(x[i])\n",
" y[i] += np.random.normal(0, sigma_y, 1)\n",
" return x,y"
],
"metadata": {
"id": "skZMM5TbNwq4"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ziwD_R7lN0DY"
},
"outputs": [],
"source": [
"# Draw the fitted function, together win uncertainty used to generate points\n",
"def plot_function(x_func, y_func, x_data=None,y_data=None, x_model = None, y_model =None, sigma_func = None, sigma_model=None):\n",
@@ -117,15 +108,15 @@
" ax.set_xlabel('Input, $x$')\n",
" ax.set_ylabel('Output, $y$')\n",
" plt.show()"
],
"metadata": {
"id": "ziwD_R7lN0DY"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2CgKanwaN3NM"
},
"outputs": [],
"source": [
"# Generate true function\n",
"x_func = np.linspace(0, 1.0, 100)\n",
@@ -139,15 +130,15 @@
"\n",
"# Plot the function, data and uncertainty\n",
"plot_function(x_func, y_func, x_data, y_data, sigma_func=sigma_func)"
],
"metadata": {
"id": "2CgKanwaN3NM"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gorZ6i97N7AR"
},
"outputs": [],
"source": [
"# Define model -- beta is a scalar and omega has size n_hidden,1\n",
"def network(x, beta, omega):\n",
@@ -165,15 +156,14 @@
" y = y + beta\n",
"\n",
" return y"
],
"metadata": {
"id": "gorZ6i97N7AR"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "i8T_QduzeBmM"
},
"source": [
"Now let's compute a probability distribution over the model parameters using Bayes's rule:\n",
"\n",
@@ -184,69 +174,73 @@
"We'll define the prior $Pr(\\boldsymbol\\phi)$ as:\n",
"\n",
"\\begin{equation}\n",
"Pr(\\boldsymbol\\phi) = \\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[\\mathbf{0},\\sigma^2_p\\mathbf{I}\\bigr]\n",
"Pr(\\boldsymbol\\phi) = \\text{Norm}_{\\boldsymbol\\phi}\\bigl[\\mathbf{0},\\sigma^2_p\\mathbf{I}\\bigr]\n",
"\\end{equation}\n",
"\n",
"where $\\phi=[\\omega_1,\\omega_2\\ldots \\omega_n, \\beta]^T$ and $\\sigma^2_{p}$ is the prior variance.\n",
"\n",
"The likelihood term $\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi)$ is given by:\n",
"\n",
"\\begin{eqnarray}\n",
"\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) &=& \\prod_{i=1}^{I} \\mbox{Norm}_{y_i}\\bigl[\\mbox{f}[\\mathbf{x}_{i},\\boldsymbol\\phi],\\sigma_d^2\\bigr]\\\\\n",
"&=& \\prod_{i=1}^{I} \\mbox{Norm}_{y_i}\\bigl[\\boldsymbol\\omega\\mathbf{h}_i+\\beta,\\sigma_d^2\\bigr]\\\\\n",
"&=& \\mbox{Norm}_{\\mathbf{y}}\\bigl[\\mathbf{H}^T\\boldsymbol\\phi,\\sigma^2\\mathbf{I}\\bigr].\n",
"\\end{eqnarray}\n",
"\\begin{align}\n",
"\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) &=& \\prod_{i=1}^{I} \\text{Norm}_{y_i}\\bigl[\\text{f}[\\mathbf{x}_{i},\\boldsymbol\\phi],\\sigma_d^2\\bigr]\\\\\n",
"&=& \\prod_{i=1}^{I} \\text{Norm}_{y_i}\\bigl[\\boldsymbol\\omega\\mathbf{h}_i+\\beta,\\sigma_d^2\\bigr]\\\\\n",
"&=& \\text{Norm}_{\\mathbf{y}}\\bigl[\\mathbf{H}^T\\boldsymbol\\phi,\\sigma^2\\mathbf{I}\\bigr].\n",
"\\end{align}\n",
"\n",
"where $\\sigma^2$ is the measurement noise and $\\mathbf{h}_{i}$ is the column vector of hidden variables for the $i^{th}$ input. Here the vector $\\mathbf{y}$ and matrix $\\mathbf{H}$ are defined as:\n",
"\n",
"\\begin{equation}\n",
"\\mathbf{y} = \\begin{bmatrix}y_1\\\\y_2\\\\\\vdots\\\\y_{I}\\end{bmatrix}\\quad\\mbox{and}\\quad \\mathbf{H} = \\begin{bmatrix}\\mathbf{h}_{1}&\\mathbf{h}_{2}&\\cdots&\\mathbf{h}_{I}\\\\1&1&\\cdots &1\\end{bmatrix}.\n",
"\\mathbf{y} = \\begin{bmatrix}y_1\\\\y_2\\\\\\vdots\\\\y_{I}\\end{bmatrix}\\quad\\text{and}\\quad \\mathbf{H} = \\begin{bmatrix}\\mathbf{h}_{1}&\\mathbf{h}_{2}&\\cdots&\\mathbf{h}_{I}\\\\1&1&\\cdots &1\\end{bmatrix}.\n",
"\\end{equation}\n"
],
"metadata": {
"id": "i8T_QduzeBmM"
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "JojV6ueRk49G"
},
"source": [
"To make progress we use the change of variable relation (Appendix C.3.4 of the book) to rewrite the likelihood term as a normal distribution in the parameters $\\boldsymbol\\phi$:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"\\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi+\\beta)\n",
"&=& \\mbox{Norm}_{\\mathbf{y}}\\bigl[\\mathbf{H}^T\\boldsymbol\\phi,\\sigma^2\\bigr]\\\\\n",
"&\\propto& \\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[(\\mathbf{H}\\mathbf{H}^T)^{-1}\\mathbf{H}\\mathbf{y},\\sigma^2(\\mathbf{H}\\mathbf{H}^t)^{-1}\\bigr]\n",
"\\end{eqnarray}\n"
],
"metadata": {
"id": "JojV6ueRk49G"
}
"&=& \\text{Norm}_{\\mathbf{y}}\\bigl[\\mathbf{H}^T\\boldsymbol\\phi,\\sigma^2\\bigr]\\\\\n",
"&\\propto& \\text{Norm}_{\\boldsymbol\\phi}\\bigl[(\\mathbf{H}\\mathbf{H}^T)^{-1}\\mathbf{H}\\mathbf{y},\\sigma^2(\\mathbf{H}\\mathbf{H}^t)^{-1}\\bigr]\n",
"\\end{align}\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "YX0O_Ciwp4W1"
},
"source": [
"Finally, we can combine the likelihood and prior terms using the product of two normal distributions relation (Appendix C.3.3).\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
" Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) &\\propto& \\prod_{i=1}^{I} Pr(\\mathbf{y}_{i}|\\mathbf{x}_{i},\\boldsymbol\\phi) Pr(\\boldsymbol\\phi)\\\\\n",
" &\\propto&\\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[(\\mathbf{H}\\mathbf{H}^T)^{-1}\\mathbf{H}\\mathbf{y},\\sigma^2(\\mathbf{H}\\mathbf{H}^T)^{-1}\\bigr] \\mbox{Norm}_{\\boldsymbol\\phi}\\bigl[\\mathbf{0},\\sigma^2_p\\mathbf{I}\\bigr]\\\\\n",
" &\\propto&\\mbox{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
"\\end{eqnarray}\n",
" &\\propto&\\text{Norm}_{\\boldsymbol\\phi}\\bigl[(\\mathbf{H}\\mathbf{H}^T)^{-1}\\mathbf{H}\\mathbf{y},\\sigma^2(\\mathbf{H}\\mathbf{H}^T)^{-1}\\bigr] \\text{Norm}_{\\boldsymbol\\phi}\\bigl[\\mathbf{0},\\sigma^2_p\\mathbf{I}\\bigr]\\\\\n",
" &\\propto&\\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
"\\end{align}\n",
"\n",
"In fact, since this already a normal distribution, the constant of proportionality must be one and we can write\n",
"\n",
"\\begin{eqnarray}\n",
" Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) &=& \\mbox{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
"\\end{eqnarray}\n",
"\\begin{align}\n",
" Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) &=& \\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
"\\end{align}\n",
"\n",
"TODO -- On a piece of paper, use the relations in Appendix C.3.3 and C.3.4 to fill in the missing steps and establish that this is the correct formula for the posterior."
],
"metadata": {
"id": "YX0O_Ciwp4W1"
}
]
},
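{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The formula above translates almost line-for-line into NumPy. The cell below is only a sketch for reference, not necessarily the intended solution of the exercise: it assumes `H` is the matrix defined above (one column of hidden values per data point, plus a final row of ones) and `y` is an $I\times 1$ column vector."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Sketch of the posterior over phi (assumes H has shape (n_hidden+1, I))\n",
"def posterior_sketch(H, y, sigma_sq, sigma_p_sq):\n",
"    # Shared term (1/sigma^2) H H^T + (1/sigma_p^2) I\n",
"    A = H @ H.T / sigma_sq + np.eye(H.shape[0]) / sigma_p_sq\n",
"    phi_covar = np.linalg.inv(A)\n",
"    # Posterior mean is (1/sigma^2) A^{-1} H y\n",
"    phi_mean = phi_covar @ (H @ y) / sigma_sq\n",
"    return phi_mean, phi_covar"
]
},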
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "nF1AcgNDwm4t"
},
"outputs": [],
"source": [
"def compute_H(x_data, n_hidden):\n",
" psi1 = np.ones((n_hidden+1,1));\n",
@@ -280,24 +274,25 @@
"\n",
"\n",
" return phi_mean, phi_covar"
],
"metadata": {
"id": "nF1AcgNDwm4t"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now we can draw samples from this distribution"
],
"metadata": {
"id": "GjPnlG4q0UFK"
}
},
"source": [
"Now we can draw samples from this distribution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K4vYc82D0BMq"
},
"outputs": [],
"source": [
"# Define parameters\n",
"n_hidden = 5\n",
@@ -313,15 +308,15 @@
"x_model = x_func\n",
"y_model_mean = network(x_model, phi_mean[-1], phi_mean[0:n_hidden])\n",
"plot_function(x_func, y_func, x_data, y_data, x_model, y_model_mean)"
],
"metadata": {
"id": "K4vYc82D0BMq"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TVIjhubkSw-R"
},
"outputs": [],
"source": [
"# TODO Draw two samples from the normal distribution over the parameters\n",
"# Replace these lines\n",
@@ -336,37 +331,42 @@
"# Draw the two models\n",
"plot_function(x_func, y_func, x_data, y_data, x_model, y_model_sample1)\n",
"plot_function(x_func, y_func, x_data, y_data, x_model, y_model_sample2)"
],
"metadata": {
"id": "TVIjhubkSw-R"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "GiNg5EroUiUb"
},
"source": [
"Now we need to perform inference for a new data points $\\mathbf{x}^*$ with corresponding hidden values $\\mathbf{h}^*$. Instead of having a single estimate of the parameters, we have a distribution over the possible parameters. So we marginalize (integrate) over this distribution to account for all possible values:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"Pr(y^*|\\mathbf{x}^*) &=& \\int Pr(y^{*}|\\mathbf{x}^*,\\boldsymbol\\phi)Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) d\\boldsymbol\\phi\\\\\n",
"&=& \\int \\mbox{Norm}_{y^*}\\bigl[\\begin{bmatrix}\\mathbf{h}^{*T}&1\\end{bmatrix}\\boldsymbol\\phi,\\sigma^2]\\cdot\\mbox{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr]d\\boldsymbol\\phi\\\\\n",
"&=& \\mbox{Norm}_{y^*}\\biggl[\\frac{1}{\\sigma^2} \\begin{bmatrix}\\mathbf{h}^{*T}&1\\end{bmatrix}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y}, \\begin{bmatrix}\\mathbf{h}^{*T}&1\\end{bmatrix}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\n",
"\\begin{bmatrix}\\mathbf{h}^*\\\\1\\end{bmatrix}\\biggr]\n",
"\\end{eqnarray}\n",
"&=& \\int \\text{Norm}_{y^*}\\bigl[[\\mathbf{h}^{*T},1]\\boldsymbol\\phi,\\sigma^2\\bigr]\\cdot\\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr]d\\boldsymbol\\phi\\\\\n",
"&=& \\text{Norm}_{y^*}\\biggl[\\frac{1}{\\sigma^2} [\\mathbf{h}^{*T},1]\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y}, [\\mathbf{h}^{*T},1]\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\n",
"[\\mathbf{h}^*;1]\\biggr]\n",
"\\end{align}\n",
"\n",
"\n",
"\n",
"\n",
"To compute this, we reformulated the integrand using the relations from appendices\n",
"C.3.3 and C.3.4 as the product of a normal distribution in $\\boldsymbol\\phi$ and a constant with respect\n",
"to $\\boldsymbol\\phi$. The integral of the normal distribution must be one, and so the final result is just the constant. This constant is itself a normal distribution in $y^*$. <br>\n",
"\n",
"If you feel so inclined you can work through the math of this yourself."
],
"metadata": {
"id": "GiNg5EroUiUb"
}
"If you feel so inclined you can work through the math of this yourself.\n",
"\n"
]
},
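{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The `inference` function below asks you to implement these equations yourself. As a rough guide only (a sketch, assuming `h_star` holds the column vector $[\mathbf{h}^{*T},1]^T$ and `H`, `y`, `sigma_sq`, `sigma_p_sq` are as above), a direct transcription of the mean and variance looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the predictive mean and variance from the equations above\n",
"def predictive_moments_sketch(h_star, H, y, sigma_sq, sigma_p_sq):\n",
"    A_inv = np.linalg.inv(H @ H.T / sigma_sq + np.eye(H.shape[0]) / sigma_p_sq)\n",
"    y_star_mean = (h_star.T @ A_inv @ H @ y / sigma_sq).item()\n",
"    y_star_var = (h_star.T @ A_inv @ h_star).item()\n",
"    return y_star_mean, y_star_var"
]
},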
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ILxT4EfW2lUm"
},
"outputs": [],
"source": [
"# Predict mean and variance of y_star from x_star\n",
"def inference(x_star, x_data, y_data, sigma_sq, sigma_p_sq, n_hidden):\n",
@@ -381,15 +381,15 @@
" y_star_var = 1\n",
"\n",
" return y_star_mean, y_star_var"
],
"metadata": {
"id": "ILxT4EfW2lUm"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "87cjUjMaixHZ"
},
"outputs": [],
"source": [
"x_model = x_func\n",
"y_model = np.zeros_like(x_model)\n",
@@ -401,24 +401,36 @@
"\n",
"# Draw the model\n",
"plot_function(x_func, y_func, x_data, y_data, x_model, y_model, sigma_model=y_model_std)\n"
],
"metadata": {
"id": "87cjUjMaixHZ"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "8Hcbe_16sK0F"
},
"source": [
"TODO:\n",
"\n",
"1. Experiment running this again with different numbers of hidden units. Make a prediction for what will happen when you increase / decrease them.\n",
"2. Experiment with what happens if you make the prior variance $\\sigma^2_p$ to a smaller value like 1. How do you explain the results?"
]
}
],
"metadata": {
"id": "8Hcbe_16sK0F"
"colab": {
"authorship_tag": "ABX9TyMB8B4269DVmrcLoCWrhzKF",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
}
]
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -4,7 +4,7 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOdSkjfQnSZXnffGsZVM7r5",
"authorship_tag": "ABX9TyO/wJ4N9w01f04mmrs/ZSHY",
"include_colab_link": true
},
"kernelspec": {
@@ -185,10 +185,10 @@
"np.set_printoptions(precision=3)\n",
"output = graph_attention(X, omega, beta, phi, A);\n",
"print(\"Correct answer is:\")\n",
"print(\"[[1.796 1.346 0.569 1.703 1.298 1.224 1.24 1.234]\")\n",
"print(\" [0.768 0.672 0. 0.529 3.841 4.749 5.376 4.761]\")\n",
"print(\" [0.305 0.129 0. 0.341 0.785 1.014 1.113 1.024]\")\n",
"print(\" [0. 0. 0. 0. 0.35 0.864 1.098 0.871]]]\")\n",
"print(\"[[0. 0.028 0.37 0. 0.97 0. 0. 0.698]\")\n",
"print(\" [0. 0. 0. 0. 1.184 0. 2.654 0. ]\")\n",
"print(\" [1.13 0.564 0. 1.298 0.268 0. 0. 0.779]\")\n",
"print(\" [0.825 0. 0. 1.175 0. 0. 0. 0. ]]]\")\n",
"\n",
"\n",
"print(\"Your answer is:\")\n",

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNeCWINUqqUGKMcxsqPFTAh",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap16/16_3_Contraction_Mappings.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 16.3: Contraction mappings**\n",
"\n",
@@ -36,38 +25,40 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4Pfz2KSghdVI"
},
"outputs": [],
"source": [
"# Define a function that is a contraction mapping\n",
"def f(z):\n",
" return 0.3 + 0.5 *z + 0.02 * np.sin(z*15)"
],
"metadata": {
"id": "4Pfz2KSghdVI"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zEwCbIx0hpAI"
},
"outputs": [],
"source": [
"def draw_function(f, fixed_point=None):\n",
" z = np.arange(0,1,0.01)\n",
@@ -84,35 +75,36 @@
" ax.set_xlabel('Input, $z$')\n",
" ax.set_ylabel('Output, f$[z]$')\n",
" plt.show()"
],
"metadata": {
"id": "zEwCbIx0hpAI"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"draw_function(f)"
],
"execution_count": null,
"metadata": {
"id": "k4e5Yu0fl8bz"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"draw_function(f)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's find where $\\mbox{f}[z]=z$ using fixed point iteration"
],
"metadata": {
"id": "DfgKrpCAjnol"
}
},
"source": [
"Now let's find where $\\text{f}[z]=z$ using fixed point iteration"
]
},
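{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The idea is simply to feed the output back into the input: if $\text{f}[\bullet]$ is a contraction mapping, the sequence $z, \text{f}[z], \text{f}\bigl[\text{f}[z]\bigr],\ldots$ converges to the fixed point. The cell below is only a sketch with a hypothetical helper name; the actual exercise is the `fixed_point_iteration` function that follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only -- repeatedly apply f; converges when f is a contraction\n",
"def fixed_point_sketch(f, z0, n_iter=20):\n",
"    z = z0\n",
"    for _ in range(n_iter):\n",
"        z = f(z)\n",
"    return z"
]
},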
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bAOBvZT-j3lv"
},
"outputs": [],
"source": [
"# Takes a function f and a starting point z\n",
"def fixed_point_iteration(f, z0):\n",
@@ -125,115 +117,117 @@
"\n",
"\n",
" return z_out"
],
"metadata": {
"id": "bAOBvZT-j3lv"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's test that and plot the solution"
],
"metadata": {
"id": "CAS0lgIomAa0"
}
},
"source": [
"Now let's test that and plot the solution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EYQZJdNPk8Lg"
},
"outputs": [],
"source": [
"# Now let's test that\n",
"z = fixed_point_iteration(f, 0.2)\n",
"draw_function(f, z)"
],
"metadata": {
"id": "EYQZJdNPk8Lg"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4DipPiqVlnwJ"
},
"outputs": [],
"source": [
"# Let's define another function\n",
"def f2(z):\n",
" return 0.7 + -0.6 *z + 0.03 * np.sin(z*15)\n",
"draw_function(f2)"
],
"metadata": {
"id": "4DipPiqVlnwJ"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tYOdbWcomdEE"
},
"outputs": [],
"source": [
"# Now let's test that\n",
"# TODO Before running this code, predict what you think will happen\n",
"z = fixed_point_iteration(f2, 0.9)\n",
"draw_function(f2, z)"
],
"metadata": {
"id": "tYOdbWcomdEE"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Mni37RUpmrIu"
},
"outputs": [],
"source": [
"# Let's define another function\n",
"# Define a function that is a contraction mapping\n",
"def f3(z):\n",
" return -0.2 + 1.5 *z + 0.1 * np.sin(z*15)\n",
"draw_function(f3)"
],
"metadata": {
"id": "Mni37RUpmrIu"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "agt5mfJrnM1O"
},
"outputs": [],
"source": [
"# Now let's test that\n",
"# TODO Before running this code, predict what you think will happen\n",
"z = fixed_point_iteration(f3, 0.7)\n",
"draw_function(f3, z)"
],
"metadata": {
"id": "agt5mfJrnM1O"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Finally, let's invert a problem of the form $y = z+ f[z]$ for a given value of $y$. What is the $z$ that maps to it?"
],
"metadata": {
"id": "n6GI46-ZoQz6"
}
},
"source": [
"Finally, let's invert a problem of the form $y = z+ f[z]$ for a given value of $y$. What is the $z$ that maps to it?"
]
},
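{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to see this (sketched below with a hypothetical helper; the exercise is the `fixed_point_iteration_z_plus_f` function further down): rearranging $y = z + \text{f}[z]$ gives $z = y - \text{f}[z]$, which is itself a fixed-point equation and can be iterated in exactly the same way, assuming the iteration converges."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: iterate z <- y - f(z), the fixed-point form of y = z + f(z)\n",
"def invert_sketch(f, y, z0, n_iter=20):\n",
"    z = z0\n",
"    for _ in range(n_iter):\n",
"        z = y - f(z)\n",
"    return z"
]
},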
{
"cell_type": "code",
"source": [
"def f4(z):\n",
" return -0.3 + 0.5 *z + 0.02 * np.sin(z*15)"
],
"execution_count": null,
"metadata": {
"id": "dy6r3jr9rjPf"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"def f4(z):\n",
" return -0.3 + 0.5 *z + 0.02 * np.sin(z*15)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GMX64Iz0nl-B"
},
"outputs": [],
"source": [
"def fixed_point_iteration_z_plus_f(f, y, z0):\n",
" # TODO -- write this function\n",
@@ -241,15 +235,15 @@
" z_out = 1\n",
"\n",
" return z_out"
],
"metadata": {
"id": "GMX64Iz0nl-B"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uXxKHad5qT8Y"
},
"outputs": [],
"source": [
"def draw_function2(f, y, fixed_point=None):\n",
" z = np.arange(0,1,0.01)\n",
@@ -267,15 +261,15 @@
" ax.set_xlabel('Input, $z$')\n",
" ax.set_ylabel('Output, z+f$[z]$')\n",
" plt.show()"
],
"metadata": {
"id": "uXxKHad5qT8Y"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mNEBXC3Aqd_1"
},
"outputs": [],
"source": [
"# Test this out and draw\n",
"y = 0.8\n",
@@ -283,12 +277,23 @@
"draw_function2(f4,y,z)\n",
"# If you have done this correctly, the red dot should be\n",
"# where the cyan curve has a y value of 0.8"
]
}
],
"metadata": {
"id": "mNEBXC3Aqd_1"
"colab": {
"authorship_tag": "ABX9TyNeCWINUqqUGKMcxsqPFTAh",
"include_colab_link": true,
"provenance": []
},
"execution_count": null,
"outputs": []
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
]
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOSEQVqxE5KrXmsZVh9M3gq",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_1_Latent_Variable_Models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 17.1: Latent variable models**\n",
"\n",
@@ -36,72 +25,76 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import scipy\n",
"from matplotlib.colors import ListedColormap\n",
"from matplotlib import cm"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "IyVn-Gi-p7wf"
},
"source": [
"We'll assume that our base distribution over the latent variables is a 1D standard normal so that\n",
"\n",
"\\begin{equation}\n",
"Pr(z) = \\mbox{Norm}_{z}[0,1]\n",
"Pr(z) = \\text{Norm}_{z}[0,1]\n",
"\\end{equation}\n",
"\n",
"As in figure 17.2, we'll assume that the output is two dimensional, we we need to define a function that maps from the 1D latent variable to two dimensions. Usually, we would use a neural network, but in this case, we'll just define an arbitrary relationship.\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
"x_{1} &=& 0.5\\cdot\\exp\\Bigl[\\sin\\bigl[2+ 3.675 z \\bigr]\\Bigr]\\\\\n",
"x_{2} &=& \\sin\\bigl[2+ 2.85 z \\bigr]\n",
"\\end{eqnarray}"
],
"metadata": {
"id": "IyVn-Gi-p7wf"
}
"\\end{align}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZIfQwhd-AV6L"
},
"outputs": [],
"source": [
"# The function that maps z to x1 and x2\n",
"def f(z):\n",
" x_1 = np.exp(np.sin(2+z*3.675)) * 0.5\n",
" x_2 = np.cos(2+z*2.85)\n",
" return x_1, x_2"
],
"metadata": {
"id": "ZIfQwhd-AV6L"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's plot the 3D relation between the two observed variables $x_{1}$ and $x_{2}$ and the latent variables $z$ as in figure 17.2 of the book. We'll use the opacity to represent the prior probability $Pr(z)$."
],
"metadata": {
"id": "KB9FU34onW1j"
}
},
"source": [
"Let's plot the 3D relation between the two observed variables $x_{1}$ and $x_{2}$ and the latent variables $z$ as in figure 17.2 of the book. We'll use the opacity to represent the prior probability $Pr(z)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lW08xqAgnP4q"
},
"outputs": [],
"source": [
"def draw_3d_projection(z,pr_z, x1,x2):\n",
" alpha = pr_z / np.max(pr_z)\n",
@@ -118,28 +111,28 @@
" ax.set_zlim(-1,1)\n",
" ax.set_box_aspect((3,1,1))\n",
" plt.show()"
],
"metadata": {
"id": "lW08xqAgnP4q"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9DUTauMi6tPk"
},
"outputs": [],
"source": [
"# Compute the prior\n",
"def get_prior(z):\n",
" return scipy.stats.multivariate_normal.pdf(z)"
],
"metadata": {
"id": "9DUTauMi6tPk"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PAzHq461VqvF"
},
"outputs": [],
"source": [
"# Define the latent variable values\n",
"z = np.arange(-3.0,3.0,0.01)\n",
@@ -149,40 +142,41 @@
"x1,x2 = f(z)\n",
"# Plot the function\n",
"draw_3d_projection(z,pr_z, x1,x2)"
],
"metadata": {
"id": "PAzHq461VqvF"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"The likelihood is defined as:\n",
"\\begin{eqnarray}\n",
" Pr(x_1,x_2|z) &=& \\mbox{Norm}_{[x_1,x_2]}\\Bigl[\\mathbf{f}[z],\\sigma^{2}\\mathbf{I}\\Bigr]\n",
"\\end{eqnarray}\n",
"\n",
"so we will also need to define the noise level $\\sigma^2$"
],
"metadata": {
"id": "sQg2gKR5zMrF"
}
},
"source": [
"The likelihood is defined as:\n",
"\\begin{align}\n",
" Pr(x_1,x_2|z) &=& \\text{Norm}_{[x_1,x_2]}\\Bigl[\\mathbf{f}[z],\\sigma^{2}\\mathbf{I}\\Bigr]\n",
"\\end{align}\n",
"\n",
"so we will also need to define the noise level $\\sigma^2$"
]
},
{
"cell_type": "code",
"source": [
"sigma_sq = 0.04"
],
"execution_count": null,
"metadata": {
"id": "In_Vg4_0nva3"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"sigma_sq = 0.04"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6P6d-AgAqxXZ"
},
"outputs": [],
"source": [
"# Draws a heatmap to represent a probability distribution, possibly with samples overlaed\n",
"def plot_heatmap(x1_mesh,x2_mesh,y_mesh, x1_samples=None, x2_samples=None, title=None):\n",
@@ -207,15 +201,15 @@
" ax.set_xlabel('$x_1$'); ax.set_ylabel('$x_2$')\n",
" plt.show()\n",
"\n"
],
"metadata": {
"id": "6P6d-AgAqxXZ"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "diYKb7_ZgjlJ"
},
"outputs": [],
"source": [
"# Returns the likelihood\n",
"def get_likelihood(x1_mesh, x2_mesh, z_val):\n",
@@ -226,24 +220,25 @@
" mn = scipy.stats.multivariate_normal([x1, x2], [[sigma_sq, 0], [0, sigma_sq]])\n",
" pr_x1_x2_given_z_val = mn.pdf(np.dstack((x1_mesh, x2_mesh)))\n",
" return pr_x1_x2_given_z_val"
],
"metadata": {
"id": "diYKb7_ZgjlJ"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's plot the likelihood $Pr(x_1,x_2|z)$ as in fig 17.3b in the book."
],
"metadata": {
"id": "0X4NwixzqxtZ"
}
},
"source": [
"Now let's plot the likelihood $Pr(x_1,x_2|z)$ as in fig 17.3b in the book."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hWfqK-Oz5_DT"
},
"outputs": [],
"source": [
"# Choose some z value\n",
"z_val = 1.8\n",
@@ -256,30 +251,31 @@
"plot_heatmap(x1_mesh, x2_mesh, pr_x1_x2_given_z_val, title=\"Conditional distribution $Pr(x_1,x_2|z)$\")\n",
"\n",
"# TODO -- Experiment with different values of z and make sure that you understand the what is happening."
],
"metadata": {
"id": "hWfqK-Oz5_DT"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "25xqXnmFo-PH"
},
"source": [
"The data density is found by marginalizing over the latent variables $z$:\n",
"\n",
"\\begin{eqnarray}\n",
"\\begin{align}\n",
" Pr(x_1,x_2) &=& \\int Pr(x_1,x_2, z) dz \\nonumber \\\\\n",
" &=& \\int Pr(x_1,x_2 | z) \\cdot Pr(z)dz\\nonumber \\\\\n",
" &=& \\int \\mbox{Norm}_{[x_1,x_2]}\\Bigl[\\mathbf{f}[z],\\sigma^{2}\\mathbf{I}\\Bigr]\\cdot \\mbox{Norm}_{z}\\left[\\mathbf{0},\\mathbf{I}\\right]dz.\n",
"\\end{eqnarray}"
],
"metadata": {
"id": "25xqXnmFo-PH"
}
" &=& \\int \\text{Norm}_{[x_1,x_2]}\\Bigl[\\mathbf{f}[z],\\sigma^{2}\\mathbf{I}\\Bigr]\\cdot \\text{Norm}_{z}\\left[\\mathbf{0},\\mathbf{I}\\right]dz.\n",
"\\end{align}"
]
},
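{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This integral has no closed form, but since $z$ is one-dimensional we can approximate it with a Riemann sum over a fine grid of $z$ values. The cell below is only a sketch of that idea (it assumes `get_prior` and `get_likelihood` as defined above); the TODO cell that follows is where you should write your own version."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: approximate the 1D integral over z with a Riemann sum\n",
"def data_density_sketch(x1_mesh, x2_mesh, dz=0.1):\n",
"    pr_x1_x2 = np.zeros_like(x1_mesh)\n",
"    for z_val in np.arange(-3.0, 3.0, dz):\n",
"        # Accumulate Pr(x1,x2|z) * Pr(z) * dz over the grid\n",
"        pr_x1_x2 += get_likelihood(x1_mesh, x2_mesh, z_val) * get_prior(z_val) * dz\n",
"    return pr_x1_x2"
]
},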
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "H0Ijce9VzeCO"
},
"outputs": [],
"source": [
"# TODO Compute the data density\n",
"# We can't integrate this function in closed form\n",
@@ -293,24 +289,25 @@
"\n",
"# Plot the result\n",
"plot_heatmap(x1_mesh, x2_mesh, pr_x1_x2, title=\"Data density $Pr(x_1,x_2)$\")\n"
],
"metadata": {
"id": "H0Ijce9VzeCO"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's draw some samples from the model"
],
"metadata": {
"id": "W264N7By_h9y"
}
},
"source": [
"Now let's draw some samples from the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Li3mK_I48k0k"
},
"outputs": [],
"source": [
"def draw_samples(n_sample):\n",
" # TODO Write this routine to draw n_sample samples from the model\n",
@@ -320,37 +317,38 @@
" x1_samples=0; x2_samples = 0;\n",
"\n",
" return x1_samples, x2_samples"
],
"metadata": {
"id": "Li3mK_I48k0k"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's plot those samples on top of the heat map."
],
"metadata": {
"id": "D7N7oqLe-eJO"
}
},
"source": [
"Let's plot those samples on top of the heat map."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XRmWv99B-BWO"
},
"outputs": [],
"source": [
"x1_samples, x2_samples = draw_samples(500)\n",
"# Plot the result\n",
"plot_heatmap(x1_mesh, x2_mesh, pr_x1_x2, x1_samples, x2_samples, title=\"Data density $Pr(x_1,x_2)$\")\n"
],
"metadata": {
"id": "XRmWv99B-BWO"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PwOjzPD5_1OF"
},
"outputs": [],
"source": [
"# Return the posterior distribution\n",
"def get_posterior(x1,x2):\n",
@@ -364,15 +362,15 @@
"\n",
"\n",
" return z, pr_z_given_x1_x2"
],
"metadata": {
"id": "PwOjzPD5_1OF"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PKFUY42K-Tp7"
},
"outputs": [],
"source": [
"x1 = 0.9; x2 = -0.9\n",
"z, pr_z_given_x1_x2 = get_posterior(x1,x2)\n",
@@ -385,12 +383,23 @@
"ax.set_xlim([-3,3])\n",
"ax.set_ylim([0,1.5 * np.max(pr_z_given_x1_x2)])\n",
"plt.show()"
]
}
],
"metadata": {
"id": "PKFUY42K-Tp7"
"colab": {
"authorship_tag": "ABX9TyOSEQVqxE5KrXmsZVh9M3gq",
"include_colab_link": true,
"provenance": []
},
"execution_count": null,
"outputs": []
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
]
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOxO2/0DTH4n4zhC97qbagY",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_2_Reparameterization_Trick.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 17.2: Reparameterization trick**\n",
"\n",
@@ -36,30 +25,31 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "paLz5RukZP1J"
},
"source": [
"The reparameterization trick computes the derivative of an expectation of a function $\\mbox{f}[x]$:\n",
"The reparameterization trick computes the derivative of an expectation of a function $\\text{f}[x]$:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\mbox{f}[x]\\bigr],\n",
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\text{f}[x]\\bigr],\n",
"\\end{equation}\n",
"\n",
"with respect to the parameters $\\boldsymbol\\phi$ of the distribution $Pr(x|\\boldsymbol\\phi)$ that the expectation is over.\n",
@@ -67,21 +57,23 @@
"Let's consider a simple concrete example, where:\n",
"\n",
"\\begin{equation}\n",
"Pr(x|\\phi) = \\mbox{Norm}_{x}\\Bigl[\\mu, \\sigma^2\\Bigr]=\\mbox{Norm}_{x}\\Bigl[\\phi^3,(\\exp[\\phi])^2\\Bigr]\n",
"Pr(x|\\phi) = \\text{Norm}_{x}\\Bigl[\\mu, \\sigma^2\\Bigr]=\\text{Norm}_{x}\\Bigl[\\phi^3,(\\exp[\\phi])^2\\Bigr]\n",
"\\end{equation}\n",
"\n",
"and\n",
"\n",
"\\begin{equation}\n",
"\\mbox{f}[x] = x^2+\\sin[x]\n",
"\\text{f}[x] = x^2+\\sin[x]\n",
"\\end{equation}"
],
"metadata": {
"id": "paLz5RukZP1J"
}
]
},
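{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The `compute_expectation` function below asks you to approximate this expectation by sampling. A minimal sketch of the idea (draw $x\sim Pr(x|\phi)$, then average $\text{f}[x]$) might look like this -- treat it as a reference, not the required solution:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Sketch: Monte Carlo estimate of E[f(x)] under Pr(x|phi)\n",
"def expectation_sketch(phi, n_samples):\n",
"    # mu = phi^3, sigma = exp(phi)\n",
"    x = np.random.normal(loc=phi**3, scale=np.exp(phi), size=(n_samples, 1))\n",
"    return np.mean(x**2 + np.sin(x))"
]
},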
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FdEbMnDBY0i9"
},
"outputs": [],
"source": [
"# Let's approximate this expectation for a particular value of phi\n",
"def compute_expectation(phi, n_samples):\n",
@@ -96,15 +88,15 @@
"\n",
"\n",
" return expected_f_given_phi"
],
"metadata": {
"id": "FdEbMnDBY0i9"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FTh7LJ0llNJZ"
},
"outputs": [],
"source": [
"# Set the seed so the random numbers are all the same\n",
"np.random.seed(0)\n",
@@ -119,24 +111,25 @@
"n_samples = 10000000\n",
"expected_f_given_phi2 = compute_expectation(phi2, n_samples)\n",
"print(\"Your value: \", expected_f_given_phi2, \", True value: 0.8176793102849222\")"
],
"metadata": {
"id": "FTh7LJ0llNJZ"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Le't plot this expectation as a function of phi"
],
"metadata": {
"id": "r5Hl2QkimWx9"
}
},
"source": [
"Le't plot this expectation as a function of phi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "05XxVLJxmkER"
},
"outputs": [],
"source": [
"phi_vals = np.arange(-1.5,1.5, 0.05)\n",
"expected_vals = np.zeros_like(phi_vals)\n",
@@ -149,15 +142,14 @@
"ax.set_xlabel('Parameter $\\phi$')\n",
"ax.set_ylabel('$\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
"plt.show()"
],
"metadata": {
"id": "05XxVLJxmkER"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "zTCykVeWqj_O"
},
"source": [
"It's this curve that we want to find the derivative of (so for example, we could run gradient descent and find the minimum.\n",
"\n",
@@ -166,28 +158,30 @@
"The answer is the reparameterization trick. We note that:\n",
"\n",
"\\begin{equation}\n",
"\\mbox{Norm}_{x}\\Bigl[\\mu, \\sigma^2\\Bigr]=\\mbox{Norm}_{x}\\Bigl[0, 1\\Bigr] \\times \\sigma + \\mu\n",
"\\text{Norm}_{x}\\Bigl[\\mu, \\sigma^2\\Bigr]=\\text{Norm}_{x}\\Bigl[0, 1\\Bigr] \\times \\sigma + \\mu\n",
"\\end{equation}\n",
"\n",
"and so:\n",
"\n",
"\\begin{equation}\n",
"\\mbox{Norm}_{x}\\Bigl[\\phi^3,(\\exp[\\phi])^2\\Bigr] = \\mbox{Norm}_{x}\\Bigl[0, 1\\Bigr] \\times \\exp[\\phi]+ \\phi^3\n",
"\\text{Norm}_{x}\\Bigl[\\phi^3,(\\exp[\\phi])^2\\Bigr] = \\text{Norm}_{x}\\Bigl[0, 1\\Bigr] \\times \\exp[\\phi]+ \\phi^3\n",
"\\end{equation}\n",
"\n",
"So, if we draw a sample $\\epsilon^*$ from $\\mbox{Norm}_{\\epsilon}[0, 1]$, then we can compute a sample $x^*$ as:\n",
"So, if we draw a sample $\\epsilon^*$ from $\\text{Norm}_{\\epsilon}[0, 1]$, then we can compute a sample $x^*$ as:\n",
"\n",
"\\begin{eqnarray*}\n",
"\\begin{align}\n",
"x^* &=& \\epsilon^* \\times \\sigma + \\mu \\\\\n",
"&=& \\epsilon^* \\times \\exp[\\phi]+ \\phi^3\n",
"\\end{eqnarray*}"
],
"metadata": {
"id": "zTCykVeWqj_O"
}
"\\end{align}"
]
},
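{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In code, the reparameterized sampler is essentially one line: the randomness comes from $\epsilon$, and the parameter $\phi$ enters only through differentiable operations. A sketch, for reference only:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: reparameterized sampling x* = epsilon * exp(phi) + phi^3\n",
"def sample_x_star_sketch(phi, n_samples):\n",
"    epsilon = np.random.normal(size=(n_samples, 1))\n",
"    return epsilon * np.exp(phi) + phi ** 3"
]
},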
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "w13HVpi9q8nF"
},
"outputs": [],
"source": [
"def compute_df_dx_star(x_star):\n",
" # TODO Compute this derivative (function defined at the top)\n",
@@ -222,15 +216,15 @@
"\n",
"\n",
" return df_dphi"
],
"metadata": {
"id": "w13HVpi9q8nF"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ntQT4An79kAl"
},
"outputs": [],
"source": [
"# Set the seed so the random numbers are all the same\n",
"np.random.seed(0)\n",
@@ -241,15 +235,15 @@
"\n",
"deriv = compute_derivative_of_expectation(phi1, n_samples)\n",
"print(\"Your value: \", deriv, \", True value: 5.726338035051403\")"
],
"metadata": {
"id": "ntQT4An79kAl"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "t0Jqd_IN_lMU"
},
"outputs": [],
"source": [
"phi_vals = np.arange(-1.5,1.5, 0.05)\n",
"deriv_vals = np.zeros_like(phi_vals)\n",
@@ -262,37 +256,37 @@
"ax.set_xlabel('Parameter $\\phi$')\n",
"ax.set_ylabel('$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
"plt.show()"
],
"metadata": {
"id": "t0Jqd_IN_lMU"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"This should look plausibly like the derivative of the function we plotted above!"
],
"metadata": {
"id": "ASu4yKSwAEYI"
}
},
"source": [
"This should look plausibly like the derivative of the function we plotted above!"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "xoFR1wifc8-b"
},
"source": [
"The reparameterization trick computes the derivative of an expectation of a function $\\mbox{f}[x]$:\n",
"The reparameterization trick computes the derivative of an expectation of a function $\\text{f}[x]$:\n",
"\n",
"\\begin{equation}\n",
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\mbox{f}[x]\\bigr],\n",
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\text{f}[x]\\bigr],\n",
"\\end{equation}\n",
"\n",
"with respect to the parameters $\\boldsymbol\\phi$ of the distribution $Pr(x|\\boldsymbol\\phi)$ that the expectation is over. This derivative can also be computed as:\n",
"\n",
"\\begin{eqnarray}\n",
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\mbox{f}[x]\\bigr] &=& \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\left[\\mbox{f}[x]\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\log\\bigl[ Pr(x|\\boldsymbol\\phi)\\bigr]\\right]\\nonumber \\\\\n",
"&\\approx & \\frac{1}{I}\\sum_{i=1}^{I}\\mbox{f}[x_i]\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\log\\bigl[ Pr(x_i|\\boldsymbol\\phi)\\bigr].\n",
"\\end{eqnarray}\n",
"\\begin{align}\n",
"\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\bigl[\\text{f}[x]\\bigr] &=& \\mathbb{E}_{Pr(x|\\boldsymbol\\phi)}\\left[\\text{f}[x]\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\log\\bigl[ Pr(x|\\boldsymbol\\phi)\\bigr]\\right]\\nonumber \\\\\n",
"&\\approx & \\frac{1}{I}\\sum_{i=1}^{I}\\text{f}[x_i]\\frac{\\partial}{\\partial \\boldsymbol\\phi} \\log\\bigl[ Pr(x_i|\\boldsymbol\\phi)\\bigr].\n",
"\\end{align}\n",
"\n",
"This method is known as the REINFORCE algorithm or score function estimator. Problem 17.5 asks you to prove this relation. Let's use this method to compute the gradient and compare.\n",
"\n",
@@ -301,13 +295,15 @@
"\\begin{equation}\n",
" Pr(x|\\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^{2}}}\\exp\\left[-\\frac{(x-\\mu)^{2}}{2\\sigma^{2}}\\right].\n",
"\\end{equation}\n"
],
"metadata": {
"id": "xoFR1wifc8-b"
}
]
},
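{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If you get stuck on the derivative of the log likelihood, here is one route, worked with the chain rule (worth verifying by hand before you use it). With $\mu=\phi^3$ and $\sigma=\exp[\phi]$:\n",
"\n",
"\begin{align}\n",
"\log\bigl[Pr(x|\phi)\bigr] &= -\phi - \frac{(x-\phi^3)^2}{2\exp[2\phi]} + \text{const}\\\n",
"\frac{\partial}{\partial \phi}\log\bigl[Pr(x|\phi)\bigr] &= -1 + \frac{3\phi^2(x-\phi^3)}{\exp[2\phi]} + \frac{(x-\phi^3)^2}{\exp[2\phi]}\n",
"\end{align}"
]
},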
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4TUaxiWvASla"
},
"outputs": [],
"source": [
"def d_log_pr_x_given_phi(x,phi):\n",
" # TODO -- fill in this function\n",
@@ -333,15 +329,15 @@
"\n",
"\n",
" return deriv"
],
"metadata": {
"id": "4TUaxiWvASla"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0RSN32Rna_C_"
},
"outputs": [],
"source": [
"# Set the seed so the random numbers are all the same\n",
"np.random.seed(0)\n",
@@ -352,15 +348,15 @@
"\n",
"deriv = compute_derivative_of_expectation_score_function(phi1, n_samples)\n",
"print(\"Your value: \", deriv, \", True value: 5.724609927313369\")"
],
"metadata": {
"id": "0RSN32Rna_C_"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EM_i5zoyElHR"
},
"outputs": [],
"source": [
"phi_vals = np.arange(-1.5,1.5, 0.05)\n",
"deriv_vals = np.zeros_like(phi_vals)\n",
@@ -373,24 +369,25 @@
"ax.set_xlabel('Parameter $\\phi$')\n",
"ax.set_ylabel('$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
"plt.show()"
],
"metadata": {
"id": "EM_i5zoyElHR"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"This should look the same as the derivative that we computed with the reparameterization trick. So, is there any advantage to one way or the other? Let's compare the variances of the estimates\n"
],
"metadata": {
"id": "1TWBiUC7bQSw"
}
},
"source": [
"This should look the same as the derivative that we computed with the reparameterization trick. So, is there any advantage to one way or the other? Let's compare the variances of the estimates\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vV_Jx5bCbQGs"
},
"outputs": [],
"source": [
"n_estimate = 100\n",
"n_sample = 1000\n",
@@ -403,21 +400,33 @@
"\n",
"print(\"Variance of reparameterization estimator\", np.var(reparam_estimates))\n",
"print(\"Variance of score function estimator\", np.var(score_function_estimates))"
],
"metadata": {
"id": "vV_Jx5bCbQGs"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"The variance of the reparameterization estimator should be quite a bit lower than the score function estimator which is why it is preferred in this situation."
],
"metadata": {
"id": "d-0tntSYdKPR"
}
}
},
"source": [
"The variance of the reparameterization estimator should be quite a bit lower than the score function estimator which is why it is preferred in this situation."
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyOxO2/0DTH4n4zhC97qbagY",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNecz9/CDOggPSmy1LjT/Dv",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_3_Importance_Sampling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 17.3: Importance sampling**\n",
"\n",
@@ -36,25 +25,26 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "f7a6xqKjkmvT"
},
"source": [
"Let's approximate the expectation\n",
"\n",
@@ -65,7 +55,7 @@
"where\n",
"\n",
"\\begin{equation}\n",
"Pr(y)=\\mbox{Norm}_y[0,1]\n",
"Pr(y)=\\text{Norm}_y[0,1]\n",
"\\end{equation}\n",
"\n",
"by drawing $I$ samples $y_i$ and using the formula:\n",
@@ -73,13 +63,15 @@
"\\begin{equation}\n",
"\\mathbb{E}_{y}\\Bigl[\\exp\\bigl[- (y-1)^4\\bigr]\\Bigr] \\approx \\frac{1}{I} \\sum_{i=1}^I \\exp\\bigl[-(y-1)^4 \\bigr]\n",
"\\end{equation}"
],
"metadata": {
"id": "f7a6xqKjkmvT"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VjkzRr8o2ksg"
},
"outputs": [],
"source": [
"def f(y):\n",
" return np.exp(-(y-1) *(y-1) *(y-1) * (y-1))\n",
@@ -95,15 +87,15 @@
"ax.set_xlabel(\"$y$\")\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "VjkzRr8o2ksg"
},
"execution_count": null,
"outputs": []
]
},
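{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The estimator the next cell asks for is the plain Monte Carlo average: draw $y_i \sim \text{Norm}_y[0,1]$ and average $\text{f}[y_i]$. A sketch of that idea, for reference only:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: plain Monte Carlo estimate of the expectation (uses f from above)\n",
"def monte_carlo_sketch(n_samples):\n",
"    y = np.random.normal(size=(n_samples, 1))\n",
"    return np.mean(f(y))"
]
},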
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LGAKHjUJnWmy"
},
"outputs": [],
"source": [
"def compute_expectation(n_samples):\n",
" # TODO -- compute this expectation\n",
@@ -114,15 +106,15 @@
"\n",
"\n",
" return expectation"
],
"metadata": {
"id": "LGAKHjUJnWmy"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "nmvixMqgodIP"
},
"outputs": [],
"source": [
"# Set the seed so the random numbers are all the same\n",
"np.random.seed(0)\n",
@@ -131,26 +123,27 @@
"n_samples = 100000000\n",
"expected_f= compute_expectation(n_samples)\n",
"print(\"Your value: \", expected_f, \", True value: 0.43160702267383166\")"
],
"metadata": {
"id": "nmvixMqgodIP"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Jr4UPcqmnXCS"
},
"source": [
"Let's investigate how the variance of this approximation decreases as we increase the number of samples $N$.\n",
"\n",
"\n"
],
"metadata": {
"id": "Jr4UPcqmnXCS"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yrDp1ILUo08j"
},
"outputs": [],
"source": [
"def compute_mean_variance(n_sample):\n",
" n_estimate = 10000\n",
@@ -158,15 +151,15 @@
" for i in range(n_estimate):\n",
" estimates[i] = compute_expectation(n_sample.astype(int))\n",
" return np.mean(estimates), np.var(estimates)"
],
"metadata": {
"id": "yrDp1ILUo08j"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BcUVsodtqdey"
},
"outputs": [],
"source": [
"# Compute the mean and variance for 1,2,... 20 samples\n",
"n_sample_all = np.array([1.,2,3,4,5,6,7,8,9,10,15,20,25,30,45,50,60,70,80,90,100,150,200,250,300,350,400,450,500])\n",
@@ -175,15 +168,15 @@
"for i in range(len(n_sample_all)):\n",
" print(\"Computing mean and variance for expectation with %d samples\"%(n_sample_all[i]))\n",
" mean_all[i],variance_all[i] = compute_mean_variance(n_sample_all[i])"
],
"metadata": {
"id": "BcUVsodtqdey"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "feXmyk0krpUi"
},
"outputs": [],
"source": [
"fig,ax = plt.subplots()\n",
"ax.semilogx(n_sample_all, mean_all,'r-',label='mean estimate')\n",
@@ -193,24 +186,24 @@
"ax.plot([0,500],[0.43160702267383166,0.43160702267383166],'k--',label='true value')\n",
"ax.legend()\n",
"plt.show()\n"
],
"metadata": {
"id": "feXmyk0krpUi"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"As you might expect, the more samples that we use to compute the approximate estimate, the lower the variance of the estimate."
],
"metadata": {
"id": "XTUpxFlSuOl7"
}
},
"source": [
"As you might expect, the more samples that we use to compute the approximate estimate, the lower the variance of the estimate."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "6hxsl3Pxo1TT"
},
"source": [
" Now consider the function\n",
" \\begin{equation}\n",
@@ -218,13 +211,15 @@
" \\end{equation}\n",
"\n",
"which decreases rapidly as we move away from the position $y=3$."
],
"metadata": {
"id": "6hxsl3Pxo1TT"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "znydVPW7sL4P"
},
"outputs": [],
"source": [
"def f2(y):\n",
" return 20.446*np.exp(- (y-3) *(y-3) *(y-3) * (y-3))\n",
@@ -236,46 +231,47 @@
"ax.set_xlabel(\"$y$\")\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "znydVPW7sL4P"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "G9Xxo0OJsIqD"
},
"source": [
"Let's again, compute the expectation:\n",
"\n",
"\\begin{eqnarray}\n",
"\\mathbb{E}_{y}\\left[\\mbox{f}[y]\\right] &=& \\int \\mbox{f}[y] Pr(y) dy\\\\\n",
"&\\approx& \\frac{1}{I} \\mbox{f}[y]\n",
"\\end{eqnarray}\n",
"\\begin{align}\n",
"\\mathbb{E}_{y}\\left[\\text{f}[y]\\right] &=& \\int \\text{f}[y] Pr(y) dy\\\\\n",
"&\\approx& \\frac{1}{I} \\text{f}[y]\n",
"\\end{align}\n",
"\n",
"where $Pr(y)=\\mbox{Norm}_y[0,1]$ by approximating with samples $y_{i}$.\n"
],
"metadata": {
"id": "G9Xxo0OJsIqD"
}
"where $Pr(y)=\\text{Norm}_y[0,1]$ by approximating with samples $y_{i}$.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "l8ZtmkA2vH4y"
},
"outputs": [],
"source": [
"def compute_expectation2(n_samples):\n",
" y = np.random.normal(size=(n_samples,1))\n",
" expectation = np.mean(f2(y))\n",
"\n",
" return expectation"
],
"metadata": {
"id": "l8ZtmkA2vH4y"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dfUQyJ-svZ6F"
},
"outputs": [],
"source": [
"# Set the seed so the random numbers are all the same\n",
"np.random.seed(0)\n",
@@ -284,26 +280,27 @@
"n_samples = 100000000\n",
"expected_f2= compute_expectation2(n_samples)\n",
"print(\"Expected value: \", expected_f2)"
],
"metadata": {
"id": "dfUQyJ-svZ6F"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "2sVDqP0BvxqM"
},
"source": [
"I deliberately chose this function, because it's expectation is roughly the same as for the previous function.\n",
"\n",
"Again, let's look at the mean and the variance of the estimates"
],
"metadata": {
"id": "2sVDqP0BvxqM"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mHnILRkOv0Ir"
},
"outputs": [],
"source": [
"def compute_mean_variance2(n_sample):\n",
" n_estimate = 10000\n",
@@ -318,15 +315,15 @@
"for i in range(len(n_sample_all)):\n",
" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
" mean_all2[i], variance_all2[i] = compute_mean_variance2(n_sample_all[i])"
],
"metadata": {
"id": "mHnILRkOv0Ir"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FkCX-hxxAnsw"
},
"outputs": [],
"source": [
"fig,ax1 = plt.subplots()\n",
"ax1.semilogx(n_sample_all, mean_all,'r-',label='mean estimate')\n",
@@ -348,39 +345,41 @@
"ax2.set_title(\"Second function\")\n",
"ax2.legend()\n",
"plt.show()"
],
"metadata": {
"id": "FkCX-hxxAnsw"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "EtBP6NeLwZqz"
},
"source": [
"You can see that the variance of the estimate of the second function is considerably worse than the estimate of the variance of estimate of the first function\n",
"\n",
"TODO: Think about why this is."
],
"metadata": {
"id": "EtBP6NeLwZqz"
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "_wuF-NoQu1--"
},
"source": [
" Now let's repeat this experiment with the second function, but this time use importance sampling with auxiliary distribution:\n",
"\n",
" \\begin{equation}\n",
" q(y)=\\mbox{Norm}_{y}[3,1]\n",
" q(y)=\\text{Norm}_{y}[3,1]\n",
" \\end{equation}\n"
],
"metadata": {
"id": "_wuF-NoQu1--"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jPm0AVYVIDnn"
},
"outputs": [],
"source": [
"def q_y(y):\n",
" return (1/np.sqrt(2*np.pi)) * np.exp(-0.5 * (y-3) * (y-3))\n",
@@ -395,15 +394,15 @@
" expectation = 0\n",
"\n",
" return expectation"
],
"metadata": {
"id": "jPm0AVYVIDnn"
},
"execution_count": null,
"outputs": []
]
},
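{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the importance-sampled estimator reweights each sample by $Pr(y_i)/q(y_i)$, where the $y_i$ are now drawn from $q(y)$ rather than $Pr(y)$. A sketch (assuming `q_y` and `f2` as defined above, and writing the standard normal density as a hypothetical helper `p_y`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: importance sampling with auxiliary distribution q(y) = Norm[3,1]\n",
"def p_y(y):\n",
"    return (1/np.sqrt(2*np.pi)) * np.exp(-0.5 * y * y)\n",
"\n",
"def importance_sampling_sketch(n_samples):\n",
"    y = np.random.normal(loc=3.0, size=(n_samples, 1))  # samples from q\n",
"    return np.mean(f2(y) * p_y(y) / q_y(y))"
]
},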
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "No2ByVvOM2yQ"
},
"outputs": [],
"source": [
"# Set the seed so the random numbers are all the same\n",
"np.random.seed(0)\n",
@@ -412,15 +411,15 @@
"n_samples = 100000000\n",
"expected_f2= compute_expectation2b(n_samples)\n",
"print(\"Your value: \", expected_f2,\", True value: 0.43163734204459125 \")"
],
"metadata": {
"id": "No2ByVvOM2yQ"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6v8Jc7z4M3Mk"
},
"outputs": [],
"source": [
"def compute_mean_variance2b(n_sample):\n",
" n_estimate = 10000\n",
@@ -435,15 +434,15 @@
"for i in range(len(n_sample_all)):\n",
" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
" mean_all2b[i], variance_all2b[i] = compute_mean_variance2b(n_sample_all[i])"
],
"metadata": {
"id": "6v8Jc7z4M3Mk"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "C0beD4sNNM3L"
},
"outputs": [],
"source": [
"fig,ax1 = plt.subplots()\n",
"ax1.semilogx(n_sample_all, mean_all,'r-',label='mean estimate')\n",
@@ -476,21 +475,33 @@
"ax2.set_title(\"Second function with importance sampling\")\n",
"ax2.legend()\n",
"plt.show()"
],
"metadata": {
"id": "C0beD4sNNM3L"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"You can see that the importance sampling technique has reduced the amount of variance for any given number of samples."
],
"metadata": {
"id": "y8rgge9MNiOc"
}
}
},
"source": [
"You can see that the importance sampling technique has reduced the amount of variance for any given number of samples."
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNecz9/CDOggPSmy1LjT/Dv",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyM4DdZDGoP1xGst+Nn+rwvt",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_2_1D_Diffusion_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 18.2: 1D Diffusion Model**\n",
"\n",
@@ -36,13 +25,15 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
@@ -50,15 +41,15 @@
"from operator import itemgetter\n",
"from scipy import stats\n",
"from IPython.display import display, clear_output"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4PM8bf6lO0VE"
},
"outputs": [],
"source": [
"#Create pretty colormap as in book\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
@@ -68,28 +59,28 @@
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
"my_colormap = ListedColormap(my_colormap_vals)"
],
"metadata": {
"id": "4PM8bf6lO0VE"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ONGRaQscfIOo"
},
"outputs": [],
"source": [
"# Probability distribution for normal\n",
"def norm_pdf(x, mu, sigma):\n",
" return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
],
"metadata": {
"id": "ONGRaQscfIOo"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"outputs": [],
"source": [
"# True distribution is a mixture of four Gaussians\n",
"class TrueDataDistribution:\n",
@@ -110,15 +101,15 @@
" mu_list = list(itemgetter(*hidden)(self.mu))\n",
" sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
" return mu_list + sigma_list * epsilon"
],
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iJu_uBiaeUVv"
},
"outputs": [],
"source": [
"# Define ground truth probability distribution that we will model\n",
"true_dist = TrueDataDistribution()\n",
@@ -133,25 +124,26 @@
"ax.set_ylim(0,1.0)\n",
"ax.set_xlim(-3,3)\n",
"plt.show()"
],
"metadata": {
"id": "iJu_uBiaeUVv"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "DRHUG_41i4t_"
},
"source": [
"To train the model to describe this distribution, we'll need to generate pairs of samples drawn from $Pr(z_t|x)$ (diffusion kernel) and $q(z_{t-1}|z_{t},x)$ (equation 18.15).\n",
"\n"
],
"metadata": {
"id": "DRHUG_41i4t_"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "x6B8t72Ukscd"
},
"outputs": [],
"source": [
"# The diffusion kernel returns the parameters of Pr(z_{t}|x)\n",
"def diffusion_kernel(x, t, beta):\n",
@@ -180,24 +172,25 @@
" z_tminus1 = np.random.normal(size=x_train.shape) * cd_std + cd_mean\n",
"\n",
" return z_t, z_tminus1"
],
"metadata": {
"id": "x6B8t72Ukscd"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"We also need models $\\mbox{f}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the mean of the distribution at time $z_{t-1}$. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
],
"metadata": {
"id": "aSG_4uA8_zZ-"
}
},
"source": [
"We also need models $\\text{f}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the mean of the distribution at time $z_{t-1}$. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZHViC0pL_yy5"
},
"outputs": [],
"source": [
"# This code is really ugly! Don't look too closely at it!\n",
"# All you need to know is that it is a model that trains from pairs zt, zt_minus1\n",
@@ -223,15 +216,15 @@
" bin_index = np.floor((zt+self.max_val)/self.inc)\n",
" bin_index = np.clip(bin_index,0, len(self.model)-1).astype('uint32')\n",
" return zt + self.model[bin_index]"
],
"metadata": {
"id": "ZHViC0pL_yy5"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CzVFybWoBygu"
},
"outputs": [],
"source": [
"# Sample data from distribution (this would usually be our collected training set)\n",
"n_sample = 100000\n",
@@ -249,24 +242,25 @@
" all_models.append(NonParametricModel())\n",
" # The model at index t maps data from z_{t+1} to z_{t}\n",
" all_models[t].train(zt,zt_minus1)"
],
"metadata": {
"id": "CzVFybWoBygu"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$ and so on until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories. See equations 18.16."
],
"metadata": {
"id": "ZPc9SEvtl14U"
}
},
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$ and so on until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories. See equations 18.16."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A-ZMFOvACIOw"
},
"outputs": [],
"source": [
"def sample(model, T, sigma_t, n_samples):\n",
" # Create the output array\n",
@@ -295,24 +289,25 @@
" samples[t-1,:] = samples[t-1,:]\n",
"\n",
" return samples"
],
"metadata": {
"id": "A-ZMFOvACIOw"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's run the diffusion process for a whole bunch of samples"
],
"metadata": {
"id": "ECAUfHNi9NVW"
}
},
"source": [
"Now let's run the diffusion process for a whole bunch of samples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"outputs": [],
"source": [
"sigma_t=0.12288\n",
"n_samples = 100000\n",
@@ -329,24 +324,25 @@
"plt.hist(sampled_data, bins=bins, density =True)\n",
"ax.set_ylim(0, 0.8)\n",
"plt.show()"
],
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's, plot the evolution of a few of the paths as in figure 18.7 (paths are from bottom to top now)."
],
"metadata": {
"id": "jYrAW6tN-gJ4"
}
},
"source": [
"Let's, plot the evolution of a few of the paths as in figure 18.7 (paths are from bottom to top now)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4XU6CDZC_kFo"
},
"outputs": [],
"source": [
"fig, ax = plt.subplots()\n",
"t_vals = np.arange(0,101,1)\n",
@@ -360,21 +356,33 @@
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "4XU6CDZC_kFo"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Notice that the samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
],
"metadata": {
"id": "SGTYGGevAktz"
}
}
},
"source": [
"Notice that the samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyM4DdZDGoP1xGst+Nn+rwvt",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
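
The pair-generation step this notebook trains on has a closed form. Below is a sketch under the standard assumptions: the diffusion kernel is Pr(z_t|x) = Norm[sqrt(alpha_t) x, (1 - alpha_t) I] with alpha_t the cumulative product of (1 - beta_s), and q(z_{t-1}|z_t, x) is the usual DDPM conditional that equation 18.15 refers to. The variable names mirror the notebook's cd_mean and cd_std, but the function itself is illustrative rather than the notebook's exact code.

import numpy as np

def get_data_pairs_sketch(x_train, t, beta):
    # Valid for t >= 1; beta is the array of noise schedule values
    alpha = np.cumprod(1.0 - beta)            # alpha_t = prod_{s<=t} (1 - beta_s)
    epsilon = np.random.standard_normal(x_train.shape)
    # Diffusion kernel: z_t ~ Norm[sqrt(alpha_t) x, (1 - alpha_t) I]
    z_t = np.sqrt(alpha[t]) * x_train + np.sqrt(1.0 - alpha[t]) * epsilon
    # Conditional q(z_{t-1}|z_t, x): mean and std of the DDPM posterior
    cd_mean = ((1.0 - alpha[t-1]) / (1.0 - alpha[t])) * np.sqrt(1.0 - beta[t]) * z_t \
            + (np.sqrt(alpha[t-1]) * beta[t] / (1.0 - alpha[t])) * x_train
    cd_std = np.sqrt(beta[t] * (1.0 - alpha[t-1]) / (1.0 - alpha[t]))
    z_tminus1 = np.random.normal(size=x_train.shape) * cd_std + cd_mean
    return z_t, z_tminus1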

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNd+D0/IVWXtU2GKsofyk2d",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_3_Reparameterized_Model.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 18.3: 1D Reparameterized model**\n",
"\n",
@@ -36,13 +25,15 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
@@ -50,15 +41,15 @@
"from operator import itemgetter\n",
"from scipy import stats\n",
"from IPython.display import display, clear_output"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4PM8bf6lO0VE"
},
"outputs": [],
"source": [
"#Create pretty colormap as in book\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
@@ -68,28 +59,28 @@
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
"my_colormap = ListedColormap(my_colormap_vals)"
],
"metadata": {
"id": "4PM8bf6lO0VE"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ONGRaQscfIOo"
},
"outputs": [],
"source": [
"# Probability distribution for normal\n",
"def norm_pdf(x, mu, sigma):\n",
" return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
],
"metadata": {
"id": "ONGRaQscfIOo"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"outputs": [],
"source": [
"# True distribution is a mixture of four Gaussians\n",
"class TrueDataDistribution:\n",
@@ -110,15 +101,15 @@
" mu_list = list(itemgetter(*hidden)(self.mu))\n",
" sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
" return mu_list + sigma_list * epsilon"
],
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iJu_uBiaeUVv"
},
"outputs": [],
"source": [
"# Define ground truth probability distribution that we will model\n",
"true_dist = TrueDataDistribution()\n",
@@ -133,25 +124,26 @@
"ax.set_ylim(0,1.0)\n",
"ax.set_xlim(-3,3)\n",
"plt.show()"
],
"metadata": {
"id": "iJu_uBiaeUVv"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "DRHUG_41i4t_"
},
"source": [
"To train the model to describe this distribution, we'll need to generate pairs of samples drawn from $Pr(z_t|x)$ (diffusion kernel) and $q(z_{t-1}|z_{t},x)$ (equation 18.15).\n",
"\n"
],
"metadata": {
"id": "DRHUG_41i4t_"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "x6B8t72Ukscd"
},
"outputs": [],
"source": [
"# Return z_t (the argument of g_{t}[] in the loss function in algorithm 18.1) and epsilon\n",
"def get_data_pairs(x_train,t,beta):\n",
@@ -161,24 +153,25 @@
" z_t = np.ones_like(x_train)\n",
"\n",
" return z_t, epsilon"
],
"metadata": {
"id": "x6B8t72Ukscd"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"We also need models $\\mbox{g}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the noise $\\epsilon$ that was added. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
],
"metadata": {
"id": "aSG_4uA8_zZ-"
}
},
"source": [
"We also need models $\\text{g}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the noise $\\epsilon$ that was added. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZHViC0pL_yy5"
},
"outputs": [],
"source": [
"# This code is really ugly! Don't look too closely at it!\n",
"# All you need to know is that it is a model that trains from pairs zt, zt_minus1\n",
@@ -204,15 +197,15 @@
" bin_index = np.floor((zt+self.max_val)/self.inc)\n",
" bin_index = np.clip(bin_index,0, len(self.model)-1).astype('uint32')\n",
" return self.model[bin_index]"
],
"metadata": {
"id": "ZHViC0pL_yy5"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CzVFybWoBygu"
},
"outputs": [],
"source": [
"# Sample data from distribution (this would usually be our collected training set)\n",
"n_sample = 100000\n",
@@ -230,24 +223,25 @@
" all_models.append(NonParametricModel())\n",
" # The model at index t maps data from z_{t+1} to epsilon\n",
" all_models[t].train(zt,epsilon)"
],
"metadata": {
"id": "CzVFybWoBygu"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$ and so on until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories. See algorithm 18.2"
],
"metadata": {
"id": "ZPc9SEvtl14U"
}
},
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$ and so on until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories. See algorithm 18.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A-ZMFOvACIOw"
},
"outputs": [],
"source": [
"def sample(model, T, sigma_t, n_samples):\n",
" # Create the output array\n",
@@ -277,24 +271,25 @@
" samples[t-1,:] = samples[t-1,:]\n",
"\n",
" return samples"
],
"metadata": {
"id": "A-ZMFOvACIOw"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's run the diffusion process for a whole bunch of samples"
],
"metadata": {
"id": "ECAUfHNi9NVW"
}
},
"source": [
"Now let's run the diffusion process for a whole bunch of samples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"outputs": [],
"source": [
"sigma_t=0.12288\n",
"n_samples = 100000\n",
@@ -311,24 +306,25 @@
"plt.hist(sampled_data, bins=bins, density =True)\n",
"ax.set_ylim(0, 0.8)\n",
"plt.show()"
],
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's, plot the evolution of a few of the paths as in figure 18.7 (paths are from bottom to top now)."
],
"metadata": {
"id": "jYrAW6tN-gJ4"
}
},
"source": [
"Let's, plot the evolution of a few of the paths as in figure 18.7 (paths are from bottom to top now)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4XU6CDZC_kFo"
},
"outputs": [],
"source": [
"fig, ax = plt.subplots()\n",
"t_vals = np.arange(0,101,1)\n",
@@ -342,21 +338,33 @@
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "4XU6CDZC_kFo"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Notice that the samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
],
"metadata": {
"id": "SGTYGGevAktz"
}
}
},
"source": [
"Notice that the samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNd+D0/IVWXtU2GKsofyk2d",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
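
In the reparameterized formulation used in this notebook, one reverse step reconstructs the mean from the predicted noise rather than predicting it directly. A hedged sketch of the ancestral update in algorithm 18.2, with illustrative names (g_pred stands for the lookup-table model's noise estimate at the current timestep):

import numpy as np

def reverse_step(z_t, g_pred, t, beta, alpha, sigma_t):
    # Remove the estimated noise contribution, rescale, then re-inject fresh
    # noise -- except on the final step down to z_0 = x, which stays deterministic.
    mean = (z_t - (beta[t] / np.sqrt(1.0 - alpha[t])) * g_pred) / np.sqrt(1.0 - beta[t])
    if t > 0:
        return mean + sigma_t * np.random.standard_normal(z_t.shape)
    return mean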

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNFSvISBXo/Z1l+onknF2Gw",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap18/18_4_Families_of_Diffusion_Models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 18.4: Families of diffusion models**\n",
"\n",
@@ -36,13 +25,15 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OLComQyvCIJ7"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
@@ -50,15 +41,15 @@
"from operator import itemgetter\n",
"from scipy import stats\n",
"from IPython.display import display, clear_output"
],
"metadata": {
"id": "OLComQyvCIJ7"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4PM8bf6lO0VE"
},
"outputs": [],
"source": [
"#Create pretty colormap as in book\n",
"my_colormap_vals_hex =('2a0902', '2b0a03', '2c0b04', '2d0c05', '2e0c06', '2f0d07', '300d08', '310e09', '320f0a', '330f0b', '34100b', '35110c', '36110d', '37120e', '38120f', '39130f', '3a1410', '3b1411', '3c1511', '3d1612', '3e1613', '3f1713', '401714', '411814', '421915', '431915', '451a16', '461b16', '471b17', '481c17', '491d18', '4a1d18', '4b1e19', '4c1f19', '4d1f1a', '4e201b', '50211b', '51211c', '52221c', '53231d', '54231d', '55241e', '56251e', '57261f', '58261f', '592720', '5b2821', '5c2821', '5d2922', '5e2a22', '5f2b23', '602b23', '612c24', '622d25', '632e25', '652e26', '662f26', '673027', '683027', '693128', '6a3229', '6b3329', '6c342a', '6d342a', '6f352b', '70362c', '71372c', '72372d', '73382e', '74392e', '753a2f', '763a2f', '773b30', '783c31', '7a3d31', '7b3e32', '7c3e33', '7d3f33', '7e4034', '7f4134', '804235', '814236', '824336', '834437', '854538', '864638', '874739', '88473a', '89483a', '8a493b', '8b4a3c', '8c4b3c', '8d4c3d', '8e4c3e', '8f4d3f', '904e3f', '924f40', '935041', '945141', '955242', '965343', '975343', '985444', '995545', '9a5646', '9b5746', '9c5847', '9d5948', '9e5a49', '9f5a49', 'a05b4a', 'a15c4b', 'a35d4b', 'a45e4c', 'a55f4d', 'a6604e', 'a7614e', 'a8624f', 'a96350', 'aa6451', 'ab6552', 'ac6552', 'ad6653', 'ae6754', 'af6855', 'b06955', 'b16a56', 'b26b57', 'b36c58', 'b46d59', 'b56e59', 'b66f5a', 'b7705b', 'b8715c', 'b9725d', 'ba735d', 'bb745e', 'bc755f', 'bd7660', 'be7761', 'bf7862', 'c07962', 'c17a63', 'c27b64', 'c27c65', 'c37d66', 'c47e67', 'c57f68', 'c68068', 'c78169', 'c8826a', 'c9836b', 'ca846c', 'cb856d', 'cc866e', 'cd876f', 'ce886f', 'ce8970', 'cf8a71', 'd08b72', 'd18c73', 'd28d74', 'd38e75', 'd48f76', 'd59077', 'd59178', 'd69279', 'd7937a', 'd8957b', 'd9967b', 'da977c', 'da987d', 'db997e', 'dc9a7f', 'dd9b80', 'de9c81', 'de9d82', 'df9e83', 'e09f84', 'e1a185', 'e2a286', 'e2a387', 'e3a488', 'e4a589', 'e5a68a', 'e5a78b', 'e6a88c', 'e7aa8d', 'e7ab8e', 'e8ac8f', 'e9ad90', 'eaae91', 'eaaf92', 'ebb093', 'ecb295', 'ecb396', 'edb497', 'eeb598', 'eeb699', 'efb79a', 'efb99b', 'f0ba9c', 'f1bb9d', 'f1bc9e', 'f2bd9f', 'f2bfa1', 'f3c0a2', 'f3c1a3', 'f4c2a4', 'f5c3a5', 'f5c5a6', 'f6c6a7', 'f6c7a8', 'f7c8aa', 'f7c9ab', 'f8cbac', 'f8ccad', 'f8cdae', 'f9ceb0', 'f9d0b1', 'fad1b2', 'fad2b3', 'fbd3b4', 'fbd5b6', 'fbd6b7', 'fcd7b8', 'fcd8b9', 'fcdaba', 'fddbbc', 'fddcbd', 'fddebe', 'fddfbf', 'fee0c1', 'fee1c2', 'fee3c3', 'fee4c5', 'ffe5c6', 'ffe7c7', 'ffe8c9', 'ffe9ca', 'ffebcb', 'ffeccd', 'ffedce', 'ffefcf', 'fff0d1', 'fff2d2', 'fff3d3', 'fff4d5', 'fff6d6', 'fff7d8', 'fff8d9', 'fffada', 'fffbdc', 'fffcdd', 'fffedf', 'ffffe0')\n",
@@ -68,28 +59,28 @@
"b = np.floor(my_colormap_vals_dec - r * 256 *256 - g * 256)\n",
"my_colormap_vals = np.vstack((r,g,b)).transpose()/255.0\n",
"my_colormap = ListedColormap(my_colormap_vals)"
],
"metadata": {
"id": "4PM8bf6lO0VE"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ONGRaQscfIOo"
},
"outputs": [],
"source": [
"# Probability distribution for normal\n",
"def norm_pdf(x, mu, sigma):\n",
" return np.exp(-0.5 * (x-mu) * (x-mu) / (sigma * sigma)) / np.sqrt(2*np.pi*sigma*sigma)"
],
"metadata": {
"id": "ONGRaQscfIOo"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"outputs": [],
"source": [
"# True distribution is a mixture of four Gaussians\n",
"class TrueDataDistribution:\n",
@@ -110,15 +101,15 @@
" mu_list = list(itemgetter(*hidden)(self.mu))\n",
" sigma_list = list(itemgetter(*hidden)(self.sigma))\n",
" return mu_list + sigma_list * epsilon"
],
"metadata": {
"id": "gZvG0MKhfY8Y"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iJu_uBiaeUVv"
},
"outputs": [],
"source": [
"# Define ground truth probability distribution that we will model\n",
"true_dist = TrueDataDistribution()\n",
@@ -133,25 +124,26 @@
"ax.set_ylim(0,1.0)\n",
"ax.set_xlim(-3,3)\n",
"plt.show()"
],
"metadata": {
"id": "iJu_uBiaeUVv"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "DRHUG_41i4t_"
},
"source": [
"To train the model to describe this distribution, we'll need to generate pairs of samples drawn from $Pr(z_t|x)$ (diffusion kernel) and $q(z_{t-1}|z_{t},x)$ (equation 18.15).\n",
"\n"
],
"metadata": {
"id": "DRHUG_41i4t_"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "x6B8t72Ukscd"
},
"outputs": [],
"source": [
"# Return z_t (the argument of g_{t}[] in the loss function in algorithm 18.1) and epsilon\n",
"def get_data_pairs(x_train,t,beta):\n",
@@ -161,24 +153,25 @@
" z_t = x_train * np.sqrt(alpha_t) + np.sqrt(1-alpha_t) * epsilon\n",
"\n",
" return z_t, epsilon"
],
"metadata": {
"id": "x6B8t72Ukscd"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"We also need models $\\mbox{g}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the noise $\\epsilon$ that was added. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
],
"metadata": {
"id": "aSG_4uA8_zZ-"
}
},
"source": [
"We also need models $\\text{g}_t[z_{t},\\phi_{t}]$ that map from $z_{t}$ to the noise $\\epsilon$ that was added. We're just going to use a very hacky non-parametric model (basically a lookup table) that tells you the result based on the (quantized) input."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZHViC0pL_yy5"
},
"outputs": [],
"source": [
"# This code is really ugly! Don't look too closely at it!\n",
"# All you need to know is that it is a model that trains from pairs zt, zt_minus1\n",
@@ -204,15 +197,15 @@
" bin_index = np.floor((zt+self.max_val)/self.inc)\n",
" bin_index = np.clip(bin_index,0, len(self.model)-1).astype('uint32')\n",
" return self.model[bin_index]"
],
"metadata": {
"id": "ZHViC0pL_yy5"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CzVFybWoBygu"
},
"outputs": [],
"source": [
"# Sample data from distribution (this would usually be our collected training set)\n",
"n_sample = 100000\n",
@@ -230,15 +223,14 @@
" all_models.append(NonParametricModel())\n",
" # The model at index t maps data from z_{t+1} to epsilon\n",
" all_models[t].train(zt,epsilon)"
],
"metadata": {
"id": "CzVFybWoBygu"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "ZPc9SEvtl14U"
},
"source": [
"Now that we've learned the model, let's draw some samples from it. We start at $z_{100}$ and use the model to predict $z_{99}$, then $z_{98}$ and so on until finally we get to $z_{1}$ and then $x$ (represented as $z_{0}$ here). We'll store all of the intermediate stages as well, so we can plot the trajectories.\n",
"\n",
@@ -247,17 +239,19 @@
"One such model is the denoising diffusion implicit model, which has a sampling step:\n",
"\n",
"\\begin{equation}\n",
"\\mathbf{z}_{t-1} = \\sqrt{\\alpha_{t-1}}\\left(\\frac{\\mathbf{z}_{t}-\\sqrt{1-\\alpha_{t}}\\mbox{g}_t[\\mathbf{z}_{t},\\boldsymbol\\phi]}{\\sqrt{\\alpha_{t}}}\\right) + \\sqrt{1-\\alpha_{t-1}-\\sigma^2}\\mbox{g}_t[\\mathbf{z}_{t},\\boldsymbol\\phi]+\\sigma\\epsilon\n",
"\\mathbf{z}_{t-1} = \\sqrt{\\alpha_{t-1}}\\left(\\frac{\\mathbf{z}_{t}-\\sqrt{1-\\alpha_{t}}\\text{g}_t[\\mathbf{z}_{t},\\boldsymbol\\phi]}{\\sqrt{\\alpha_{t}}}\\right) + \\sqrt{1-\\alpha_{t-1}-\\sigma^2}\\text{g}_t[\\mathbf{z}_{t},\\boldsymbol\\phi]+\\sigma\\epsilon\n",
"\\end{equation}\n",
"\n",
"(see equation 12 of the denoising [diffusion implicit models paper ](https://arxiv.org/pdf/2010.02502.pdf).\n"
],
"metadata": {
"id": "ZPc9SEvtl14U"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A-ZMFOvACIOw"
},
"outputs": [],
"source": [
"def sample_ddim(model, T, sigma_t, n_samples):\n",
" # Create the output array\n",
@@ -283,24 +277,25 @@
" if t>0:\n",
" samples[t-1,:] = samples[t-1,:]+ np.random.standard_normal(n_samples) * sigma_t\n",
" return samples"
],
"metadata": {
"id": "A-ZMFOvACIOw"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's run the diffusion process for a whole bunch of samples"
],
"metadata": {
"id": "ECAUfHNi9NVW"
}
},
"source": [
"Now let's run the diffusion process for a whole bunch of samples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"outputs": [],
"source": [
"# Now we'll set the noise to a MUCH smaller level\n",
"sigma_t=0.001\n",
@@ -318,24 +313,25 @@
"plt.hist(sampled_data, bins=bins, density =True)\n",
"ax.set_ylim(0, 0.8)\n",
"plt.show()"
],
"metadata": {
"id": "M-TY5w9Q8LYW"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Let's, plot the evolution of a few of the paths as in figure 18.7 (paths are from bottom to top now)."
],
"metadata": {
"id": "jYrAW6tN-gJ4"
}
},
"source": [
"Let's, plot the evolution of a few of the paths as in figure 18.7 (paths are from bottom to top now)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4XU6CDZC_kFo"
},
"outputs": [],
"source": [
"fig, ax = plt.subplots()\n",
"t_vals = np.arange(0,101,1)\n",
@@ -349,35 +345,37 @@
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "4XU6CDZC_kFo"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"The samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
],
"metadata": {
"id": "SGTYGGevAktz"
}
},
"source": [
"The samples have a tendency to move from positions that are near the center at time 100 to positions that are high in the true probability distribution at time 0"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Z-LZp_fMXxRt"
},
"source": [
"Let's now sample from the accelerated model, that requires fewer models. Again, we don't need to learn anything new -- this is just the reverse process that corresponds to a different forward process that is compatible with the same diffusion kernel.\n",
"\n",
"There's nothing to do here except read the code. It uses the same DDIM model as you just implemented in the previous step, but it jumps timesteps five at a time."
],
"metadata": {
"id": "Z-LZp_fMXxRt"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3Z0erjGbYj1u"
},
"outputs": [],
"source": [
"def sample_accelerated(model, T, sigma_t, n_steps, n_samples):\n",
" # Create the output array\n",
@@ -403,24 +401,25 @@
" if t>0:\n",
" samples[c_step-1,:] = samples[c_step-1,:]+ np.random.standard_normal(n_samples) * sigma_t\n",
" return samples"
],
"metadata": {
"id": "3Z0erjGbYj1u"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's draw a bunch of samples from the model"
],
"metadata": {
"id": "D3Sm_WYrcuED"
}
},
"source": [
"Now let's draw a bunch of samples from the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UB45c7VMcGy-"
},
"outputs": [],
"source": [
"sigma_t=0.11\n",
"n_samples = 100000\n",
@@ -438,15 +437,15 @@
"plt.hist(sampled_data, bins=bins, density =True)\n",
"ax.set_ylim(0, 0.9)\n",
"plt.show()"
],
"metadata": {
"id": "UB45c7VMcGy-"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Luv-6w84c_qO"
},
"outputs": [],
"source": [
"fig, ax = plt.subplots()\n",
"step_increment = 100/ n_steps\n",
@@ -464,21 +463,32 @@
"ax.set_xlabel('value')\n",
"ax.set_ylabel('z_{t}')\n",
"plt.show()"
],
"metadata": {
"id": "Luv-6w84c_qO"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [],
"execution_count": null,
"metadata": {
"id": "LSJi72f0kw_e"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": []
}
]
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNFSvISBXo/Z1l+onknF2Gw",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
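
The DDIM update displayed in the notebook above transcribes directly into code: the bracketed term is an estimate of the original data point x, which is then partially re-noised. The accelerated sampler in the final cells reuses exactly this step but strides the timesteps five at a time. Function and argument names here are illustrative:

import numpy as np

def ddim_step(z_t, g_pred, alpha_t, alpha_tm1, sigma, noise):
    # Estimate x from z_t and the predicted noise g_pred, then step to z_{t-1}
    x0_est = (z_t - np.sqrt(1.0 - alpha_t) * g_pred) / np.sqrt(alpha_t)
    return (np.sqrt(alpha_tm1) * x0_est
            + np.sqrt(1.0 - alpha_tm1 - sigma**2) * g_pred
            + sigma * noise)

# Accelerated variant: the same update applied with t = 100, 95, ..., 5,
# so the second alpha argument is alpha[t-5] rather than alpha[t-1].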

File diff suppressed because one or more lines are too long

View File

@@ -1,33 +1,22 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNQPfTDV6PFG7Ctcl+XVNlz",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap21/21_1_Bias_Mitigation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 21.1: Bias mitigation**\n",
"\n",
@@ -36,39 +25,42 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"execution_count": null,
"metadata": {
"id": "yC_LpiJqZXEL"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "2FYo1dWGZXgg"
},
"source": [
"# Worked example: loans\n",
"\n",
"Consider the example of an algorithm $c=\\mbox{f}[\\mathbf{x},\\boldsymbol\\phi]$ that predicts credit rating scores $c$ for loan decisions. There are two pools of loan applicants identified by the variable $p\\in\\{0,1\\}$ that well describe as the blue and yellow populations. We assume that we are given historical data, so we know both the credit rating and whether the applicant actually defaulted on the loan ($y=0$) or\n",
"Consider the example of an algorithm $c=\\text{f}[\\mathbf{x},\\boldsymbol\\phi]$ that predicts credit rating scores $c$ for loan decisions. There are two pools of loan applicants identified by the variable $p\\in\\{0,1\\}$ that well describe as the blue and yellow populations. We assume that we are given historical data, so we know both the credit rating and whether the applicant actually defaulted on the loan ($y=0$) or\n",
" repaid it ($y=1$).\n",
"\n",
"We can now think of four groups of data corresponding to (i) the blue and yellow populations and (ii) whether they did or did not repay the loan. For each of these four groups we have a distribution of credit ratings (figure 1). In an ideal world, the two distributions for the yellow population would be exactly the same as those for the blue population. However, as figure 1 shows, this is clearly not the case here."
],
"metadata": {
"id": "2FYo1dWGZXgg"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "O_0gGH9hZcjo"
},
"outputs": [],
"source": [
"# Class that can describe interesting curve shapes based on the input parameters\n",
"# Details don't matter\n",
@@ -86,30 +78,30 @@
" * 1.0 / np.sqrt(2*np.pi*self.sigma1*self.sigma1) \\\n",
" + self.weight * (1-self.prop) * np.exp(-0.5 * (x-self.mean2) * (x-self.mean2) / (self.sigma2 * self.sigma2)) \\\n",
" * 1.0 / np.sqrt(2*np.pi*self.sigma2*self.sigma2)\n"
],
"metadata": {
"id": "O_0gGH9hZcjo"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Bkp7vffBbrNW"
},
"outputs": [],
"source": [
"credit_scores = np.arange(-4,4,0.01)\n",
"freq_y0_p0 = FreqCurve(800, -1.5, -2.5, 0.8, 0.6, 0.6).freq(credit_scores)\n",
"freq_y1_p0 = FreqCurve(500, 0.1, 0.7, 1.5, 0.8, 0.4 ).freq(credit_scores)\n",
"freq_y0_p1 = FreqCurve(400, 0.2, -0.1, 0.8, 0.6, 0.3).freq(credit_scores)\n",
"freq_y1_p1 = FreqCurve(650, 0.6, 1.6, 1.2, 0.7, 0.6 ).freq(credit_scores)\n"
],
"metadata": {
"id": "Bkp7vffBbrNW"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Jf7uqyRyhVdS"
},
"outputs": [],
"source": [
"\n",
"fig = plt.figure\n",
@@ -136,15 +128,14 @@
"ax.legend()\n",
"\n",
"plt.show()"
],
"metadata": {
"id": "Jf7uqyRyhVdS"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "CfZ-srQtmff2"
},
"source": [
"Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that that the populations genuinely behave differently. In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n",
"\n",
@@ -153,45 +144,49 @@
" to go on, the best we can do is to assign different thresholds $\\tau_{1}$\n",
" and $\\tau_{2}$\n",
" for the blue and yellow populations so that the loan is granted if the credit score $c$ generated by the model exceeds $\\tau_0$ for the blue population and $\\tau_1$ for the yellow population."
],
"metadata": {
"id": "CfZ-srQtmff2"
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's investiate how to set these thresholds to fulfil different criteria."
],
"metadata": {
"id": "569oU1OtoFz8"
}
},
"source": [
"Now let's investiate how to set these thresholds to fulfil different criteria."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "bE7yPyuWoSUy"
},
"source": [
"# Blindness to protected attribute\n",
"\n",
"We'll first do the simplest possible thing. We'll choose the same threshold for both blue and yellow populations so that $\\tau_0$ = $\\tau_1$. Basically, we'll ignore what we know about the group membership. Let's see what the ramifications of that."
],
"metadata": {
"id": "bE7yPyuWoSUy"
}
]
},
{
"cell_type": "code",
"source": [
"# Set the thresholds\n",
"tau0 = tau1 = 0.0"
],
"execution_count": null,
"metadata": {
"id": "WIG8I-LvoFBY"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"# Set the thresholds\n",
"tau0 = tau1 = 0.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2EvkCvVBiCBn"
},
"outputs": [],
"source": [
"def compute_probability_get_loan(credit_scores, frequencies, threshold):\n",
" # TODO - Write this function\n",
@@ -202,47 +197,49 @@
"\n",
"\n",
" return prob"
],
"metadata": {
"id": "2EvkCvVBiCBn"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"First let's see what the overall probability of getting the loan is for the yellow and blue populations."
],
"metadata": {
"id": "AGT40q6_qfpv"
}
},
"source": [
"First let's see what the overall probability of getting the loan is for the yellow and blue populations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4nI-PR_wqWj6"
},
"outputs": [],
"source": [
"pr_get_loan_p0 = compute_probability_get_loan(credit_scores, freq_y0_p0+freq_y1_p0, tau0)\n",
"pr_get_loan_p1 = compute_probability_get_loan(credit_scores, freq_y0_p1+freq_y1_p1, tau1)\n",
"print(\"Probability blue group gets loan = %3.3f\"%(pr_get_loan_p0))\n",
"print(\"Probability yellow group gets loan = %3.3f\"%(pr_get_loan_p1))"
],
"metadata": {
"id": "4nI-PR_wqWj6"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"Now let's plot a receiver operating characteristic (ROC) curve. This shows the rate of true positives $Pr(\\hat{y}=1|y=1)$ (people who got loan and paid it back) and false alarms $Pr(\\hat{y}=1|y=0)$ (people who got the loan but didn't pay it back) for all possible thresholds."
],
"metadata": {
"id": "G2pEa6h6sIyu"
}
},
"source": [
"Now let's plot a receiver operating characteristic (ROC) curve. This shows the rate of true positives $Pr(\\hat{y}=1|y=1)$ (people who got loan and paid it back) and false alarms $Pr(\\hat{y}=1|y=0)$ (people who got the loan but didn't pay it back) for all possible thresholds."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2C7kNt3hqwiu"
},
"outputs": [],
"source": [
"def plot_roc(credit_scores, freq_y0_p0, freq_y1_p0, freq_y0_p1, freq_y1_p1, tau0, tau1):\n",
" true_positives_p0 = np.zeros_like(credit_scores)\n",
@@ -272,61 +269,64 @@
" ax.set_aspect('equal')\n",
"\n",
" plt.show()"
],
"metadata": {
"id": "2C7kNt3hqwiu"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"source": [
"plot_roc(credit_scores, freq_y0_p0, freq_y1_p0, freq_y0_p1, freq_y1_p1, tau0, tau1)"
],
"execution_count": null,
"metadata": {
"id": "h3OOQeTsv8uS"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"plot_roc(credit_scores, freq_y0_p0, freq_y1_p0, freq_y0_p1, freq_y1_p1, tau0, tau1)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "UCObTsa57uuC"
},
"source": [
"On this plot, the true positive and false alarm rate for the particular thresholds ($\\tau_0=\\tau_{1}=0$) that we chose are indicated by the circles.\n",
"\n",
"This criterion is clearly not great. The blue and yellow groups get given loans at different rates overall, and (for this threshold), the false alarms and true positives are also different, so it's not even fair when we consider whether the loans really were paid back. \n",
"\n",
"TODO -- investigate setting a different threshold $\\tau_{0}=\\tau_{1}$. Is it possible to make the overall rates that loans are given the same? Is it possible to make the false alarm rates the same? Is it possible to make the true positive rates the same?"
],
"metadata": {
"id": "UCObTsa57uuC"
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Yhrxv5AQ-PWA"
},
"source": [
"# Equality of odds\n",
"\n",
"This definition of fairness proposes that the false positive and true positive rates should be the same for both populations. This also sounds reasonable, but the ROC curve shows that it is not possible for this example. There is no combination of thresholds that can achieve this because the ROC curves do not intersect. Even if they did, we would be stuck giving loans based on the particular false positive and true positive rates at the intersection which might not be desirable."
],
"metadata": {
"id": "Yhrxv5AQ-PWA"
}
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "l6yb8vjX-gdi"
},
"source": [
"Demographic parity\n",
"\n",
"The thresholds can be chosen so that the same proportion of each group are classified as $\\hat{y}=1$ and given loans. We make an equal number of loans to each group despite the different tendencies of each to repay. This has the disadvantage that the true positive and false positive rates might be completely different in different populations. From the perspective of the lender, it is desirable to give loans in proportion to peoples ability to pay them back. From the perspective of an individual in a more reliable group, it may seem unfair that the other group gets offered the same number of loans despite the fact they are less reliable."
],
"metadata": {
"id": "l6yb8vjX-gdi"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "syjZ2fn5wC9-"
},
"outputs": [],
"source": [
"# TO DO -- try to change the two thresholds so the overall probability of getting the loan is 0.6 for each group\n",
"# Change the values in these lines\n",
@@ -340,55 +340,58 @@
"pr_get_loan_p1 = compute_probability_get_loan(credit_scores, freq_y0_p1+freq_y1_p1, tau1)\n",
"print(\"Probability blue group gets loan = %3.3f\"%(pr_get_loan_p0))\n",
"print(\"Probability yellow group gets loan = %3.3f\"%(pr_get_loan_p1))"
],
"metadata": {
"id": "syjZ2fn5wC9-"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"This is good, because now both groups get roughly the same amount of loans. But hold on... let's look at the ROC curve:"
],
"metadata": {
"id": "5QrtvZZlHCJy"
}
},
"source": [
"This is good, because now both groups get roughly the same amount of loans. But hold on... let's look at the ROC curve:"
]
},
{
"cell_type": "code",
"source": [
"plot_roc(credit_scores, freq_y0_p0, freq_y1_p0, freq_y0_p1, freq_y1_p1, tau0, tau1)"
],
"execution_count": null,
"metadata": {
"id": "VApyl_58GUQb"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"plot_roc(credit_scores, freq_y0_p0, freq_y1_p0, freq_y0_p1, freq_y1_p1, tau0, tau1)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"The blue dot is waaay above the yellow dot. The proportion of people who are given a load and do pay it back from the blue population is much higher than that from the yellow population. From another perspective, that's unfair... it seems like the yellow population are 'allowed' to default more often than the blue. This leads to another possibility."
],
"metadata": {
"id": "_GgX_b6yIE4W"
}
},
"source": [
"The blue dot is waaay above the yellow dot. The proportion of people who are given a load and do pay it back from the blue population is much higher than that from the yellow population. From another perspective, that's unfair... it seems like the yellow population are 'allowed' to default more often than the blue. This leads to another possibility."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "WDnaqetXHhlv"
},
"source": [
"# Equal opportunity:\n",
"\n",
"The thresholds are chosen so that so that the true positive rate is is the same for both population. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position."
],
"metadata": {
"id": "WDnaqetXHhlv"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zEN6HGJ7HJAZ"
},
"outputs": [],
"source": [
"# TO DO -- try to change the two thresholds so the true positive are 0.8 for each group\n",
"# Change the values in these lines so that both points on the curves have a height of 0.8\n",
@@ -397,45 +400,58 @@
"\n",
"\n",
"plot_roc(credit_scores, freq_y0_p0, freq_y1_p0, freq_y0_p1, freq_y1_p1, tau0, tau1)"
],
"metadata": {
"id": "zEN6HGJ7HJAZ"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"This seems fair -- people who are given loans default at the same rate (20%) for both groups. But hold on... let's look at the overall loan rate between the two populations:"
],
"metadata": {
"id": "JsyW0pBGJ24b"
}
},
"source": [
"This seems fair -- people who are given loans default at the same rate (20%) for both groups. But hold on... let's look at the overall loan rate between the two populations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2a5PXHeNJDjg"
},
"outputs": [],
"source": [
"# Compute overall probability of getting loan\n",
"pr_get_loan_p0 = compute_probability_get_loan(credit_scores, freq_y0_p0+freq_y1_p0, tau0)\n",
"pr_get_loan_p1 = compute_probability_get_loan(credit_scores, freq_y0_p1+freq_y1_p1, tau1)\n",
"print(\"Probability blue group gets loan = %3.3f\"%(pr_get_loan_p0))\n",
"print(\"Probability yellow group gets loan = %3.3f\"%(pr_get_loan_p1))"
],
"metadata": {
"id": "2a5PXHeNJDjg"
},
"execution_count": null,
"outputs": []
]
},
{
"attachments": {},
"cell_type": "markdown",
"source": [
"The conclusion from all this is that (i) definitions of fairness are quite subtle and (ii) it's not possible to satisfy them all simultaneously."
],
"metadata": {
"id": "tZTM7N6jKC7q"
}
}
},
"source": [
"The conclusion from all this is that (i) definitions of fairness are quite subtle and (ii) it's not possible to satisfy them all simultaneously."
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNQPfTDV6PFG7Ctcl+XVNlz",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
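
The TO DO function at the top of this notebook reduces to measuring how much of an (unnormalized) frequency curve lies above the threshold. One minimal way to complete it, assuming the uniform credit-score grid used throughout (spacing 0.01, which cancels in the ratio):

import numpy as np

def compute_probability_get_loan(credit_scores, frequencies, threshold):
    # Mass above the threshold divided by total mass; no explicit
    # integration step is needed on a uniform grid.
    above = credit_scores > threshold
    return np.sum(frequencies[above]) / np.sum(frequencies)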

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -14,9 +14,9 @@
<br>Published by MIT Press Dec 5th 2023.<br>
<ul>
<li>
<p style="font-size: larger; margin-bottom: 0">Download draft PDF Chapters 1-21 <a
href="https://github.com/udlbook/udlbook/releases/download/v1.17/UnderstandingDeepLearning_17_12_23_C.pdf">here</a>
</p>2023-12-17. CC-BY-NC-ND license<br>
<p style="font-size: larger; margin-bottom: 0">Download full PDF <a
href="https://github.com/udlbook/udlbook/releases/download/v2.00/UnderstandingDeepLearning_28_01_24_C.pdf">here</a>
</p>2024-01-28. CC-BY-NC-ND license<br>
<img src="https://img.shields.io/github/downloads/udlbook/udlbook/total" alt="download stats shield">
</li>
<li> Order your copy from <a href="https://mitpress.mit.edu/9780262048644/understanding-deep-learning/">here </a></li>
@@ -218,7 +218,7 @@
<li><a href="https://drive.google.com/uc?export=download&id=1RVZW3KjEs0vNSGx3B2fdizddlr6I0wLl">8. Performance</a></li>
<li><a href="https://drive.google.com/uc?export=download&id=1LTicIKPRPbZRkkg6qOr1DSuOB72axood">9. Regularization</a></li>
<li><a href="https://drive.google.com/uc?export=download&id=1bGVuwAwrofzZdfvj267elIzkYMIvYFj0">10. Convolutional Networks</a></li>
<li><a href="https://drive.google.com/uc?export=download&id=1Kllhj0HdS_I3qE2XDU6ifgGGj3tmSRcl">11. Image Generation</a></li>
<li><a href="https://drive.google.com/uc?export=download&id=14w31QqWRDix1GdUE-na0_E0kGKBhtKzs">11. Image Generation</a></li>
<li><a href="https://drive.google.com/uc?export=download&id=1af6bTTjAbhDYfrDhboW7Fuv52Gk9ygKr">12. Transformers and LLMs</a></li>
</ul>