Compare commits

...

82 Commits

Author SHA1 Message Date
udlbook b576187c01 Created using Colaboratory 2023-07-26 12:21:21 -04:00
udlbook ffc6df0849 Created using Colaboratory 2023-07-26 11:59:17 -04:00
udlbook 17eec0ba60 Created using Colaboratory 2023-07-26 11:47:55 -04:00
udlbook ffcc6d3c32 Created using Colaboratory 2023-07-26 11:23:39 -04:00
udlbook 1c9dd9f387 Created using Colaboratory 2023-07-26 10:57:28 -04:00
udlbook 624dbee611 Created using Colaboratory 2023-07-26 10:40:09 -04:00
udlbook ef2920a5fc Created using Colaboratory 2023-07-26 10:07:40 -04:00
udlbook 7809de3d7f Update index.html 2023-07-25 17:10:22 -04:00
udlbook a7bb4e5ce8 Update index.html 2023-07-25 17:09:30 -04:00
udlbook a09e3cde2e Update index.html 2023-07-25 17:03:47 -04:00
udlbook ea03aca917 Update index.html 2023-07-25 17:03:03 -04:00
udlbook c84eedd252 Update index.html 2023-07-25 16:58:39 -04:00
udlbook b085766752 Update index.html 2023-07-25 16:54:54 -04:00
udlbook 759d566a0f Add files via upload 2023-07-25 16:51:06 -04:00
udlbook fd65e9bce7 Update index.html 2023-07-25 16:43:26 -04:00
udlbook 8a02341771 Update index.html 2023-07-25 16:41:19 -04:00
udlbook dd4926dfde Add files via upload 2023-07-25 16:40:11 -04:00
udlbook ea959297b8 Add files via upload 2023-07-25 16:34:43 -04:00
udlbook 6be2911ec8 Update index.html 2023-07-25 16:31:38 -04:00
udlbook 88e8e3b336 Update index.html 2023-07-25 16:23:45 -04:00
udlbook ee6960bf0f Update index.html 2023-07-25 16:05:24 -04:00
udlbook eb10907982 Update index.html 2023-07-25 15:59:33 -04:00
udlbook ef1c6aaec9 Update index.html 2023-07-25 15:48:11 -04:00
udlbook 76df999fec Update index.html 2023-07-25 15:38:38 -04:00
udlbook 238285d363 Created using Colaboratory 2023-07-25 15:18:10 -04:00
udlbook 2417379be3 Created using Colaboratory 2023-07-25 14:52:03 -04:00
udlbook c2bf535ad2 Created using Colaboratory 2023-07-25 13:14:42 -04:00
udlbook 322b3da22b Created using Colaboratory 2023-07-25 13:13:10 -04:00
udlbook 46bc9d5723 Created using Colaboratory 2023-07-25 12:39:54 -04:00
udlbook b000fafc0d Created using Colaboratory 2023-07-25 10:06:38 -04:00
udlbook 40201c4604 Created using Colaboratory 2023-07-25 08:53:52 -04:00
udlbook f41ba979f8 Delete temp.txt 2023-07-25 08:47:47 -04:00
udlbook b8fe99a6d3 Created using Colaboratory 2023-07-25 08:47:27 -04:00
udlbook a20bf8f59a Create temp.txt 2023-07-25 08:43:52 -04:00
udlbook cf8d35aa28 Delete temp.txt 2023-07-24 12:20:06 -04:00
udlbook 199a0bd2d5 Created using Colaboratory 2023-07-24 12:19:42 -04:00
udlbook 45bc31cc39 Create temp.txt 2023-07-24 12:19:17 -04:00
udlbook f6eedcdc51 Delete temp.txt 2023-07-24 09:49:46 -04:00
udlbook a983fb78c8 Created using Colaboratory 2023-07-24 09:49:23 -04:00
udlbook cd0d587e08 Created using Colaboratory 2023-07-24 09:48:26 -04:00
udlbook 885cb786cb Create temp.txt 2023-07-24 09:47:06 -04:00
udlbook a86e4bca4f Create Info.txt 2023-07-24 09:46:37 -04:00
udlbook 9e5700c6c7 Delete practicals directory 2023-07-24 06:22:41 -04:00
udlbook 559807f22c Add files via upload 2023-07-24 06:21:48 -04:00
udlbook d7ff1e1531 Create Info.txt 2023-07-24 06:21:27 -04:00
udlbook ef0a0397ce Add files via upload 2023-07-24 06:20:25 -04:00
udlbook 5f0a4970e5 Delete Understanding_Deep_Learning_01_10_22_C.pdf 2023-07-23 18:43:02 -04:00
udlbook 1022439364 Delete UnderstandingDeepLearning_30_07_22_C.pdf 2023-07-23 18:42:48 -04:00
udlbook 80fff5c1ce Rename Training_I.ipynb to CM20315/CM20315_Training_I.ipynb 2023-07-23 18:41:48 -04:00
udlbook e82701b1fd Rename CX20315_Transformers.ipynb to CM20315/CM20315_Transformers.ipynb 2023-07-23 18:41:25 -04:00
udlbook e7a3e4c633 Rename CM20315_Training_III.ipynb to CM20315/CM20315_Training_III.ipynb 2023-07-23 18:41:05 -04:00
udlbook 9ffd7c710d Rename CM20315_Training_II.ipynb to CM20315/CM20315_Training_II.ipynb 2023-07-23 18:40:47 -04:00
udlbook 832bb929a2 Rename CM20315_Shallow.ipynb to CM20315/CM20315_Shallow.ipynb 2023-07-23 18:40:30 -04:00
udlbook 9da150a703 Rename CM20315_Loss_III.ipynb to CM20315/CM20315_Loss_III.ipynb 2023-07-23 18:40:09 -04:00
udlbook 59ae70fe74 Rename CM20315_Loss_II.ipynb to CM20315/CM20315_Loss_II.ipynb 2023-07-23 18:39:44 -04:00
udlbook d4ad50fa6a Rename CM20315_Loss.ipynb to CM20315/CM20315_Loss.ipynb 2023-07-23 18:39:02 -04:00
udlbook bdea676518 Rename CM20315_Intro_Answers.ipynb to CM20315/CM20315_Intro_Answers.ipynb 2023-07-23 18:38:37 -04:00
udlbook ffae34bc5e Rename CM20315_Intro.ipynb to CM20315/CM20315_Intro.ipynb 2023-07-23 18:38:12 -04:00
udlbook 37111ff72c Rename CM20315_Gradients_III.ipynb to CM20315/CM20315_Gradients_III.ipynb 2023-07-23 18:37:46 -04:00
udlbook 75ddf8c16f Rename CM20315_Gradients_II.ipynb to CM20315/CM20315_Gradients_II.ipynb 2023-07-23 18:37:26 -04:00
udlbook 37d21102da Rename CM20315_Gradients_I.ipynb to CM20315/CM20315_Gradients_I.ipynb 2023-07-23 18:36:55 -04:00
udlbook f09c84de02 Rename CM20315_Deep2.ipynb to CM20315/CM20315_Deep2.ipynb 2023-07-23 18:36:34 -04:00
udlbook 26e80dbcc2 Rename CM20315_Deep.ipynb to CM20315/CM20315_Deep.ipynb 2023-07-23 18:36:17 -04:00
udlbook 8e19360163 Rename CM20315_Coursework_IV.ipynb to CM20315/CM20315_Coursework_IV.ipynb 2023-07-23 18:35:58 -04:00
udlbook 6cd2aab77c Rename CM20315_Coursework_III.ipynb to CM20315/CM20315_Coursework_III.ipynb 2023-07-23 18:35:36 -04:00
udlbook 34a7775f9f Rename CM20315_Coursework_II.ipynb to CM20315/CM20315_Coursework_II.ipynb 2023-07-23 18:35:17 -04:00
udlbook f1b5b6fee5 Rename CM20315_Coursework_I.ipynb to CM20315/CM20315_Coursework_I.ipynb 2023-07-23 18:35:00 -04:00
udlbook a82a3ce69c Rename CM20315_Convolution_III.ipynb to CM20315/CM20315_Convolution_III.ipynb 2023-07-23 18:34:34 -04:00
udlbook 3b59907c9c Rename CM20315_Convolution_II.ipynb to CM20315/CM20315_Convolution_II.ipynb 2023-07-23 18:34:18 -04:00
udlbook e5b7eaed92 Rename CM20315_Convolution_I.ipynb to CM20315/CM20315_Convolution_I.ipynb 2023-07-23 18:33:57 -04:00
udlbook 90e3138acf Create Info.txt 2023-07-23 18:32:59 -04:00
udlbook 9fa2e5c372 Add files via upload 2023-07-23 18:22:44 -04:00
udlbook 45ddce0cd2 Update index.html 2023-07-23 17:57:45 -04:00
udlbook a0a29a9a6b Update index.html 2023-05-08 08:50:30 -04:00
udlbook 25df06a02d Update index.html 2023-05-03 19:19:34 -04:00
udlbook 9bb3f672d8 Update index.html 2023-05-03 19:09:39 -04:00
udlbook b5fbe8445e Update index.html 2023-04-26 18:19:22 -04:00
udlbook fd0144d4ab Update index.html 2023-04-24 14:34:36 -04:00
udlbook 4335f935a1 Update index.html 2023-04-19 08:33:10 -04:00
udlbook 45ddca3c52 Update index.html 2023-04-17 14:01:58 -04:00
udlbook b52d05a785 Update index.html 2023-04-11 11:23:03 -04:00
udlbook 61316a273b Update index.html 2023-04-08 17:09:26 -04:00
77 changed files with 4923 additions and 51 deletions

CM20315/Data/Info.txt Normal file

@@ -0,0 +1 @@
Data for CM20315 practical

CM20315/Info.txt Normal file

@@ -0,0 +1 @@
Practicals from CM20315 course taught at University of Bath, Fall 2022

@@ -0,0 +1,423 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap01/1_1_BackgroundMathematics.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "s5zzKSOusPOB"
},
"source": [
"\n",
"# **Notebook 1.1 -- Background Mathematics**\n",
"\n",
"The purpose of this Python notebook is to make sure you can use CoLab and to familiarize yourself with some of the background mathematical concepts that you are going to need to understand deep learning. <br><br> It's not meant to be difficult and it may be that you know some or all of this information already.<br><br> Math is *NOT* a spectator sport. You won't learn it by just listening to lectures or reading books. It really helps to interact with it and explore yourself. <br><br> Work through the cells below, running each cell in turn. In various places you will see the words **\"TO DO\"**. Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aUAjBbqzivMY"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WV2Dl6owme2d"
},
"source": [
"**Linear functions**<br> We will be using the term *linear equation* to mean a weighted sum of inputs plus an offset. If there is just one input $x$, then this is a straight line:\n",
"\n",
"\\begin{equation}y=\\beta+\\omega x,\\end{equation} <br>\n",
"\n",
"where $\\beta$ is the y-intercept of the linear and $\\omega$ is the slope of the line. When there are two inputs $x_{1}$ and $x_{2}$, then this becomes:\n",
"\n",
"\\begin{equation}y=\\beta+\\omega_1 x_1 + \\omega_2 x_2.\\end{equation} <br><br>\n",
"\n",
"Any other functions are by definition **non-linear**.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WeFK4AvTotd8"
},
"outputs": [],
"source": [
"# Define a linear function with just one input, x\n",
"def linear_function_1D(x,beta,omega):\n",
" # TODO -- replace the code lin below with formula for 1D linear equation\n",
" y = x\n",
"\n",
" return y"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eimhJ8_jpmEp"
},
"outputs": [],
"source": [
"# Plot the 1D linear function\n",
"\n",
"# Define an array of x values from 0 to 10 with increments of 0.1\n",
"# https://numpy.org/doc/stable/reference/generated/numpy.arange.html\n",
"x = np.arange(0.0,10.0, 0.01)\n",
"# Compute y using the function you filled in above\n",
"beta = 0.0; omega = 1.0\n",
"\n",
"y = linear_function_1D(x,beta,omega)\n",
"\n",
"# Plot this function\n",
"fig, ax = plt.subplots()\n",
"ax.plot(x,y,'r-')\n",
"ax.set_ylim([0,10]);ax.set_xlim([0,10])\n",
"ax.set_xlabel('x'); ax.set_ylabel('y')\n",
"plt.show\n",
"\n",
"# TODO -- experiment with changing the values of beta and omega\n",
"# to understand what they do. Try to make a line\n",
"# that crosses the y-axis at y=10 and the x-axis at x=5"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AedfvD9dxShZ"
},
"source": [
"Now let's investigate a 2D linear function"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "57Gvkk-Ir_7b"
},
"outputs": [],
"source": [
"# Code to draw 2D function -- read it so you know what is going on, but you don't have to change it\n",
"def draw_2D_function(x1_mesh, x2_mesh, y):\n",
" fig, ax = plt.subplots()\n",
" fig.set_size_inches(7,7)\n",
" pos = ax.contourf(x1_mesh, x2_mesh, y, levels=256 ,cmap = 'hot', vmin=-10,vmax=10.0)\n",
" fig.colorbar(pos, ax=ax)\n",
" ax.set_xlabel('x1');ax.set_ylabel('x2')\n",
" levels = np.arange(-10,10,1.0)\n",
" ax.contour(x1_mesh, x2_mesh, y, levels, cmap='winter')\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YxeNhrXMzkZR"
},
"outputs": [],
"source": [
"# Define a linear function with two inputs, x1 and x2\n",
"def linear_function_2D(x1,x2,beta,omega1,omega2):\n",
" # TODO -- replace the code line below with formula for 2D linear equation\n",
" y = x1\n",
"\n",
" return y"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rn_UBRDBysmR"
},
"outputs": [],
"source": [
"# Plot the 2D function\n",
"\n",
"# Make 2D array of x and y points\n",
"x1 = np.arange(0.0, 10.0, 0.1)\n",
"x2 = np.arange(0.0, 10.0, 0.1)\n",
"x1,x2 = np.meshgrid(x1,x2) # https://www.geeksforgeeks.org/numpy-meshgrid-function/\n",
"\n",
"# Compute the 2D function for given values of omega1, omega2\n",
"beta = 0.0; omega1 = 1.0; omega2 = -0.5\n",
"y = linear_function_2D(x1,x2,beta, omega1, omega2)\n",
"\n",
"# Draw the function.\n",
"# Color represents y value (brighter = higher value)\n",
"# Black = -10 or less, White = +10 or more\n",
"# 0 = mid orange\n",
"# Lines are conoturs where value is equal\n",
"draw_2D_function(x1,x2,y)\n",
"\n",
"# TODO\n",
"# Predict what this plot will look like if you set omega_1 to zero\n",
"# Change the code and see if you are right.\n",
"\n",
"# TODO\n",
"# Predict what this plot will look like if you set omega_2 to zero\n",
"# Change the code and see if you are right.\n",
"\n",
"# TODO\n",
"# Predict what this plot will look like if you set beta to -5\n",
"# Change the code and see if you are correct\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i8tLwpls476R"
},
"source": [
"Often we will want to compute many linear functions at the same time. For example, we might have three inputs, $x_1$, $x_2$, and $x_3$ and want to compute two linear functions giving $y_1$ and $y_2$. Of course, we could do this by just running each equation separately,<br><br>\n",
"\n",
"\\begin{eqnarray}y_1 &=& \\beta_1 + \\omega_{11} x_1 + \\omega_{12} x_2 + \\omega_{13} x_3\\\\\n",
"y_2 &=& \\beta_2 + \\omega_{21} x_1 + \\omega_{22} x_2 + \\omega_{23} x_3.\n",
"\\end{eqnarray}<br>\n",
"\n",
"However, we can write it more compactly with vectors and matrices:\n",
"\n",
"\\begin{equation}\n",
"\\begin{bmatrix} y_1\\\\ y_2 \\end{bmatrix} = \\begin{bmatrix}\\beta_{1}\\\\\\beta_{2}\\end{bmatrix}+ \\begin{bmatrix}\\omega_{11}&\\omega_{12}&\\omega_{13}\\\\\\omega_{21}&\\omega_{22}&\\omega_{23}\\end{bmatrix}\\begin{bmatrix}x_{1}\\\\x_{2}\\\\x_{3}\\end{bmatrix},\n",
"\\end{equation}<br>\n",
"or\n",
"\n",
"\\begin{equation}\n",
"\\mathbf{y} = \\boldsymbol\\beta +\\boldsymbol\\Omega\\mathbf{x}.\n",
"\\end{equation}\n",
"\n",
"for short. Here, lowercase bold symbols are used for vectors. Upper case bold symbols are used for matrices.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MjHXMavh9IUz"
},
"outputs": [],
"source": [
"# Define a linear function with three inputs, x1, x2, and x_3\n",
"def linear_function_3D(x1,x2,x3,beta,omega1,omega2,omega3):\n",
" # TODO -- replace the code below with formula for a single 3D linear equation\n",
" y = x1\n",
"\n",
" return y"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fGzVJQ6N-mHJ"
},
"source": [
"Let's compute two linear equations, using both the individual equations and the vector / matrix form and check they give the same result"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Swd_bFIE9p2n"
},
"outputs": [],
"source": [
"# Define the parameters\n",
"beta1 = 0.5; beta2 = 0.2\n",
"omega11 = -1.0 ; omega12 = 0.4; omega13 = -0.3\n",
"omega21 = 0.1 ; omega22 = 0.1; omega23 = 1.2\n",
"\n",
"# Define the inputs\n",
"x1 = 4 ; x2 =-1; x3 = 2\n",
"\n",
"# Compute using the individual equations\n",
"y1 = linear_function_3D(x1,x2,x3,beta1,omega11,omega12,omega13)\n",
"y2 = linear_function_3D(x1,x2,x3,beta2,omega21,omega22,omega23)\n",
"print(\"Individual equations\")\n",
"print('y1 = %3.3f\\ny2 = %3.3f'%((y1,y2)))\n",
"\n",
"# Define vectors and matrices\n",
"beta_vec = np.array([[beta1],[beta2]])\n",
"omega_mat = np.array([[omega11,omega12,omega13],[omega21,omega22,omega23]])\n",
"x_vec = np.array([[x1], [x2], [x3]])\n",
"\n",
"# Compute with vector/matrix form\n",
"y_vec = beta_vec+np.matmul(omega_mat, x_vec)\n",
"print(\"Matrix/vector form\")\n",
"print('y1= %3.3f\\ny2 = %3.3f'%((y_vec[0],y_vec[1])))\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3LGRoTMLU8ZU"
},
"source": [
"# Questions\n",
"\n",
"1. A single linear equation with three inputs (i.e. **linear_function_3D()**) associates a value y with each point in a 3D space ($x_1$,$x_2$,$x_3$). Is it possible to visualize this? What value is at position (0,0,0)?\n",
"\n",
"2. Write code to compute three linear equations with two inputs ($x_1$, $x_2$) using both the individual equations and the matrix form (you can make up any values for the inputs $\\beta_{i}$ and the slopes $\\omega_{ij}$."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7Y5zdKtKZAB2"
},
"source": [
"# Special functions\n",
"\n",
"Throughout the book, we'll be using some special functions (see Appendix B.1.3). The most important of these are the logarithm and exponential functions. Let's investigate their properties.\n",
"\n",
"We'll start with the exponential function $y=\\mbox{exp}[x]=e^x$ which maps the real line $[-\\infty,+\\infty]$ to non-negative numbers $[0,+\\infty]$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "c_GkjiY9IWCu"
},
"outputs": [],
"source": [
"# Draw the exponential function\n",
"\n",
"# Define an array of x values from -5 to 5 with increments of 0.1\n",
"x = np.arange(-5.0,5.0, 0.01)\n",
"y = np.exp(x) ;\n",
"\n",
"# Plot this function\n",
"fig, ax = plt.subplots()\n",
"ax.plot(x,y,'r-')\n",
"ax.set_ylim([0,100]);ax.set_xlim([-5,5])\n",
"ax.set_xlabel('x'); ax.set_ylabel('exp[x]')\n",
"plt.show"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XyrT8257IWCu"
},
"source": [
"# Questions\n",
"\n",
"1. What is $\\mbox{exp}[0]$? \n",
"2. What is $\\mbox{exp}[1]$?\n",
"3. What is $\\mbox{exp}[-\\infty]$?\n",
"4. What is $\\mbox{exp}[+\\infty]$?\n",
"5. A function is convex if we can draw a straight line between any two points on the\n",
"function, and this line always lies above the function. Similarly, a function is concave\n",
"if a straight line between any two points always lies below the function. Is the exponential function convex or concave or neither?\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "R6A4e5IxIWCu"
},
"source": [
"Now let's consider the logarithm function $y=\\log[x]$. Throughout the book we always use natural (base $e$) logarithms. The log funcction maps non-negative numbers $[0,\\infty]$ to real numbers $[-\\infty,\\infty]$. It is the inverse of the exponential function. So when we compute $\\log[x]$ we are really asking \"What is the number $y$ so that $e^y=x$?\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fOR7v2iXIWCu"
},
"outputs": [],
"source": [
"# Draw the logarithm function\n",
"\n",
"# Define an array of x values from -5 to 5 with increments of 0.1\n",
"x = np.arange(0.01,5.0, 0.01)\n",
"y = np.log(x) ;\n",
"\n",
"# Plot this function\n",
"fig, ax = plt.subplots()\n",
"ax.plot(x,y,'r-')\n",
"ax.set_ylim([-5,5]);ax.set_xlim([0,5])\n",
"ax.set_xlabel('x'); ax.set_ylabel('$\\log[x]$')\n",
"plt.show"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yYWrL5AXIWCv"
},
"source": [
"# Questions\n",
"\n",
"1. What is $\\mbox{log}[0]$? \n",
"2. What is $\\mbox{log}[1]$?\n",
"3. What is $\\mbox{log}[e]$?\n",
"4. What is $\\mbox{log}[\\exp[3]]$?\n",
"5. What is $\\mbox{exp}[\\log[4]]$?\n",
"6. What is $\\mbox{log}[-1]$?\n",
"7. Is the logarithm function concave or convex?\n"
]
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "XG0CKLiPJI7I"
},
"execution_count": null,
"outputs": []
}
],
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
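
For reference, a minimal sketch of how the TODO stubs in this notebook could be completed, assuming only the weighted-sum-plus-offset definition of a linear function given in the markdown cells (one possible solution, not the author's official answers):

import numpy as np

# Hypothetical completions of the TODO stubs above
def linear_function_1D(x, beta, omega):
    # Straight line: y-intercept beta, slope omega
    return beta + omega * x

def linear_function_2D(x1, x2, beta, omega1, omega2):
    # Offset plus a weighted sum of the two inputs
    return beta + omega1 * x1 + omega2 * x2

def linear_function_3D(x1, x2, x3, beta, omega1, omega2, omega3):
    # Offset plus a weighted sum of the three inputs
    return beta + omega1 * x1 + omega2 * x2 + omega3 * x3

With these definitions, setting beta = 10 and omega = -2 in the first plotting cell gives a line that crosses the y-axis at y=10 and the x-axis at x=5, as the TODO comment requests.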

@@ -0,0 +1,253 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOmndC0N7dFV7W3Mh5ljOLl",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap02/2_1_Supervised_Learning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# Notebook 2.1 Supervised Learning\n",
"\n",
"The purpose of this notebook is to explore the linear regression model dicussed in Chapter 2 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "sfB2oX2RNvuF"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uoYl2Gn3Nr52"
},
"outputs": [],
"source": [
"# Math library\n",
"import numpy as np\n",
"# Plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Create some input / output data\n",
"x = np.array([0.03, 0.19, 0.34, 0.46, 0.78, 0.81, 1.08, 1.18, 1.39, 1.60, 1.65, 1.90])\n",
"y = np.array([0.67, 0.85, 1.05, 1.0, 1.40, 1.5, 1.3, 1.54, 1.55, 1.68, 1.73, 1.6 ])\n",
"\n",
"print(x)\n",
"print(y)"
],
"metadata": {
"id": "MUbTD4znORtd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define 1D linear regression model\n",
"def f(x, phi0, phi1):\n",
" # TODO : Replace this line with the linear regression model (eq 2.4)\n",
" y = x\n",
"\n",
" return y"
],
"metadata": {
"id": "lw2dCRHwSW9a"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Function to help plot the data\n",
"def plot(x, y, phi0, phi1):\n",
" fig,ax = plt.subplots()\n",
" ax.scatter(x,y)\n",
" plt.xlim([0,2.0])\n",
" plt.ylim([0,2.0])\n",
" ax.set_xlabel('Input, $x$')\n",
" ax.set_ylabel('Output, $y$')\n",
" # Draw line\n",
" x_line = np.arange(0,2,0.01)\n",
" y_line = f(x_line, phi0, phi1)\n",
" plt.plot(x_line, y_line,'b-',lw=2)\n",
"\n",
" plt.show()"
],
"metadata": {
"id": "VT4F3xxSOt8C"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Set the intercept and slope as in figure 2.2b\n",
"phi0 = 0.4 ; phi1 = 0.2\n",
"# Plot the data and the model\n",
"plot(x,y,phi0,phi1)"
],
"metadata": {
"id": "AkdZdmhHWuVR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Function to calculate the loss\n",
"def compute_loss(x,y,phi0,phi1):\n",
"\n",
" # TODO Replace this line with the loss calculation (equation 2.5)\n",
" loss = 0\n",
"\n",
"\n",
" return loss"
],
"metadata": {
"id": "1-GW218wX44b"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Compute the loss for our current model\n",
"loss = compute_loss(x,y,phi0,phi1)\n",
"print(f'Your Loss = {loss:3.2f}, Ground truth =7.07')"
],
"metadata": {
"id": "Hgw7_GzBZ8tX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Set the intercept and slope as in figure 2.2c\n",
"phi0 = 1.60 ; phi1 =-0.8\n",
"# Plot the data and the model\n",
"plot(x,y,phi0,phi1)\n",
"loss = compute_loss(x,y,phi0,phi1)\n",
"print(f'Your Loss = {loss:3.2f}, Ground truth =10.28')"
],
"metadata": {
"id": "_vZS28-FahGP"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TO DO -- Change the parameters manually to fit the model\n",
"# First fix phi1 and try changing phi0 until you can't make the loss go down any more\n",
"# Then fix phi0 and try changing phi1 until you can't make the loss go down any more\n",
"# Repeat this process until you find a set of parameters that fit the model as in figure 2.2d\n",
"# You can either do this by hand, or if you want to get fancy, write code to descent automatically in this way\n",
"# Start at these values:\n",
"phi0 = 1.60 ; phi1 =-0.8\n",
"\n",
"plot(x,y,phi0,phi1)\n",
"print(f'Your Loss = {compute_loss(x,y,phi0,phi1):3.2f}')"
],
"metadata": {
"id": "VzpnzdW5d9vj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Visualizing the loss function\n",
"\n",
"The above process is equivalent to to descending coordinate wise on the loss function<br>\n",
"\n",
"Now let's plot that function"
],
"metadata": {
"id": "MNC4qEZognEe"
}
},
{
"cell_type": "code",
"source": [
"# Make a 2D grid of possible phi0 and phi1 values\n",
"phi0_mesh, phi1_mesh = np.meshgrid(np.arange(0.0,2.0,0.02), np.arange(-1.0,1.0,0.02))\n",
"\n",
"# Make a 2D array for the losses\n",
"all_losses = np.zeros_like(phi1_mesh)\n",
"# Run throught each 2D combination of phi0, phi1 and compute loss\n",
"for indices,temp in np.ndenumerate(phi1_mesh):\n",
" all_losses[indices] = compute_loss(x,y, phi0_mesh[indices], phi1_mesh[indices])\n"
],
"metadata": {
"id": "ATrU8sqqg2hJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Plot the loss function as a heatmap\n",
"fig = plt.figure()\n",
"ax = plt.axes()\n",
"fig.set_size_inches(7,7)\n",
"levels = 256\n",
"ax.contourf(phi0_mesh, phi1_mesh, all_losses ,levels)\n",
"levels = 40\n",
"ax.contour(phi0_mesh, phi1_mesh, all_losses ,levels, colors=['#80808080'])\n",
"ax.set_ylim([1,-1])\n",
"ax.set_xlabel('Intercept, $\\phi_0$')\n",
"ax.set_ylabel('Slope, $\\phi_1$')\n",
"\n",
"# Plot the position of your best fitting line on the loss function\n",
"# It should be close to the minimum\n",
"ax.plot(phi0,phi1,'ro')\n",
"plt.show()"
],
"metadata": {
"id": "6OXAjx5xfQkl"
},
"execution_count": null,
"outputs": []
}
]
}
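
A minimal sketch of completed versions of f() and compute_loss() consistent with the equations the TODO comments cite (a 1D linear model and a least-squares loss); with phi0 = 0.4 and phi1 = 0.2 on the data above this reproduces the ground-truth loss of 7.07 printed by the test cell:

import numpy as np

def f(x, phi0, phi1):
    # 1D linear regression model (equation 2.4): intercept plus slope times input
    return phi0 + phi1 * x

def compute_loss(x, y, phi0, phi1):
    # Least-squares loss (equation 2.5): sum of squared residuals
    pred_y = f(x, phi0, phi1)
    return np.sum((pred_y - y) ** 2)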

@@ -0,0 +1,294 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPFqKOqd6BjlymOawCRkmfn",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap03/3_2_Shallow_Networks_II.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 3.2 -- Shallow neural networks II**\n",
"\n",
"The purpose of this notebook is to gain some familiarity with shallow neural networks with 2D inputs. It works through an example similar to figure 3.8 and experiments with different activation functions. <br><br>\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "1Z6LB4Ybn1oN"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hAM55ZjSncOk"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Code to draw 2D function -- read it so you know what is going on, but you don't have to change it\n",
"def draw_2D_function(ax, x1_mesh, x2_mesh, y):\n",
" pos = ax.contourf(x1_mesh, x2_mesh, y, levels=256 ,cmap = 'hot', vmin=-10,vmax=10.0)\n",
" ax.set_xlabel('x1');ax.set_ylabel('x2')\n",
" levels = np.arange(-10,10,1.0)\n",
" ax.contour(x1_mesh, x2_mesh, y, levels, cmap='winter')\n",
"\n",
"# Plot the shallow neural network. We'll assume input in is range [0,10],[0,10] and output [-10,10]\n",
"def plot_neural_2_inputs(x1,x2, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3):\n",
"\n",
" fig, ax = plt.subplots(3,3)\n",
" fig.set_size_inches(8.5, 8.5)\n",
" fig.tight_layout(pad=3.0)\n",
" draw_2D_function(ax[0,0], x1,x2,pre_1); ax[0,0].set_title('Preactivation')\n",
" draw_2D_function(ax[0,1], x1,x2,pre_2); ax[0,1].set_title('Preactivation')\n",
" draw_2D_function(ax[0,2], x1,x2,pre_3); ax[0,2].set_title('Preactivation')\n",
" draw_2D_function(ax[1,0], x1,x2,act_1); ax[1,0].set_title('Activation')\n",
" draw_2D_function(ax[1,1], x1,x2,act_2); ax[1,1].set_title('Activation')\n",
" draw_2D_function(ax[1,2], x1,x2,act_3); ax[1,2].set_title('Activation')\n",
" draw_2D_function(ax[2,0], x1,x2,w_act_1); ax[2,0].set_title('Weighted Act')\n",
" draw_2D_function(ax[2,1], x1,x2,w_act_2); ax[2,1].set_title('Weighted Act')\n",
" draw_2D_function(ax[2,2], x1,x2,w_act_3); ax[2,2].set_title('Weighted Act')\n",
" plt.show()\n",
"\n",
" fig, ax = plt.subplots()\n",
" draw_2D_function(ax,x1,x2,y)\n",
" ax.set_title('Network output, $y$')\n",
" ax.set_aspect(1.0)\n",
" plt.show()"
],
"metadata": {
"id": "IHtCP0t2HC4c"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation"
],
"metadata": {
"id": "Lw71laEeJgKs"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a shallow neural network with, two input, one output, and three hidden units\n",
"def shallow_2_1_3(x1,x2, activation_fn, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11,\\\n",
" theta_12, theta_20, theta_21, theta_22, theta_30, theta_31, theta_32):\n",
" # TODO Replace the lines below to compute the three initial linear functions\n",
" # (figure 3.8a-c) from the theta parameters. These are the preactivations\n",
" pre_1 = np.zeros_like(x1)\n",
" pre_2 = np.zeros_like(x1)\n",
" pre_3 = np.zeros_like(x1)\n",
"\n",
" # Pass these through the ReLU function to compute the activations as in\n",
" # figure 3.8 d-f\n",
" act_1 = activation_fn(pre_1)\n",
" act_2 = activation_fn(pre_2)\n",
" act_3 = activation_fn(pre_3)\n",
"\n",
" # TODO Replace the code below to weight the activations using phi1, phi2 and phi3\n",
" # To create the equivalent of figure 3.8 g-i\n",
" w_act_1 = np.zeros_like(x1)\n",
" w_act_2 = np.zeros_like(x1)\n",
" w_act_3 = np.zeros_like(x1)\n",
"\n",
" # TODO Replace the code below to combing the weighted activations and add\n",
" # phi_0 to create the output as in figure 3.8j\n",
" y = np.zeros_like(x1)\n",
"\n",
" # Return everything we have calculated\n",
" return y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3"
],
"metadata": {
"id": "VIZA8HywIjfl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now lets define some parameters and run the neural network\n",
"theta_10 = -4.0 ; theta_11 = 0.9; theta_12 = 0.0\n",
"theta_20 = 5.0 ; theta_21 = -0.9 ; theta_22 = -0.5\n",
"theta_30 = -7 ; theta_31 = 0.5; theta_32 = 0.9\n",
"phi_0 = 0.0; phi_1 = -2.0; phi_2 = 2.0; phi_3 = 1.5\n",
"\n",
"x1 = np.arange(0.0, 10.0, 0.1)\n",
"x2 = np.arange(0.0, 10.0, 0.1)\n",
"x1,x2 = np.meshgrid(x1,x2) # https://www.geeksforgeeks.org/numpy-meshgrid-function/\n",
"\n",
"# We run the neural network for each of these input values\n",
"y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3 = \\\n",
" shallow_2_1_3(x1,x2, ReLU, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_12, theta_20, theta_21, theta_22, theta_30, theta_31, theta_32)\n",
"# And then plot it\n",
"plot_neural_2_inputs(x1,x2, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3)"
],
"metadata": {
"id": "51lvc9bfIrs4"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"How many different linear polytopes are made by this model? Identify each in the network output."
],
"metadata": {
"id": "j62IizIfMYZK"
}
},
{
"cell_type": "markdown",
"source": [
"Now we'll extend this model to have two outputs $y_1$ and $y_2$, each of which can be visualized with a separate heatmap. You will now have sets of parameters $\\phi_{10}, \\phi_{11},\\phi_{12}$ and $\\phi_{2}, \\phi_{21},\\phi_{22}$ that correspond to each of these outputs."
],
"metadata": {
"id": "Xl6LcrUyM7Lh"
}
},
{
"cell_type": "code",
"source": [
"# Plot the shallow neural network. We'll assume input in is range [0,10],[0,10] and output [-10,10]\n",
"def plot_neural_2_inputs_2_outputs(x1,x2, y1, y2, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_11, w_act_12, w_act_13, w_act_21, w_act_22, w_act_23):\n",
"\n",
" # Plot intermediate plots if flag set\n",
" fig, ax = plt.subplots(4,3)\n",
" fig.set_size_inches(8.5, 8.5)\n",
" fig.tight_layout(pad=3.0)\n",
" draw_2D_function(ax[0,0], x1,x2,pre_1); ax[0,0].set_title('Preactivation')\n",
" draw_2D_function(ax[0,1], x1,x2,pre_2); ax[0,1].set_title('Preactivation')\n",
" draw_2D_function(ax[0,2], x1,x2,pre_3); ax[0,2].set_title('Preactivation')\n",
" draw_2D_function(ax[1,0], x1,x2,act_1); ax[1,0].set_title('Activation')\n",
" draw_2D_function(ax[1,1], x1,x2,act_2); ax[1,1].set_title('Activation')\n",
" draw_2D_function(ax[1,2], x1,x2,act_3); ax[1,2].set_title('Activation')\n",
" draw_2D_function(ax[2,0], x1,x2,w_act_11); ax[2,0].set_title('Weighted Act 1')\n",
" draw_2D_function(ax[2,1], x1,x2,w_act_12); ax[2,1].set_title('Weighted Act 1')\n",
" draw_2D_function(ax[2,2], x1,x2,w_act_13); ax[2,2].set_title('Weighted Act 1')\n",
" draw_2D_function(ax[3,0], x1,x2,w_act_21); ax[3,0].set_title('Weighted Act 2')\n",
" draw_2D_function(ax[3,1], x1,x2,w_act_22); ax[3,1].set_title('Weighted Act 2')\n",
" draw_2D_function(ax[3,2], x1,x2,w_act_23); ax[3,2].set_title('Weighted Act 2')\n",
" plt.show()\n",
"\n",
" fig, ax = plt.subplots()\n",
" draw_2D_function(ax,x1,x2,y1)\n",
" ax.set_title('Network output, $y_1$')\n",
" ax.set_aspect(1.0)\n",
" plt.show()\n",
"\n",
" fig, ax = plt.subplots()\n",
" draw_2D_function(ax,x1,x2,y2)\n",
" ax.set_title('Network output, $y_2$')\n",
" ax.set_aspect(1.0)\n",
" plt.show()"
],
"metadata": {
"id": "DlznqZWdPtjI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"\n",
"# Define a shallow neural network with, two inputs, two outputs, and three hidden units\n",
"def shallow_2_2_3(x1,x2, activation_fn, phi_10,phi_11,phi_12,phi_13, phi_20,phi_21,phi_22,phi_23, theta_10, theta_11,\\\n",
" theta_12, theta_20, theta_21, theta_22, theta_30, theta_31, theta_32):\n",
"\n",
" # TODO -- write this function -- replace the dummy code blow\n",
" pre_1 = np.zeros_like(x1)\n",
" pre_2 = np.zeros_like(x1)\n",
" pre_3 = np.zeros_like(x1)\n",
" act_1 = np.zeros_like(x1)\n",
" act_2 = np.zeros_like(x1)\n",
" act_3 = np.zeros_like(x1)\n",
" w_act_11 = np.zeros_like(x1)\n",
" w_act_12 = np.zeros_like(x1)\n",
" w_act_13 = np.zeros_like(x1)\n",
" w_act_21 = np.zeros_like(x1)\n",
" w_act_22 = np.zeros_like(x1)\n",
" w_act_23 = np.zeros_like(x1)\n",
" y1 = np.zeros_like(x1)\n",
" y2 = np.zeros_like(x1)\n",
"\n",
"\n",
" # Return everything we have calculated\n",
" return y1,y2, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_11, w_act_12, w_act_13, w_act_21, w_act_22, w_act_23\n"
],
"metadata": {
"id": "m8KAhwr4QWro"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now lets define some parameters and run the neural network\n",
"theta_10 = -4.0 ; theta_11 = 0.9; theta_12 = 0.0\n",
"theta_20 = 5.0 ; theta_21 = -0.9 ; theta_22 = -0.5\n",
"theta_30 = -7 ; theta_31 = 0.5; theta_32 = 0.9\n",
"phi_10 = 0.0; phi_11 = -2.0; phi_12 = 2.0; phi_13 = 1.5\n",
"phi_20 = -2.0; phi_21 = -1.0; phi_22 = -2.0; phi_23 = 0.8\n",
"\n",
"x1 = np.arange(0.0, 10.0, 0.1)\n",
"x2 = np.arange(0.0, 10.0, 0.1)\n",
"x1,x2 = np.meshgrid(x1,x2) # https://www.geeksforgeeks.org/numpy-meshgrid-function/\n",
"\n",
"# We run the neural network for each of these input values\n",
"y1, y2, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_11, w_act_12, w_act_13, w_act_21, w_act_22, w_act_23 = \\\n",
" shallow_2_2_3(x1,x2, ReLU, phi_10,phi_11,phi_12,phi_13, phi_20,phi_21,phi_22,phi_23, theta_10, theta_11, theta_12, theta_20, theta_21, theta_22, theta_30, theta_31, theta_32)\n",
"# And then plot it\n",
"plot_neural_2_inputs_2_outputs(x1,x2, y1, y2, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_11, w_act_12, w_act_13, w_act_21, w_act_22, w_act_23)"
],
"metadata": {
"id": "ms4YTqbYUeRV"
},
"execution_count": null,
"outputs": []
}
]
}
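
A minimal sketch of how the TODO sections of shallow_2_1_3() could be filled in, assuming the 2D linear preactivations of figure 3.8a-c (one possible solution; shallow_2_2_3() follows the same pattern but weights the three activations twice, once for each output):

import numpy as np

def ReLU(preactivation):
    # Rectified linear unit: clip negative values to zero
    return preactivation.clip(0.0)

def shallow_2_1_3(x1, x2, activation_fn, phi_0, phi_1, phi_2, phi_3,
                  theta_10, theta_11, theta_12, theta_20, theta_21, theta_22,
                  theta_30, theta_31, theta_32):
    # Preactivations: three linear functions of the two inputs (figure 3.8a-c)
    pre_1 = theta_10 + theta_11 * x1 + theta_12 * x2
    pre_2 = theta_20 + theta_21 * x1 + theta_22 * x2
    pre_3 = theta_30 + theta_31 * x1 + theta_32 * x2
    # Activations (figure 3.8d-f)
    act_1 = activation_fn(pre_1)
    act_2 = activation_fn(pre_2)
    act_3 = activation_fn(pre_3)
    # Weighted activations (figure 3.8g-i)
    w_act_1 = phi_1 * act_1
    w_act_2 = phi_2 * act_2
    w_act_3 = phi_3 * act_3
    # Combine the weighted activations and add the offset (figure 3.8j)
    y = phi_0 + w_act_1 + w_act_2 + w_act_3
    return y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3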

@@ -0,0 +1,259 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMhLSGU8+odPS/CoW5PwKna",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap03/3_3_Shallow_Network_Regions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 3.3 -- Shallow network regions**\n",
"\n",
"The purpose of this notebook is to compute the maximum possible number of linear regions as seen in figure 3.9 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "DCTC8fQ6cp-n"
}
},
{
"cell_type": "code",
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt\n",
"# Imports math libray\n",
"import math"
],
"metadata": {
"id": "W3C1ZA1gcpq_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The number of regions $N$ created by a shallow neural network with $D_i$ inputs and $D$ hidden units is given by Zaslavsky's formula:\n",
"\n",
"\\begin{equation}N = \\sum_{j=1}^{D_{i}}\\binom{D}{j}=\\sum_{j=1}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} <br>\n",
"\n"
],
"metadata": {
"id": "TbfanfXBe84L"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4UQ2n0RWcgOb"
},
"outputs": [],
"source": [
"def number_regions(Di, D):\n",
" # TODO -- implement Zaslavsky's formula\n",
" # You will need to use math.factorial() https://www.geeksforgeeks.org/factorial-in-python/\n",
" # Replace this code\n",
" N = 1;\n",
"\n",
" return N"
]
},
{
"cell_type": "code",
"source": [
"# Calculate the number of regions for 2D input (Di=2) and 3 hidden units (D=3) as in figure 3.8j\n",
"N = number_regions(2, 3)\n",
"print(f\"Di=2, D=3, Number of regions = {int(N)}, True value = 7\")"
],
"metadata": {
"id": "AqSUfuJDigN9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Calculate the number of regions for 10D input (Di=2) and 50 hidden units (D=50)\n",
"N = number_regions(10, 50)\n",
"print(f\"Di=10, D=50, Number of regions = {int(N)}, True value = 13432735556\")"
],
"metadata": {
"id": "krNKPV9gjCu-"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"This works but there is a complication. If the number of hidden units $D$ is fewer than the number of hidden dimensions $D_i$ , the formula will fail. When this is the case, there are just $2^D$ regions (see figure 3.10 to understand why).\n",
"\n",
"Let's demonstrate this:"
],
"metadata": {
"id": "rk1a2LqGkO9u"
}
},
{
"cell_type": "code",
"source": [
"# Show that calculation fails when $D_i < D$\n",
"try:\n",
" N = number_regions(10, 8)\n",
" print(f\"Di=10, D=8, Number of regions = {int(N)}, True value = 256\")\n",
"except Exception as error:\n",
" print(\"An exception occurred:\", error)\n"
],
"metadata": {
"id": "uq5IeAZTkIMg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's do the calculation properly when D<Di\n",
"D = 8; Di = 10\n",
"N = np.power(2,D)\n",
"# We can equivalently do this by calling number_regions with the D twice\n",
"# Think about why this works\n",
"N2 = number_regions (D,D)\n",
"print(f\"Di=10, D=8, Number of regions = {int(N)}, Number of regions = {int(N2)}, True value = 256\")"
],
"metadata": {
"id": "Ig8Kg_ADjoQd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's plot the graph from figure 3.9a\n",
"dims = np.array([1,5,10,50,100])\n",
"regions = np.zeros((dims.shape[0], 1000))\n",
"for c_dim in range(dims.shape[0]):\n",
" D_i = dims[c_dim]\n",
" print (f\"Counting regions for {D_i} input dimensions\")\n",
" for D in range(1000):\n",
" regions[c_dim, D] = number_regions(np.min([D_i,D]), D)\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.semilogy(regions[0,:],'k-')\n",
"ax.semilogy(regions[1,:],'b-')\n",
"ax.semilogy(regions[2,:],'m-')\n",
"ax.semilogy(regions[3,:],'c-')\n",
"ax.semilogy(regions[4,:],'y-')\n",
"ax.legend(['$D_i$=1', '$D_i$=5', '$D_i$=10', '$D_i$=50', '$D_i$=100'])\n",
"ax.set_xlabel(\"Number of hidden units, D\")\n",
"ax.set_ylabel(\"Number of regions, N\")\n",
"plt.xlim([0,1000])\n",
"plt.ylim([1e1,1e150])\n",
"plt.show()"
],
"metadata": {
"id": "5XnEOp0Bj_QK"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's compute and plot the number of regions as a function of the number of parameters as in figure 3.9b\n",
"# First let's write a function that computes the number of parameters as a function of the input dimension and number of hidden layers (assuming just one output)\n",
"\n",
"def number_parameters(D_i, D):\n",
" # TODO -- replace this code with the proper calculation\n",
" N = 1\n",
"\n",
" return N ;"
],
"metadata": {
"id": "Pav1OsCnpm6P"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's test the code\n",
"N = number_parameters(10, 8)\n",
"print(f\"Di=10, D=8, Number of parameters = {int(N)}, True value = 90\")"
],
"metadata": {
"id": "VbhDmZ1gwkQj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's plot the graph from figure 3.9a (takes ~1min)\n",
"dims = np.array([1,5,10,50,100])\n",
"regions = np.zeros((dims.shape[0], 200))\n",
"params = np.zeros((dims.shape[0], 200))\n",
"\n",
"# We'll compute the five lines separately this time to make it faster\n",
"for c_dim in range(dims.shape[0]):\n",
" D_i = dims[c_dim]\n",
" print (f\"Counting regions for {D_i} input dimensions\")\n",
" for c_hidden in range(1, 200):\n",
" # Iterate over different ranges of number hidden variables for different input sizes\n",
" D = int(c_hidden * 500 / D_i)\n",
" params[c_dim, c_hidden] = D_i * D +1 + D +1\n",
" regions[c_dim, c_hidden] = number_regions(np.min([D_i,D]), D)\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.semilogy(params[0,:], regions[0,:],'k-')\n",
"ax.semilogy(params[1,:], regions[1,:],'b-')\n",
"ax.semilogy(params[2,:], regions[2,:],'m-')\n",
"ax.semilogy(params[3,:], regions[3,:],'c-')\n",
"ax.semilogy(params[4,:], regions[4,:],'y-')\n",
"ax.legend(['$D_i$=1', '$D_i$=5', '$D_i$=10', '$D_i$=50', '$D_i$=100'])\n",
"ax.set_xlabel(\"Number of parameters, D\")\n",
"ax.set_ylabel(\"Number of regions, N\")\n",
"plt.xlim([0,100000])\n",
"plt.ylim([1e1,1e150])\n",
"plt.show()\n"
],
"metadata": {
"id": "AH4nA50Au8-a"
},
"execution_count": null,
"outputs": []
}
]
}
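
A minimal sketch of number_regions() consistent with the corrected formula above, using math.comb rather than the math.factorial route the notebook hints at (the two are equivalent, since comb(D, j) = D!/((D-j)! j!)):

import math

def number_regions(Di, D):
    # Zaslavsky's formula: N = sum_{j=0}^{Di} C(D, j); valid when D >= Di
    return sum(math.comb(D, j) for j in range(Di + 1))

As a check, number_regions(2, 3) returns 7 and number_regions(10, 50) returns 13432735556, matching the true values printed by the test cells. Note also that number_regions(D, D) equals 2**D, which is why calling the function with D twice handles the D < Di case.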

@@ -0,0 +1,426 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOu5BvK3aFb7ZEQKG5vfOZ1",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap03/3_4_Activation_Functions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 3.4 -- Activation functions**\n",
"\n",
"The purpose of this practical is to experiment with different activation functions. <br>\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "Mn0F56yY8ohX"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2GaDML3I8Yx4"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Plot the shallow neural network. We'll assume input in is range [0,1] and output [-1,1]\n",
"# If the plot_all flag is set to true, then we'll plot all the intermediate stages as in Figure 3.3\n",
"def plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=False, x_data=None, y_data=None):\n",
"\n",
" # Plot intermediate plots if flag set\n",
" if plot_all:\n",
" fig, ax = plt.subplots(3,3)\n",
" fig.set_size_inches(8.5, 8.5)\n",
" fig.tight_layout(pad=3.0)\n",
" ax[0,0].plot(x,pre_1,'r-'); ax[0,0].set_ylabel('Preactivation')\n",
" ax[0,1].plot(x,pre_2,'b-'); ax[0,1].set_ylabel('Preactivation')\n",
" ax[0,2].plot(x,pre_3,'g-'); ax[0,2].set_ylabel('Preactivation')\n",
" ax[1,0].plot(x,act_1,'r-'); ax[1,0].set_ylabel('Activation')\n",
" ax[1,1].plot(x,act_2,'b-'); ax[1,1].set_ylabel('Activation')\n",
" ax[1,2].plot(x,act_3,'g-'); ax[1,2].set_ylabel('Activation')\n",
" ax[2,0].plot(x,w_act_1,'r-'); ax[2,0].set_ylabel('Weighted Act')\n",
" ax[2,1].plot(x,w_act_2,'b-'); ax[2,1].set_ylabel('Weighted Act')\n",
" ax[2,2].plot(x,w_act_3,'g-'); ax[2,2].set_ylabel('Weighted Act')\n",
"\n",
" for plot_y in range(3):\n",
" for plot_x in range(3):\n",
" ax[plot_y,plot_x].set_xlim([0,1]);ax[plot_x,plot_y].set_ylim([-1,1])\n",
" ax[plot_y,plot_x].set_aspect(0.5)\n",
" ax[2,plot_y].set_xlabel('Input, $x$');\n",
" plt.show()\n",
"\n",
" fig, ax = plt.subplots()\n",
" ax.plot(x,y)\n",
" ax.set_xlabel('Input, $x$'); ax.set_ylabel('Output, $y$')\n",
" ax.set_xlim([0,1]);ax.set_ylim([-1,1])\n",
" ax.set_aspect(0.5)\n",
" if x_data is not None:\n",
" ax.plot(x_data, y_data, 'mo')\n",
" for i in range(len(x_data)):\n",
" ax.plot(x_data[i], y_data[i],)\n",
" plt.show()"
],
"metadata": {
"id": "AeHzflFt9Tgn"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a shallow neural network with, one input, one output, and three hidden units\n",
"def shallow_1_1_3(x, activation_fn, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31):\n",
" pre_1 = theta_10 + theta_11 * x\n",
" pre_2 = theta_20 + theta_21 * x\n",
" pre_3 = theta_30 + theta_31 * x\n",
" # Pass these through the ReLU function to compute the activations as in\n",
" # figure 3.3 d-f\n",
" act_1 = activation_fn(pre_1)\n",
" act_2 = activation_fn(pre_2)\n",
" act_3 = activation_fn(pre_3)\n",
"\n",
" w_act_1 = phi_1 * act_1\n",
" w_act_2 = phi_2 * act_2\n",
" w_act_3 = phi_3 * act_3\n",
"\n",
" y = phi_0 + w_act_1 + w_act_2 + w_act_3\n",
"\n",
" # Return everything we have calculated\n",
" return y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3"
],
"metadata": {
"id": "7qeIUrh19AkH"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation"
],
"metadata": {
"id": "cwTp__Fk9YUx"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"First, let's run the network with a ReLU functions"
],
"metadata": {
"id": "INQkRzyn9kVC"
}
},
{
"cell_type": "code",
"source": [
"# Now lets define some parameters and run the neural network\n",
"theta_10 = 0.3 ; theta_11 = -1.0\n",
"theta_20 = -1.0 ; theta_21 = 2.0\n",
"theta_30 = -0.5 ; theta_31 = 0.65\n",
"phi_0 = -0.3; phi_1 = 2.0; phi_2 = -1.0; phi_3 = 7.0\n",
"\n",
"# Define a range of input values\n",
"x = np.arange(0,1,0.01)\n",
"\n",
"# We run the neural network for each of these input values\n",
"y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3 = \\\n",
" shallow_1_1_3(x, ReLU, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "jT9QuKou9i0_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Sigmoid activation function\n",
"\n",
"The ReLU isn't the only kind of activation function. For a long time, people used sigmoid functions. A logistic sigmoid function is defined by the equation\n",
"\n",
"\\begin{equation}\n",
"f[h] = \\frac{1}{1+\\exp{[-10 z ]}}\n",
"\\end{equation}\n",
"\n",
"(Note that the factor of 10 is not standard -- but it allow us to plot on the same axes as the ReLU examples)"
],
"metadata": {
"id": "-I8N7r1o9HYf"
}
},
{
"cell_type": "code",
"source": [
"# Define the sigmoid function\n",
"def sigmoid(preactivation):\n",
" # TODO write code to implement the sigmoid function and compute the activation at the\n",
" # hidden unit from the preactivation. Use the np.exp() function.\n",
" activation = np.zeros_like(preactivation);\n",
"\n",
" return activation"
],
"metadata": {
"id": "hgkioNyr975Y"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Make an array of inputs\n",
"z = np.arange(-1,1,0.01)\n",
"sig_z = sigmoid(z)\n",
"\n",
"# Plot the sigmoid function\n",
"fig, ax = plt.subplots()\n",
"ax.plot(z,sig_z,'r-')\n",
"ax.set_xlim([-1,1]);ax.set_ylim([0,1])\n",
"ax.set_xlabel('z'); ax.set_ylabel('sig[z]')\n",
"plt.show"
],
"metadata": {
"id": "94HIXKJH97ve"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's see what happens when we use this activation function in a neural network"
],
"metadata": {
"id": "p3zQNXhj-J-o"
}
},
{
"cell_type": "code",
"source": [
"theta_10 = 0.3 ; theta_11 = -1.0\n",
"theta_20 = -1.0 ; theta_21 = 2.0\n",
"theta_30 = -0.5 ; theta_31 = 0.65\n",
"phi_0 = 0.3; phi_1 = 0.5; phi_2 = -1.0; phi_3 = 0.9\n",
"\n",
"# Define a range of input values\n",
"x = np.arange(0,1,0.01)\n",
"\n",
"# We run the neural network for each of these input values\n",
"y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3 = \\\n",
" shallow_1_1_3(x, sigmoid, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "C1dASr9L-GNt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You probably notice that this gives nice smooth curves. So why don't we use this? Aha... it's not obvious right now, but we will get to it when we learn to fit models."
],
"metadata": {
"id": "Uuam_DewA9fH"
}
},
{
"cell_type": "markdown",
"source": [
"# Heaviside activation function\n",
"\n",
"The Heaviside function is defined as:\n",
"\n",
"\\begin{equation}\n",
"\\mbox{heaviside}[z] = \\begin{cases} 0 & \\quad z <0 \\\\ 1 & \\quad z\\geq 0\\end{cases}\n",
"\\end{equation}"
],
"metadata": {
"id": "C9WKkcMUABze"
}
},
{
"cell_type": "code",
"source": [
"# Define the heaviside function\n",
"def heaviside(preactivation):\n",
" # TODO write code to implement the heaviside function and compute the activation at the\n",
" # hidden unit from the preactivation. Depending on your implementation you may need to\n",
" # convert a Boolean array to an array of ones and zeros. To do this, use .astype(int)\n",
" activation = np.zeros_like(preactivation);\n",
"\n",
"\n",
" return activation"
],
"metadata": {
"id": "-1qFkdOL-NPc"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Make an array of inputs\n",
"z = np.arange(-1,1,0.01)\n",
"heav_z = heaviside(z)\n",
"\n",
"# Plot the heaviside function\n",
"fig, ax = plt.subplots()\n",
"ax.plot(z,heav_z,'r-')\n",
"ax.set_xlim([-1,1]);ax.set_ylim([-2,2])\n",
"ax.set_xlabel('z'); ax.set_ylabel('heaviside[z]')\n",
"plt.show"
],
"metadata": {
"id": "mSPyp7iA-44H"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"theta_10 = 0.3 ; theta_11 = -1.0\n",
"theta_20 = -1.0 ; theta_21 = 2.0\n",
"theta_30 = -0.5 ; theta_31 = 0.65\n",
"phi_0 = 0.3; phi_1 = 0.5; phi_2 = -1.0; phi_3 = 0.9\n",
"\n",
"# Define a range of input values\n",
"x = np.arange(0,1,0.01)\n",
"\n",
"# We run the neural network for each of these input values\n",
"y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3 = \\\n",
" shallow_1_1_3(x, heaviside, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "t99K2lSl--Mq"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"This can approximate any function, but the output is discontinuous, and there are also reasons not to use it that we will discover when we learn more about model fitting."
],
"metadata": {
"id": "T65MRtM-BCQA"
}
},
{
"cell_type": "markdown",
"source": [
"# Linear activation functions\n",
"\n",
"Neural networks don't work if the activation function is linear. For example, consider what would happen if the activation function was:\n",
"\n",
"\\begin{equation}\n",
"\\mbox{lin}[z] = a + bz\n",
"\\end{equation}"
],
"metadata": {
"id": "RkB-XZMLBTaR"
}
},
{
"cell_type": "code",
"source": [
"# Define the linear activation function\n",
"def lin(preactivation):\n",
" a =0\n",
" b =1\n",
" # Compute linear function\n",
" activation = a+b * preactivation\n",
" # Return\n",
" return activation"
],
"metadata": {
"id": "Q59v3saj_jq1"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# 1. The linear activation function above just returns the input: (0+1*z) = z\n",
"# Before running the code Make a prediction about what the ten panels of the drawing will look like\n",
"# Now run the code below to see if you were right. What family of functions can this represent?\n",
"\n",
"# 2. What happens if you change the parameters (a,b) to different values?\n",
"# Try a=0.5, b=-0.4 Don't forget to run the cell again to update the function\n",
"\n",
"theta_10 = 0.3 ; theta_11 = -1.0\n",
"theta_20 = -1.0 ; theta_21 = 2.0\n",
"theta_30 = -0.5 ; theta_31 = 0.65\n",
"phi_0 = 0.3; phi_1 = 0.5; phi_2 = -1.0; phi_3 = 0.9\n",
"\n",
"# Define a range of input values\n",
"x = np.arange(0,1,0.01)\n",
"\n",
"# We run the neural network for each of these input values\n",
"y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3 = \\\n",
" shallow_1_1_3(x, lin, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# And then plot it\n",
"plot_neural(x, y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3, plot_all=True)"
],
"metadata": {
"id": "IwodsBr0BkDn"
},
"execution_count": null,
"outputs": []
}
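,
{
"cell_type": "markdown",
"source": [
"Because $\\mbox{lin}[z]$ is affine, composing it with the affine preactivations gives another affine function, so the whole network collapses to $y = c_0 + c_1 x$. The cell below is a minimal sketch that computes $c_0$ and $c_1$ directly from the parameters above and checks that they reproduce the network output."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal sketch: with lin[z] = a + b*z, collecting terms shows the network output\n",
"# is affine in x, y = c_0 + c_1 * x, so two numbers describe the whole model\n",
"a = 0; b = 1  # must match the values inside the lin function above\n",
"c_0 = phi_0 + phi_1*(a + b*theta_10) + phi_2*(a + b*theta_20) + phi_3*(a + b*theta_30)\n",
"c_1 = b * (phi_1*theta_11 + phi_2*theta_21 + phi_3*theta_31)\n",
"y_lin, *_ = shallow_1_1_3(x, lin, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31)\n",
"# Should print True: the ten-parameter network is just a straight line\n",
"print(np.allclose(y_lin, c_0 + c_1 * x))"
],
"metadata": {},
"execution_count": null,
"outputs": []
}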
]
}


@@ -0,0 +1,362 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPEQEGetZqWnLRNn99Q2aaT",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap04/4_1_Composing_Networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"#Notebook 4.1 -- Composing networks\n",
"\n",
"The purpose of this notebook is to understand what happens when we feed one neural network into another. It works through an example similar to 4.1 and varies both networks\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions"
],
"metadata": {
"id": "MaKn8CFlzN8E"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8ClURpZQzI6L"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation"
],
"metadata": {
"id": "YdmveeAUz4YG"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a shallow neural network with, one input, one output, and three hidden units\n",
"def shallow_1_1_3(x, activation_fn, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31):\n",
" # Initial lines\n",
" pre_1 = theta_10 + theta_11 * x\n",
" pre_2 = theta_20 + theta_21 * x\n",
" pre_3 = theta_30 + theta_31 * x\n",
" # Activation functions\n",
" act_1 = activation_fn(pre_1)\n",
" act_2 = activation_fn(pre_2)\n",
" act_3 = activation_fn(pre_3)\n",
" # Weight activations\n",
" w_act_1 = phi_1 * act_1\n",
" w_act_2 = phi_2 * act_2\n",
" w_act_3 = phi_3 * act_3\n",
" # Combine weighted activation and add y offset\n",
" y = phi_0 + w_act_1 + w_act_2 + w_act_3\n",
" # Return everything we have calculated\n",
" return y"
],
"metadata": {
"id": "ximCLwIfz8kj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# # Plot two shallow neural networks and the composition of the two\n",
"def plot_neural_two_components(x_in, net1_out, net2_out, net12_out=None):\n",
"\n",
" # Plot the two networks separately\n",
" fig, ax = plt.subplots(1,2)\n",
" fig.set_size_inches(8.5, 8.5)\n",
" fig.tight_layout(pad=3.0)\n",
" ax[0].plot(x_in, net1_out,'r-')\n",
" ax[0].set_xlabel('Net 1 input'); ax[0].set_ylabel('Net 1 output')\n",
" ax[0].set_xlim([-1,1]);ax[0].set_ylim([-1,1])\n",
" ax[0].set_aspect(1.0)\n",
" ax[1].plot(x_in, net2_out,'b-')\n",
" ax[1].set_xlabel('Net 2 input'); ax[1].set_ylabel('Net 2 output')\n",
" ax[1].set_xlim([-1,1]);ax[1].set_ylim([-1,1])\n",
" ax[1].set_aspect(1.0)\n",
" plt.show()\n",
"\n",
" if net12_out is not None:\n",
" # Plot their composition\n",
" fig, ax = plt.subplots()\n",
" ax.plot(x_in ,net12_out,'g-')\n",
" ax.set_xlabel('Net 1 Input'); ax.set_ylabel('Net 2 Output')\n",
" ax.set_xlim([-1,1]);ax.set_ylim([-1,1])\n",
" ax.set_aspect(1.0)\n",
" plt.show()"
],
"metadata": {
"id": "ZB2HTalOE40X"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's define two networks. We'll put the prefixes n1_ and n2_ before all the variables to make it clear which network is which. We'll just consider the inputs and outputs over the range [-1,1]. If you set the \"plot_all\" flat to True, you can see the details of how they were created."
],
"metadata": {
"id": "LxBJCObC-NTY"
}
},
{
"cell_type": "code",
"source": [
"# Now lets define some parameters and run the first neural network\n",
"n1_theta_10 = 0.0 ; n1_theta_11 = -1.0\n",
"n1_theta_20 = 0 ; n1_theta_21 = 1.0\n",
"n1_theta_30 = -0.67 ; n1_theta_31 = 1.0\n",
"n1_phi_0 = 1.0; n1_phi_1 = -2.0; n1_phi_2 = -3.0; n1_phi_3 = 9.3\n",
"\n",
"# Now lets define some parameters and run the second neural network\n",
"n2_theta_10 = -0.6 ; n2_theta_11 = -1.0\n",
"n2_theta_20 = 0.2 ; n2_theta_21 = 1.0\n",
"n2_theta_30 = -0.5 ; n2_theta_31 = 1.0\n",
"n2_phi_0 = 0.5; n2_phi_1 = -1.0; n2_phi_2 = -1.5; n2_phi_3 = 2.0\n",
"\n",
"# Display the two inputs\n",
"x = np.arange(-1,1,0.001)\n",
"# We run the first and second neural networks for each of these input values\n",
"net1_out = shallow_1_1_3(x, ReLU, n1_phi_0, n1_phi_1, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"net2_out = shallow_1_1_3(x, ReLU, n2_phi_0, n2_phi_1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"# Plot both graphs\n",
"plot_neural_two_components(x, net1_out, net2_out)"
],
"metadata": {
"id": "JRebvurv22pT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network into the second one. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
"id": "NUQVop9-Xta1"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's see if your predictions were right\n",
"\n",
"# TODO feed the output of first network into second network (replace this line)\n",
"net12_out = np.zeros_like(x)\n",
"\n",
"# Plot all three graphs\n",
"plot_neural_two_components(x, net1_out, net2_out, net12_out)"
],
"metadata": {
"id": "Yq7GH-MCIyPI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now we'll change things a up a bit. What happens if we change the second network? (note the *-1 change)\n",
"net1_out = shallow_1_1_3(x, ReLU, n1_phi_0, n1_phi_1, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"net2_out = shallow_1_1_3(x, ReLU, n2_phi_0, n2_phi_1*-1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"plot_neural_two_components(x, net1_out, net2_out)"
],
"metadata": {
"id": "BMlLkLbdEuPu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network into the second one now that we have changed it. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
"id": "Of6jVXLTJ688"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# When you have a prediction, run this code to see if you were right\n",
"net12_out = shallow_1_1_3(net1_out, ReLU, n2_phi_0, n2_phi_1*-1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"plot_neural_two_components(x, net1_out, net2_out, net12_out)"
],
"metadata": {
"id": "PbbSCaSeK6SM"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's change things again. What happens if we change the first network? (note the changes)\n",
"net1_out = shallow_1_1_3(x, ReLU, n1_phi_0, n1_phi_1*0.5, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"net2_out = shallow_1_1_3(x, ReLU, n2_phi_0, n2_phi_1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"plot_neural_two_components(x, net1_out, net2_out)"
],
"metadata": {
"id": "b39mcSGFK9Fd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network now we have changed it into the original second network. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
"id": "MhO40cC_LW9I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# When you have a prediction, run this code to see if you were right\n",
"net12_out = shallow_1_1_3(net1_out, ReLU, n2_phi_0, n2_phi_1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"plot_neural_two_components(x, net1_out, net2_out, net12_out)"
],
"metadata": {
"id": "Akwo-hnPLkNr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's change things again. What happens if the first network and second networks are the same?\n",
"net1_out = shallow_1_1_3(x, ReLU, n1_phi_0, n1_phi_1, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"net2_out_new = shallow_1_1_3(x, ReLU, n1_phi_0, n1_phi_1, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"plot_neural_two_components(x, net1_out, net2_out_new)"
],
"metadata": {
"id": "TJ7wXKpRLl_E"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# Take a piece of paper and draw what you think will happen when we feed the\n",
"# output of the first network into the original second network. Draw the relationship between\n",
"# the input of the first network and the output of the second one."
],
"metadata": {
"id": "dJbbh6R7NG9k"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# When you have a prediction, run this code to see if you were right\n",
"net12_out = shallow_1_1_3(net1_out, ReLU, n1_phi_0, n1_phi_1, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"plot_neural_two_components(x, net1_out, net2_out_new, net12_out)"
],
"metadata": {
"id": "BiZZl3yNM2Bq"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TODO\n",
"# Contemplate what you think will happen when we feed the\n",
"# output of the original first network into a second copy of the original first network, and then\n",
"# the output of that into the original second network (so now we have a three layer network)\n",
"# How many total linear regions will we have in the output?\n",
"net123_out = shallow_1_1_3(net12_out, ReLU, n2_phi_0, n2_phi_1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"plot_neural_two_components(x, net12_out, net2_out, net123_out)"
],
"metadata": {
"id": "BSd51AkzNf7-"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# TO DO\n",
"# How many linear regions would there be if we ran N copies of the first network, feeding the result of the first\n",
"# into the second, the second into the third and so on, and then passed the result into the original second\n",
"# network (blue curve above)\n",
"\n",
"# Take away conclusion: with very few parameters, we can make A LOT of linear regions, but\n",
"# they depend on one another in complex ways that quickly become to difficult to understand intuitively."
],
"metadata": {
"id": "HqzePCLOVQK7"
},
"execution_count": null,
"outputs": []
}
]
}


@@ -0,0 +1,219 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPkFrjmRAUf0fxN07RC4xMI",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap04/4_2_Clipping_functions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"#Notebook 4.2 -- Clipping functions\n",
"\n",
"The purpose of this notebook is to understand how a neural network with two hidden layers build more complicated functions by clipping and recombining the representations at the intermediate hidden variables.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions"
],
"metadata": {
"id": "MaKn8CFlzN8E"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8ClURpZQzI6L"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation"
],
"metadata": {
"id": "YdmveeAUz4YG"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a deep neural network with, one input, one output, two hidden layers and three hidden units (eqns 4.7-4.9)\n",
"# To make this easier, we store the parameters in ndarrays, so phi_0 = phi[0] and psi_3,3 = psi[3,3] etc.\n",
"def shallow_1_1_3_3(x, activation_fn, phi, psi, theta):\n",
"\n",
" # TODO -- You write this function\n",
" # Replace the skeleton code below.\n",
"\n",
" # ANSWER\n",
" # Preactivations at layer 1 (terms in brackets in equation 4.7)\n",
" layer1_pre_1 = np.zeros_like(x) ;\n",
" layer1_pre_2 = np.zeros_like(x) ;\n",
" layer1_pre_3 = np.zeros_like(x) ;\n",
"\n",
" # Activation functions (rest of equation 4.7)\n",
" h1 = activation_fn(layer1_pre_1)\n",
" h2 = activation_fn(layer1_pre_2)\n",
" h3 = activation_fn(layer1_pre_3)\n",
"\n",
" # Preactivations at layer 2 (terms in brackets in equation 4.8)\n",
" layer2_pre_1 = np.zeros_like(x) ;\n",
" layer2_pre_2 = np.zeros_like(x) ;\n",
" layer2_pre_3 = np.zeros_like(x) ;\n",
"\n",
" # Activation functions (rest of equation 4.8)\n",
" h1_prime = activation_fn(layer2_pre_1)\n",
" h2_prime = activation_fn(layer2_pre_2)\n",
" h3_prime = activation_fn(layer2_pre_3)\n",
"\n",
" # Weighted outputs by phi (three last terms of equation 4.9)\n",
" phi1_h1_prime = np.zeros_like(x) ;\n",
" phi2_h2_prime = np.zeros_like(x) ;\n",
" phi3_h3_prime = np.zeros_like(x) ;\n",
"\n",
" # Combine weighted activation and add y offset (summing terms of equation 4.9)\n",
" y = np.zeros_like(x) ;\n",
"\n",
"\n",
" # Return everything we have calculated\n",
" return y, layer2_pre_1, layer2_pre_2, layer2_pre_3, h1_prime, h2_prime, h3_prime, phi1_h1_prime, phi2_h2_prime, phi3_h3_prime"
],
"metadata": {
"id": "ximCLwIfz8kj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# # Plot two layer neural network as in figure 4.5\n",
"def plot_neural_two_layers(x, y, layer2_pre_1, layer2_pre_2, layer2_pre_3, h1_prime, h2_prime, h3_prime, phi1_h1_prime, phi2_h2_prime, phi3_h3_prime):\n",
"\n",
" fig, ax = plt.subplots(3,3)\n",
" fig.set_size_inches(8.5, 8.5)\n",
" fig.tight_layout(pad=3.0)\n",
" ax[0,0].plot(x,layer2_pre_1,'r-'); ax[0,0].set_ylabel('$\\psi_{10}+\\psi_{11}h_{1}+\\psi_{12}h_{2}+\\psi_{13}h_3$')\n",
" ax[0,1].plot(x,layer2_pre_2,'b-'); ax[0,1].set_ylabel('$\\psi_{20}+\\psi_{21}h_{1}+\\psi_{22}h_{2}+\\psi_{23}h_3$')\n",
" ax[0,2].plot(x,layer2_pre_3,'g-'); ax[0,2].set_ylabel('$\\psi_{30}+\\psi_{31}h_{1}+\\psi_{32}h_{2}+\\psi_{33}h_3$')\n",
" ax[1,0].plot(x,h1_prime,'r-'); ax[1,0].set_ylabel(\"$h_{1}^{'}$\")\n",
" ax[1,1].plot(x,h2_prime,'b-'); ax[1,1].set_ylabel(\"$h_{2}^{'}$\")\n",
" ax[1,2].plot(x,h3_prime,'g-'); ax[1,2].set_ylabel(\"$h_{3}^{'}$\")\n",
" ax[2,0].plot(x,phi1_h1_prime,'r-'); ax[2,0].set_ylabel(\"$\\phi_1 h_{1}^{'}$\")\n",
" ax[2,1].plot(x,phi2_h2_prime,'b-'); ax[2,1].set_ylabel(\"$\\phi_2 h_{2}^{'}$\")\n",
" ax[2,2].plot(x,phi3_h3_prime,'g-'); ax[2,2].set_ylabel(\"$\\phi_3 h_{3}^{'}$\")\n",
"\n",
" for plot_y in range(3):\n",
" for plot_x in range(3):\n",
" ax[plot_y,plot_x].set_xlim([0,1]);ax[plot_x,plot_y].set_ylim([-1,1])\n",
" ax[plot_y,plot_x].set_aspect(0.5)\n",
" ax[2,plot_y].set_xlabel('Input, $x$');\n",
" plt.show()\n",
"\n",
" fig, ax = plt.subplots()\n",
" ax.plot(x,y)\n",
" ax.set_xlabel('Input, $x$'); ax.set_ylabel('Output, $y$')\n",
" ax.set_xlim([0,1]);ax.set_ylim([-1,1])\n",
" ax.set_aspect(0.5)\n",
" plt.show()"
],
"metadata": {
"id": "ZB2HTalOE40X"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's define the parameters and visualize the network"
],
"metadata": {
"id": "LxBJCObC-NTY"
}
},
{
"cell_type": "code",
"source": [
"# Define parameters (note first dimension of theta and phi is padded to make indices match\n",
"# notation in book)\n",
"theta = np.zeros([4,2])\n",
"psi = np.zeros([4,4])\n",
"phi = np.zeros([4,1])\n",
"\n",
"theta[1,0] = 0.3 ; theta[1,1] = -1.0\n",
"theta[2,0]= -1.0 ; theta[2,1] = 2.0\n",
"theta[3,0] = -0.5 ; theta[3,1] = 0.65\n",
"psi[1,0] = 0.3; psi[1,1] = 2.0; psi[1,2] = -1.0; psi[1,3]=7.0\n",
"psi[2,0] = -0.2; psi[2,1] = 2.0; psi[2,2] = 1.2; psi[2,3]=-8.0\n",
"psi[3,0] = 0.3; psi[3,1] = -2.3; psi[3,2] = -0.8; psi[3,3]=2.0\n",
"phi[0] = 0.0; phi[1] = 0.5; phi[2] = -1.5; phi [3] = 2.2\n",
"\n",
"# Define a range of input values\n",
"x = np.arange(0,1,0.01)\n",
"\n",
"# Run the neural network\n",
"y, layer2_pre_1, layer2_pre_2, layer2_pre_3, h1_prime, h2_prime, h3_prime, phi1_h1_prime, phi2_h2_prime, phi3_h3_prime \\\n",
" = shallow_1_1_3_3(x, ReLU, phi, psi, theta)\n",
"\n",
"# And then plot it\n",
"plot_neural_two_layers(x, y, layer2_pre_1, layer2_pre_2, layer2_pre_3, h1_prime, h2_prime, h3_prime, phi1_h1_prime, phi2_h2_prime, phi3_h3_prime)"
],
"metadata": {
"id": "JRebvurv22pT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"To do: To test your understanding of this, consider:\n",
"\n",
"1. What would happen if we increase $\\psi_{1,0}$?\n",
"2. What would happen if we multiplied $\\psi_{2,0}, \\psi_{2,1}, \\psi_{2,2}, \\psi_{2,3}$ by -1?\n",
"3. What would happen if set $\\phi_{3}$ to -1?\n",
"\n",
"You can rerun the code to see if you were correct.\n",
"\n"
],
"metadata": {
"id": "GcjUUHbXf25D"
}
}
]
}


@@ -0,0 +1,322 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMbJGN6f2+yKzzsVep/wi5U",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap04/4_3_Deep_Networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 4.3 Deep neural networks**\n",
"\n",
"This network investigates converting neural networks to matrix form.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "MaKn8CFlzN8E"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8ClURpZQzI6L"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation"
],
"metadata": {
"id": "YdmveeAUz4YG"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Define a shallow neural network with, one input, one output, and three hidden units\n",
"def shallow_1_1_3(x, activation_fn, phi_0,phi_1,phi_2,phi_3, theta_10, theta_11, theta_20, theta_21, theta_30, theta_31):\n",
" # Initial lines\n",
" pre_1 = theta_10 + theta_11 * x\n",
" pre_2 = theta_20 + theta_21 * x\n",
" pre_3 = theta_30 + theta_31 * x\n",
" # Activation functions\n",
" act_1 = activation_fn(pre_1)\n",
" act_2 = activation_fn(pre_2)\n",
" act_3 = activation_fn(pre_3)\n",
" # Weight activations\n",
" w_act_1 = phi_1 * act_1\n",
" w_act_2 = phi_2 * act_2\n",
" w_act_3 = phi_3 * act_3\n",
" # Combine weighted activation and add y offset\n",
" y = phi_0 + w_act_1 + w_act_2 + w_act_3\n",
" # Return everything we have calculated\n",
" return y, pre_1, pre_2, pre_3, act_1, act_2, act_3, w_act_1, w_act_2, w_act_3"
],
"metadata": {
"id": "ximCLwIfz8kj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# # Plot the shallow neural network. We'll assume input in is range [-1,1] and output [-1,1]\n",
"# If the plot_all flag is set to true, then we'll plot all the intermediate stages as in Figure 3.3\n",
"def plot_neural(x, y):\n",
" fig, ax = plt.subplots()\n",
" ax.plot(x.T,y.T)\n",
" ax.set_xlabel('Input'); ax.set_ylabel('Output')\n",
" ax.set_xlim([-1,1]);ax.set_ylim([-1,1])\n",
" ax.set_aspect(1.0)\n",
" plt.show()"
],
"metadata": {
"id": "btrt7BX20gKD"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Let's define a networks. We'll just consider the inputs and outputs over the range [-1,1]. If you set the \"plot_all\" flat to True, you can see the details of how it was created."
],
"metadata": {
"id": "LxBJCObC-NTY"
}
},
{
"cell_type": "code",
"source": [
"# Now lets define some parameters and run the first neural network\n",
"n1_theta_10 = 0.0 ; n1_theta_11 = -1.0\n",
"n1_theta_20 = 0 ; n1_theta_21 = 1.0\n",
"n1_theta_30 = -0.67 ; n1_theta_31 = 1.0\n",
"n1_phi_0 = 1.0; n1_phi_1 = -2.0; n1_phi_2 = -3.0; n1_phi_3 = 9.3\n",
"\n",
"# Define a range of input values\n",
"n1_in = np.arange(-1,1,0.01).reshape([1,-1])\n",
"\n",
"# We run the neural network for each of these input values\n",
"n1_out, *_ = shallow_1_1_3(n1_in, ReLU, n1_phi_0, n1_phi_1, n1_phi_2, n1_phi_3, n1_theta_10, n1_theta_11, n1_theta_20, n1_theta_21, n1_theta_30, n1_theta_31)\n",
"# And then plot it\n",
"plot_neural(n1_in, n1_out)"
],
"metadata": {
"id": "JRebvurv22pT"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now we'll define the same neural network, but this time, we will use matrix form. When you get this right, it will draw the same plot as above."
],
"metadata": {
"id": "XCJqo_AjfAra"
}
},
{
"cell_type": "code",
"source": [
"beta_0 = np.zeros((3,1))\n",
"Omega_0 = np.zeros((3,1))\n",
"beta_1 = np.zeros((1,1))\n",
"Omega_1 = np.zeros((1,3))\n",
"\n",
"# TODO Fill in the values of the beta and Omega matrices with the n1_theta and n1_phi parameters that define the network above\n",
"# !!! NOTE THAT MATRICES ARE CONVENTIONALLY INDEXED WITH a_11 IN THE TOP LEFT CORNER, BUT NDARRAYS START AT [0,0]\n",
"# To get you started I've filled in a couple:\n",
"beta_0[0,0] = n1_theta_10\n",
"Omega_0[0,0] = n1_theta_11\n",
"\n",
"# Make sure that input data matrix has different inputs in its columns\n",
"n_data = n1_in.size\n",
"n_dim_in = 1\n",
"n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
"\n",
"# This runs the network for ALL of the inputs, x at once so we can draw graph\n",
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,n1_in_mat))\n",
"n1_out = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1)\n",
"\n",
"# Draw the network and check that it looks the same as the non-matrix case\n",
"plot_neural(n1_in, n1_out)"
],
"metadata": {
"id": "MR0AecZYfACR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now we'll feed the output of the first network into the second one."
],
"metadata": {
"id": "qOcj2Rof-o20"
}
},
{
"cell_type": "code",
"source": [
"# Now lets define some parameters and run the second neural network\n",
"n2_theta_10 = -0.6 ; n2_theta_11 = -1.0\n",
"n2_theta_20 = 0.2 ; n2_theta_21 = 1.0\n",
"n2_theta_30 = -0.5 ; n2_theta_31 = 1.0\n",
"n2_phi_0 = 0.5; n2_phi_1 = -1.0; n2_phi_2 = -1.5; n2_phi_3 = 2.0\n",
"\n",
"# Define a range of input values\n",
"n2_in = np.arange(-1,1,0.01)\n",
"\n",
"# We run the second neural network on the output of the first network\n",
"n2_out, *_ = \\\n",
" shallow_1_1_3(n1_out, ReLU, n2_phi_0, n2_phi_1, n2_phi_2, n2_phi_3, n2_theta_10, n2_theta_11, n2_theta_20, n2_theta_21, n2_theta_30, n2_theta_31)\n",
"# And then plot it\n",
"plot_neural(n1_in, n2_out)"
],
"metadata": {
"id": "ZRjWu8i9239X"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"beta_0 = np.zeros((3,1))\n",
"Omega_0 = np.zeros((3,1))\n",
"beta_1 = np.zeros((3,1))\n",
"Omega_1 = np.zeros((3,3))\n",
"beta_2 = np.zeros((1,1))\n",
"Omega_2 = np.zeros((1,3))\n",
"\n",
"# TODO Fill in the values of the beta and Omega matrices for with the n1_theta, n1_phi, n2_theta, and n2_phi parameters\n",
"# that define the composition of the two networks above (see eqn 4.5 for Omega1 and beta1 albeit in different notation)\n",
"# !!! NOTE THAT MATRICES ARE CONVENTIONALLY INDEXED WITH a_11 IN THE TOP LEFT CORNER, BUT NDARRAYS START AT [0,0] SO EVERYTHING IS OFFSET\n",
"# To get you started I've filled in a few:\n",
"beta_0[0,0] = n1_theta_10\n",
"Omega_0[0,0] = n1_theta_11\n",
"beta_1[0,0] = n2_theta_10 + n2_theta_11 * n1_phi_0\n",
"Omega_1[0,0] = n2_theta_11 * n1_phi_1\n",
"\n",
"\n",
"# Make sure that input data matrix has different inputs in its columns\n",
"n_data = n1_in.size\n",
"n_dim_in = 1\n",
"n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
"\n",
"# This runs the network for ALL of the inputs, x at once so we can draw graph (hence extra np.ones term)\n",
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,n1_in_mat))\n",
"h2 = ReLU(np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1))\n",
"n1_out = np.matmul(beta_2,np.ones((1,n_data))) + np.matmul(Omega_2,h2)\n",
"\n",
"# Draw the network and check that it looks the same as the non-matrix version\n",
"plot_neural(n1_in, n1_out)"
],
"metadata": {
"id": "ZB2HTalOE40X"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's make a deep network with 3 hidden layers. It will have $D_i=4$ inputs, $D_1=5$ neurons in the first layer, $D_2=2$ neurons in the second layer and $D_3=4$ neurons in the third layer, and $D_o = 1$ output. Consult figure 4.6 and equations 4.15 for guidance."
],
"metadata": {
"id": "0VANqxH2kyS4"
}
},
{
"cell_type": "code",
"source": [
"# define sizes\n",
"D_i=4; D_1=5; D_2=2; D_3=1; D_o=1\n",
"# We'll choose the inputs and parameters of this network randomly using np.random.normal\n",
"# For example, we'll set the input using\n",
"n_data = 4;\n",
"x = np.random.normal(size=(D_i, n_data))\n",
"# TODO initialize the parameters randomly with the correct sizes\n",
"# Replace the lines below\n",
"beta_0 = np.random.normal(size=(1,1))\n",
"Omega_0 = np.random.normal(size=(1,1))\n",
"beta_1 = np.random.normal(size=(1,1))\n",
"Omega_1 = np.random.normal(size=(1,1))\n",
"beta_2 = np.random.normal(size=(1,1))\n",
"Omega_2 = np.random.normal(size=(1,1))\n",
"beta_3 = np.random.normal(size=(1,1))\n",
"Omega_3 = np.random.normal(size=(1,1))\n",
"\n",
"\n",
"# If you set the parameters to the correct sizes, the following code will run\n",
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,x));\n",
"h2 = ReLU(np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1));\n",
"h3 = ReLU(np.matmul(beta_2,np.ones((1,n_data))) + np.matmul(Omega_2,h2));\n",
"y = np.matmul(beta_3,np.ones((1,n_data))) + np.matmul(Omega_3,h3)\n",
"\n",
"if h1.shape[0] is not D_1 or h1.shape[1] is not n_data:\n",
" print(\"h1 is wrong shape\")\n",
"if h2.shape[0] is not D_2 or h1.shape[1] is not n_data:\n",
" print(\"h2 is wrong shape\")\n",
"if h3.shape[0] is not D_3 or h1.shape[1] is not n_data:\n",
" print(\"h3 is wrong shape\")\n",
"if y.shape[0] is not D_o or h1.shape[1] is not n_data:\n",
" print(\"Output is wrong shape\")\n",
"\n",
"# Print the inputs and outputs\n",
"print(\"Input data points\")\n",
"print(x)\n",
"print (\"Output data points\")\n",
"print(y)"
],
"metadata": {
"id": "RdBVAc_Rj22-"
},
"execution_count": null,
"outputs": []
}
]
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,566 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNkBMOVt5gO7Awn9JMn4N8Z",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap05/5_1_Least_Squares_Loss.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 5.1: Least Squares Loss**\n",
"\n",
"This notebook investigates the least squares loss and the equivalence of maximum likelihood and minimum negative log likelihood.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "jSlFkICHwHQF"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PYMZ1x-Pv1ht"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt\n",
"# Import math Library\n",
"import math"
]
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation\n",
"\n",
"# Define a shallow neural network\n",
"def shallow_nn(x, beta_0, omega_0, beta_1, omaga_1):\n",
" # Make sure that input data is (1 x n_data) array\n",
" n_data = x.size\n",
" x = np.reshape(x,(1,n_data))\n",
"\n",
" # This runs the network for ALL of the inputs, x at once so we can draw graph\n",
" h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(omega_0,x))\n",
" y = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(omega_1,h1)\n",
" return y"
],
"metadata": {
"id": "Fv7SZR3tv7mV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Get parameters for model -- we can call this function to easily reset them\n",
"def get_parameters():\n",
" # And we'll create a network that approximately fits it\n",
" beta_0 = np.zeros((3,1)); # formerly theta_x0\n",
" omega_0 = np.zeros((3,1)); # formerly theta_x1\n",
" beta_1 = np.zeros((1,1)); # formerly phi_0\n",
" omega_1 = np.zeros((1,3)); # formerly phi_x\n",
"\n",
" beta_0[0,0] = 0.3; beta_0[1,0] = -1.0; beta_0[2,0] = -0.5\n",
" omega_0[0,0] = -1.0; omega_0[1,0] = 1.8; omega_0[2,0] = 0.65\n",
" beta_1[0,0] = 0.1;\n",
" omega_1[0,0] = -2.0; omega_1[0,1] = -1.0; omega_1[0,2] = 7.0\n",
"\n",
" return beta_0, omega_0, beta_1, omega_1"
],
"metadata": {
"id": "pUT9Ain_HRim"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Utility function for plotting data\n",
"def plot_univariate_regression(x_model, y_model, x_data = None, y_data = None, sigma_model = None, title= None):\n",
" # Make sure model data are 1D arrays\n",
" x_model = np.squeeze(x_model)\n",
" y_model = np.squeeze(y_model)\n",
"\n",
" fig, ax = plt.subplots()\n",
" ax.plot(x_model,y_model)\n",
" if sigma_model is not None:\n",
" ax.fill_between(x_model, y_model-2*sigma_model, y_model+2*sigma_model, color='lightgray')\n",
" ax.set_xlabel('Input, $x$'); ax.set_ylabel('Output, $y$')\n",
" ax.set_xlim([0,1]);ax.set_ylim([-1,1])\n",
" ax.set_aspect(0.5)\n",
" if title is not None:\n",
" ax.set_title(title)\n",
" if x_data is not None:\n",
" ax.plot(x_data, y_data, 'ko')\n",
" plt.show()"
],
"metadata": {
"id": "NRR67ri_1TzN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Univariate regression\n",
"\n",
"We'll investigate a simple univarite regression situation with a single input $x$ and a single output $y$ as pictured in figures 5.4 and 5.5b."
],
"metadata": {
"id": "PsgLZwsPxauP"
}
},
{
"cell_type": "code",
"source": [
"# Let's create some 1D training data\n",
"x_train = np.array([0.09291784,0.46809093,0.93089486,0.67612654,0.73441752,0.86847339,\\\n",
" 0.49873225,0.51083168,0.18343972,0.99380898,0.27840809,0.38028817,\\\n",
" 0.12055708,0.56715537,0.92005746,0.77072270,0.85278176,0.05315950,\\\n",
" 0.87168699,0.58858043])\n",
"y_train = np.array([-0.25934537,0.18195445,0.651270150,0.13921448,0.09366691,0.30567674,\\\n",
" 0.372291170,0.20716968,-0.08131792,0.51187806,0.16943738,0.3994327,\\\n",
" 0.019062570,0.55820410,0.452564960,-0.1183121,0.02957665,-1.24354444, \\\n",
" 0.248038840,0.26824970])\n",
"\n",
"# Get parameters for the model\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"sigma = 0.2\n",
"\n",
"# Define a range of input values\n",
"x_model = np.arange(0,1,0.01)\n",
"# Run the model to get values to plot and plot it.\n",
"y_model = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"plot_univariate_regression(x_model, y_model, x_train, y_train, sigma_model = sigma)\n"
],
"metadata": {
"id": "VWzNOt1swFVd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The blue line is the mean prediction of the model and the gray area represents plus/minus two standard deviations. This model fits okay, but could be improved. Let's compute the loss. We'll compute the the least squares error, the likelihood, the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"
}
},
{
"cell_type": "code",
"source": [
"# Return probability under normal distribution for input x\n",
"def normal_distribution(y, mu, sigma):\n",
" # TODO-- write in the equation for the normal distribution\n",
" # Equation 5.7 from the notes (you will need np.sqrt() and np.exp(), and math.pi)\n",
" # Don't use the numpy version -- that's cheating!\n",
" # Replace the line below\n",
" prob = np.zeros_like(y)\n",
"\n",
" return prob"
],
"metadata": {
"id": "YaLdRlEX0FkU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.119,normal_distribution(1,-1,2.3)))"
],
"metadata": {
"id": "4TSL14dqHHbV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's plot the Gaussian distribution.\n",
"y_gauss = np.arange(-5,5,0.1)\n",
"mu = 0; sigma = 1.0\n",
"gauss_prob = normal_distribution(y_gauss, mu, sigma)\n",
"fig, ax = plt.subplots()\n",
"ax.plot(y_gauss, gauss_prob)\n",
"ax.set_xlabel('Input, $y$'); ax.set_ylabel('Probability $Pr(y)$')\n",
"ax.set_xlim([-5,5]);ax.set_ylim([0,1.0])\n",
"plt.show()\n",
"\n",
"# TODO\n",
"# 1. Predict what will happen if we change to mu=1 and leave sigma=1\n",
"# Now change the code above and see if you were correct.\n",
"\n",
"# 2. Predict what will happen if we leave mu = 0 and change sigma to 2.0\n",
"\n",
"# 3. Predict what will happen if we leave mu = 0 and change sigma to 0.5"
],
"metadata": {
"id": "A2HcmNfUMIlj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the likelihood using this function"
],
"metadata": {
"id": "R5z_0dzQMF35"
}
},
{
"cell_type": "code",
"source": [
"# Return the likelihood of all of the data under the model\n",
"def compute_likelihood(y_train, mu, sigma):\n",
" # TODO -- compute the likelihood of the data -- the product of the normal probabilities for each data point\n",
" # Top line of equation 5.3 in the notes\n",
" # You will need np.prod() and the normal_distribution function you used above\n",
" # Replace the line below\n",
" likelihood = 0\n",
"\n",
" return likelihood"
],
"metadata": {
"id": "zpS7o6liCx7f"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's test this for a homoscedastic (constant sigma) model\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"mu_pred = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Set the standard deviation to something reasonable\n",
"sigma = 0.2\n",
"# Compute the likelihood\n",
"likelihood = compute_likelihood(y_train, mu_pred, sigma)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(0.000010624,likelihood))"
],
"metadata": {
"id": "1hQxBLoVNlr2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can see that this gives a very small answer, even for this small 1D dataset, and with the model fitting quite well. This is because it is the product of several probabilities, which are all quite small themselves.\n",
"This will get out of hand pretty quickly with real datasets -- the likelihood will get so small that we can't represent it with normal finite-precision math\n",
"\n",
"This is why we use negative log likelihood"
],
"metadata": {
"id": "HzphKgPfOvlk"
}
},
{
"cell_type": "code",
"source": [
"# Return the negative log likelihood of the data under the model\n",
"def compute_negative_log_likelihood(y_train, mu, sigma):\n",
" # TODO -- compute the likelihood of the data -- don't use the likelihood function above -- compute the negative sum of the log probabilities\n",
" # Bottom line of equation 5.3 in the notes\n",
" # You will need np.sum(), np.log()\n",
" # Replace the line below\n",
" nll = 0\n",
"\n",
" return nll"
],
"metadata": {
"id": "dsT0CWiKBmTV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's test this for a homoscedastic (constant sigma) model\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"mu_pred = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Set the standard deviation to something reasonable\n",
"sigma = 0.2\n",
"# Compute the log likelihood\n",
"nll = compute_negative_log_likelihood(y_train, mu_pred, sigma)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(11.452419564,nll))"
],
"metadata": {
"id": "nVxUXg9rQmwI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"For good measure, let's compute the sum of squares as well"
],
"metadata": {
"id": "-S8bXApoWVLG"
}
},
{
"cell_type": "code",
"source": [
"# Return the squared distance between the predicted\n",
"def compute_sum_of_squares(y_train, y_pred):\n",
" # TODO -- compute the sum of squared distances between the training data and the model prediction\n",
" # Eqn 5.10 in the notes. Make sure that you understand this, and ask questions if you don't\n",
" # Replace the line below\n",
" sum_of_squares = 0;\n",
"\n",
" return sum_of_squares"
],
"metadata": {
"id": "I1pjFdHCF4JZ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's test this again\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"y_pred = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Compute the log likelihood\n",
"sum_of_squares = compute_sum_of_squares(y_train, y_pred)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(2.020992572,sum_of_squares))"
],
"metadata": {
"id": "2C40fskIHBx7"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's investigate finding the maximum likelihood / minimum log likelihood / least squares solution. For simplicity, we'll assume that all the parameters are correct except one and look at how the likelihood, log likelihood, and sum of squares change as we manipulate the last parameter. We'll start with overall y offset, beta_1 (formerly phi_0)"
],
"metadata": {
"id": "OgcRojvPWh4V"
}
},
{
"cell_type": "code",
"source": [
"# Define a range of values for the parameter\n",
"beta_1_vals = np.arange(0,1.0,0.01)\n",
"# Create some arrays to store the likelihoods, negative log likelihoods and sum of squares\n",
"likelihoods = np.zeros_like(beta_1_vals)\n",
"nlls = np.zeros_like(beta_1_vals)\n",
"sum_squares = np.zeros_like(beta_1_vals)\n",
"\n",
"# Initialise the parameters\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"sigma = 0.2\n",
"for count in range(len(beta_1_vals)):\n",
" # Set the value for the parameter\n",
" beta_1[0,0] = beta_1_vals[count]\n",
" # Run the network with new parameters\n",
" mu_pred = y_pred = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
" # Compute and store the three values\n",
" likelihoods[count] = compute_likelihood(y_train, mu_pred, sigma)\n",
" nlls[count] = compute_negative_log_likelihood(y_train, mu_pred, sigma)\n",
" sum_squares[count] = compute_sum_of_squares(y_train, y_pred)\n",
" # Draw the model for every 20th parameter setting\n",
" if count % 20 == 0:\n",
" # Run the model to get values to plot and plot it.\n",
" y_model = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
" plot_univariate_regression(x_model, y_model, x_train, y_train, sigma_model = sigma, title=\"beta1=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "pFKtDaAeVU4U"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function the value of the offset beta1\n",
"fig, ax = plt.subplots(1,3)\n",
"fig.set_size_inches(10.5, 3.5)\n",
"fig.tight_layout(pad=3.0)\n",
"ax[0].plot(beta_1_vals, likelihoods); ax[0].set_xlabel('beta_1[0]$'); ax[0].set_ylabel('likelihood')\n",
"ax[1].plot(beta_1_vals, nlls); ax[1].set_xlabel('beta_1[0]'); ax[1].set_ylabel('negative log likelihood')\n",
"ax[2].plot(beta_1_vals, sum_squares); ax[2].set_xlabel('beta_1[0]'); ax[2].set_ylabel('sum of squares')\n",
"plt.show()"
],
"metadata": {
"id": "UHXeTa9MagO6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Hopefully, you can see that the maximum of the likelihood fn is at the same position as the minimum negative log likelihood\n",
"# and the least squares solutions\n",
"# Let's check that:\n",
"print(\"Maximum likelihood = %3.3f, at beta_1=%3.3f\"%( (likelihoods[np.argmax(likelihoods)],beta_1_vals[np.argmax(likelihoods)])))\n",
"print(\"Minimum negative log likelihood = %3.3f, at beta_1=%3.3f\"%( (nlls[np.argmin(nlls)],beta_1_vals[np.argmin(nlls)])))\n",
"print(\"Least squares = %3.3f, at beta_1=%3.3f\"%( (sum_squares[np.argmin(sum_squares)],beta_1_vals[np.argmin(sum_squares)])))\n",
"\n",
"# Plot the best model\n",
"beta_1[0,0] = beta_1_vals[np.argmin(sum_squares)]\n",
"y_model = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"plot_univariate_regression(x_model, y_model, x_train, y_train, sigma_model = sigma, title=\"beta1=%3.3f\"%(beta_1[0,0]))"
],
"metadata": {
"id": "aDEPhddNdN4u"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"They all give the same answer. But you can see from the three plots above that the likelihood is very small unless the parameters are almost correct. So in practice, we would work with the negative log likelihood or the least squares.<br>\n",
"\n",
"Let's do the same thing with the standard deviation parameter of our network. This is not an output of the network (unless we choose to make that the case), but it still affects the likelihood.\n",
"\n"
],
"metadata": {
"id": "771G8N1Vk5A2"
}
},
{
"cell_type": "code",
"source": [
"# Define a range of values for the parameter\n",
"sigma_vals = np.arange(0.1,0.5,0.005)\n",
"# Create some arrays to store the likelihoods, negative log likelihoods and sum of squares\n",
"likelihoods = np.zeros_like(sigma_vals)\n",
"nlls = np.zeros_like(sigma_vals)\n",
"sum_squares = np.zeros_like(sigma_vals)\n",
"\n",
"# Initialise the parameters\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Might as well set to the best offset\n",
"beta_1[0,0] = 0.27\n",
"for count in range(len(sigma_vals)):\n",
" # Set the value for the parameter\n",
" sigma = sigma_vals[count]\n",
" # Run the network with new parameters\n",
" mu_pred = y_pred = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
" # Compute and store the three values\n",
" likelihoods[count] = compute_likelihood(y_train, mu_pred, sigma)\n",
" nlls[count] = compute_negative_log_likelihood(y_train, mu_pred, sigma)\n",
" sum_squares[count] = compute_sum_of_squares(y_train, y_pred)\n",
" # Draw the model for every 20th parameter setting\n",
" if count % 20 == 0:\n",
" # Run the model to get values to plot and plot it.\n",
" y_model = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
" plot_univariate_regression(x_model, y_model, x_train, y_train, sigma_model=sigma, title=\"sigma=%3.3f\"%(sigma))"
],
"metadata": {
"id": "dMNAr0R8gg82"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, negative log likelihood, and least squares as a function the value of the standard divation sigma\n",
"fig, ax = plt.subplots(1,3)\n",
"fig.set_size_inches(10.5, 3.5)\n",
"fig.tight_layout(pad=3.0)\n",
"ax[0].plot(sigma_vals, likelihoods); ax[0].set_xlabel('$\\sigma$'); ax[0].set_ylabel('likelihood')\n",
"ax[1].plot(sigma_vals, nlls); ax[1].set_xlabel('$\\sigma$'); ax[1].set_ylabel('negative log likelihood')\n",
"ax[2].plot(sigma_vals, sum_squares); ax[2].set_xlabel('$\\sigma$'); ax[2].set_ylabel('sum of squares')\n",
"plt.show()"
],
"metadata": {
"id": "l9jduyHLDAZC"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Hopefully, you can see that the maximum of the likelihood fn is at the same position as the minimum negative log likelihood\n",
"# The least squares solution does not depend on sigma, so it's just flat -- no use here.\n",
"# Let's check that:\n",
"print(\"Maximum likelihood = %3.3f, at beta_1=%3.3f\"%( (likelihoods[np.argmax(likelihoods)],sigma_vals[np.argmax(likelihoods)])))\n",
"print(\"Minimum negative log likelihood = %3.3f, at beta_1=%3.3f\"%( (nlls[np.argmin(nlls)],sigma_vals[np.argmin(nlls)])))\n",
"# Plot the best model\n",
"sigma= sigma_vals[np.argmin(nlls)]\n",
"y_model = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"plot_univariate_regression(x_model, y_model, x_train, y_train, sigma_model = sigma, title=\"beta_1=%3.3f, sigma =%3.3f\"%(beta_1[0,0],sigma))"
],
"metadata": {
"id": "XH7yER52Dxt5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Obviously, to fit the full neural model we would vary all of the 10 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\omega_{1}$ (and maybe $\\sigma$) until we find the combination that have the maximum likelihood / minimum negative log likelihood / least squares.<br><br>\n",
"\n",
"Here we just varied one at a time as it is easier to see what is going on. This is known as **coordinate descent**.\n"
],
"metadata": {
"id": "q_KeGNAHEbIt"
}
}
]
}


@@ -0,0 +1,428 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOlPP7m+YTLyMPaN0WxRdrb",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap05/5_2_Binary_Cross_Entropy_Loss.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 5.2 Binary Cross-Entropy Loss**\n",
"\n",
"This notebook investigates the binary cross-entropy loss. It follows from applying the formula in section 5.2 to a loss function based on the Bernoulli distribution.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "jSlFkICHwHQF"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PYMZ1x-Pv1ht"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt\n",
"# Import math Library\n",
"import math"
]
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation\n",
"\n",
"# Define a shallow neural network\n",
"def shallow_nn(x, beta_0, omega_0, beta_1, omaga_1):\n",
" # Make sure that input data is (1 x n_data) array\n",
" n_data = x.size\n",
" x = np.reshape(x,(1,n_data))\n",
"\n",
" # This runs the network for ALL of the inputs, x at once so we can draw graph\n",
" h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(omega_0,x))\n",
" model_out = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(omega_1,h1)\n",
" return model_out"
],
"metadata": {
"id": "Fv7SZR3tv7mV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Get parameters for model -- we can call this function to easily reset them\n",
"def get_parameters():\n",
" # And we'll create a network that approximately fits it\n",
" beta_0 = np.zeros((3,1)); # formerly theta_x0\n",
" omega_0 = np.zeros((3,1)); # formerly theta_x1\n",
" beta_1 = np.zeros((1,1)); # formerly phi_0\n",
" omega_1 = np.zeros((1,3)); # formerly phi_x\n",
"\n",
" beta_0[0,0] = 0.3; beta_0[1,0] = -1.0; beta_0[2,0] = -0.5\n",
" omega_0[0,0] = -1.0; omega_0[1,0] = 1.8; omega_0[2,0] = 0.65\n",
" beta_1[0,0] = 2.6;\n",
" omega_1[0,0] = -24.0; omega_1[0,1] = -8.0; omega_1[0,2] = 50.0\n",
"\n",
" return beta_0, omega_0, beta_1, omega_1"
],
"metadata": {
"id": "pUT9Ain_HRim"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Utility function for plotting data\n",
"def plot_binary_classification(x_model, out_model, lambda_model, x_data = None, y_data = None, title= None):\n",
" # Make sure model data are 1D arrays\n",
" x_model = np.squeeze(x_model)\n",
" out_model = np.squeeze(out_model)\n",
" lambda_model = np.squeeze(lambda_model)\n",
"\n",
" fig, ax = plt.subplots(1,2)\n",
" fig.set_size_inches(7.0, 3.5)\n",
" fig.tight_layout(pad=3.0)\n",
" ax[0].plot(x_model,out_model)\n",
" ax[0].set_xlabel('Input, $x$'); ax[0].set_ylabel('Model output')\n",
" ax[0].set_xlim([0,1]);ax[0].set_ylim([-4,4])\n",
" if title is not None:\n",
" ax[0].set_title(title)\n",
" ax[1].plot(x_model,lambda_model)\n",
" ax[1].set_xlabel('Input, $x$'); ax[1].set_ylabel('$\\lambda$ or Pr(y=1|x)')\n",
" ax[1].set_xlim([0,1]);ax[1].set_ylim([-0.05,1.05])\n",
" if title is not None:\n",
" ax[1].set_title(title)\n",
" if x_data is not None:\n",
" ax[1].plot(x_data, y_data, 'ko')\n",
" plt.show()"
],
"metadata": {
"id": "NRR67ri_1TzN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Binary classification\n",
"\n",
"In binary classification tasks, the network predicts the probability of the output belonging to class 1. Since probabilities must lie in [0,1] and the network can output arbitrary values, we map the network through a sigmoid function that ensures the range is valid."
],
"metadata": {
"id": "PsgLZwsPxauP"
}
},
{
"cell_type": "code",
"source": [
"# Sigmoid function that maps [-infty,infty] to [0,1]\n",
"def sigmoid(model_out):\n",
" # TODO -- implement the logistic sigmoid function from equation 5.18\n",
" # Replace this line:\n",
" sig_model_out = np.zeros_like(model_out)\n",
"\n",
" return sig_model_out"
],
"metadata": {
"id": "uFb8h-9IXnIe"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's create some 1D training data\n",
"x_train = np.array([0.09291784,0.46809093,0.93089486,0.67612654,0.73441752,0.86847339,\\\n",
" 0.49873225,0.51083168,0.18343972,0.99380898,0.27840809,0.38028817,\\\n",
" 0.12055708,0.56715537,0.92005746,0.77072270,0.85278176,0.05315950,\\\n",
" 0.87168699,0.58858043])\n",
"y_train = np.array([0,1,1,0,0,1,\\\n",
" 1,0,0,1,0,1,\\\n",
" 0,1,1,0,1,0, \\\n",
" 1,1])\n",
"\n",
"# Get parameters for the model\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"\n",
"# Define a range of input values\n",
"x_model = np.arange(0,1,0.01)\n",
"# Run the model to get values to plot and plot it.\n",
"model_out= shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_model = sigmoid(model_out)\n",
"plot_binary_classification(x_model, model_out, lambda_model, x_train, y_train)\n"
],
"metadata": {
"id": "VWzNOt1swFVd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the the likelihood and the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"
}
},
{
"cell_type": "code",
"source": [
"# Return probability under Bernoulli distribution for input x\n",
"def bernoulli_distribution(y, lambda_param):\n",
" # TODO-- write in the equation for the Bernoulli distribution\n",
" # Equation 5.17 from the notes (you will need np.power)\n",
" # Replace the line below\n",
" prob = np.zeros_like(y)\n",
"\n",
" return prob"
],
"metadata": {
"id": "YaLdRlEX0FkU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.8,bernoulli_distribution(0,0.2)))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.2,bernoulli_distribution(1,0.2)))"
],
"metadata": {
"id": "4TSL14dqHHbV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the likelihood using this function"
],
"metadata": {
"id": "R5z_0dzQMF35"
}
},
{
"cell_type": "code",
"source": [
"# Return the likelihood of all of the data under the model\n",
"def compute_likelihood(y_train, lambda_param):\n",
" # TODO -- compute the likelihood of the data -- the product of the Bernoulli probabilities for each data point\n",
" # Top line of equation 5.3 in the notes\n",
" # You will need np.prod() and the bernoulli_distribution function you used above\n",
" # Replace the line below\n",
" likelihood = 0\n",
"\n",
" return likelihood"
],
"metadata": {
"id": "zpS7o6liCx7f"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_train = sigmoid(model_out)\n",
"# Compute the likelihood\n",
"likelihood = compute_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(0.000070237,likelihood))"
],
"metadata": {
"id": "1hQxBLoVNlr2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can see that this gives a very small answer, even for this small 1D dataset, and with the model fitting quite well. This is because it is the product of several probabilities, which are all quite small themselves.\n",
"This will get out of hand pretty quickly with real datasets -- the likelihood will get so small that we can't represent it with normal finite-precision math\n",
"\n",
"This is why we use negative log likelihood"
],
"metadata": {
"id": "HzphKgPfOvlk"
}
},
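{
"cell_type": "markdown",
"source": [
"Concretely, the logarithm turns the product into a sum:\n",
"$$-\\log \\prod_{i=1}^{I} Pr(y_i|\\lambda_i) = -\\sum_{i=1}^{I} \\log Pr(y_i|\\lambda_i),$$\n",
"which stays in a manageable numerical range even when the product itself would underflow."
],
"metadata": {}
},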
{
"cell_type": "code",
"source": [
"# Return the negative log likelihood of the data under the model\n",
"def compute_negative_log_likelihood(y_train, lambda_param):\n",
" # TODO -- compute the likelihood of the data -- don't use the likelihood function above -- compute the negative sum of the log probabilities\n",
" # You will need np.sum(), np.log()\n",
" # Replace the line below\n",
" nll = 0\n",
"\n",
" return nll"
],
"metadata": {
"id": "dsT0CWiKBmTV"
},
"execution_count": null,
"outputs": []
},
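{
"cell_type": "markdown",
"source": [
"For reference, one possible completion (a sketch, under the hypothetical name `compute_negative_log_likelihood_sketch`) is below. It writes the log Bernoulli probability out directly in its binary cross-entropy form rather than calling `bernoulli_distribution`."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# One possible completion (a sketch) -- negative sum of the log Bernoulli probabilities\n",
"def compute_negative_log_likelihood_sketch(y_train, lambda_param):\n",
"  # log(lambda^y * (1-lambda)^(1-y)) = y*log(lambda) + (1-y)*log(1-lambda)\n",
"  return -np.sum(y_train * np.log(lambda_param) + (1.0 - y_train) * np.log(1.0 - lambda_param))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},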
{
"cell_type": "code",
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Pass through the sigmoid function\n",
"lambda_train = sigmoid(model_out)\n",
"# Compute the log likelihood\n",
"nll = compute_negative_log_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(9.563639387,nll))"
],
"metadata": {
"id": "nVxUXg9rQmwI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's investigate finding the maximum likelihood / minimum negative log likelihood solution. For simplicity, we'll assume that all the parameters are fixed except one and look at how the likelihood and log likelihood change as we manipulate the last parameter. We'll start with overall y_offset, beta_1 (formerly phi_0)"
],
"metadata": {
"id": "OgcRojvPWh4V"
}
},
{
"cell_type": "code",
"source": [
"# Define a range of values for the parameter\n",
"beta_1_vals = np.arange(-2,6.0,0.1)\n",
"# Create some arrays to store the likelihoods, negative log likelihoods\n",
"likelihoods = np.zeros_like(beta_1_vals)\n",
"nlls = np.zeros_like(beta_1_vals)\n",
"\n",
"# Initialise the parameters\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"for count in range(len(beta_1_vals)):\n",
" # Set the value for the parameter\n",
" beta_1[0,0] = beta_1_vals[count]\n",
" # Run the network with new parameters\n",
" model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_train = sigmoid(model_out)\n",
" # Compute and store the three values\n",
" likelihoods[count] = compute_likelihood(y_train,lambda_train)\n",
" nlls[count] = compute_negative_log_likelihood(y_train, lambda_train)\n",
" # Draw the model for every 20th parameter setting\n",
" if count % 20 == 0:\n",
" # Run the model to get values to plot and plot it.\n",
" model_out = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_model = sigmoid(model_out)\n",
" plot_binary_classification(x_model, model_out, lambda_model, x_train, y_train, title=\"beta_1[0]=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "pFKtDaAeVU4U"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, and negative log likelihoods as a function the value of the offset beta1\n",
"fig, ax = plt.subplots(1,2)\n",
"fig.set_size_inches(10.5, 3.5)\n",
"fig.tight_layout(pad=3.0)\n",
"ax[0].plot(beta_1_vals, likelihoods); ax[0].set_xlabel('beta_1[0]'); ax[0].set_ylabel('likelihood')\n",
"ax[1].plot(beta_1_vals, nlls); ax[1].set_xlabel('beta_1[0]'); ax[1].set_ylabel('negative log likelihood')\n",
"plt.show()"
],
"metadata": {
"id": "UHXeTa9MagO6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Hopefully, you can see that the maximum of the likelihood fn is at the same position as the minimum negative log likelihood\n",
"# Let's check that:\n",
"print(\"Maximum likelihood = %f, at beta_1=%3.3f\"%( (likelihoods[np.argmax(likelihoods)],beta_1_vals[np.argmax(likelihoods)])))\n",
"print(\"Minimum negative log likelihood = %f, at beta_1=%3.3f\"%( (nlls[np.argmin(nlls)],beta_1_vals[np.argmin(nlls)])))\n",
"\n",
"# Plot the best model\n",
"beta_1[0,0] = beta_1_vals[np.argmin(nlls)]\n",
"model_out = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_model = sigmoid(model_out)\n",
"plot_binary_classification(x_model, model_out, lambda_model, x_train, y_train, title=\"beta_1[0]=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "aDEPhddNdN4u"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"They both give the same answer. But you can see from the likelihood above that the likelihood is very small unless the parameters are almost correct. So in practice, we would work with the negative log likelihood.<br><br>\n",
"\n",
"Again, to fit the full neural model we would vary all of the 10 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\omega_{1}$ until we find the combination that have the maximum likelihood / minimum negative log likelihood.<br><br>\n",
"\n"
],
"metadata": {
"id": "771G8N1Vk5A2"
}
}
]
}

Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb Normal file

@@ -0,0 +1,451 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOVTohDBtmCCzSEqLJ4J9R/",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# **Notebook 5.3 Multiclass Cross-Entropy Loss**\n",
"\n",
"This notebook investigates the multi-class cross-entropy loss. It follows from applying the formula in section 5.2 to a loss function based on the Categorical distribution.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "jSlFkICHwHQF"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PYMZ1x-Pv1ht"
},
"outputs": [],
"source": [
"# Imports math library\n",
"import numpy as np\n",
"# Used for repmat\n",
"import numpy.matlib\n",
"# Imports plotting library\n",
"import matplotlib.pyplot as plt\n",
"# Import math Library\n",
"import math"
]
},
{
"cell_type": "code",
"source": [
"# Define the Rectified Linear Unit (ReLU) function\n",
"def ReLU(preactivation):\n",
" activation = preactivation.clip(0.0)\n",
" return activation\n",
"\n",
"# Define a shallow neural network\n",
"def shallow_nn(x, beta_0, omega_0, beta_1, omaga_1):\n",
" # Make sure that input data is (1 x n_data) array\n",
" n_data = x.size\n",
" x = np.reshape(x,(1,n_data))\n",
"\n",
" # This runs the network for ALL of the inputs, x at once so we can draw graph\n",
" h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(omega_0,x))\n",
" model_out = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(omega_1,h1)\n",
" return model_out"
],
"metadata": {
"id": "Fv7SZR3tv7mV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Get parameters for model -- we can call this function to easily reset them\n",
"def get_parameters():\n",
" # And we'll create a network that approximately fits it\n",
" beta_0 = np.zeros((3,1)); # formerly theta_x0\n",
" omega_0 = np.zeros((3,1)); # formerly theta_x1\n",
" beta_1 = np.zeros((3,1)); # NOTE -- there are three outputs now (one for each class, so three output biases)\n",
" omega_1 = np.zeros((3,3)); # NOTE -- there are three outputs now (one for each class, so nine output weights, connecting 3 hidden units to 3 outputs)\n",
"\n",
" beta_0[0,0] = 0.3; beta_0[1,0] = -1.0; beta_0[2,0] = -0.5\n",
" omega_0[0,0] = -1.0; omega_0[1,0] = 1.8; omega_0[2,0] = 0.65\n",
" beta_1[0,0] = 2.0; beta_1[1,0] = -2; beta_1[2,0] = 0.0\n",
" omega_1[0,0] = -24.0; omega_1[0,1] = -8.0; omega_1[0,2] = 50.0\n",
" omega_1[1,0] = -2.0; omega_1[1,1] = 8.0; omega_1[1,2] = -30.0\n",
" omega_1[2,0] = 16.0; omega_1[2,1] = -8.0; omega_1[2,2] =-8\n",
"\n",
" return beta_0, omega_0, beta_1, omega_1"
],
"metadata": {
"id": "pUT9Ain_HRim"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Utility function for plotting data\n",
"def plot_multiclass_classification(x_model, out_model, lambda_model, x_data = None, y_data = None, title= None):\n",
" # Make sure model data are 1D arrays\n",
" n_data = len(x_model)\n",
" n_class = 3\n",
" x_model = np.squeeze(x_model)\n",
" out_model = np.reshape(out_model, (n_class,n_data))\n",
" lambda_model = np.reshape(lambda_model, (n_class,n_data))\n",
"\n",
" fig, ax = plt.subplots(1,2)\n",
" fig.set_size_inches(7.0, 3.5)\n",
" fig.tight_layout(pad=3.0)\n",
" ax[0].plot(x_model,out_model[0,:],'r-')\n",
" ax[0].plot(x_model,out_model[1,:],'g-')\n",
" ax[0].plot(x_model,out_model[2,:],'b-')\n",
" ax[0].set_xlabel('Input, $x$'); ax[0].set_ylabel('Model outputs')\n",
" ax[0].set_xlim([0,1]);ax[0].set_ylim([-4,4])\n",
" if title is not None:\n",
" ax[0].set_title(title)\n",
" ax[1].plot(x_model,lambda_model[0,:],'r-')\n",
" ax[1].plot(x_model,lambda_model[1,:],'g-')\n",
" ax[1].plot(x_model,lambda_model[2,:],'b-')\n",
" ax[1].set_xlabel('Input, $x$'); ax[1].set_ylabel('$\\lambda$ or Pr(y=k|x)')\n",
" ax[1].set_xlim([0,1]);ax[1].set_ylim([-0.1,1.05])\n",
" if title is not None:\n",
" ax[1].set_title(title)\n",
" if x_data is not None:\n",
" for i in range(len(x_data)):\n",
" if y_data[i] ==0:\n",
" ax[1].plot(x_data[i],-0.05, 'r.')\n",
" if y_data[i] ==1:\n",
" ax[1].plot(x_data[i],-0.05, 'g.')\n",
" if y_data[i] ==2:\n",
" ax[1].plot(x_data[i],-0.05, 'b.')\n",
" plt.show()"
],
"metadata": {
"id": "NRR67ri_1TzN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Multiclass classification\n",
"\n",
"For multiclass classification, the network must predict the probability of $K$ classes, using $K$ outputs. However, these probability must be non-negative and sum to one, and the network outputs can take arbitrary values. Hence, we pass the outputs through a softmax function which maps $K$ arbitrary values to $K$ non-negative values that sum to one."
],
"metadata": {
"id": "PsgLZwsPxauP"
}
},
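{
"cell_type": "markdown",
"source": [
"For reference, the softmax has the standard form\n",
"$$\\lambda_k = \\frac{\\exp(z_k)}{\\sum_{k'=1}^{K}\\exp(z_{k'})},$$\n",
"where $z_k$ is the $k$-th network output: exponentiating makes every value positive, and dividing by the sum makes the $K$ values sum to one."
],
"metadata": {}
},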
{
"cell_type": "code",
"source": [
"# Softmax function that maps a vector of arbitrary values to a vector of values that are positive and sum to one.\n",
"def softmax(model_out):\n",
" # This operation has to be done separately for every column of the input\n",
" # Compute exponentials of all the elements\n",
" # TODO: compute the softmax function (eq 5.22)\n",
" # Replace this skeleton code\n",
"\n",
" # Compute the exponential of the model outputs\n",
" exp_model_out = np.zeros_like(model_out) ;\n",
" # Compute the sum of the exponentials (denominator of equation 5.22)\n",
" sum_exp_model_out = np.zeros_like(model_out) ;\n",
" # Normalize the exponentials (np.matlib.repmat might be useful here)\n",
" softmax_model_out = np.ones_like(model_out)/ exp_model_out.shape[0]\n",
"\n",
" return softmax_model_out"
],
"metadata": {
"id": "uFb8h-9IXnIe"
},
"execution_count": null,
"outputs": []
},
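{
"cell_type": "markdown",
"source": [
"If you get stuck, one possible completion (a sketch, assuming equation 5.22 is the standard softmax above) is in the next cell under the hypothetical name `softmax_sketch`. Note that the normalization must be done separately for every column (every input). Try your own version first."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# One possible completion (a sketch) -- softmax applied independently to each column\n",
"def softmax_sketch(model_out):\n",
"  # Exponentiate every element so all values are positive\n",
"  exp_model_out = np.exp(model_out)\n",
"  # Sum over the class dimension (rows), giving one denominator per column\n",
"  sum_exp_model_out = np.sum(exp_model_out, axis=0)\n",
"  # Copy the row of sums to match the shape, then normalize\n",
"  return exp_model_out / np.matlib.repmat(sum_exp_model_out, model_out.shape[0], 1)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},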
{
"cell_type": "code",
"source": [
"\n",
"# Let's create some 1D training data\n",
"x_train = np.array([0.09291784,0.46809093,0.93089486,0.67612654,0.73441752,0.86847339,\\\n",
" 0.49873225,0.51083168,0.18343972,0.99380898,0.27840809,0.38028817,\\\n",
" 0.12055708,0.56715537,0.92005746,0.77072270,0.85278176,0.05315950,\\\n",
" 0.87168699,0.58858043])\n",
"y_train = np.array([2,0,1,2,1,0,\\\n",
" 0,2,2,0,2,0,\\\n",
" 2,0,1,2,1,2, \\\n",
" 1,0])\n",
"\n",
"# Get parameters for the model\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"\n",
"# Define a range of input values\n",
"x_model = np.arange(0,1,0.01)\n",
"# Run the model to get values to plot and plot it.\n",
"model_out= shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_model = softmax(model_out)\n",
"plot_multiclass_classification(x_model, model_out, lambda_model, x_train, y_train)\n"
],
"metadata": {
"id": "VWzNOt1swFVd"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"
}
},
{
"cell_type": "code",
"source": [
"# Return probability under Categorical distribution for input x\n",
"# Just take value from row k of lambda param where y =k,\n",
"def categorical_distribution(y, lambda_param):\n",
" return np.array([lambda_param[row, i] for i, row in enumerate (y)])"
],
"metadata": {
"id": "YaLdRlEX0FkU"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.2,categorical_distribution(np.array([[0]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.5,categorical_distribution(np.array([[1]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.3,categorical_distribution(np.array([[2]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"\n"
],
"metadata": {
"id": "4TSL14dqHHbV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's compute the likelihood using this function"
],
"metadata": {
"id": "R5z_0dzQMF35"
}
},
{
"cell_type": "code",
"source": [
"# Return the likelihood of all of the data under the model\n",
"def compute_likelihood(y_train, lambda_param):\n",
" # TODO -- compute the likelihood of the data -- the product of the categorical probabilities for each data point\n",
" # Top line of equation 5.3 in the notes\n",
" # You will need np.prod() and the categorical_distribution function you used above\n",
" # Replace the line below\n",
" likelihood = 0\n",
"\n",
" return likelihood"
],
"metadata": {
"id": "zpS7o6liCx7f"
},
"execution_count": null,
"outputs": []
},
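{
"cell_type": "markdown",
"source": [
"As in the binary case, a possible completion (a sketch, under the hypothetical name `compute_likelihood_sketch`):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# One possible completion (a sketch) -- product of the per-point categorical probabilities\n",
"def compute_likelihood_sketch(y_train, lambda_param):\n",
"  return np.prod(categorical_distribution(y_train, lambda_param))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},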
{
"cell_type": "code",
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_train = softmax(model_out)\n",
"# Compute the likelihood\n",
"likelihood = compute_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(0.000000041,likelihood))"
],
"metadata": {
"id": "1hQxBLoVNlr2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can see that this gives a very small answer, even for this small 1D dataset, and with the model fitting quite well. This is because it is the product of several probabilities, which are all quite small themselves.\n",
"This will get out of hand pretty quickly with real datasets -- the likelihood will get so small that we can't represent it with normal finite-precision math\n",
"\n",
"This is why we use negative log likelihood"
],
"metadata": {
"id": "HzphKgPfOvlk"
}
},
{
"cell_type": "code",
"source": [
"# Return the negative log likelihood of the data under the model\n",
"def compute_negative_log_likelihood(y_train, lambda_param):\n",
" # TODO -- compute the likelihood of the data -- don't use the likelihood function above -- compute the negative sum of the log probabilities\n",
" # You will need np.sum(), np.log()\n",
" # Replace the line below\n",
" nll = 0\n",
"\n",
" return nll"
],
"metadata": {
"id": "dsT0CWiKBmTV"
},
"execution_count": null,
"outputs": []
},
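{
"cell_type": "markdown",
"source": [
"And a possible completion for the negative log likelihood (a sketch using `categorical_distribution` directly; the name `compute_negative_log_likelihood_sketch` is hypothetical):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# One possible completion (a sketch) -- negative sum of the log categorical probabilities\n",
"def compute_negative_log_likelihood_sketch(y_train, lambda_param):\n",
"  return -np.sum(np.log(categorical_distribution(y_train, lambda_param)))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},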
{
"cell_type": "code",
"source": [
"# Let's test this\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"# Use our neural network to predict the mean of the Gaussian\n",
"model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
"# Pass the outputs through the softmax function\n",
"lambda_train = softmax(model_out)\n",
"# Compute the log likelihood\n",
"nll = compute_negative_log_likelihood(y_train, lambda_train)\n",
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %9.9f, Your answer = %9.9f\"%(17.015457867,nll))"
],
"metadata": {
"id": "nVxUXg9rQmwI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now let's investigate finding the maximum likelihood / minimum log likelihood solution. For simplicity, we'll assume that all the parameters are fixed except one and look at how the likelihood and log likelihood change as we manipulate the last parameter. We'll start with overall y_offset, beta_1 (formerly phi_0)"
],
"metadata": {
"id": "OgcRojvPWh4V"
}
},
{
"cell_type": "code",
"source": [
"# Define a range of values for the parameter\n",
"beta_1_vals = np.arange(-2,6.0,0.1)\n",
"# Create some arrays to store the likelihoods, negative log likelihoods\n",
"likelihoods = np.zeros_like(beta_1_vals)\n",
"nlls = np.zeros_like(beta_1_vals)\n",
"\n",
"# Initialise the parameters\n",
"beta_0, omega_0, beta_1, omega_1 = get_parameters()\n",
"for count in range(len(beta_1_vals)):\n",
" # Set the value for the parameter\n",
" beta_1[0,0] = beta_1_vals[count]\n",
" # Run the network with new parameters\n",
" model_out = shallow_nn(x_train, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_train = softmax(model_out)\n",
" # Compute and store the three values\n",
" likelihoods[count] = compute_likelihood(y_train,lambda_train)\n",
" nlls[count] = compute_negative_log_likelihood(y_train, lambda_train)\n",
" # Draw the model for every 20th parameter setting\n",
" if count % 20 == 0:\n",
" # Run the model to get values to plot and plot it.\n",
" model_out = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
" lambda_model = softmax(model_out)\n",
" plot_multiclass_classification(x_model, model_out, lambda_model, x_train, y_train, title=\"beta1[0,0]=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "pFKtDaAeVU4U"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Now let's plot the likelihood, negative log likelihood as a function the value of the offset beta1\n",
"fig, ax = plt.subplots(1,2)\n",
"fig.set_size_inches(10.5, 3.5)\n",
"fig.tight_layout(pad=3.0)\n",
"ax[0].plot(beta_1_vals, likelihoods); ax[0].set_xlabel('beta_1[0,0]'); ax[0].set_ylabel('likelihood')\n",
"ax[1].plot(beta_1_vals, nlls); ax[1].set_xlabel('beta_1[0,0]'); ax[1].set_ylabel('negative log likelihood')\n",
"plt.show()"
],
"metadata": {
"id": "UHXeTa9MagO6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Hopefully, you can see that the maximum of the likelihood fn is at the same position as the minimum negative log likelihood solution\n",
"# Let's check that:\n",
"print(\"Maximum likelihood = %f, at beta_1=%3.3f\"%( (likelihoods[np.argmax(likelihoods)],beta_1_vals[np.argmax(likelihoods)])))\n",
"print(\"Minimum negative log likelihood = %f, at beta_1=%3.3f\"%( (nlls[np.argmin(nlls)],beta_1_vals[np.argmin(nlls)])))\n",
"\n",
"# Plot the best model\n",
"beta_1[0,0] = beta_1_vals[np.argmin(nlls)]\n",
"model_out = shallow_nn(x_model, beta_0, omega_0, beta_1, omega_1)\n",
"lambda_model = softmax(model_out)\n",
"plot_multiclass_classification(x_model, model_out, lambda_model, x_train, y_train, title=\"beta1[0,0]=%3.3f\"%(beta_1[0,0]))\n"
],
"metadata": {
"id": "aDEPhddNdN4u"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"They both give the same answer. But you can see from the likelihood above that the likelihood is very small unless the parameters are almost correct. So in practice, we would work with the negative log likelihood.<br><br>\n",
"\n",
"Again, to fit the full neural model we would vary all of the 16 parameters of the network in the $\\boldsymbol\\beta_{0},\\boldsymbol\\omega_{0},\\boldsymbol\\beta_{1},\\boldsymbol\\omega_{1}$ until we find the combination that have the maximum likelihood / minimum negative log likelihood.<br><br>\n",
"\n"
],
"metadata": {
"id": "771G8N1Vk5A2"
}
}
]
}

Notebooks/Info.txt Normal file

@@ -0,0 +1 @@
This directory contains the Python notebooks referenced in the margins of the main text

(Several modified binary files not shown.)

BIN  PDFFigures/UDLChap14PDF.zip Normal file
BIN  PDFFigures/UDLChap15PDF.zip Normal file
BIN  PDFFigures/UDLChap16PDF.zip Normal file
BIN  PDFFigures/UDLChap17PDF.zip Normal file
BIN  PDFFigures/UDLChap18PDF.zip Normal file
BIN  PDFFigures/UDLChap19PDF.zip Normal file
BIN  PDFFigures/UDLChap1PDF.zip Normal file
BIN  PDFFigures/UDLChap20PDF.zip Normal file
BIN  PDFFigures/UDLChap21PDF.zip Normal file

(Several more modified binary files not shown.)

BIN  UDLCover.jpg Normal file (128 KiB)
BIN  UDLCoverSmall.jpg Normal file (38 KiB)

(One more modified binary file not shown.)

index.html

@@ -1,11 +1,13 @@
 <h1>Understanding Deep Learning</h1>
 by Simon J.D. Prince
 <br>
-To be published by MIT Press.
+To be published by MIT Press Dec 5th 2023.<br>
+<img src="https://raw.githubusercontent.com/udlbook/udlbook/main/UDLCoverSmall.jpg" alt="front cover">
 <h2> Download draft PDF </h2>
-<a href="https://github.com/udlbook/udlbook/releases/download/v.0.7.12/UnderstandingDeepLearning_06_04_23_C.pdf">Draft PDF Chapters 1-20</a><br> 2023-04-06. CC-BY-NC-ND license
+<a href="https://github.com/udlbook/udlbook/releases/download/v1.1/UnderstandingDeepLearning_23_07_23_C.pdf">Draft PDF Chapters 1-21</a><br> 2023-07-23. CC-BY-NC-ND license
 <br>
 <img src="https://img.shields.io/github/downloads/udlbook/udlbook/total" alt="download stats shield">
 <br>
@@ -33,44 +35,84 @@ To be published by MIT Press.
 <li> Chapter 14 - Unsupervised learning
 <li> Chapter 15 - Generative adversarial networks
 <li> Chapter 16 - Normalizing flows
-<li> Chapter 17 - Variational auto-encoders
+<li> Chapter 17 - Variational autoencoders
 <li> Chapter 18 - Diffusion models
 <li> Chapter 19 - Deep reinforcement learning
 <li> Chapter 20 - Why does deep learning work?
+<li> Chapter 21 - Deep learning and ethics
+</ul>
+<h2>Resources for instructors </h2>
+<p></p>Instructor answer booklet available with proof of credentials via <a href="https://mitpress.mit.edu/9780262048644/understanding-deep-learning"/> MIT Press</a></p>
+<p></p>Figures in PDF (vector) / SVG (vector) / Powerpoint (images):
+<ul>
+<li> Chapter 1 - Introduction: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap1PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1udnl5pUOAc8DcAQ7HQwyzP9pwL95ynnv"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1IjTqIUvWCJc71b5vEJYte-Dwujcp7rvG/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 2 - Supervised learning: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap2PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1VSxcU5y1qNFlmd3Lb3uOWyzILuOj1Dla"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1Br7R01ROtRWPlNhC_KOommeHAWMBpWtz/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 3 - Shallow neural networks: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap3PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=19kZFWlXhzN82Zx02ByMmSZOO4T41fmqI"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1e9M3jB5I9qZ4dCBY90Q3Hwft_i068QVQ/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 4 - Deep neural networks: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap4PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1ojr0ebsOhzvS04ItAflX2cVmYqHQHZUa"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1LTSsmY4mMrJbqXVvoTOCkQwHrRKoYnJj/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 5 - Loss functions: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap5PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=17MJO7fiMpFZVqKeqXTbQ36AMpmR4GizZ"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1gcpC_3z9oRp87eMkoco-kdLD-MM54Puk/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 6 - Training models: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap6PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1VPdhFRnCr9_idTrX0UdHKGAw2shUuwhK"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1AKoeggAFBl9yLC7X5tushAGzCCxmB7EY/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 7 - Gradients and initialization: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap7PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1TTl4gvrTvNbegnml4CoGoKOOd6O8-PGs"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/11zhB6PI-Dp6Ogmr4IcI6fbvbqNqLyYcz/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 8 - Measuring performance: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap8PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=19eQOnygd_l0DzgtJxXuYnWa4z7QKJrJx"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1SHRmJscDLUuQrG7tmysnScb3ZUAqVMZo/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 9 - Regularization: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap9PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1LprgnUGL7xAM9-jlGZC9LhMPeefjY0r0"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1VwIfvjpdfTny6sEfu4ZETwCnw6m8Eg-5/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 10 - Convolutional networks: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap10PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1-Wb3VzaSvVeRzoUzJbI2JjZE0uwqupM9"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1MtfKBC4Y9hWwGqeP6DVwUNbi1j5ncQCg/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 11 - Residual networks: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap11PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1Mr58jzEVseUAfNYbGWCQyDtEDwvfHRi1"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1saY8Faz0KTKAAifUrbkQdLA2qkyEjOPI/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 12 - Transformers: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap12PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1txzOVNf8-jH4UfJ6SLnrtOfPd1Q3ebzd"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1GVNvYWa0WJA6oKg89qZre-UZEhABfm0l/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 13 - Graph neural networks: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap13PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1lQIV6nRp6LVfaMgpGFhuwEXG-lTEaAwe"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1YwF3U82c1mQ74c1WqHVTzLZ0j7GgKaWP/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 14 - Unsupervised learning: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap14PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1aMbI6iCuUvOywqk5pBOmppJu1L1anqsM"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1A-lBGv3NHl4L32NvfFgy1EKeSwY-0UeB/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true"> Powerpoint Figures</a>
+<li> Chapter 15 - Generative adversarial networks: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap15PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1EErnlZCOlXc3HK7m83T2Jh_0NzIUHvtL"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/10Ernk41ShOTf4IYkMD-l4dJfKATkXH4w/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 16 - Normalizing flows: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap16PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1B9bxtmdugwtg-b7Y4AdQKAIEVWxjx8l3"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1nLLzqb9pdfF_h6i1HUDSyp7kSMIkSUUA/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 17 - Variational autoencoders: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap17PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1SNtNIY7khlHQYMtaOH-FosSH3kWwL4b7"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1lQE4Bu7-LgvV2VlJOt_4dQT-kusYl7Vo/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Chapter 18 - Diffusion models: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap18PDF.zip">PDF Figures</a> / <a href="https://docs.google.com/presentation/d/1x_ufIBtVPzWUvRieKMkpw5SdRjXWwdfR/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true"> PowerPoint Figures</a>
+<li> Chapter 19 - Deep reinforcement learning: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap19PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1a5WUoF7jeSgwC_PVdckJi1Gny46fCqh0"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1TnYmVbFNhmMFetbjyfXGmkxp1EHauMqr/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true"> PowerPoint Figures </a>
+<li> Chapter 20 - Why does deep learning work?: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap20PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1M2d0DHEgddAQoIedKSDTTt7m1ZdmBLQ3"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1coxF4IsrCzDTLrNjRagHvqB_FBy10miA/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true"> PowerPoint Figures</a>
+<li> Chapter 21 - Deep learning and ethics: <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap21PDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1jixmFfwmZkW_UVYzcxmDcMsdFFtnZ0bU"> SVG Figures</a>/ <a href="https://docs.google.com/presentation/d/1EtfzanZYILvi9_-Idm28zD94I_6OrN9R/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">PowerPoint Figures</a>
+<li> Appendices - <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLAppendixPDF.zip">PDF Figures</a> / <a href="https://drive.google.com/uc?export=download&id=1k2j7hMN40ISPSg9skFYWFL3oZT7r8v-l"> SVG Figures</a> / <a href="https://docs.google.com/presentation/d/1_2cJHRnsoQQHst0rwZssv-XH4o5SEHks/edit?usp=drive_link&ouid=110441678248547154185&rtpof=true&sd=true">Powerpoint Figures</a>
+</ul>
+Instructions for editing figures / equations can be found <a href="https://drive.google.com/uc?export=download&id=1T_MXXVR4AfyMnlEFI-UVDh--FXI5deAp/">here</a>.</p>
+<h2>Resources for students</h2>
+<p>Answers to selected questions: <a href="https://github.com/udlbook/udlbook/raw/main/UDL_Answer_Booklet_Students.pdf">PDF</a></p>
+<p>Python notebooks:</p>
+<ul>
+<li> Chapter 1 - Introduction: <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap01/1_1_BackgroundMathematics.ipynb">1.1</a>
+<li> Chapter 2 - Supervised learning <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap02/2_1_Supervised_Learning.ipynb">2.1</a>
+<li> Chapter 3 - Shallow neural networks <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_1_Shallow_Networks_I.ipynb">3.1</a>, <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_2_Shallow_Networks_II.ipynb">3.2</a>, <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_3_Shallow_Network_Regions.ipynb">3.3</a>, <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap03/3_4_Activation_Functions.ipynb">3.4</a>
+<li> Chapter 4 - Deep neural networks <a href="https://github.com/udlbook/udlbook/blob/main/Notebooks/Chap04/4_1_Composing_Networks.ipynb">4.1</a>, 4.2, 4.3 (coming soon)
+<li> Chapter 5 - Loss functions (coming soon)
+<li> Chapter 6 - Training models (coming soon)
+<li> Chapter 7 - Gradients and initialization (coming soon)
+<li> Chapter 8 - Measuring performance (coming soon)
+<li> Chapter 9 - Regularization (coming soon)
+<li> Chapter 10 - Convolutional networks (coming soon)
+<li> Chapter 11 - Residual networks (coming soon)
+<li> Chapter 12 - Transformers (coming soon)
+<li> Chapter 13 - Graph neural networks (coming soon)
+<li> Chapter 14 - Unsupervised learning (coming soon)
+<li> Chapter 15 - Generative adversarial networks (coming soon)
+<li> Chapter 16 - Normalizing flows (coming soon)
+<li> Chapter 17 - Variational autoencoders (coming soon)
+<li> Chapter 18 - Diffusion models (coming soon)
+<li> Chapter 19 - Deep reinforcement learning (coming soon)
+<li> Chapter 20 - Why does deep learning work? (coming soon)
+<li> Chapter 21 - Deep learning and ethics (coming soon)
 </ul>
 <br>
-Citation:
+<h2>Citation:</h2>
 <pre><code>
 @book{prince2023understanding,
 author = "Simon J.D. Prince",
 title = "Understanding Deep Learning",
 publisher = "MIT Press",
 year = 2023,
-url = "https://udlbook.github.io/udlbook/"
+url = "http://udlbook.com"
 }
 </code></pre>
-<h2>Resources for instructors </h2>
-<ul>
-<li> Chapter 1 - Introduction
-<li> Chapter 2 - Supervised learning: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap2PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap2.pptx">PowerPoint Figures</a>
-<li> Chapter 3 - Shallow neural networks: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap3PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap3.pptx">PowerPoint Figures</a>
-<li> Chapter 4 - Deep neural networks: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap4PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap4.pptx">PowerPoint Figures</a>
-<li> Chapter 5 - Loss functions: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap5PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap5.pptx">PowerPoint Figures</a>
-<li> Chapter 6 - Training models: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap6PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap6.pptx">PowerPoint Figures</a>
-<li> Chapter 7 - Gradients and initialization: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap7PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap7.pptx">PowerPoint Figures</a>
-<li> Chapter 8 - Measuring performance: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap8PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap8.pptx">PowerPoint Figures</a>
-<li> Chapter 9 - Regularization: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap9PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap9.pptx">PowerPoint Figures</a>
-<li> Chapter 10 - Convolutional networks: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap10PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap10.pptx">PowerPoint Figures</a>
-<li> Chapter 11 - Residual networks: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap11PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap11.pptx">PowerPoint Figures</a>
-<li> Chapter 12 - Transformers: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap12PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap12.pptx">PowerPoint Figures</a>
-<li> Chapter 13 - Graph neural networks: Slides / Notebooks / <a href="https://github.com/udlbook/udlbook/raw/main/PDFFigures/UDLChap13PDF.zip">PDF Figures</a> / <a href="https://github.com/udlbook/udlbook/raw/main/Slides/UDLChap13.pptx">PowerPoint Figures</a>
-<li> Chapter 14 - Unsupervised learning: Slides / Notebooks / PDF Figures / Powerpoint Figures
-<li> Chapter 15 - Generative adversarial networks: Slides / Notebooks / PDF Figures / PowerPoint Figures
-<li> Chapter 16 - Normalizing flows: Slides / Notebooks / PDF Figures / PowerPoint Figures
-<li> Chapter 17 - Variational auto-encoders: Slides / Notebooks / PDF Figures / PowerPoint Figures
-<li> Chapter 18 - Diffusion models: Slides / Notebooks / PDF Figures / PowerPoint Figures
-<li> Chapter 19 - Deep reinforcement learning: Slides / Notebooks / PDF Figures / PowerPoint Figures
-<li> Chapter 20 - Why does deep learning work?: Slides / Notebooks / PDF Figures / PowerPoint Figures
-</ul>