Compare commits

...

159 Commits

Author SHA1 Message Date
udlbook
51444a4bbb Add files via upload 2026-02-08 12:40:52 -05:00
udlbook
987df8cd88 Merge pull request #311 from jalaneunos/main
Fix terminal state check in 19.3 and 19.4, fix typo in 19.4
2026-02-08 10:28:49 -05:00
udlbook
9873b8b20d Created using Colab 2026-02-08 09:43:18 -05:00
udlbook
bc0ca18695 Created using Colab 2026-02-08 09:38:26 -05:00
udlbook
d66ba78862 Created using Colab 2026-02-08 09:37:20 -05:00
udlbook
a8fe82b5e1 Created using Colab 2026-01-20 12:49:16 -05:00
jalaneunos
ac540f1294 fix: correct terminal state in 19.4, fix typo 2026-01-05 17:47:40 +08:00
jalaneunos
080bdd319d fix: correct terminal state 2026-01-05 10:47:35 +08:00
udlbook
60d50aa9d2 Created using Colab 2026-01-01 15:33:00 -05:00
udlbook
d45cba5c95 Merge pull request #309 from fxwin/main
Add CUDA Support for Notebook 10.5
2026-01-01 15:22:51 -05:00
udlbook
e9f75027bb Merge pull request #308 from forestschao/patch-1
Update 11_1_Shattered_Gradients.ipynb
2026-01-01 15:22:28 -05:00
udlbook
9de32ff327 Delete notebooks/SAT2/EfficientBinarySearch.ipynb 2025-12-15 15:15:33 -05:00
udlbook
871304357c Created using Colab 2025-12-15 15:15:18 -05:00
udlbook
c385687d8a Created using Colab 2025-12-03 11:05:25 -05:00
Felix Winterhalter
207ff5e636 Fix unintended changes
A prior commit had removed parts of the code for drawing a handful of training samples.
2025-11-29 21:31:30 +01:00
Felix Winterhalter
cc9c695ff7 Add CUDA support to notebook 10.5 2025-11-29 21:20:38 +01:00
udlbook
75646c2c8e Delete notebooks/ShallowNN/LinearRegions_Answers.ipynb 2025-11-27 16:26:50 -05:00
udlbook
5552890706 Created using Colab 2025-11-27 16:25:42 -05:00
udlbook
01755deefe Created using Colab 2025-11-27 16:16:07 -05:00
udlbook
afb9ead4d8 Created using Colab 2025-11-27 16:07:59 -05:00
udlbook
57151930de Created using Colab 2025-11-27 15:50:21 -05:00
udlbook
ca85255c74 Delete notebooks/ShallowNN/ActivationFunctions.ipynb 2025-11-18 12:48:17 -05:00
udlbook
3003437b04 Created using Colab 2025-11-18 12:48:05 -05:00
forestschao
5e726fcf4e Update 11_1_Shattered_Gradients.ipynb
Fix the comments: K is depth.
2025-11-11 17:44:46 -08:00
udlbook
6a8273459f Created using Colab 2025-11-05 14:25:21 -05:00
udlbook
1c2e19aa3b Created using Colab 2025-11-05 10:55:06 -05:00
udlbook
e818dfe054 Created using Colab 2025-11-05 10:29:00 -05:00
udlbook
4a08818706 Created using Colab 2025-11-05 09:48:13 -05:00
udlbook
16b72a8a9e Created using Colab 2025-11-05 09:43:30 -05:00
udlbook
44a3e5f678 Created using Colab 2025-08-26 07:47:09 -04:00
udlbook
a644267053 Created using Colab 2025-08-26 07:09:18 -04:00
Simon Prince
69a2b00c9d Removing Deeper Insights 2025-08-19 17:27:07 -04:00
Simon Prince
9f0570e26f Deleted Deeper Insights podcast 2025-08-19 17:25:06 -04:00
udlbook
e3a8bb9ac4 Delete notebooks/SAT_Exhaustive.ipynb 2025-07-07 11:19:56 -04:00
udlbook
49da623d86 Created using Colab 2025-07-07 11:18:57 -04:00
udlbook
0c771fd677 Add files via upload 2025-06-18 15:59:09 -04:00
udlbook
5302b32929 Created using Colab 2025-05-22 13:04:04 -04:00
udlbook
d5586e57fc Created using Colab 2025-05-22 12:12:42 -04:00
udlbook
d0acc42d81 Created using Colab 2025-05-22 12:11:38 -04:00
udlbook
f3188ac35a Created using Colab 2025-05-16 15:45:18 -04:00
udlbook
ad1b6a558b Created using Colab 2025-05-16 15:39:03 -04:00
udlbook
7eadd56eaa Created using Colab 2025-05-16 15:32:56 -04:00
udlbook
53c1357df7 Created using Colab 2025-05-16 15:21:19 -04:00
udlbook
8d862ede26 Created using Colab 2025-05-16 12:20:43 -04:00
udlbook
44bbfbed91 Created using Colab 2025-04-20 10:42:09 -04:00
udlbook
f65f0b1ddf Created using Colab 2025-04-20 10:25:38 -04:00
udlbook
1d6d6b6fbe Update SAT_Sudoku.ipynb 2025-04-20 10:24:02 -04:00
udlbook
62779ec260 Created using Colab 2025-04-20 10:23:27 -04:00
udlbook
be3edb60f9 Created using Colab 2025-04-19 16:35:23 -04:00
udlbook
b9403e091b Created using Colab 2025-04-19 16:35:10 -04:00
udlbook
2c916d9a87 Created using Colab 2025-04-19 16:30:00 -04:00
udlbook
310b71e203 Created using Colab 2025-04-19 12:48:29 -04:00
udlbook
fcb1333aed Created using Colab 2025-04-19 12:44:38 -04:00
udlbook
c39267b3b4 Created using Colab 2025-04-19 12:43:00 -04:00
udlbook
4291ed453c Created using Colab 2025-04-19 12:19:24 -04:00
udlbook
ab2ff3177a Add files via upload 2025-04-09 12:58:10 -04:00
udlbook
c2a4d40da3 Created using Colab 2025-04-04 15:15:23 -04:00
udlbook
aa75d3ad73 Created using Colab 2025-04-03 17:11:57 -04:00
udlbook
1f0c224a7d Created using Colab 2025-04-03 17:05:24 -04:00
udlbook
eb29a28284 Created using Colab 2025-04-03 16:48:48 -04:00
udlbook
7648203767 Created using Colab 2025-04-03 16:35:24 -04:00
udlbook
64e1d82d04 Created using Colab 2025-03-31 18:10:24 -04:00
udlbook
f7450d1875 Created using Colab 2025-03-31 18:07:05 -04:00
Simon Prince
884a7e358b Merge branch 'main' of https://github.com/udlbook/udlbook
commit.
2025-03-28 14:45:38 -04:00
Simon Prince
2016977f30 New release 2025-03-28 14:44:01 -04:00
udlbook
f88127c0d2 Created using Colab 2025-03-27 17:56:09 -04:00
udlbook
a637eec888 Created using Colab 2025-03-27 17:52:22 -04:00
udlbook
ddd6bf9149 Created using Colab 2025-03-27 17:47:29 -04:00
udlbook
0b41646bf3 Add files via upload 2025-03-27 12:57:57 -04:00
udlbook
16afbcdf83 Created using Colab 2025-03-24 15:35:15 -04:00
udlbook
b0add1f8e2 Merge pull request #277 from ullizen/patch-1
Update 4_2_Clipping_functions.ipynb
2025-03-24 15:31:02 -04:00
ullizen
03ebe5a039 Update 4_2_Clipping_functions.ipynb 2025-03-08 10:52:03 +01:00
udlbook
41e8262f20 Created using Colab 2025-03-04 16:39:17 -05:00
udlbook
2c6e1cb9f8 Created using Colab 2025-03-04 16:32:31 -05:00
udlbook
6c99c6b7eb Created using Colab 2025-03-04 14:31:39 -05:00
udlbook
0988ae8bd0 Merge pull request #273 from fredhsu/patch-1
Update 7_2_Backpropagation.ipynb to fix equation references
2025-03-04 14:00:59 -05:00
Fred Hsu
2cca6dec75 Update 7_2_Backpropagation.ipynb to fix equation references
Some off by one errors in the equation references.
2025-02-27 15:39:46 -08:00
udlbook
49d74b66a9 Created using Colab 2025-02-16 10:25:23 -05:00
udlbook
13c0ad30fe Merge pull request #270 from MarkGotham/main
"TO DO" > "TODO
2025-02-16 10:22:59 -05:00
udlbook
95549683c4 Created using Colab 2025-02-11 15:13:30 -05:00
Mark Gotham
9649ce382b "TO DO" > "TODO
In [commit 6072ad4](6072ad4), @KajvanRijn kindly changed all "TO DO" to "TODO" in the code blocks. That's useful. In addition, it should be changed (as here) in the instructions. Then there's no doubt or issue for anyone searching all instances.
2025-02-11 15:11:06 +00:00
udlbook
666cbb02d5 Created using Colab 2025-02-01 14:56:25 -05:00
udlbook
f0337130cb Created using Colab 2025-01-30 11:35:39 -05:00
udlbook
472571aef0 Created using Colab 2025-01-29 10:39:29 -05:00
udlbook
13b39c2f72 Created using Colab 2025-01-29 10:32:57 -05:00
udlbook
84a11d68ed Created using Colab 2025-01-29 10:29:54 -05:00
udlbook
653d2f7b84 Created using Colab 2025-01-29 10:28:29 -05:00
udlbook
a7ed3e2c34 Created using Colab 2025-01-29 10:24:36 -05:00
udlbook
40a2c3ca8b Created using Colab 2025-01-29 10:17:58 -05:00
udlbook
fb66cd682d Created using Colab 2025-01-28 11:43:39 -05:00
udlbook
88e8526fa7 Created using Colab 2025-01-28 10:59:00 -05:00
udlbook
667346fbdd Created using Colab 2025-01-28 10:57:32 -05:00
udlbook
4e564088a1 Created using Colab 2025-01-28 10:50:31 -05:00
udlbook
f1c07f53bf Created using Colab 2025-01-28 10:48:39 -05:00
udlbook
623b9782e7 Created using Colab 2025-01-28 10:36:43 -05:00
udlbook
60c5a48477 Delete Trees/LinearRegression_LeastSquares.ipynb 2025-01-27 17:40:21 -05:00
udlbook
b4688bda68 Created using Colab 2025-01-27 17:38:54 -05:00
Simon Prince
faf34e0887 fixed typo 2025-01-23 16:52:43 -05:00
Simon Prince
8f2ef53eab Merge branch 'main' of https://github.com/udlbook/udlbook
Trying to fix website problems
2025-01-23 16:25:08 -05:00
Simon Prince
2f0339341c bib file, eqns 2025-01-23 16:11:01 -05:00
udlbook
f8acbaab82 Add files via upload 2025-01-23 15:49:08 -05:00
udlbook
2aaaef0838 Delete UDL_Equations.pdf 2025-01-23 15:47:55 -05:00
udlbook
9a2039d392 Add files via upload 2025-01-23 15:40:43 -05:00
udlbook
6d76e47849 Created using Colab 2024-12-29 17:13:26 -05:00
udlbook
b5c65665b6 Update 10_4_Downsampling_and_Upsampling.ipynb 2024-12-18 09:06:15 -05:00
udlbook
dd9a56d96b Created using Colab 2024-12-16 16:06:30 -05:00
udlbook
9b71ac0487 Merge pull request #243 from aleksandrskoselevs/patch-2
Update 15_2_Wasserstein_Distance.ipynb
2024-12-02 15:52:24 -05:00
udlbook
eaff933ff7 Created using Colab 2024-12-02 15:43:55 -05:00
udlbook
c3dfe95700 Merge pull request #249 from ThePiep/fix-TODO
Change "TO DO" in comments to "TODO"
2024-12-02 15:19:54 -05:00
Kaj van Rijn
7082ae8620 Merge branch 'main' of github.com:ThePiep/udlbook-piep 2024-11-22 15:36:33 +01:00
Kaj van Rijn
6072ad4450 Change all TO DO to TODO 2024-11-22 15:34:52 +01:00
udlbook
33197fde36 Add files via upload 2024-11-21 16:45:29 -05:00
udlbook
6d425c04d4 Update 3_3_Shallow_Network_Regions.ipynb 2024-11-18 15:33:42 -05:00
udlbook
57c95132d3 Created using Colab 2024-11-12 17:11:44 -05:00
udlbook
2b0ac95740 Created using Colab 2024-11-08 12:31:21 -05:00
udlbook
d5f198f2d8 Add files via upload 2024-11-04 15:25:38 -05:00
udlbook
4edd8c923d Add files via upload 2024-10-30 16:51:41 -04:00
aleksandrskoselevs
1adb96e006 Update 15_2_Wasserstein_Distance.ipynb 2024-10-30 09:19:22 +01:00
udlbook
3801b8d52d Created using Colab 2024-10-24 16:45:43 -04:00
udlbook
dc6b346bda Created using Colab 2024-10-24 16:43:14 -04:00
udlbook
5eb264540d Created using Colab 2024-10-24 16:40:27 -04:00
udlbook
7ba844f2b5 Created using Colab 2024-10-24 16:04:27 -04:00
aleksandrskoselevs
be86733a93 Update 15_2_Wasserstein_Distance.ipynb
Scaling of the distance matrix was not mentioned in the book.
2024-10-22 12:11:15 +02:00
udlbook
d101aa428b Merge pull request #236 from aleksandrskoselevs/patch-1
Update 13_4_Graph_Attention_Networks.ipynb
2024-10-15 17:24:40 -04:00
aleksandrskoselevs
8c6e40daee Update 13_4_Graph_Attention_Networks.ipynb
`phi` is defined in the book as a column vector
2024-10-11 10:54:05 +02:00
udlbook
efafb942eb Add files via upload 2024-10-01 15:14:01 -04:00
udlbook
b10a2b6940 Delete UDL_Answer_Booklet.pdf 2024-10-01 15:13:35 -04:00
udlbook
ede7247a0c Add files via upload 2024-10-01 15:13:14 -04:00
udlbook
c3b97af456 Created using Colab 2024-09-16 09:21:22 -04:00
udlbook
e1df2156a3 Created using Colab 2024-09-16 09:19:49 -04:00
udlbook
f887835646 Created using Colab 2024-09-16 09:18:12 -04:00
udlbook
e9c8d846f2 Created using Colab 2024-09-16 07:36:27 -04:00
udlbook
b7869e8b41 Add files via upload 2024-08-28 13:01:31 -04:00
udlbook
747ec9efe1 Merge pull request #227 from aleksandrskoselevs/main
Notebook 9_5_Augmentation - Removed duplicate weight initialization
2024-08-23 18:17:17 -04:00
udlbook
58dfb0390c Merge pull request #224 from muddlebee/udlbook
fix(8.1) : error in Chap08\8_1_MNIST_1D_Performance.ipynb
2024-08-23 14:24:32 -04:00
aleksandrskoselevs
3aeb8db4cd cleaner diff 2024-08-23 10:29:52 +02:00
aleksandrskoselevs
305a055079 Revert "Remove duplicate weight initialization"
This reverts commit 87cf590af9.
2024-08-23 10:29:04 +02:00
aleksandrskoselevs
87cf590af9 Remove duplicate weight initialization 2024-08-23 09:57:38 +02:00
muddlebee
ccedbb72e7 fix(8.1) : error in Chap08\8_1_MNIST_1D_Performance.ipynb 2024-08-17 19:20:02 +05:30
muddlebee
b423a67855 fix(8.1) : error in Chap08\8_1_MNIST_1D_Performance.ipynb 2024-08-17 03:50:15 +05:30
muddlebee
3c8dab14e6 fix(8.1) : error in Chap08\8_1_MNIST_1D_Performance.ipynb 2024-08-17 03:48:56 +05:30
udlbook
ab73ae785b Add files via upload 2024-08-05 18:47:05 -04:00
udlbook
df86bbba04 Merge pull request #219 from jhrcek/jhrcek/fix-duplicate-words
Fix duplicate word occurrences in notebooks
2024-07-30 16:07:03 -04:00
udlbook
a9868e6da8 Rename README.md to src/README.md 2024-07-30 16:01:39 -04:00
Jan Hrček
fed3962bce Fix markdown headings 2024-07-30 11:25:47 +02:00
Jan Hrček
c5fafbca97 Fix duplicate word occurrences in notebooks 2024-07-30 11:16:30 +02:00
udlbook
5f16e0f9bc Fixed problem with example label. 2024-07-29 18:52:49 -04:00
udlbook
121c81a04e Update index.html 2024-07-22 18:42:22 -04:00
udlbook
e968741846 Add files via upload 2024-07-22 17:09:30 -04:00
udlbook
37011065d7 Add files via upload 2024-07-22 17:09:15 -04:00
udlbook
afd20d0364 Update 17_1_Latent_Variable_Models.ipynb 2024-07-22 15:03:17 -04:00
udlbook
0d135f1ee7 Fixed problems with MNIST1D 2024-07-19 15:55:44 -04:00
udlbook
54a020304e Merge pull request #211 from qualiaMachine/patch-1
Update 8_3_Double_Descent.ipynb
2024-07-10 15:53:00 -04:00
Chris Endemann
ccbbc4126e Update 8_3_Double_Descent.ipynb
Apologies, accidentally removed the "open in colab" button in the pull request you accepted earlier today. This corrects the mistake!
2024-07-10 14:15:21 -05:00
udlbook
d3273c99e2 Merge pull request #210 from qualiaMachine/main
Add vertical line to double descent plot indicating where count(weights) = count(train)
2024-07-10 14:33:31 -04:00
Chris Endemann
f9e45c976c Merge branch 'udlbook:main' into main 2024-07-10 09:43:18 -05:00
Chris Endemann
b005cec9c1 Update 8_3_Double_Descent.ipynb
I added a little code to include a vertical dashed line on the plot representing where total_weights = number of train observations.  I also moved n_epochs as an argument to fit_model() so learners can play around with the impact of n_epochs more easily.
2024-07-10 09:42:38 -05:00
udlbook
b8a91ad34d Merge pull request #208 from SwayStar123/patch-4
Update 12_2_Multihead_Self_Attention.ipynb
2024-07-09 17:53:31 -04:00
SwayStar123
a2a86c27bc Update 12_2_Multihead_Self_Attention.ipynb
title number is incorrect, its actually 12.2
2024-07-06 17:19:13 +05:30
110 changed files with 20621 additions and 503 deletions

View File

@@ -31,7 +31,7 @@
"source": [
"# Gradient flow\n",
"\n",
"This notebook replicates some of the results in the the Borealis AI [blog](https://www.borealisai.com/research-blogs/gradient-flow/) on gradient flow. \n"
"This notebook replicates some of the results in the Borealis AI [blog](https://www.borealisai.com/research-blogs/gradient-flow/) on gradient flow. \n"
],
"metadata": {
"id": "ucrRRJ4dq8_d"

View File

@@ -166,7 +166,7 @@
{
"cell_type": "markdown",
"source": [
"Routines to calculate the empirical and analytical NTK (i.e. the NTK with infinite hidden units) for the the shallow network"
"Routines to calculate the empirical and analytical NTK (i.e. the NTK with infinite hidden units) for the shallow network"
],
"metadata": {
"id": "mxW8E5kYIzlj"

View File

@@ -0,0 +1,432 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Blogs/BorealisODENumerical.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JXsO7ce7oqeq"
},
"source": [
"# Numerical methods for ODEs\n",
"\n",
"This blog contains code that accompanies the RBC Borealis blog on numerical methods for ODEs. Contact udlbookmail@gmail.com if you find any problems."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AnvAKtP_oqes"
},
"source": [
"Import relevant libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UF-gJyZggyrl"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "szWLVrSSoqet"
},
"source": [
"Define the ODE that we will be experimenting with."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NkrGZLL6iM3P"
},
"outputs": [],
"source": [
"# The ODE that we will experiment with\n",
"def ode_lin_homog(t,x):\n",
" return 0.5 * x ;\n",
"\n",
"# The derivative of the ODE function with respect to x (needed for Taylor's method)\n",
"def ode_lin_homog_deriv_x(t,x):\n",
" return 0.5 ;\n",
"\n",
"# The derivative of the ODE function with respect to t (needed for Taylor's method)\n",
"def ode_lin_homog_deriv_t(t,x):\n",
" return 0.0 ;\n",
"\n",
"# The closed form solution (so we can measure the error)\n",
"def ode_lin_homog_soln(t,C=0.5):\n",
" return C * np.exp(0.5 * t) ;"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "In1C9wZkoqet"
},
"source": [
"This is a generic method that runs the numerical methods. It takes the initial conditions ($t_0$, $x_0$), the final time $t_1$ and the step size $h$. It also takes the ODE function itself and its derivatives (only used for Taylor's method). Finally, the parameter \"step_function\" is the method used to update (e.g., Euler's methods, Runge-Kutte 4-step)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VZfZDJAfmyrf"
},
"outputs": [],
"source": [
"def run_numerical(x_0, t_0, t_1, h, ode_func, ode_func_deriv_x, ode_func_deriv_t, ode_soln, step_function):\n",
" x = [x_0]\n",
" t = [t_0]\n",
" while (t[-1] <= t_1):\n",
" x = x+[step_function(x[-1],t[-1],h, ode_func, ode_func_deriv_x, ode_func_deriv_t)]\n",
" t = t + [t[-1]+h]\n",
"\n",
" # Returns x,y plot plus total numerical error at last point.\n",
" return t, x, np.abs(ode_soln(t[-1])-x[-1])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Vfkc3-_7oqet"
},
"source": [
"Run the numerical method with step sizes of 2.0, 1.0, 0.5, 0.25, 0.125, 0.0675 and plot the results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1tyGbMZhoqeu"
},
"outputs": [],
"source": [
"def run_and_plot(ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function):\n",
" # Specify the grid of points to draw the ODE\n",
" t = np.arange(0.04, 4.0, 0.2)\n",
" x = np.arange(0.04, 4.0, 0.2)\n",
" T, X = np.meshgrid(t,x)\n",
"\n",
" # ODE equation at these grid points (used to draw quiver-plot)\n",
" dx = ode(T,X)\n",
" dt = np.ones(dx.shape)\n",
"\n",
" # The ground truth solution\n",
" t2= np.arange(0,10,0.1)\n",
" x2 = ode_solution(t2)\n",
"\n",
" #####################################x_0, t_0, t_1, h #################################################\n",
" t_sim1,x_sim1,error1 = run_numerical(0.5, 0.0, 4.0, 2.0000, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim2,x_sim2,error2 = run_numerical(0.5, 0.0, 4.0, 1.0000, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim3,x_sim3,error3 = run_numerical(0.5, 0.0, 4.0, 0.5000, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim4,x_sim4,error4 = run_numerical(0.5, 0.0, 4.0, 0.2500, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim5,x_sim5,error5 = run_numerical(0.5, 0.0, 4.0, 0.1250, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim6,x_sim6,error6 = run_numerical(0.5, 0.0, 4.0, 0.0675, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
"\n",
" # Plot the ODE and ground truth solution\n",
" fig,ax = plt.subplots()\n",
" ax.quiver(T,X,dt,dx, scale=35.0)\n",
" ax.plot(t2,x2,'r-')\n",
"\n",
" # Plot the numerical approximations\n",
" ax.plot(t_sim1,x_sim1,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim2,x_sim2,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim3,x_sim3,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim4,x_sim4,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim5,x_sim5,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim6,x_sim6,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
"\n",
" ax.set_aspect('equal')\n",
" ax.set_xlim(0,4)\n",
" ax.set_ylim(0,4)\n",
"\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JYrq8QIwvOIy"
},
"source": [
"# Euler Method\n",
"\n",
"Define the Euler method and set up functions for plotting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "N73xMnCukVVX"
},
"outputs": [],
"source": [
"def euler_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" return x_0 + h * ode_func(t_0, x_0) ;"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4B1_PGEcsZ9H"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, euler_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FfwNihtkvJeX"
},
"source": [
"# Heun's Method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "srHfNDcDxI1o"
},
"outputs": [],
"source": [
"def heun_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" f_x0_t0 = ode_func(t_0, x_0)\n",
" return x_0 + h/2 * ( f_x0_t0 + ode_func(t_0+h, x_0+h*f_x0_t0)) ;"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WOApHz9xoqev"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, heun_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0XSzzFDIvRhm"
},
"source": [
"# Modified Euler method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fSXprgVJ5Yep"
},
"outputs": [],
"source": [
"def modified_euler_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" f_x0_t0 = ode_func(t_0, x_0)\n",
" return x_0 + h * ode_func(t_0+h/2, x_0+ h * f_x0_t0/2) ;"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8LKSrCD2oqev"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, modified_euler_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yp8ZBpwooqev"
},
"source": [
"# Second order Taylor's method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NtBBgzWLoqev"
},
"outputs": [],
"source": [
"def taylor_2nd_order(x_0, t_0, h, ode_func, ode_func_deriv_x, ode_func_deriv_t):\n",
" f1 = ode_func(t_0, x_0)\n",
" return x_0 + h * ode_func(t_0, x_0) + (h*h/2) * (ode_func_deriv_x(t_0,x_0) * ode_func(t_0, x_0) + ode_func_deriv_t(t_0, x_0))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ioeeIohUoqev"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, taylor_2nd_order)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WcuhV5lL1zAJ"
},
"source": [
"# Fourth Order Runge Kutta"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0NZN81Bpwu56"
},
"outputs": [],
"source": [
"def runge_kutta_4_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" f1 = ode_func(t_0, x_0)\n",
" f2 = ode_func(t_0+h/2,x_0+f1 * h/2)\n",
" f3 = ode_func(t_0+h/2,x_0+f2 * h/2)\n",
" f4 = ode_func(t_0+h, x_0+ f3*h)\n",
" return x_0 + (h/6) * (f1 + 2*f2 + 2*f3+f4)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K-OxE9E6oqew"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, runge_kutta_4_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7JifxBhhoqew"
},
"source": [
"# Plot the error as a function of step size"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZoEpmlCfsi9P"
},
"outputs": [],
"source": [
"# Run systematically with a number of different step sizes and store errors for each\n",
"def get_errors(ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function):\n",
" # Choose the step size h to divide the plotting interval into 1,2,4,8... segments.\n",
" # The plots in the article add a few more smaller step sizes, but this takes a while to compute.\n",
" # Add them back in if you want the full plot.\n",
" all_h = (1./np.array([1,2,4,8,16,32,64,128,256,512,1024,2048,4096])).tolist()\n",
" all_err = []\n",
"\n",
" for i in range(len(all_h)):\n",
" t_sim,x_sim,err = run_numerical(0.5, 0.0, 4.0, all_h[i], ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" all_err = all_err + [err]\n",
"\n",
" return all_h, all_err"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X0O0KK47xF28"
},
"outputs": [],
"source": [
"# Plot the errors\n",
"all_h, all_err_euler = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, euler_step)\n",
"all_h, all_err_heun = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, heun_step)\n",
"all_h, all_err_mod_euler = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, modified_euler_step)\n",
"all_h, all_err_taylor = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, taylor_2nd_order)\n",
"all_h, all_err_rk = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, runge_kutta_4_step)\n",
"\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.loglog(all_h, all_err_euler,'ro-')\n",
"ax.loglog(all_h, all_err_heun,'bo-')\n",
"ax.loglog(all_h, all_err_mod_euler,'go-')\n",
"ax.loglog(all_h, all_err_taylor,'co-')\n",
"ax.loglog(all_h, all_err_rk,'mo-')\n",
"ax.set_ylim(1e-13,1e1)\n",
"ax.set_xlim(1e-6,1e1)\n",
"ax.set_aspect(0.5)\n",
"ax.set_xlabel('Step size, $h$')\n",
"ax.set_ylabel('Error')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BttOqpeo9MsJ"
},
"source": [
"Note that for this ODE, the Heun, Modified Euler and Taylor methods provide EXACTLY the same updates, and so the error curves for all three are identical (subject to difference is numerical rounding errors). This is not in general the case, although the general trend would be the same for each."
]
}
],
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
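A quick check of the claim in the notebook's final cell: for this particular ODE, $f(t,x) = x/2$, all three second-order methods reduce algebraically to the same update (a sketch of the algebra, not part of the committed file):

\begin{align*}
\text{Heun:} \quad & x + \tfrac{h}{2}\Bigl[\tfrac{x}{2} + \tfrac{1}{2}\bigl(x + \tfrac{hx}{2}\bigr)\Bigr] = x\Bigl(1 + \tfrac{h}{2} + \tfrac{h^2}{8}\Bigr) \\
\text{Modified Euler:} \quad & x + h \cdot \tfrac{1}{2}\Bigl(x + \tfrac{h}{2}\cdot\tfrac{x}{2}\Bigr) = x\Bigl(1 + \tfrac{h}{2} + \tfrac{h^2}{8}\Bigr) \\
\text{Taylor (2nd order):} \quad & x + h\cdot\tfrac{x}{2} + \tfrac{h^2}{2}\cdot\tfrac{1}{2}\cdot\tfrac{x}{2} = x\Bigl(1 + \tfrac{h}{2} + \tfrac{h^2}{8}\Bigr)
\end{align*}

Each matches the second-order truncation of the exact factor $e^{h/2}$, which is why the three error curves coincide in the final plot.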

View File

@@ -128,7 +128,7 @@
"\n",
"In part (b) of the practical we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. You will find that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1). Double weird.\n",
"\n",
"Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
"Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
],
"metadata": {
"id": "b2FYKV1SL4Z7"

View File

@@ -199,7 +199,7 @@
{
"cell_type": "markdown",
"source": [
"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the the likelihood and the negative log likelihood."
"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the likelihood and the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"

View File

@@ -218,7 +218,7 @@
{
"cell_type": "markdown",
"source": [
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dotsmand the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dotsmand the blue curve to be high where there are blue dots We'll compute the likelihood and the negative log likelihood."
],
"metadata": {
"id": "MvVX6tl9AEXF"

View File

@@ -128,7 +128,7 @@
"\n",
"In part (b) of the practical we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. You will find that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1). Double weird.\n",
"\n",
"Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
"Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
],
"metadata": {
"id": "b2FYKV1SL4Z7"

View File

@@ -214,7 +214,7 @@
{
"cell_type": "code",
"source": [
"# Compute the derivative of the the loss with respect to the function output f_val\n",
"# Compute the derivative of the loss with respect to the function output f_val\n",
"def dl_df(f_val,y):\n",
" # Compute sigmoid of network output\n",
" sig_f_val = sig(f_val)\n",

View File

@@ -295,7 +295,7 @@
"\n",
"Throughout the book, we'll be using some special functions (see Appendix B.1.3). The most important of these are the logarithm and exponential functions. Let's investigate their properties.\n",
"\n",
"We'll start with the exponential function $y=\\exp[x]=e^x$ which maps the real line $[-\\infty,+\\infty]$ to non-negative numbers $[0,+\\infty]$."
"We'll start with the exponential function $y=\\exp[x]=e^x$ which maps the real line $(-\\infty,+\\infty)$ to positive numbers $(0,+\\infty)$."
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNioITtfAcfxEfM3UOfQyb9",
"include_colab_link": true
},
"kernelspec": {
@@ -62,7 +61,7 @@
"source": [
"The number of regions $N$ created by a shallow neural network with $D_i$ inputs and $D$ hidden units is given by Zaslavsky's formula:\n",
"\n",
"\\begin{equation}N = \\sum_{j=0}^{D_{i}}\\binom{D}{j}=\\sum_{j=0}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} <br>\n",
"\\begin{equation}N = \\sum_{j=0}^{D_{i}}\\binom{D}{j}=\\sum_{j=0}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} \n",
"\n"
],
"metadata": {
@@ -221,7 +220,7 @@
{
"cell_type": "code",
"source": [
"# Now let's plot the graph from figure 3.9a (takes ~1min)\n",
"# Now let's plot the graph from figure 3.9b (takes ~1min)\n",
"dims = np.array([1,5,10,50,100])\n",
"regions = np.zeros((dims.shape[0], 200))\n",
"params = np.zeros((dims.shape[0], 200))\n",

View File

@@ -169,7 +169,7 @@
{
"cell_type": "code",
"source": [
"# Define parameters (note first dimension of theta and phi is padded to make indices match\n",
"# Define parameters (note first dimension of theta and psi is padded to make indices match\n",
"# notation in book)\n",
"theta = np.zeros([4,2])\n",
"psi = np.zeros([4,4])\n",

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyO2DaD75p+LGi7WgvTzjrk1",
"include_colab_link": true
},
"kernelspec": {
@@ -31,7 +30,7 @@
"source": [
"# **Notebook 4.3 Deep neural networks**\n",
"\n",
"This network investigates converting neural networks to matrix form.\n",
"This notebook investigates converting neural networks to matrix form.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
@@ -150,7 +149,7 @@
{
"cell_type": "markdown",
"source": [
"Now we'll define the same neural network, but this time, we will use matrix form. When you get this right, it will draw the same plot as above."
"Now we'll define the same neural network, but this time, we will use matrix form as in equation 4.15. When you get this right, it will draw the same plot as above."
],
"metadata": {
"id": "XCJqo_AjfAra"
@@ -176,8 +175,8 @@
"n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
"\n",
"# This runs the network for ALL of the inputs, x at once so we can draw graph\n",
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,n1_in_mat))\n",
"n1_out = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1)\n",
"h1 = ReLU(beta_0 + np.matmul(Omega_0,n1_in_mat))\n",
"n1_out = beta_1 + np.matmul(Omega_1,h1)\n",
"\n",
"# Draw the network and check that it looks the same as the non-matrix case\n",
"plot_neural(n1_in, n1_out)"
@@ -247,9 +246,9 @@
"n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
"\n",
"# This runs the network for ALL of the inputs, x at once so we can draw graph (hence extra np.ones term)\n",
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,n1_in_mat))\n",
"h2 = ReLU(np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1))\n",
"n1_out = np.matmul(beta_2,np.ones((1,n_data))) + np.matmul(Omega_2,h2)\n",
"h1 = ReLU(beta_0 + np.matmul(Omega_0,n1_in_mat))\n",
"h2 = ReLU(beta_1 + np.matmul(Omega_1,h1))\n",
"n1_out = beta_2 + np.matmul(Omega_2,h2)\n",
"\n",
"# Draw the network and check that it looks the same as the non-matrix version\n",
"plot_neural(n1_in, n1_out)"
@@ -291,10 +290,10 @@
"\n",
"\n",
"# If you set the parameters to the correct sizes, the following code will run\n",
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,x));\n",
"h2 = ReLU(np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1));\n",
"h3 = ReLU(np.matmul(beta_2,np.ones((1,n_data))) + np.matmul(Omega_2,h2));\n",
"y = np.matmul(beta_3,np.ones((1,n_data))) + np.matmul(Omega_3,h3)\n",
"h1 = ReLU(beta_0 + np.matmul(Omega_0,x));\n",
"h2 = ReLU(beta_1 + np.matmul(Omega_1,h1));\n",
"h3 = ReLU(beta_2 + np.matmul(Omega_2,h2));\n",
"y = beta_3 + np.matmul(Omega_3,h3)\n",
"\n",
"if h1.shape[0] is not D_1 or h1.shape[1] is not n_data:\n",
" print(\"h1 is wrong shape\")\n",

View File

@@ -211,7 +211,7 @@
"id": "MvVX6tl9AEXF"
},
"source": [
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the likelihood and the negative log likelihood."
]
},
{
@@ -236,11 +236,10 @@
},
"outputs": [],
"source": [
"# Let's double check we get the right answer before proceeding\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.2,categorical_distribution(np.array([[0]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.5,categorical_distribution(np.array([[1]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.3,categorical_distribution(np.array([[2]]),np.array([[0.2],[0.5],[0.3]]))))\n",
"\n"
"# Here are three examples\n",
"print(categorical_distribution(np.array([[0]]),np.array([[0.2],[0.5],[0.3]])))\n",
"print(categorical_distribution(np.array([[1]]),np.array([[0.2],[0.5],[0.3]])))\n",
"print(categorical_distribution(np.array([[2]]),np.array([[0.2],[0.5],[0.3]])))"
]
},
{

View File

@@ -130,7 +130,8 @@
"\n",
" print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
"\n",
" # Rule #1 If the HEIGHT at point A is less than the HEIGHT at points B, C, and D then halve values of B, C, and D\n",
" # Rule #1 If the HEIGHT at point A is less than the HEIGHT at points B, C, and D then move them to they are half\n",
" # as far from A as they start\n",
" # i.e. bring them closer to the original point\n",
" # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
" if (0):\n",

View File

@@ -1,18 +1,16 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_2_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "el8l05WQEO46"
@@ -111,7 +109,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "QU5mdGvpTtEG"
@@ -140,7 +137,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "eB5DQvU5hYNx"
@@ -162,7 +158,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "F3trnavPiHpH"
@@ -218,7 +213,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "s9Duf05WqqSC"
@@ -252,7 +246,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RS1nEcYVuEAM"
@@ -290,7 +283,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "5EIjMM9Fw2eT"
@@ -309,7 +301,7 @@
"source": [
"def loss_function_1D(dist_prop, data, model, phi_start, search_direction):\n",
" # Return the loss after moving this far\n",
" return compute_loss(data[0,:], data[1,:], model, phi_start+ search_direction * dist_prop)\n",
" return compute_loss(data[0,:], data[1,:], model, phi_start - search_direction * dist_prop)\n",
"\n",
"def line_search(data, model, phi, gradient, thresh=.00001, max_dist = 0.1, max_iter = 15, verbose=False):\n",
" # Initialize four points along the range we are going to search\n",
@@ -333,11 +325,11 @@
" print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
" print('a %f, b%f, c%f, d%f'%(lossa,lossb,lossc,lossd))\n",
"\n",
" # Rule #1 If point A is less than points B, C, and D then halve points B,C, and D\n",
" # Rule #1 If point A is less than points B, C, and D then halve distance from A to points B,C, and D\n",
" if np.argmin((lossa,lossb,lossc,lossd))==0:\n",
" b = b/2\n",
" c = c/2\n",
" d = d/2\n",
" b = a+ (b-a)/2\n",
" c = a+ (c-a)/2\n",
" d = a+ (d-a)/2\n",
" continue;\n",
"\n",
" # Rule #2 If point b is less than point c then\n",
@@ -373,7 +365,7 @@
"def gradient_descent_step(phi, data, model):\n",
" # TODO -- update Phi with the gradient descent step (equation 6.3)\n",
" # 1. Compute the gradient (you wrote this function above)\n",
" # 2. Find the best step size alpha using line search function (above) -- use negative gradient as going downhill\n",
" # 2. Find the best step size alpha using line search function (above)\n",
" # 3. Update the parameters phi based on the gradient and the step size alpha.\n",
"\n",
" return phi"
@@ -412,8 +404,8 @@
],
"metadata": {
"colab": {
"include_colab_link": true,
"provenance": []
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",

View File

@@ -1,18 +1,16 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_3_Stochastic_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "el8l05WQEO46"
@@ -122,7 +120,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "QU5mdGvpTtEG"
@@ -150,7 +147,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "eB5DQvU5hYNx"
@@ -172,7 +168,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "F3trnavPiHpH"
@@ -228,7 +223,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "s9Duf05WqqSC"
@@ -279,7 +273,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "RS1nEcYVuEAM"
@@ -316,7 +309,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "5EIjMM9Fw2eT"
@@ -359,11 +351,11 @@
" print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
" print('a %f, b%f, c%f, d%f'%(lossa,lossb,lossc,lossd))\n",
"\n",
" # Rule #1 If point A is less than points B, C, and D then halve points B,C, and D\n",
" # Rule #1 If point A is less than points B, C, and D then change B,C,D so they are half their current distance from A\n",
" if np.argmin((lossa,lossb,lossc,lossd))==0:\n",
" b = b/2\n",
" c = c/2\n",
" d = d/2\n",
" b = a+ (b-a)/2\n",
" c = a+ (c-a)/2\n",
" d = a+ (d-a)/2\n",
" continue;\n",
"\n",
" # Rule #2 If point b is less than point c then\n",
@@ -577,9 +569,8 @@
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNk5FN4qlw3pk8BwDVWw1jN",
"include_colab_link": true,
"provenance": []
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyM2kkHLr00J4Jeypw41sTkQ",
"include_colab_link": true
},
"kernelspec": {
@@ -68,7 +67,7 @@
"# Set seed so we always get the same random numbers\n",
"np.random.seed(0)\n",
"\n",
"# Number of layers\n",
"# Number of hidden layers\n",
"K = 5\n",
"# Number of neurons per layer\n",
"D = 6\n",
@@ -115,7 +114,7 @@
{
"cell_type": "markdown",
"source": [
"Now let's run our random network. The weight matrices $\\boldsymbol\\Omega_{1\\ldots K}$ are the entries of the list \"all_weights\" and the biases $\\boldsymbol\\beta_{1\\ldots K}$ are the entries of the list \"all_biases\"\n",
"Now let's run our random network. The weight matrices $\\boldsymbol\\Omega_{0\\ldots K}$ are the entries of the list \"all_weights\" and the biases $\\boldsymbol\\beta_{0\\ldots K}$ are the entries of the list \"all_biases\"\n",
"\n",
"We know that we will need the preactivations $\\mathbf{f}_{0\\ldots K}$ and the activations $\\mathbf{h}_{1\\ldots K}$ for the forward pass of backpropagation, so we'll store and return these as well.\n"
],
@@ -142,7 +141,7 @@
"\n",
" # Run through the layers, calculating all_f[0...K-1] and all_h[1...K]\n",
" for layer in range(K):\n",
" # Update preactivations and activations at this layer according to eqn 7.16\n",
" # Update preactivations and activations at this layer according to eqn 7.17\n",
" # Remember to use np.matmul for matrix multiplications\n",
" # TODO -- Replace the lines below\n",
" all_f[layer] = all_h[layer]\n",
@@ -230,8 +229,8 @@
"# We'll need the indicator function\n",
"def indicator_function(x):\n",
" x_in = np.array(x)\n",
" x_in[x_in>=0] = 1\n",
" x_in[x_in<0] = 0\n",
" x_in[x_in>0] = 1\n",
" x_in[x_in<=0] = 0\n",
" return x_in\n",
"\n",
"# Main backward pass routine\n",
@@ -249,23 +248,23 @@
"\n",
" # Now work backwards through the network\n",
" for layer in range(K,-1,-1):\n",
" # TODO Calculate the derivatives of the loss with respect to the biases at layer from all_dl_df[layer]. (eq 7.21)\n",
" # TODO Calculate the derivatives of the loss with respect to the biases at layer from all_dl_df[layer]. (eq 7.22)\n",
" # NOTE! To take a copy of matrix X, use Z=np.array(X)\n",
" # REPLACE THIS LINE\n",
" all_dl_dbiases[layer] = np.zeros_like(all_biases[layer])\n",
"\n",
" # TODO Calculate the derivatives of the loss with respect to the weights at layer from all_dl_df[layer] and all_h[layer] (eq 7.22)\n",
" # TODO Calculate the derivatives of the loss with respect to the weights at layer from all_dl_df[layer] and all_h[layer] (eq 7.23)\n",
" # Don't forget to use np.matmul\n",
" # REPLACE THIS LINE\n",
" all_dl_dweights[layer] = np.zeros_like(all_weights[layer])\n",
"\n",
" # TODO: calculate the derivatives of the loss with respect to the activations from weight and derivatives of next preactivations (second part of last line of eq 7.24)\n",
" # TODO: calculate the derivatives of the loss with respect to the activations from weight and derivatives of next preactivations (second part of last line of eq 7.25)\n",
" # REPLACE THIS LINE\n",
" all_dl_dh[layer] = np.zeros_like(all_h[layer])\n",
"\n",
"\n",
" if layer > 0:\n",
" # TODO Calculate the derivatives of the loss with respect to the pre-activation f (use derivative of ReLu function, first part of last line of eq. 7.24)\n",
" # TODO Calculate the derivatives of the loss with respect to the pre-activation f (use derivative of ReLu function, first part of last line of eq. 7.25)\n",
" # REPLACE THIS LINE\n",
" all_dl_df[layer-1] = np.zeros_like(all_f[layer-1])\n",
"\n",
@@ -300,7 +299,7 @@
"delta_fd = 0.000001\n",
"\n",
"# Test the dervatives of the bias vectors\n",
"for layer in range(K):\n",
"for layer in range(K+1):\n",
" dl_dbias = np.zeros_like(all_dl_dbiases[layer])\n",
" # For every element in the bias\n",
" for row in range(all_biases[layer].shape[0]):\n",
@@ -324,7 +323,7 @@
"\n",
"\n",
"# Test the derivatives of the weights matrices\n",
"for layer in range(K):\n",
"for layer in range(K+1):\n",
" dl_dweight = np.zeros_like(all_dl_dweights[layer])\n",
" # For every element in the bias\n",
" for row in range(all_weights[layer].shape[0]):\n",

View File

@@ -325,7 +325,7 @@
" for layer in range(1,K):\n",
" aggregate_dl_df[layer][:,c_data] = np.squeeze(all_dl_df[layer])\n",
"\n",
"for layer in range(1,K):\n",
"for layer in reversed(range(1,K)):\n",
" print(\"Layer %d, std of dl_dh = %3.3f\"%(layer, np.std(aggregate_dl_df[layer].ravel())))\n"
],
"metadata": {

View File

@@ -1,28 +1,10 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"gpuType": "T4",
"authorship_tag": "ABX9TyOuKMUcKfOIhIL2qTX9jJCy",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap08/8_1_MNIST_1D_Performance.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -30,6 +12,9 @@
},
{
"cell_type": "markdown",
"metadata": {
"id": "L6chybAVFJW2"
},
"source": [
"# **Notebook 8.1: MNIST_1D_Performance**\n",
"\n",
@@ -38,25 +23,27 @@
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
],
"metadata": {
"id": "L6chybAVFJW2"
}
]
},
{
"cell_type": "code",
"source": [
"# Run this if you're in a Colab to install MNIST 1D repository\n",
"%pip install git+https://github.com/greydanus/mnist1d"
],
"execution_count": null,
"metadata": {
"id": "ifVjS4cTOqKz"
},
"execution_count": null,
"outputs": []
"outputs": [],
"source": [
"# Run this if you're in a Colab to install MNIST 1D repository\n",
"%pip install git+https://github.com/greydanus/mnist1d"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qyE7G1StPIqO"
},
"outputs": [],
"source": [
"import torch, torch.nn as nn\n",
"from torch.utils.data import TensorDataset, DataLoader\n",
@@ -64,44 +51,42 @@
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import mnist1d"
],
"metadata": {
"id": "qyE7G1StPIqO"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"source": [
"Let's generate a training and test dataset using the MNIST1D code. The dataset gets saved as a .pkl file so it doesn't have to be regenerated each time."
],
"metadata": {
"id": "F7LNq72SP6jO"
}
},
"source": [
"Let's generate a training and test dataset using the MNIST1D code. The dataset gets saved as a .pkl file so it doesn't have to be regenerated each time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YLxf7dJfPaqw"
},
"outputs": [],
"source": [
"!mkdir ./sample_data\n",
"\n",
"args = mnist1d.data.get_dataset_args()\n",
"data = mnist1d.data.get_dataset(args, path='./sample_data/mnist1d_data.pkl', download=False, regenerate=False)\n",
"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
"\n",
"# The training and test input and outputs are in\n",
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
],
"metadata": {
"id": "YLxf7dJfPaqw"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FxaB5vc0uevl"
},
"outputs": [],
"source": [
"D_i = 40 # Input dimensions\n",
"D_k = 100 # Hidden dimensions\n",
@@ -122,15 +107,15 @@
"\n",
"# Call the function you just defined\n",
"model.apply(weights_init)\n"
],
"metadata": {
"id": "FxaB5vc0uevl"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_rX6N3VyyQTY"
},
"outputs": [],
"source": [
"# choose cross entropy loss function (equation 5.24)\n",
"loss_function = torch.nn.CrossEntropyLoss()\n",
@@ -139,9 +124,9 @@
"# object that decreases learning rate by half every 10 epochs\n",
"scheduler = StepLR(optimizer, step_size=10, gamma=0.5)\n",
"x_train = torch.tensor(data['x'].astype('float32'))\n",
"y_train = torch.tensor(data['y'].transpose().astype('long'))\n",
"y_train = torch.tensor(data['y'].transpose().astype('int64'))\n",
"x_test= torch.tensor(data['x_test'].astype('float32'))\n",
"y_test = torch.tensor(data['y_test'].astype('long'))\n",
"y_test = torch.tensor(data['y_test'].astype('int64'))\n",
"\n",
"# load the data into a class that creates the batches\n",
"data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
@@ -186,15 +171,15 @@
"\n",
" # tell scheduler to consider updating learning rate\n",
" scheduler.step()"
],
"metadata": {
"id": "_rX6N3VyyQTY"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yI-l6kA_EH9G"
},
"outputs": [],
"source": [
"# Plot the results\n",
"fig, ax = plt.subplots()\n",
@@ -215,25 +200,38 @@
"ax.set_title('Train loss %3.2f, Test loss %3.2f'%(losses_train[-1],losses_test[-1]))\n",
"ax.legend()\n",
"plt.show()"
],
"metadata": {
"id": "yI-l6kA_EH9G"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "q-yT6re6GZS4"
},
"source": [
"**TODO**\n",
"\n",
"Play with the model -- try changing the number of layers, hidden units, learning rate, batch size, momentum or anything else you like. See if you can improve the test results.\n",
"\n",
"Is it a good idea to optimize the hyperparameters in this way? Will the final result be a good estimate of the true test performance?"
],
"metadata": {
"id": "q-yT6re6GZS4"
}
}
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"authorship_tag": "ABX9TyOuKMUcKfOIhIL2qTX9jJCy",
"gpuType": "T4",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -293,7 +293,8 @@
"cell_type": "code",
"source": [
"# Plot the noise, bias and variance as a function of capacity\n",
"hidden_variables = [1,2,3,4,5,6,7,8,9,10,11,12]\n",
"n_hidden = 12\n",
"hidden_variables = list(range(1, n_hidden + 1))\n",
"bias = np.zeros((len(hidden_variables),1)) ;\n",
"variance = np.zeros((len(hidden_variables),1)) ;\n",
"\n",
@@ -321,7 +322,7 @@
"ax.plot(hidden_variables, variance, 'k-')\n",
"ax.plot(hidden_variables, bias, 'r-')\n",
"ax.plot(hidden_variables, variance+bias, 'g-')\n",
"ax.set_xlim(0,12)\n",
"ax.set_xlim(0,n_hidden)\n",
"ax.set_ylim(0,0.5)\n",
"ax.set_xlabel(\"Model capacity\")\n",
"ax.set_ylabel(\"Variance\")\n",
@@ -333,15 +334,6 @@
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "WKUyOAywL_b2"
},
"execution_count": null,
"outputs": []
}
]
}

View File

@@ -99,7 +99,7 @@
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
"print(\"Dimensionality of each example: {}\".format(data['x'].shape[-1]))"
],
"metadata": {
"id": "PW2gyXL5UkLU"
@@ -147,7 +147,7 @@
{
"cell_type": "code",
"source": [
"def fit_model(model, data):\n",
"def fit_model(model, data, n_epoch):\n",
"\n",
" # choose cross entropy loss function (equation 5.24)\n",
" loss_function = torch.nn.CrossEntropyLoss()\n",
@@ -164,9 +164,6 @@
" # load the data into a class that creates the batches\n",
" data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
"\n",
" # loop over the dataset n_epoch times\n",
" n_epoch = 1000\n",
"\n",
" for epoch in range(n_epoch):\n",
" # loop over batches\n",
" for i, batch in enumerate(data_loader):\n",
@@ -203,6 +200,18 @@
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def count_parameters(model):\n",
" return sum(p.numel() for p in model.parameters() if p.requires_grad)"
],
"metadata": {
"id": "AQNCmFNV6JpV"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
@@ -226,19 +235,27 @@
"# This code will take a while (~30 mins on GPU) to run! Go and make a cup of coffee!\n",
"\n",
"hidden_variables = np.array([2,4,6,8,10,14,18,22,26,30,35,40,45,50,55,60,70,80,90,100,120,140,160,180,200,250,300,400]) ;\n",
"\n",
"errors_train_all = np.zeros_like(hidden_variables)\n",
"errors_test_all = np.zeros_like(hidden_variables)\n",
"total_weights_all = np.zeros_like(hidden_variables)\n",
"\n",
"# loop over the dataset n_epoch times\n",
"n_epoch = 1000\n",
"\n",
"# For each hidden variable size\n",
"for c_hidden in range(len(hidden_variables)):\n",
" print(f'Training model with {hidden_variables[c_hidden]:3d} hidden variables')\n",
" # Get a model\n",
" model = get_model(hidden_variables[c_hidden]) ;\n",
" # Count and store number of weights\n",
" total_weights_all[c_hidden] = count_parameters(model)\n",
" # Train the model\n",
" errors_train, errors_test = fit_model(model, data)\n",
" errors_train, errors_test = fit_model(model, data, n_epoch)\n",
" # Store the results\n",
" errors_train_all[c_hidden] = errors_train\n",
" errors_test_all[c_hidden]= errors_test"
" errors_test_all[c_hidden]= errors_test\n",
"\n"
],
"metadata": {
"id": "K4OmBZGHWXpk"
@@ -249,12 +266,29 @@
{
"cell_type": "code",
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"# Assuming data['y'] is available and contains the training examples\n",
"num_training_examples = len(data['y'])\n",
"\n",
"# Find the index where total_weights_all is closest to num_training_examples\n",
"closest_index = np.argmin(np.abs(np.array(total_weights_all) - num_training_examples))\n",
"\n",
"# Get the corresponding value of hidden variables\n",
"hidden_variable_at_num_training_examples = hidden_variables[closest_index]\n",
"\n",
"# Plot the results\n",
"fig, ax = plt.subplots()\n",
"ax.plot(hidden_variables, errors_train_all, 'r-', label='train')\n",
"ax.plot(hidden_variables, errors_test_all, 'b-', label='test')\n",
"ax.set_ylim(0,100);\n",
"ax.set_xlabel('No hidden variables'); ax.set_ylabel('Error')\n",
"\n",
"# Add a vertical line at the point where total weights equal the number of training examples\n",
"ax.axvline(x=hidden_variable_at_num_training_examples, color='g', linestyle='--', label='N(weights) = N(train)')\n",
"\n",
"ax.set_ylim(0, 100)\n",
"ax.set_xlabel('No. hidden variables')\n",
"ax.set_ylabel('Error')\n",
"ax.legend()\n",
"plt.show()\n"
],
@@ -263,6 +297,24 @@
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "KT4X8_hE5NFb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "iGKZSfVF2r4z"
},
"execution_count": null,
"outputs": []
}
]
}
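
One caveat in the training loop above, worth noting alongside the new `count_parameters` helper: `np.zeros_like(hidden_variables)` inherits the integer dtype of `hidden_variables`, so fractional error percentages are silently truncated when stored. A minimal sketch of the bookkeeping with explicit float arrays (the two-layer model below is a hypothetical stand-in for the notebook's `get_model`):

```python
import numpy as np
import torch

def count_parameters(model):
    # Count only trainable parameters, as in the notebook's helper
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical stand-in for get_model(n_hidden)
model = torch.nn.Sequential(torch.nn.Linear(40, 10), torch.nn.ReLU(), torch.nn.Linear(10, 10))
print(count_parameters(model))  # (40*10 + 10) + (10*10 + 10) = 520

hidden_variables = np.array([2, 4, 6, 8])
# Explicit float dtype so stored errors keep their fractional part
errors_train_all = np.zeros(len(hidden_variables), dtype=float)
errors_test_all = np.zeros(len(hidden_variables), dtype=float)
```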

View File

@@ -134,7 +134,7 @@
"source": [
"# Volume of a hypersphere\n",
"\n",
"In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
"In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
],
"metadata": {
"id": "b2FYKV1SL4Z7"

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPJzymRTuvoWggIskM2Kamc",
"include_colab_link": true
},
"kernelspec": {
@@ -458,14 +457,14 @@
{
"cell_type": "code",
"source": [
"def dldphi0(phi, lambda_):\n",
"def dregdphi0(phi, lambda_):\n",
" # TODO compute the derivative with respect to phi0\n",
" # Replace this line:]\n",
" deriv = 0\n",
"\n",
" return deriv\n",
"\n",
"def dldphi1(phi, lambda_):\n",
"def dregdphi1(phi, lambda_):\n",
" # TODO compute the derivative with respect to phi1\n",
" # Replace this line:]\n",
" deriv = 0\n",
@@ -475,8 +474,8 @@
"\n",
"\n",
"def compute_gradient2(data_x, data_y, phi, lambda_):\n",
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])+dldphi0(np.squeeze(phi), lambda_)\n",
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])+dldphi1(np.squeeze(phi), lambda_)\n",
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])+dregdphi0(np.squeeze(phi), lambda_)\n",
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])+dregdphi1(np.squeeze(phi), lambda_)\n",
" # Return the gradient\n",
" return np.array([[dl_dphi0],[dl_dphi1]])\n",
"\n",

View File

@@ -342,7 +342,7 @@
"[\\mathbf{h}^*;1]\\biggr],\n",
"\\end{align}\n",
"\n",
"where the notation $[\\mathbf{h}^{*T},1]$ is a row vector containing $\\mathbf{h}^{T}$ with a one appended to the end and $[\\mathbf{h};1 ]$ is a column vector containing $\\mathbf{h}$ with a one appended to the end.\n",
"where the notation $[\\mathbf{h}^{*T},1]$ is a row vector containing $\\mathbf{h}^{*T}$ with a one appended to the end and $[\\mathbf{h}^{*};1 ]$ is a column vector containing $\\mathbf{h}^{*}$ with a one appended to the end.\n",
"\n",
"\n",
"To compute this, we reformulated the integrand using the relations from appendices C.3.3 and C.3.4 as the product of a normal distribution in $\\boldsymbol\\phi$ and a constant with respect\n",

View File

@@ -107,10 +107,7 @@
" # Initialize the parameters with He initialization\n",
" if isinstance(layer_in, nn.Linear):\n",
" nn.init.kaiming_uniform_(layer_in.weight)\n",
" layer_in.bias.data.fill_(0.0)\n",
"\n",
"# Call the function you just defined\n",
"model.apply(weights_init)"
" layer_in.bias.data.fill_(0.0)\n"
],
"metadata": {
"id": "JfIFWFIL33eF"

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyMbSR8fzpXvO6TIQdO7bI0H",
"include_colab_link": true
},
"kernelspec": {
@@ -31,7 +30,7 @@
"source": [
"# **Notebook 10.4: Downsampling and Upsampling**\n",
"\n",
"This notebook investigates the down sampling and downsampling methods discussed in section 10.4 of the book.\n",
"This notebook investigates the upsampling and downsampling methods discussed in section 10.4 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
@@ -71,9 +70,9 @@
{
"cell_type": "code",
"source": [
"def subsample(x_in):\n",
"def downsample(x_in):\n",
" x_out = np.zeros(( int(np.ceil(x_in.shape[0]/2)), int(np.ceil(x_in.shape[1]/2)) ))\n",
" # TO DO -- write the subsampling routine\n",
" # TODO -- write the downsampling routine\n",
" # Replace this line\n",
" x_out = x_out\n",
"\n",
@@ -91,8 +90,8 @@
"source": [
"print(\"Original:\")\n",
"print(orig_4_4)\n",
"print(\"Subsampled:\")\n",
"print(subsample(orig_4_4))"
"print(\"Downsampled:\")\n",
"print(downsample(orig_4_4))"
],
"metadata": {
"id": "O_i0y72_JwGZ"
@@ -127,24 +126,24 @@
"image = Image.open('test_image.png')\n",
"# convert image to numpy array\n",
"data = asarray(image)\n",
"data_subsample = subsample(data);\n",
"data_downsample = downsample(data);\n",
"\n",
"plt.figure(figsize=(5,5))\n",
"plt.imshow(data, cmap='gray')\n",
"plt.show()\n",
"\n",
"plt.figure(figsize=(5,5))\n",
"plt.imshow(data_subsample, cmap='gray')\n",
"plt.imshow(data_downsample, cmap='gray')\n",
"plt.show()\n",
"\n",
"data_subsample2 = subsample(data_subsample)\n",
"data_downsample2 = downsample(data_downsample)\n",
"plt.figure(figsize=(5,5))\n",
"plt.imshow(data_subsample2, cmap='gray')\n",
"plt.imshow(data_downsample2, cmap='gray')\n",
"plt.show()\n",
"\n",
"data_subsample3 = subsample(data_subsample2)\n",
"data_downsample3 = downsample(data_downsample2)\n",
"plt.figure(figsize=(5,5))\n",
"plt.imshow(data_subsample3, cmap='gray')\n",
"plt.imshow(data_downsample3, cmap='gray')\n",
"plt.show()"
],
"metadata": {
@@ -345,11 +344,11 @@
{
"cell_type": "code",
"source": [
"# Let's re-upsample, sub-sampled rick\n",
"data_duplicate = duplicate(data_subsample3);\n",
"# Let's re-upsample, downsampled rick\n",
"data_duplicate = duplicate(data_downsample3);\n",
"\n",
"plt.figure(figsize=(5,5))\n",
"plt.imshow(data_subsample3, cmap='gray')\n",
"plt.imshow(data_downsample3, cmap='gray')\n",
"plt.show()\n",
"\n",
"plt.figure(figsize=(5,5))\n",
@@ -388,7 +387,7 @@
"# The input x_high_res is the original high res image, from which you can deduce the position of the maximum index\n",
"def max_unpool(x_in, x_high_res):\n",
" x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
" # TO DO -- write the subsampling routine\n",
" # TODO -- write the unpooling routine\n",
" # Replace this line\n",
" x_out = x_out\n",
"\n",
@@ -417,7 +416,7 @@
{
"cell_type": "code",
"source": [
"# Let's re-upsample, sub-sampled rick\n",
"# Let's re-upsample, down-sampled rick\n",
"data_max_unpool= max_unpool(data_maxpool3,data_maxpool2);\n",
"\n",
"plt.figure(figsize=(5,5))\n",
@@ -489,7 +488,7 @@
{
"cell_type": "code",
"source": [
"# Let's re-upsample, sub-sampled rick\n",
"# Let's re-upsample, down-sampled rick\n",
"data_bilinear = bilinear(data_meanpool3);\n",
"\n",
"plt.figure(figsize=(5,5))\n",

View File

@@ -1,26 +1,10 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyNAcc98STMeyQgh9SbVHWG+",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_5_Convolution_For_MNIST.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -28,6 +12,9 @@
},
{
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
},
"source": [
"# **Notebook 10.5: Convolution for MNIST**\n",
"\n",
@@ -37,14 +24,18 @@
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
"\n",
"If you are using Google Colab, you can change your runtime to an instance with GPU support to speed up training, e.g. a T4 GPU. If you do this, the cell below should output ``device(type='cuda')``\n",
"\n",
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
],
"metadata": {
"id": "t9vk9Elugvmi"
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YrXWAH7sUWvU"
},
"outputs": [],
"source": [
"import torch\n",
"import torchvision\n",
@@ -52,23 +43,34 @@
"import torch.nn.functional as F\n",
"import torch.optim as optim\n",
"import matplotlib.pyplot as plt\n",
"import random"
],
"metadata": {
"id": "YrXWAH7sUWvU"
},
"execution_count": null,
"outputs": []
"import random\n",
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
"device"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wScBGXXFVadm"
},
"outputs": [],
"source": [
"# Run this once to load the train and test data straight into a dataloader class\n",
"# that will provide the batches\n",
"\n",
"# (It may complain that some files are missing because the files seem to have been\n",
"# reorganized on the underlying website, but it still seems to work). If everything is working\n",
"# properly, then the whole notebook should run to the end without further problems\n",
"# even before you make changes.\n",
"batch_size_train = 64\n",
"batch_size_test = 1000\n",
"\n",
"# TODO Change this directory to point towards an existing directory (No change needed if using Google Colab)\n",
"myDir = '/files/'\n",
"\n",
"train_loader = torch.utils.data.DataLoader(\n",
" torchvision.datasets.MNIST('/files/', train=True, download=True,\n",
" torchvision.datasets.MNIST(myDir, train=True, download=True,\n",
" transform=torchvision.transforms.Compose([\n",
" torchvision.transforms.ToTensor(),\n",
" torchvision.transforms.Normalize(\n",
@@ -77,22 +79,22 @@
" batch_size=batch_size_train, shuffle=True)\n",
"\n",
"test_loader = torch.utils.data.DataLoader(\n",
" torchvision.datasets.MNIST('/files/', train=False, download=True,\n",
" torchvision.datasets.MNIST(myDir, train=False, download=True,\n",
" transform=torchvision.transforms.Compose([\n",
" torchvision.transforms.ToTensor(),\n",
" torchvision.transforms.Normalize(\n",
" (0.1307,), (0.3081,))\n",
" ])),\n",
" batch_size=batch_size_test, shuffle=True)"
],
"metadata": {
"id": "wScBGXXFVadm"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8bKADvLHbiV5"
},
"outputs": [],
"source": [
"# Let's draw some of the training data\n",
"examples = enumerate(test_loader)\n",
@@ -107,24 +109,24 @@
" plt.xticks([])\n",
" plt.yticks([])\n",
"plt.show()"
],
"metadata": {
"id": "8bKADvLHbiV5"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "markdown",
"source": [
"Define the network. This is a more typical way to define a network than the sequential structure. We define a class for the network, and define the parameters in the constructor. Then we use a function called forward to actually run the network. It's easy to see how you might use residual connections in this format."
],
"metadata": {
"id": "_sFvRDGrl4qe"
}
},
"source": [
"Define the network. This is a more typical way to define a network than the sequential structure. We define a class for the network, and define the parameters in the constructor. Then we use a function called forward to actually run the network. It's easy to see how you might use residual connections in this format."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EQkvw2KOPVl7"
},
"outputs": [],
"source": [
"from os import X_OK\n",
"# TODO Change this class to implement\n",
@@ -165,52 +167,54 @@
"\n",
"\n",
"\n"
],
"metadata": {
"id": "EQkvw2KOPVl7"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qWZtkCZcU_dg"
},
"outputs": [],
"source": [
"# He initialization of weights\n",
"def weights_init(layer_in):\n",
" if isinstance(layer_in, nn.Linear):\n",
" nn.init.kaiming_uniform_(layer_in.weight)\n",
" layer_in.bias.data.fill_(0.0)"
],
"metadata": {
"id": "qWZtkCZcU_dg"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FslroPJJffrh"
},
"outputs": [],
"source": [
"# Create network\n",
"model = Net()\n",
"model = Net().to(device)\n",
"# Initialize model weights\n",
"model.apply(weights_init)\n",
"# Define optimizer\n",
"optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)"
],
"metadata": {
"id": "FslroPJJffrh"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xKQd9PzkQ766"
},
"outputs": [],
"source": [
"# Main training routine\n",
"def train(epoch):\n",
" model.train()\n",
" # Get each\n",
" for batch_idx, (data, target) in enumerate(train_loader):\n",
" data = data.to(device)\n",
" target = target.to(device)\n",
" optimizer.zero_grad()\n",
" output = model(data)\n",
" loss = F.nll_loss(output, target)\n",
@@ -220,15 +224,15 @@
" if batch_idx % 10 == 0:\n",
" print('Train Epoch: {} [{}/{}]\\tLoss: {:.6f}'.format(\n",
" epoch, batch_idx * len(data), len(train_loader.dataset), loss.item()))"
],
"metadata": {
"id": "xKQd9PzkQ766"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Byn-f7qWRLxX"
},
"outputs": [],
"source": [
"# Run on test data\n",
"def test():\n",
@@ -237,6 +241,8 @@
" correct = 0\n",
" with torch.no_grad():\n",
" for data, target in test_loader:\n",
" data = data.to(device)\n",
" target = target.to(device)\n",
" output = model(data)\n",
" test_loss += F.nll_loss(output, target, size_average=False).item()\n",
" pred = output.data.max(1, keepdim=True)[1]\n",
@@ -245,15 +251,15 @@
" print('\\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n",
" test_loss, correct, len(test_loader.dataset),\n",
" 100. * correct / len(test_loader.dataset)))"
],
"metadata": {
"id": "Byn-f7qWRLxX"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YgLaex1pfhqz"
},
"outputs": [],
"source": [
"# Get initial performance\n",
"test()\n",
@@ -262,15 +268,15 @@
"for epoch in range(1, n_epochs + 1):\n",
" train(epoch)\n",
" test()"
],
"metadata": {
"id": "YgLaex1pfhqz"
},
"execution_count": null,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "o7fRUAy9Se1B"
},
"outputs": [],
"source": [
"# Run network on data we got before and show predictions\n",
"output = model(example_data)\n",
@@ -285,12 +291,23 @@
" plt.xticks([])\n",
" plt.yticks([])\n",
"plt.show()"
],
"metadata": {
"id": "o7fRUAy9Se1B"
},
"execution_count": null,
"outputs": []
}
]
}
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyORZF8xy4X1yf4oRhRq8Rtm",
"include_colab_link": true,
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
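
All of the CUDA changes in this notebook follow one pattern: choose a device once, move the model to it, and move each batch of tensors to it before the forward pass. In minimal form:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(784, 10).to(device)    # stand-in for Net().to(device)
x = torch.randn(64, 784).to(device)      # each batch moved to the same device
y = torch.randint(0, 10, (64,)).to(device)

loss = F.cross_entropy(model(x), y)      # model and data now colocated
print(loss.item())
```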

View File

@@ -65,7 +65,7 @@
{
"cell_type": "code",
"source": [
"# K is width, D is number of hidden units in each layer\n",
"# K is depth, D is number of hidden units in each layer\n",
"def init_params(K, D):\n",
" # Set seed so we always get the same random numbers\n",
" np.random.seed(1)\n",

View File

@@ -28,7 +28,7 @@
{
"cell_type": "markdown",
"source": [
"# **Notebook 12.1: Multihead Self-Attention**\n",
"# **Notebook 12.2: Multihead Self-Attention**\n",
"\n",
"This notebook builds a multihead self-attention mechanism as in figure 12.6\n",
"\n",

View File

@@ -109,7 +109,7 @@
"# Choose random values for the parameters\n",
"omega = np.random.normal(size=(D,D))\n",
"beta = np.random.normal(size=(D,1))\n",
"phi = np.random.normal(size=(1,2*D))"
"phi = np.random.normal(size=(2*D,1))"
],
"metadata": {
"id": "79TSK7oLMobe"

View File

@@ -86,6 +86,7 @@
"cell_type": "code",
"source": [
"# TODO Define the distance matrix from figure 15.8d\n",
"# The index should be normalized before being used in the distance calculation.\n",
"# Replace this line\n",
"dist_mat = np.zeros((10,10))\n",
"\n",

View File

@@ -1,18 +1,16 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_1_Latent_Variable_Models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
@@ -43,7 +41,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "IyVn-Gi-p7wf"
@@ -55,7 +52,7 @@
"Pr(z) = \\text{Norm}_{z}[0,1]\n",
"\\end{equation}\n",
"\n",
"As in figure 17.2, we'll assume that the output is two dimensional, we we need to define a function that maps from the 1D latent variable to two dimensions. Usually, we would use a neural network, but in this case, we'll just define an arbitrary relationship.\n",
"As in figure 17.2, we'll assume that the output is two dimensional, we need to define a function that maps from the 1D latent variable to two dimensions. Usually, we would use a neural network, but in this case, we'll just define an arbitrary relationship.\n",
"\n",
"\\begin{align}\n",
"x_{1} &=& 0.5\\cdot\\exp\\Bigl[\\sin\\bigl[2+ 3.675 z \\bigr]\\Bigr]\\\\\n",
@@ -79,7 +76,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "KB9FU34onW1j"
@@ -145,7 +141,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "sQg2gKR5zMrF"
@@ -223,7 +218,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "0X4NwixzqxtZ"
@@ -254,7 +248,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "25xqXnmFo-PH"
@@ -281,7 +274,7 @@
"# We can't integrate this function in closed form\n",
"# So let's approximate it as a sum over the z values (z = np.arange(-3,3,0.01))\n",
"# You will need the functions get_likelihood() and get_prior()\n",
"# To make this a valid probability distribution, you need to divide\n",
"# To make this a valid probability distribution, you need to multiply\n",
"# By the z-increment (0.01)\n",
"# Replace this line\n",
"pr_x1_x2 = np.zeros_like(x1_mesh)\n",
@@ -292,7 +285,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "W264N7By_h9y"
@@ -320,7 +312,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "D7N7oqLe-eJO"
@@ -388,9 +379,8 @@
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyOSEQVqxE5KrXmsZVh9M3gq",
"include_colab_link": true,
"provenance": []
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",

View File

@@ -1,18 +1,16 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_3_Importance_Sampling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "t9vk9Elugvmi"
@@ -40,7 +38,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "f7a6xqKjkmvT"
@@ -126,7 +123,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "Jr4UPcqmnXCS"
@@ -166,8 +162,8 @@
"mean_all = np.zeros_like(n_sample_all)\n",
"variance_all = np.zeros_like(n_sample_all)\n",
"for i in range(len(n_sample_all)):\n",
" print(\"Computing mean and variance for expectation with %d samples\"%(n_sample_all[i]))\n",
" mean_all[i],variance_all[i] = compute_mean_variance(n_sample_all[i])"
" mean_all[i],variance_all[i] = compute_mean_variance(n_sample_all[i])\n",
" print(\"No samples: \", n_sample_all[i], \", Mean: \", mean_all[i], \", Variance: \", variance_all[i])"
]
},
{
@@ -189,7 +185,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "XTUpxFlSuOl7"
@@ -199,7 +194,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "6hxsl3Pxo1TT"
@@ -234,7 +228,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "G9Xxo0OJsIqD"
@@ -283,7 +276,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "2sVDqP0BvxqM"
@@ -313,8 +305,8 @@
"mean_all2 = np.zeros_like(n_sample_all)\n",
"variance_all2 = np.zeros_like(n_sample_all)\n",
"for i in range(len(n_sample_all)):\n",
" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
" mean_all2[i], variance_all2[i] = compute_mean_variance2(n_sample_all[i])"
" mean_all2[i], variance_all2[i] = compute_mean_variance2(n_sample_all[i])\n",
" print(\"No samples: \", n_sample_all[i], \", Mean: \", mean_all2[i], \", Variance: \", variance_all2[i])"
]
},
{
@@ -348,7 +340,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "EtBP6NeLwZqz"
@@ -360,7 +351,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "_wuF-NoQu1--"
@@ -432,8 +422,8 @@
"mean_all2b = np.zeros_like(n_sample_all)\n",
"variance_all2b = np.zeros_like(n_sample_all)\n",
"for i in range(len(n_sample_all)):\n",
" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
" mean_all2b[i], variance_all2b[i] = compute_mean_variance2b(n_sample_all[i])"
" mean_all2b[i], variance_all2b[i] = compute_mean_variance2b(n_sample_all[i])\n",
" print(\"No samples: \", n_sample_all[i], \", Mean: \", mean_all2b[i], \", Variance: \", variance_all2b[i])"
]
},
{
@@ -478,7 +468,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "y8rgge9MNiOc"
@@ -490,9 +479,8 @@
],
"metadata": {
"colab": {
"authorship_tag": "ABX9TyNecz9/CDOggPSmy1LjT/Dv",
"include_colab_link": true,
"provenance": []
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",

View File

@@ -4,7 +4,6 @@
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyOlD6kmCxX3SKKuh3oJikKA",
"include_colab_link": true
},
"kernelspec": {
@@ -393,7 +392,7 @@
{
"cell_type": "code",
"source": [
"# Update the state values for the current policy, by making the values at at adjacent\n",
"# Update the state values for the current policy, by making the values at adjacent\n",
"# states compatible with the Bellman equation (equation 19.11)\n",
"def policy_evaluation(policy, state_values, rewards, transition_probabilities_given_action, gamma):\n",
"\n",
@@ -406,6 +405,10 @@
" state_values_new[state] = 3.0\n",
" break\n",
"\n",
" # TODO -- Write this function (from equation 19.11, but bear in mind policy is deterministic here)\n",
" # Replace this line\n",
" state_values_new[state] = 0\n",
"\n",
" return state_values_new\n",
"\n",
"# Greedily choose the action that maximizes the value for each state.\n",

View File

@@ -437,7 +437,7 @@
" new_state = np.random.choice(a=np.arange(0,transition_probabilities_given_action.shape[0]),p = transition_probabilities_given_action[:,state,action])\n",
" # Return the reward\n",
" reward = reward_structure[new_state]\n",
" is_terminal = new_state in [terminal_states]\n",
" is_terminal = new_state in terminal_states\n",
"\n",
" return new_state, reward, action, is_terminal"
]
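
The one-character fix above is worth spelling out: wrapping `terminal_states` in another list makes `in` compare `new_state` against the whole collection rather than its elements, so the check was always False:

```python
terminal_states = [5, 11]
new_state = 11

print(new_state in [terminal_states])  # False: tests 11 against [[5, 11]]
print(new_state in terminal_states)    # True:  tests 11 against 5 and 11
```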

View File

@@ -265,7 +265,7 @@
"\n",
"In this icy environment the penguin is at one of the discrete cells in the gridworld. The agent starts each episode on a randomly chosen cell. The environment state dynamics are captured by the transition probabilities $Pr(s_{t+1} |s_t, a_t)$ where $s_t$ is the current state, $a_t$ is the action chosen, and $s_{t+1}$ is the next state at decision stage t. At each decision stage, the penguin can move in one of four directions: $a=0$ means try to go upward, $a=1$, right, $a=2$ down and $a=3$ left.\n",
"\n",
"However, the ice is slippery, so we don't always go the direction we want to: every time the agent chooses an action, with 0.25 probability, the environment changes the action taken to a differenct action, which is uniformly sampled from the other available actions.\n",
"However, the ice is slippery, so we don't always go the direction we want to: every time the agent chooses an action, with 0.25 probability, the environment changes the action taken to a different action, which is uniformly sampled from the other available actions.\n",
"\n",
"The rewards are deterministic; the penguin will receive a reward of +3 if it reaches the fish, -2 if it slips into a hole and 0 otherwise.\n",
"\n",
@@ -470,7 +470,7 @@
"\n",
" # Return the reward -- here the reward is for arriving at the state\n",
" reward = reward_structure[new_state]\n",
" is_terminal = new_state in [terminal_states]\n",
" is_terminal = new_state in terminal_states\n",
"\n",
" return new_state, reward, action, is_terminal"
]
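
The slippery-ice dynamics described in this notebook can also be written directly: with probability 0.25, the chosen action is replaced by one drawn uniformly from the other three. A sketch of that rule in isolation (the notebook folds it into its transition probabilities instead):

```python
import numpy as np

def slip(action, n_actions=4, p_slip=0.25, seed=None):
    # With probability p_slip, substitute a different uniformly chosen action
    rng = np.random.default_rng(seed)
    if rng.random() < p_slip:
        others = [a for a in range(n_actions) if a != action]
        return int(rng.choice(others))
    return action

print([slip(0, seed=s) for s in range(12)])  # mostly 0; about a quarter slip
```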

View File

@@ -44,7 +44,8 @@
},
"source": [
"# Run this if you're in a Colab to install MNIST 1D repository\n",
"!pip install git+https://github.com/greydanus/mnist1d"
"!pip install git+https://github.com/greydanus/mnist1d\n",
"!git clone https://github.com/greydanus/mnist1d"
],
"execution_count": null,
"outputs": []
@@ -95,6 +96,12 @@
"id": "I-vm_gh5xTJs"
},
"source": [
"from mnist1d.data import get_dataset, get_dataset_args\n",
"from mnist1d.utils import set_seed, to_pickle, from_pickle\n",
"\n",
"import sys ; sys.path.append('./mnist1d/notebooks')\n",
"from train import get_model_args, train_model\n",
"\n",
"args = mnist1d.get_dataset_args()\n",
"data = mnist1d.get_dataset(args=args) # by default, this will download a pre-made dataset from the GitHub repo\n",
"\n",
@@ -210,7 +217,7 @@
" # we would return [1,1,0,0,1]\n",
" # Remember that these are torch tensors and not numpy arrays\n",
" # Replace this function:\n",
" mask = torch.ones_like(scores)\n",
" mask = torch.ones_like(absolute_weights)\n",
"\n",
"\n",
" return mask"
@@ -237,7 +244,6 @@
"def find_lottery_ticket(model, dataset, args, sparsity_schedule, criteria_fn=None, **kwargs):\n",
"\n",
" criteria_fn = lambda init_params, final_params: final_params.abs()\n",
"\n",
" init_params = model.get_layer_vecs()\n",
" stats = {'train_losses':[], 'test_losses':[], 'train_accs':[], 'test_accs':[]}\n",
" models = []\n",
@@ -253,7 +259,7 @@
" model.set_layer_masks(masks)\n",
"\n",
" # training process\n",
" results = mnist1d.train_model(dataset, model, args)\n",
" results = train_model(dataset, model, args)\n",
" model = results['checkpoints'][-1]\n",
"\n",
" # store stats\n",
@@ -291,7 +297,8 @@
},
"source": [
"# train settings\n",
"model_args = mnist1d.get_model_args()\n",
"from train import get_model_args, train_model\n",
"model_args = get_model_args()\n",
"model_args.total_steps = 1501\n",
"model_args.hidden_size = 500\n",
"model_args.print_every = 5000 # print never\n",

View File

@@ -137,7 +137,7 @@
"id": "CfZ-srQtmff2"
},
"source": [
"Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that that the populations genuinely behave differently. In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n",
"Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that the populations genuinely behave differently. In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n",
"\n",
"Lets assume that we cant retrain the credit score prediction algorithm; our job is to adjudicate whether each individual is refused the loan ($\\hat{y}=0$)\n",
" or granted it ($\\hat{y}=1$). Since we only have the credit score\n",
@@ -382,7 +382,7 @@
"source": [
"# Equal opportunity:\n",
"\n",
"The thresholds are chosen so that so that the true positive rate is is the same for both population. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position."
"The thresholds are chosen so that so that the true positive rate is the same for both population. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position."
]
},
{
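
Concretely, equalizing opportunity means fixing a target true positive rate and back-solving a per-group threshold from the scores of the individuals who actually repaid. A sketch on hypothetical data (the notebook uses its own score distributions):

```python
import numpy as np

def threshold_for_tpr(scores, repaid, target_tpr):
    # Pick the threshold so that `target_tpr` of true repayers score above it
    positive_scores = scores[repaid == 1]
    return np.quantile(positive_scores, 1.0 - target_tpr)

rng = np.random.default_rng(1)
scores = rng.normal(55.0, 10.0, 5000)           # hypothetical credit scores
repaid = (rng.random(5000) < 0.7).astype(int)   # hypothetical repayment labels
t = threshold_for_tpr(scores, repaid, target_tpr=0.8)
print(t, (scores[repaid == 1] > t).mean())      # threshold, TPR close to 0.8
```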

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

1061
Trees/SAT_Crossword.ipynb Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

248
Trees/SAT_Exhaustive.ipynb Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

270
Trees/SAT_Sudoku.ipynb Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

251
Trees/SAT_Tseitin.ipynb Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

264
Trees/SAT_Z3.ipynb Normal file

File diff suppressed because one or more lines are too long

335
Trees/SAT_Z3_Answers.ipynb Normal file

File diff suppressed because one or more lines are too long

Binary file not shown.

Binary file not shown.

2229
UDL_Equations.tex Normal file

File diff suppressed because it is too large Load Diff

Binary file not shown.

View File

@@ -10,6 +10,7 @@
href="https://fonts.googleapis.com/css2?family=Encode+Sans+Expanded:wght@400;700&display=swap"
rel="stylesheet"
/>
<title>Understanding Deep Learning</title>
</head>
<body>

File diff suppressed because one or more lines are too long

View File

@@ -33,6 +33,124 @@ const citation = `
`;
const news = [
{
// date: "03/6/25",
// content: (
// <HeroNewsItemContent>
// New {" "}
// <UDLLink href="https://dl4ds.github.io/sp2025/lectures/">
// slides and video lectures
// </UDLLink>{" "}
// that closely follow the book from Thomas Gardos of Boston University.
// </HeroNewsItemContent>
// ),
},
{
date: "02/19/25",
content: (
<HeroNewsItemContent>
Three new blogs {" "}
<UDLLink href="https://rbcborealis.com/research-blogs/odes-and-sdes-for-machine-learning/">
[1]
</UDLLink>
<UDLLink href="https://rbcborealis.com/research-blogs/introduction-ordinary-differential-equations/">
[2]
</UDLLink>
<UDLLink href="https://rbcborealis.com/research-blogs/closed-form-solutions-for-odes/">
[3]
</UDLLink>{" "}
on ODEs and SDEs in machine learning.
</HeroNewsItemContent>
),
},
{
date: "01/23/25",
content: (
<HeroNewsItemContent>
Added{" "}
<UDLLink href="https://github.com/udlbook/udlbook/raw/main/understanding-deep-learning-final.bib">
bibfile
</UDLLink>{" "} for book and
<UDLLink href="https://github.com/udlbook/udlbook/raw/main/UDL_Equations.tex">
LaTeX
</UDLLink>{" "}
for all equations
</HeroNewsItemContent>
),
},
{
date: "12/17/24",
content: (
<HeroNewsItemContent>
<UDLLink href="https://www.youtube.com/playlist?list=PLRdABJkXXytCz19PsZ1PCQBKoZGV069k3">
Video lectures
</UDLLink>{" "}
for chapters 1-12 from Tamer Elsayed of Qatar University.
</HeroNewsItemContent>
),
},
{
date: "12/05/24",
content: (
<HeroNewsItemContent>
New{" "}
<UDLLink href="https://rbcborealis.com/research-blogs/neural-network-gaussian-processes/">
blog
</UDLLink>{" "}
on Neural network Gaussian processes
</HeroNewsItemContent>
),
},
{
date: "11/14/24",
content: (
<HeroNewsItemContent>
New{" "}
<UDLLink href=" https://rbcborealis.com/research-blogs/bayesian-neural-networks/">
blog
</UDLLink>{" "}
on Bayesian Neural Networks
</HeroNewsItemContent>
),
},
{
date: "08/13/24",
content: (
<HeroNewsItemContent>
New{" "}
<UDLLink href="https://www.borealisai.com/research-blogs/bayesian-machine-learning-function-space/">
blog
</UDLLink>{" "}
on Bayesian machine learning (function perspective)
</HeroNewsItemContent>
),
},
{
date: "08/05/24",
content: (
<HeroNewsItemContent>
Added{" "}
<UDLLink href="https://udlbook.github.io/udlfigures/">
interactive figures
</UDLLink>{" "}
to explore 1D linear regression, shallow and deep networks, Gabor model.
</HeroNewsItemContent>
),
},
{
date: "07/30/24",
content: (
<HeroNewsItemContent>
New{" "}
<UDLLink href="https://www.borealisai.com/research-blogs/bayesian-machine-learning-parameter-space/">
blog
</UDLLink>{" "}
on Bayesian machine learning (parameter perspective)
</HeroNewsItemContent>
),
},
{
date: "05/22/24",
content: (
@@ -184,8 +302,8 @@ export default function HeroSection() {
<HeroImgWrap>
<Img src={img} alt="Book Cover" />
</HeroImgWrap>
<HeroLink href="https://github.com/udlbook/udlbook/releases/download/v4.0.1/UnderstandingDeepLearning_05_27_24_C.pdf">
Download full PDF (27 May 2024)
<HeroLink href="https://github.com/udlbook/udlbook/releases/download/v5.0.2/UnderstandingDeepLearning_05_29_25_C.pdf">
Download full PDF (29 May 2025)
</HeroLink>
<br />
<HeroDownloadsImg
@@ -201,7 +319,7 @@ export default function HeroSection() {
<HeroLink href="https://github.com/udlbook/udlbook/raw/main/UDL_Errata.pdf">
Errata
</HeroLink>
</HeroColumn2>
</HeroColumn2> <h1></h1>
</HeroRow>
</HeroContent>
</HeroContainer>

View File

@@ -280,6 +280,12 @@ export default function InstructorsSection() {
</InstructorsLink>{" "}
with MIT Press for answer booklet.
<InstructorsContent></InstructorsContent>
<TopLine>Interactive figures</TopLine>
<InstructorsLink href="https://udlbook.github.io/udlfigures/">
Interactive figures </InstructorsLink>{" "}
to illustrate ideas in class
<InstructorsContent></InstructorsContent>
<TopLine>Full slides</TopLine>
<InstructorsContent>
Slides for 20 lecture undergraduate deep learning course:
@@ -296,6 +302,11 @@ export default function InstructorsSection() {
))}
</ol>
</InstructorsContent>
<TopLine>LaTeX for equations</TopLine>
A {" "} <InstructorsLink href="https://github.com/udlbook/udlbook/raw/main/UDL_Equations.tex">
working LaTeX file </InstructorsLink>{" "}
containing all of the equations
<InstructorsContent></InstructorsContent>
</Column1>
<Column2>
<TopLine>Figures</TopLine>
@@ -325,6 +336,11 @@ export default function InstructorsSection() {
</InstructorsLink>{" "}
for editing equations in figures.
<InstructorsContent></InstructorsContent>
<TopLine>LaTeX Bibfile </TopLine>
The {" "} <InstructorsLink href="https://github.com/udlbook/udlbook/raw/main/understanding-deep-learning-final.bib">
bibfile </InstructorsLink>{" "}
containing all of the references
<InstructorsContent></InstructorsContent>
</Column2>
</InstructorsRow2>
</InstructorsWrapper>

48
src/components/Media/index.jsx Normal file → Executable file
View File

@@ -69,22 +69,6 @@ export default function MediaSection() {
</VideoFrame>
</Column1>
<Column2>
Deeper insights podcast
<VideoFrame>
<iframe
width="100%"
height="100%"
src="https://www.youtube.com/embed/nQf4o9TDSHI?si=uMk66zLD7uhuSnQ1&amp;controls=0"
title="YouTube video player"
frameBorder="2"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen
></iframe>
</VideoFrame>
</Column2>
</MediaRow>
<MediaRow2>
<Column1>
<TopLine>Reviews</TopLine>
<MediaContent>
{/* TODO: add dynamic rendering for reviews */}
@@ -120,28 +104,21 @@ export default function MediaSection() {
by Vishal V.
</li>
<li>
Amazon{" "}
<MediaLink href="https://www.amazon.com/Understanding-Deep-Learning-Simon-Prince-ebook/product-reviews/B0BXKH8XY6/">
reviews
</MediaLink>
</li>
<li>
Goodreads{" "}
<MediaLink href="https://www.goodreads.com/book/show/123239819-understanding-deep-learning?">
reviews{" "}
</MediaLink>
Book{" "}
<MediaLink href="https://www.linkedin.com/pulse/review-understanding-deep-learning-prof-simon-prince-chandrasekharan-6egec/">
review
</MediaLink>{" "}
by Nidhin Chandrasekharan
</li>
<li>
Book{" "}
<MediaLink href="https://medium.com/@vishalvignesh/udl-book-review-the-new-deep-learning-textbook-youll-want-to-finish-69e1557b018d">
<MediaLink href="https://www.justinmath.com/the-best-neural-nets-textbook/">
review
</MediaLink>{" "}
by Vishal V.
by Justin Skycak
</li>
</ul>
</MediaContent>
</Column1>
<Column2>
<TopLine>Interviews</TopLine>
<MediaContent>
<ul>
@@ -155,8 +132,17 @@ export default function MediaSection() {
))}
</ul>
</MediaContent>
<TopLine>Video lectures</TopLine>
<ul>
<li>
<MediaLink href="https://www.youtube.com/playlist?list=PLRdABJkXXytCz19PsZ1PCQBKoZGV069k3">
Video lectures
</MediaLink>{" "} for chapters 1-12 from Tamer Elsayed
</li>
</ul>
</Column2>
</MediaRow2>
</MediaRow>
</MediaWrapper>
</MediaContainer>
</>

110
src/components/More/index.jsx Normal file → Executable file
View File

@@ -376,6 +376,51 @@ const aiTheory = [
"NTK and generalizability",
],
},
{
text: "Bayesian ML I",
link: "https://www.borealisai.com/research-blogs/bayesian-machine-learning-parameter-space/",
details: [
"Maximum likelihood",
"Maximum a posteriori",
"The Bayesian approach",
"Example: 1D linear regression",
"Practical concerns",
],
},
{
text: "Bayesian ML II",
link: "https://www.borealisai.com/research-blogs/bayesian-machine-learning-function-space/",
details: [
"Function space",
"Gaussian processes",
"Inference",
"Non-linear regression",
"Kernels and the kernel trick",
],
},
{
text: "Bayesian neural networks",
link: "https://rbcborealis.com/research-blogs/bayesian-neural-networks/",
details: [
"Sampling vs. variational approximation",
"MCMC methods",
"SWAG and MultiSWAG",
"Bayes by backprop",
"Monte Carlo dropout",
],
},
{
text: "Neural network Gaussian processes",
link: "https://rbcborealis.com/research-blogs/neural-network-gaussian-processes/",
details: [
"Shallow networks as GPs",
"Neural network Gaussian processes",
"NNGP Kernel",
"Kernel regression",
"Network stability",
],
},
];
const unsupervisedLearning = [
@@ -664,6 +709,50 @@ const responsibleAI = [
},
];
const ODESDE = [
{
text: "ODEs and SDEs in machine learning",
link: "https://rbcborealis.com/research-blogs/odes-and-sdes-for-machine-learning/",
details: [
"ODEs",
"SDEs",
"ODEs and gradient descent",
"SDEs in stochastic gradient descent",
"ODEs in residual networks",
"ODEs and SDES in diffusion models",
"Physics-informed machine learning",
],
},
{
text: "Introduction to ODEs",
link: "https://rbcborealis.com/research-blogs/introduction-ordinary-differential-equations/",
details: [
"What are ODEs?",
"Terminology and properties",
"Solutions",
"Boundary conditions",
"Existence of solutions",
],
},
{
text: "Closed-form solutions for ODEs",
link: "https://rbcborealis.com/research-blogs/closed-form-solutions-for-odes/",
details: [
"Validating proposed solutions",
"Class 1: Right-hand side is a function of t only",
"Class 2: Linear homogeneous",
"Class 3: right-hand side is function of x alone",
"Class 4: Right-hand side is a separable function of x and t",
"Class 5: Exact ODEs",
"Class 6: linear inhomogeneous ODEs",
"Class 7: Euler homogeneous",
"Vector ODEs",
"The matrix exponential"
],
},
]
export default function MoreSection() {
return (
<>
@@ -689,7 +778,7 @@ export default function MoreSection() {
</MoreRow>
<MoreRow2>
<Column1>
<TopLine>Book</TopLine>
<TopLine>Computer vision book</TopLine>
<MoreOuterList>
{book.map((item, index) => (
<li key={index}>
@@ -814,10 +903,27 @@ export default function MoreSection() {
</li>
))}
</MoreOuterList>
<TopLine>ODEs and SDEs in machine learning</TopLine>
<MoreOuterList>
{ODESDE.map((item, index) => (
<li key={index}>
<MoreLink href={item.link} target="_blank" rel="noreferrer">
{item.text}
</MoreLink>
<MoreInnerP>
<MoreInnerList>
{item.details.map((detail, index) => (
<li key={index}>{detail}</li>
))}
</MoreInnerList>
</MoreInnerP>
</li>
))}
</MoreOuterList>
</Column1>
<Column2>
<TopLine>AI Theory</TopLine>
<TopLine>ML Theory</TopLine>
<MoreOuterList>
{aiTheory.map((item, index) => (
<li key={index}>

File diff suppressed because it is too large Load Diff