Compare commits
301 Commits
| Author | SHA1 | Date |
|---|---|---|
| | de0a8946a6 | |
| | 12672832f5 | |
| | 51444a4bbb | |
| | 987df8cd88 | |
| | 9873b8b20d | |
| | bc0ca18695 | |
| | d66ba78862 | |
| | a8fe82b5e1 | |
| | ac540f1294 | |
| | 080bdd319d | |
| | 60d50aa9d2 | |
| | d45cba5c95 | |
| | e9f75027bb | |
| | 9de32ff327 | |
| | 871304357c | |
| | c385687d8a | |
| | 207ff5e636 | |
| | cc9c695ff7 | |
| | 75646c2c8e | |
| | 5552890706 | |
| | 01755deefe | |
| | afb9ead4d8 | |
| | 57151930de | |
| | ca85255c74 | |
| | 3003437b04 | |
| | 5e726fcf4e | |
| | 6a8273459f | |
| | 1c2e19aa3b | |
| | e818dfe054 | |
| | 4a08818706 | |
| | 16b72a8a9e | |
| | 44a3e5f678 | |
| | a644267053 | |
| | 69a2b00c9d | |
| | 9f0570e26f | |
| | e3a8bb9ac4 | |
| | 49da623d86 | |
| | 0c771fd677 | |
| | 5302b32929 | |
| | d5586e57fc | |
| | d0acc42d81 | |
| | f3188ac35a | |
| | ad1b6a558b | |
| | 7eadd56eaa | |
| | 53c1357df7 | |
| | 8d862ede26 | |
| | 44bbfbed91 | |
| | f65f0b1ddf | |
| | 1d6d6b6fbe | |
| | 62779ec260 | |
| | be3edb60f9 | |
| | b9403e091b | |
| | 2c916d9a87 | |
| | 310b71e203 | |
| | fcb1333aed | |
| | c39267b3b4 | |
| | 4291ed453c | |
| | ab2ff3177a | |
| | c2a4d40da3 | |
| | aa75d3ad73 | |
| | 1f0c224a7d | |
| | eb29a28284 | |
| | 7648203767 | |
| | 64e1d82d04 | |
| | f7450d1875 | |
| | 884a7e358b | |
| | 2016977f30 | |
| | f88127c0d2 | |
| | a637eec888 | |
| | ddd6bf9149 | |
| | 0b41646bf3 | |
| | 16afbcdf83 | |
| | b0add1f8e2 | |
| | 03ebe5a039 | |
| | 41e8262f20 | |
| | 2c6e1cb9f8 | |
| | 6c99c6b7eb | |
| | 0988ae8bd0 | |
| | 2cca6dec75 | |
| | 49d74b66a9 | |
| | 13c0ad30fe | |
| | 95549683c4 | |
| | 9649ce382b | |
| | 666cbb02d5 | |
| | f0337130cb | |
| | 472571aef0 | |
| | 13b39c2f72 | |
| | 84a11d68ed | |
| | 653d2f7b84 | |
| | a7ed3e2c34 | |
| | 40a2c3ca8b | |
| | fb66cd682d | |
| | 88e8526fa7 | |
| | 667346fbdd | |
| | 4e564088a1 | |
| | f1c07f53bf | |
| | 623b9782e7 | |
| | 60c5a48477 | |
| | b4688bda68 | |
| | faf34e0887 | |
| | 8f2ef53eab | |
| | 2f0339341c | |
| | f8acbaab82 | |
| | 2aaaef0838 | |
| | 9a2039d392 | |
| | 6d76e47849 | |
| | b5c65665b6 | |
| | dd9a56d96b | |
| | 9b71ac0487 | |
| | eaff933ff7 | |
| | c3dfe95700 | |
| | 7082ae8620 | |
| | 6072ad4450 | |
| | 33197fde36 | |
| | 6d425c04d4 | |
| | 57c95132d3 | |
| | 2b0ac95740 | |
| | d5f198f2d8 | |
| | 4edd8c923d | |
| | 1adb96e006 | |
| | 3801b8d52d | |
| | dc6b346bda | |
| | 5eb264540d | |
| | 7ba844f2b5 | |
| | be86733a93 | |
| | d101aa428b | |
| | 8c6e40daee | |
| | efafb942eb | |
| | b10a2b6940 | |
| | ede7247a0c | |
| | c3b97af456 | |
| | e1df2156a3 | |
| | f887835646 | |
| | e9c8d846f2 | |
| | b7869e8b41 | |
| | 747ec9efe1 | |
| | 58dfb0390c | |
| | 3aeb8db4cd | |
| | 305a055079 | |
| | 87cf590af9 | |
| | ccedbb72e7 | |
| | b423a67855 | |
| | 3c8dab14e6 | |
| | ab73ae785b | |
| | df86bbba04 | |
| | a9868e6da8 | |
| | fed3962bce | |
| | c5fafbca97 | |
| | 5f16e0f9bc | |
| | 121c81a04e | |
| | e968741846 | |
| | 37011065d7 | |
| | afd20d0364 | |
| | 0d135f1ee7 | |
| | 54a020304e | |
| | ccbbc4126e | |
| | d3273c99e2 | |
| | f9e45c976c | |
| | b005cec9c1 | |
| | b8a91ad34d | |
| | a2a86c27bc | |
| | d80d04c2d4 | |
| | c1f0181653 | |
| | 6e18234d24 | |
| | 5730c05547 | |
| | ccb80c16b8 | |
| | 87387b2b4c | |
| | 06eaec9749 | |
| | 9aeda14efa | |
| | d1df6426b2 | |
| | 43b8fa3685 | |
| | ca6e4b29ac | |
| | 267d6ccb7f | |
| | 735947b728 | |
| | 251aef1876 | |
| | 07ff6c06b1 | |
| | 29e4cec04e | |
| | c3ce38410c | |
| | 646e60ed95 | |
| | 5e61bcf694 | |
| | 54399a3c68 | |
| | 3926ff41ea | |
| | 9c34bfed02 | |
| | 9176623331 | |
| | 5534df187e | |
| | 9b58b2862f | |
| | 2070ac4400 | |
| | 393e4907dc | |
| | e850676722 | |
| | 796f17ed90 | |
| | dc0301a86e | |
| | 813f628e4e | |
| | 3ae7d68f6e | |
| | a96a14999f | |
| | f91e878eef | |
| | 9b89499b75 | |
| | 7d6ac5e34f | |
| | 55dbe7e0c4 | |
| | 1cf21ea61a | |
| | e4191beb79 | |
| | 10b9dea9a4 | |
| | 414eeb3557 | |
| | f126809572 | |
| | 2a30c49d22 | |
| | bb32fe0cdf | |
| | 1ee756cf9a | |
| | 742d922ce7 | |
| | c02eea499c | |
| | cb94b61abd | |
| | 447bb82e2f | |
| | 77da5694bb | |
| | 96c7e41c9d | |
| | 625d1e29bb | |
| | 3cf0c4c418 | |
| | 03c92541ad | |
| | def3e5234b | |
| | 815adb9b21 | |
| | 5ba28e5b56 | |
| | 8566a7322f | |
| | c867e67e8c | |
| | cba27b3da4 | |
| | 1c706bd058 | |
| | 72514994bf | |
| | 872926c17e | |
| | 0dfeb169be | |
| | 89a0532283 | |
| | af5a719496 | |
| | 56c31efc90 | |
| | 06fc37c243 | |
| | 45793f02f8 | |
| | 7c4cc1ddb4 | |
| | 35b6f67bbf | |
| | 194baf622a | |
| | a547fee3f4 | |
| | ea4858e78e | |
| | 444b06d5c2 | |
| | 98bce9edb5 | |
| | 37e9ae2311 | |
| | ea1b6ad998 | |
| | d17a5a3872 | |
| | 3e7e059bff | |
| | 445ad11c46 | |
| | 6928b50966 | |
| | e1d34ed561 | |
| | f3528f758b | |
| | 5c7a03172a | |
| | 0233131b07 | |
| | 8200299e64 | |
| | 2ac42e70d3 | |
| | dd0eaeb781 | |
| | 2cdff544f3 | |
| | 384e122c5f | |
| | 1343b68c60 | |
| | 30420a2f92 | |
| | 89e8ebcbc5 | |
| | 14b751ff47 | |
| | 80e99ef2da | |
| | 46214f64bc | |
| | c875fb0361 | |
| | 451ccc0832 | |
| | 4b939b7426 | |
| | 2d300a16a1 | |
| | d057548be9 | |
| | 75976a32d0 | |
| | 48b204df2c | |
| | 9b68e6a8e6 | |
| | 862ac6e4d3 | |
| | 8fe07cf0fb | |
| | c9679dee90 | |
| | 90d879494f | |
| | 19bdc23674 | |
| | d7f9929a3c | |
| | a7ac089fc0 | |
| | 8fd753d191 | |
| | 51424b57bd | |
| | 80732b29bc | |
| | 36e3a53764 | |
| | 569749963b | |
| | d17e47421b | |
| | e8fca0cb0a | |
| | 19c0c7ab3e | |
| | 418ea93e83 | |
| | ea248af22f | |
| | 5492ed0ee5 | |
| | d9138d6177 | |
| | a5413d6a15 | |
| | faf53a49a0 | |
| | 7e41097381 | |
| | 72b2d79ec7 | |
| | d81bef8a6e | |
| | 911da8ca58 | |
| | 031401a3dd | |
| | 4652f90f09 | |
| | 5f524edd3b | |
| | 7a423507f5 | |
| | 4a5bd9c4d5 | |
| | c0cd9c2aea | |
| | 924b6e220d | |
| | b535a13d57 | |
| | d0d413b9f6 | |
| | 1b53be1e08 | |
.editorconfig (new file, 10 lines)

```ini
root = true

[*.{js,jsx,ts,tsx,md,mdx,json,cjs,mjs,css}]
indent_style = space
indent_size = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
max_line_length = 100
```
.eslintrc.cjs (new file, 18 lines)

```js
module.exports = {
root: true,
env: { browser: true, es2020: true, node: true },
extends: [
"eslint:recommended",
"plugin:react/recommended",
"plugin:react/jsx-runtime",
"plugin:react-hooks/recommended",
],
ignorePatterns: ["build", ".eslintrc.cjs"],
parserOptions: { ecmaVersion: "latest", sourceType: "module" },
settings: { react: { version: "18.2" } },
plugins: ["react-refresh"],
rules: {
"react/jsx-no-target-blank": "off",
"react-refresh/only-export-components": ["warn", { allowConstantExport: true }],
},
};
```
.gitignore (new executable file, vendored, 30 lines)

```
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# production
/dist

# ENV
.env.local
.env.development.local
.env.test.local
.env.production.local

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# IDE
.idea
.vscode

# macOS
.DS_Store
```
.prettierignore (new file, 7 lines)

```
# ignore these directories when formatting the repo
/Blogs
/CM20315
/CM20315_2023
/Notebooks
/PDFFigures
/Slides
```
.prettierrc.cjs (new file, 14 lines)

```js
/** @type {import("prettier").Config} */
const prettierConfig = {
trailingComma: "all",
tabWidth: 4,
useTabs: false,
semi: true,
singleQuote: false,
bracketSpacing: true,
printWidth: 100,
endOfLine: "lf",
plugins: [require.resolve("prettier-plugin-organize-imports")],
};

module.exports = prettierConfig;
```
Blogs/BorealisBayesianFunction.ipynb (new file, 1097 lines)
File diff suppressed because one or more lines are too long
Blogs/BorealisBayesianParameter.ipynb (new file, 519 lines)
File diff suppressed because one or more lines are too long
```diff
@@ -31,7 +31,7 @@
 "source": [
 "# Gradient flow\n",
 "\n",
-"This notebook replicates some of the results in the the Borealis AI [blog](https://www.borealisai.com/research-blogs/gradient-flow/) on gradient flow. \n"
+"This notebook replicates some of the results in the Borealis AI [blog](https://www.borealisai.com/research-blogs/gradient-flow/) on gradient flow. \n"
 ],
 "metadata": {
 "id": "ucrRRJ4dq8_d"
@@ -398,4 +398,4 @@
 "outputs": []
 }
 ]
-}
+}
```
```diff
@@ -166,7 +166,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"Routines to calculate the empirical and analytical NTK (i.e. the NTK with infinite hidden units) for the the shallow network"
+"Routines to calculate the empirical and analytical NTK (i.e. the NTK with infinite hidden units) for the shallow network"
 ],
 "metadata": {
 "id": "mxW8E5kYIzlj"
@@ -1106,4 +1106,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}
```
Blogs/BorealisODENumerical.ipynb (new file, 432 lines)

```json
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Blogs/BorealisODENumerical.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JXsO7ce7oqeq"
},
"source": [
"# Numerical methods for ODEs\n",
"\n",
"This blog contains code that accompanies the RBC Borealis blog on numerical methods for ODEs. Contact udlbookmail@gmail.com if you find any problems."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AnvAKtP_oqes"
},
"source": [
"Import relevant libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UF-gJyZggyrl"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "szWLVrSSoqet"
},
"source": [
"Define the ODE that we will be experimenting with."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NkrGZLL6iM3P"
},
"outputs": [],
"source": [
"# The ODE that we will experiment with\n",
"def ode_lin_homog(t,x):\n",
" return 0.5 * x ;\n",
"\n",
"# The derivative of the ODE function with respect to x (needed for Taylor's method)\n",
"def ode_lin_homog_deriv_x(t,x):\n",
" return 0.5 ;\n",
"\n",
"# The derivative of the ODE function with respect to t (needed for Taylor's method)\n",
"def ode_lin_homog_deriv_t(t,x):\n",
" return 0.0 ;\n",
"\n",
"# The closed form solution (so we can measure the error)\n",
"def ode_lin_homog_soln(t,C=0.5):\n",
" return C * np.exp(0.5 * t) ;"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "In1C9wZkoqet"
},
"source": [
"This is a generic method that runs the numerical methods. It takes the initial conditions ($t_0$, $x_0$), the final time $t_1$ and the step size $h$. It also takes the ODE function itself and its derivatives (only used for Taylor's method). Finally, the parameter \"step_function\" is the method used to update (e.g., Euler's method, Runge-Kutta 4-step)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VZfZDJAfmyrf"
},
"outputs": [],
"source": [
"def run_numerical(x_0, t_0, t_1, h, ode_func, ode_func_deriv_x, ode_func_deriv_t, ode_soln, step_function):\n",
" x = [x_0]\n",
" t = [t_0]\n",
" while (t[-1] <= t_1):\n",
" x = x+[step_function(x[-1],t[-1],h, ode_func, ode_func_deriv_x, ode_func_deriv_t)]\n",
" t = t + [t[-1]+h]\n",
"\n",
" # Returns x,y plot plus total numerical error at last point.\n",
" return t, x, np.abs(ode_soln(t[-1])-x[-1])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Vfkc3-_7oqet"
},
"source": [
"Run the numerical method with step sizes of 2.0, 1.0, 0.5, 0.25, 0.125, 0.0675 and plot the results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1tyGbMZhoqeu"
},
"outputs": [],
"source": [
"def run_and_plot(ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function):\n",
" # Specify the grid of points to draw the ODE\n",
" t = np.arange(0.04, 4.0, 0.2)\n",
" x = np.arange(0.04, 4.0, 0.2)\n",
" T, X = np.meshgrid(t,x)\n",
"\n",
" # ODE equation at these grid points (used to draw quiver-plot)\n",
" dx = ode(T,X)\n",
" dt = np.ones(dx.shape)\n",
"\n",
" # The ground truth solution\n",
" t2= np.arange(0,10,0.1)\n",
" x2 = ode_solution(t2)\n",
"\n",
" #####################################x_0, t_0, t_1, h #################################################\n",
" t_sim1,x_sim1,error1 = run_numerical(0.5, 0.0, 4.0, 2.0000, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim2,x_sim2,error2 = run_numerical(0.5, 0.0, 4.0, 1.0000, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim3,x_sim3,error3 = run_numerical(0.5, 0.0, 4.0, 0.5000, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim4,x_sim4,error4 = run_numerical(0.5, 0.0, 4.0, 0.2500, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim5,x_sim5,error5 = run_numerical(0.5, 0.0, 4.0, 0.1250, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" t_sim6,x_sim6,error6 = run_numerical(0.5, 0.0, 4.0, 0.0675, ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
"\n",
" # Plot the ODE and ground truth solution\n",
" fig,ax = plt.subplots()\n",
" ax.quiver(T,X,dt,dx, scale=35.0)\n",
" ax.plot(t2,x2,'r-')\n",
"\n",
" # Plot the numerical approximations\n",
" ax.plot(t_sim1,x_sim1,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim2,x_sim2,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim3,x_sim3,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim4,x_sim4,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim5,x_sim5,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
" ax.plot(t_sim6,x_sim6,'.-',markeredgecolor='#773c23ff',markerfacecolor='#d18362', color='#d18362', markersize=10)\n",
"\n",
" ax.set_aspect('equal')\n",
" ax.set_xlim(0,4)\n",
" ax.set_ylim(0,4)\n",
"\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JYrq8QIwvOIy"
},
"source": [
"# Euler Method\n",
"\n",
"Define the Euler method and set up functions for plotting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "N73xMnCukVVX"
},
"outputs": [],
"source": [
"def euler_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" return x_0 + h * ode_func(t_0, x_0) ;"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4B1_PGEcsZ9H"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, euler_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FfwNihtkvJeX"
},
"source": [
"# Heun's Method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "srHfNDcDxI1o"
},
"outputs": [],
"source": [
"def heun_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" f_x0_t0 = ode_func(t_0, x_0)\n",
" return x_0 + h/2 * ( f_x0_t0 + ode_func(t_0+h, x_0+h*f_x0_t0)) ;"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WOApHz9xoqev"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, heun_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0XSzzFDIvRhm"
},
"source": [
"# Modified Euler method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fSXprgVJ5Yep"
},
"outputs": [],
"source": [
"def modified_euler_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" f_x0_t0 = ode_func(t_0, x_0)\n",
" return x_0 + h * ode_func(t_0+h/2, x_0+ h * f_x0_t0/2) ;"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8LKSrCD2oqev"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, modified_euler_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yp8ZBpwooqev"
},
"source": [
"# Second order Taylor's method"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NtBBgzWLoqev"
},
"outputs": [],
"source": [
"def taylor_2nd_order(x_0, t_0, h, ode_func, ode_func_deriv_x, ode_func_deriv_t):\n",
" f1 = ode_func(t_0, x_0)\n",
" return x_0 + h * ode_func(t_0, x_0) + (h*h/2) * (ode_func_deriv_x(t_0,x_0) * ode_func(t_0, x_0) + ode_func_deriv_t(t_0, x_0))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ioeeIohUoqev"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, taylor_2nd_order)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WcuhV5lL1zAJ"
},
"source": [
"# Fourth Order Runge Kutta"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0NZN81Bpwu56"
},
"outputs": [],
"source": [
"def runge_kutta_4_step(x_0, t_0, h, ode_func, ode_func_deriv_x=None, ode_func_deriv_t=None):\n",
" f1 = ode_func(t_0, x_0)\n",
" f2 = ode_func(t_0+h/2,x_0+f1 * h/2)\n",
" f3 = ode_func(t_0+h/2,x_0+f2 * h/2)\n",
" f4 = ode_func(t_0+h, x_0+ f3*h)\n",
" return x_0 + (h/6) * (f1 + 2*f2 + 2*f3+f4)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "K-OxE9E6oqew"
},
"outputs": [],
"source": [
"run_and_plot(ode_lin_homog, None, None, ode_lin_homog_soln, runge_kutta_4_step)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7JifxBhhoqew"
},
"source": [
"# Plot the error as a function of step size"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZoEpmlCfsi9P"
},
"outputs": [],
"source": [
"# Run systematically with a number of different step sizes and store errors for each\n",
"def get_errors(ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function):\n",
" # Choose the step size h to divide the plotting interval into 1,2,4,8... segments.\n",
" # The plots in the article add a few more smaller step sizes, but this takes a while to compute.\n",
" # Add them back in if you want the full plot.\n",
" all_h = (1./np.array([1,2,4,8,16,32,64,128,256,512,1024,2048,4096])).tolist()\n",
" all_err = []\n",
"\n",
" for i in range(len(all_h)):\n",
" t_sim,x_sim,err = run_numerical(0.5, 0.0, 4.0, all_h[i], ode, ode_deriv_x, ode_deriv_t, ode_solution, step_function)\n",
" all_err = all_err + [err]\n",
"\n",
" return all_h, all_err"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "X0O0KK47xF28"
},
"outputs": [],
"source": [
"# Plot the errors\n",
"all_h, all_err_euler = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, euler_step)\n",
"all_h, all_err_heun = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, heun_step)\n",
"all_h, all_err_mod_euler = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, modified_euler_step)\n",
"all_h, all_err_taylor = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, taylor_2nd_order)\n",
"all_h, all_err_rk = get_errors(ode_lin_homog, ode_lin_homog_deriv_x, ode_lin_homog_deriv_t, ode_lin_homog_soln, runge_kutta_4_step)\n",
"\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.loglog(all_h, all_err_euler,'ro-')\n",
"ax.loglog(all_h, all_err_heun,'bo-')\n",
"ax.loglog(all_h, all_err_mod_euler,'go-')\n",
"ax.loglog(all_h, all_err_taylor,'co-')\n",
"ax.loglog(all_h, all_err_rk,'mo-')\n",
"ax.set_ylim(1e-13,1e1)\n",
"ax.set_xlim(1e-6,1e1)\n",
"ax.set_aspect(0.5)\n",
"ax.set_xlabel('Step size, $h$')\n",
"ax.set_ylabel('Error')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BttOqpeo9MsJ"
},
"source": [
"Note that for this ODE, the Heun, Modified Euler and Taylor methods provide EXACTLY the same updates, and so the error curves for all three are identical (subject to differences in numerical rounding errors). This is not in general the case, although the general trend would be the same for each."
]
}
],
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.10"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
```
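The new BorealisODENumerical.ipynb above defines Euler and fourth-order Runge-Kutta steps for the ODE dx/dt = 0.5x with closed-form solution 0.5*exp(0.5t). As a quick sanity check of the orders of accuracy its error plot discusses, here is a minimal standalone sketch (plain Python/NumPy, written for this note rather than taken from the notebook) that integrates the same ODE with both steps and compares the final error against the closed-form solution:

```python
import numpy as np

def f(t, x):
    # The notebook's ODE: dx/dt = 0.5 x
    return 0.5 * x

def euler_step(x, t, h):
    # First-order explicit Euler update
    return x + h * f(t, x)

def rk4_step(x, t, h):
    # Classic fourth-order Runge-Kutta update
    f1 = f(t, x)
    f2 = f(t + h / 2, x + f1 * h / 2)
    f3 = f(t + h / 2, x + f2 * h / 2)
    f4 = f(t + h, x + f3 * h)
    return x + (h / 6) * (f1 + 2 * f2 + 2 * f3 + f4)

def integrate(step, x0=0.5, t0=0.0, t1=4.0, h=0.125):
    # March from t0 to t1 in fixed steps of size h
    x, t = x0, t0
    while t < t1 - 1e-12:
        x = step(x, t, h)
        t += h
    return x

exact = 0.5 * np.exp(0.5 * 4.0)        # closed-form solution at t = 4
err_euler = abs(integrate(euler_step) - exact)
err_rk4 = abs(integrate(rk4_step) - exact)
print(err_euler, err_rk4)
```

With h = 0.125, Euler's global error is roughly first order in h while RK4's is roughly fourth order, which is exactly the gap that the notebook's log-log error plot visualizes.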
Blogs/Borealis_NNGP.ipynb (new file, 1127 lines)
File diff suppressed because one or more lines are too long
```diff
@@ -128,7 +128,7 @@
 "\n",
 "In part (b) of the practical we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. You will find that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1). Double weird.\n",
 "\n",
-"Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
+"Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
 ],
 "metadata": {
 "id": "b2FYKV1SL4Z7"
```
```diff
@@ -199,7 +199,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the the likelihood and the negative log likelihood."
+"The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the likelihood and the negative log likelihood."
 ],
 "metadata": {
 "id": "MvVX6tl9AEXF"
```
```diff
@@ -218,7 +218,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dotsmand the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
+"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dotsmand the blue curve to be high where there are blue dots We'll compute the likelihood and the negative log likelihood."
 ],
 "metadata": {
 "id": "MvVX6tl9AEXF"
```
```diff
@@ -128,7 +128,7 @@
 "\n",
 "In part (b) of the practical we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. You will find that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1). Double weird.\n",
 "\n",
-"Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
+"Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
 ],
 "metadata": {
 "id": "b2FYKV1SL4Z7"
@@ -209,4 +209,4 @@
 "outputs": []
 }
 ]
-}
+}
```
@@ -214,7 +214,7 @@
{
"cell_type": "code",
"source": [
-"# Compute the derivative of the the loss with respect to the function output f_val\n",
+"# Compute the derivative of the loss with respect to the function output f_val\n",
"def dl_df(f_val,y):\n",
" # Compute sigmoid of network output\n",
" sig_f_val = sig(f_val)\n",
@@ -522,4 +522,4 @@
"outputs": []
}
]
}
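The hunk above shows the first step of `dl_df`, which differentiates the loss with respect to the network output `f_val`. Assuming a binary cross-entropy loss on a sigmoid output (the definition of `sig` below is an assumption, and the continuation of the function is a sketch, not the notebook's code), the derivative reduces to the standard closed form sig(f) - y:

```python
import numpy as np

def sig(z):
    # Logistic sigmoid (assumed definition, matching the snippet's usage)
    return 1.0 / (1.0 + np.exp(-z))

def dl_df(f_val, y):
    # Compute sigmoid of network output
    sig_f_val = sig(f_val)
    # For L = -y*log(sig(f)) - (1-y)*log(1-sig(f)),
    # the derivative dL/df simplifies to sig(f) - y
    return sig_f_val - y
```

At f_val = 0 the sigmoid is 0.5, so the gradient is +0.5 for a negative label and -0.5 for a positive one, pushing the output in the right direction in both cases.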
@@ -1,346 +1,346 @@
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0
International Public License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution-NonCommercial-NoDerivatives 4.0 International Public
License ("Public License"). To the extent this Public License may be
interpreted as a contract, You are granted the Licensed Rights in
consideration of Your acceptance of these terms and conditions, and the
Licensor grants You such rights in consideration of benefits the
Licensor receives from making the Licensed Material available under
these terms and conditions.


Section 1 -- Definitions.

a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.

b. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.

c. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.

d. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.

e. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.

f. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.

g. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.

h. NonCommercial means not primarily intended for or directed towards
commercial advantage or monetary compensation. For purposes of
this Public License, the exchange of the Licensed Material for
other material subject to Copyright and Similar Rights by digital
file-sharing or similar means is NonCommercial provided there is
no payment of monetary compensation in connection with the
exchange.

i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.

j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.

k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.


Section 2 -- Scope.

a. License grant.

1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:

a. reproduce and Share the Licensed Material, in whole or
in part, for NonCommercial purposes only; and

b. produce and reproduce, but not Share, Adapted Material
for NonCommercial purposes only.

2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.

3. Term. The term of this Public License is specified in Section
6(a).

4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.

5. Downstream recipients.

a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.

b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.

6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).

b. Other rights.

1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.

2. Patent and trademark rights are not licensed under this
Public License.

3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties, including when
the Licensed Material is used other than for NonCommercial
purposes.

Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

a. Attribution.

1. If You Share the Licensed Material, You must:

a. retain the following if it is supplied by the Licensor
with the Licensed Material:

i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);

ii. a copyright notice;

iii. a notice that refers to this Public License;

iv. a notice that refers to the disclaimer of
warranties;

v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;

b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and

c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.

For the avoidance of doubt, You do not have permission under
this Public License to Share Adapted Material.

2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.

3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.


Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database for NonCommercial purposes
only and provided You do not Share Adapted Material;

b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and

c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.


Section 5 -- Disclaimer of Warranties and Limitation of Liability.

a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.

Section 6 -- Term and Termination.

a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.

b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:

1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or

2. upon express reinstatement by the Licensor.

For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.

c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.

d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.


Section 7 -- Other Terms and Conditions.

a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.

b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.


Section 8 -- Interpretation.

a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.

b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.

c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.

d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.

=======================================================================

Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.

Creative Commons may be contacted at creativecommons.org.
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap01/1_1_BackgroundMathematics.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "s5zzKSOusPOB"
@@ -21,7 +19,7 @@
 "\n",
 "# **Notebook 1.1 -- Background Mathematics**\n",
 "\n",
-"The purpose of this Python notebook is to make sure you can use CoLab and to familiarize yourself with some of the background mathematical concepts that you are going to need to understand deep learning. <br><br> It's not meant to be difficult and it may be that you know some or all of this information already.<br><br> Math is *NOT* a spectator sport. You won't learn it by just listening to lectures or reading books. It really helps to interact with it and explore yourself. <br><br> Work through the cells below, running each cell in turn. In various places you will see the words **\"TO DO\"**. Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
+"The purpose of this Python notebook is to make sure you can use CoLab and to familiarize yourself with some of the background mathematical concepts that you are going to need to understand deep learning. <br><br> It's not meant to be difficult and it may be that you know some or all of this information already.<br><br> Math is *NOT* a spectator sport. You won't learn it by just listening to lectures or reading books. It really helps to interact with it and explore yourself. <br><br> Work through the cells below, running each cell in turn. In various places you will see the words **\"TODO\"**. Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -41,7 +39,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "WV2Dl6owme2d"
@@ -49,11 +46,11 @@
 "source": [
 "**Linear functions**<br> We will be using the term *linear equation* to mean a weighted sum of inputs plus an offset. If there is just one input $x$, then this is a straight line:\n",
 "\n",
-"\\begin{equation}y=\\beta+\\omega x,\\end{equation} \n",
+"\\begin{equation}y=\\beta+\\omega x,\\end{equation}\n",
 "\n",
 "where $\\beta$ is the y-intercept of the linear and $\\omega$ is the slope of the line. When there are two inputs $x_{1}$ and $x_{2}$, then this becomes:\n",
 "\n",
-"\\begin{equation}y=\\beta+\\omega_1 x_1 + \\omega_2 x_2.\\end{equation} \n",
+"\\begin{equation}y=\\beta+\\omega_1 x_1 + \\omega_2 x_2.\\end{equation}\n",
 "\n",
 "Any other functions are by definition **non-linear**.\n",
 "\n",
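The linear-function cells above are unchanged by this hunk (only trailing whitespace is trimmed). As a quick illustration of the two-input form $y=\beta+\omega_1 x_1+\omega_2 x_2$, a sketch with made-up values, not taken from the notebook:

```python
# Two-input linear equation with illustrative parameter values
beta, omega_1, omega_2 = 1.0, 2.0, -0.5
x_1, x_2 = 3.0, 4.0
y = beta + omega_1 * x_1 + omega_2 * x_2
print(y)  # 1.0 + 6.0 - 2.0 = 5.0
```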
@@ -99,7 +96,7 @@
 "ax.plot(x,y,'r-')\n",
 "ax.set_ylim([0,10]);ax.set_xlim([0,10])\n",
 "ax.set_xlabel('x'); ax.set_ylabel('y')\n",
-"plt.show\n",
+"plt.show()\n",
 "\n",
 "# TODO -- experiment with changing the values of beta and omega\n",
 "# to understand what they do. Try to make a line\n",
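The `plt.show` → `plt.show()` fix matters because naming a function without parentheses is a harmless no-op expression in Python; the call never happens. A minimal sketch of the difference, using a hypothetical `show` as a stand-in for `plt.show`:

```python
def show():
    """Stand-in for a side-effecting call such as plt.show."""
    return "rendered"

no_op = show      # bare reference: the function is never called
result = show()   # parentheses actually invoke it
print(callable(no_op), result)
```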
@@ -107,7 +104,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "AedfvD9dxShZ"
@@ -192,7 +188,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "i8tLwpls476R"
@@ -236,7 +231,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "fGzVJQ6N-mHJ"
@@ -275,11 +269,10 @@
 "# Compute with vector/matrix form\n",
 "y_vec = beta_vec+np.matmul(omega_mat, x_vec)\n",
 "print(\"Matrix/vector form\")\n",
-"print('y1= %3.3f\\ny2 = %3.3f'%((y_vec[0],y_vec[1])))\n"
+"print('y1= %3.3f\\ny2 = %3.3f'%((y_vec[0][0],y_vec[1][0])))\n"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "3LGRoTMLU8ZU"
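The `y_vec[0]` → `y_vec[0][0]` change reflects that `y_vec` is a column vector, i.e. a 2D array of shape `(2, 1)`: indexing once returns a length-1 row, while indexing twice returns the scalar that `%3.3f` expects. A sketch with illustrative values (the notebook's actual `beta_vec`, `omega_mat`, and `x_vec` differ):

```python
import numpy as np

beta_vec = np.array([[1.0], [2.0]])             # (2,1) column of offsets
omega_mat = np.array([[1.0, 2.0], [3.0, 4.0]])  # (2,2) weight matrix
x_vec = np.array([[1.0], [1.0]])                # (2,1) column of inputs

y_vec = beta_vec + np.matmul(omega_mat, x_vec)
print(y_vec.shape)  # (2, 1): still a column vector, not a flat array
print('y1= %3.3f\ny2 = %3.3f' % (y_vec[0][0], y_vec[1][0]))
```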
@@ -293,7 +286,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "7Y5zdKtKZAB2"
@@ -303,7 +295,7 @@
 "\n",
 "Throughout the book, we'll be using some special functions (see Appendix B.1.3). The most important of these are the logarithm and exponential functions. Let's investigate their properties.\n",
 "\n",
-"We'll start with the exponential function $y=\\exp[x]=e^x$ which maps the real line $[-\\infty,+\\infty]$ to non-negative numbers $[0,+\\infty]$."
+"We'll start with the exponential function $y=\\exp[x]=e^x$ which maps the real line $(-\\infty,+\\infty)$ to positive numbers $(0,+\\infty)$."
 ]
 },
 {
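The rewritten sentence uses open intervals because $e^x$ is strictly positive and never actually attains $0$ or $\infty$. A quick numeric spot-check (illustration only):

```python
import math

print(math.exp(0))        # e^0 = 1.0
print(math.exp(-50) > 0)  # even very negative inputs stay strictly positive
```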
@@ -325,11 +317,10 @@
 "ax.plot(x,y,'r-')\n",
 "ax.set_ylim([0,100]);ax.set_xlim([-5,5])\n",
 "ax.set_xlabel('x'); ax.set_ylabel('exp[x]')\n",
-"plt.show"
+"plt.show()"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "XyrT8257IWCu"
@@ -341,11 +332,10 @@
 "2. What is $\\exp[1]$?\n",
 "3. What is $\\exp[-\\infty]$?\n",
 "4. What is $\\exp[+\\infty]$?\n",
-"5. A function is convex if we can draw a straight line between any two points on the function, and this line always lies above the function. Similarly, a function is concave if a straight line between any two points always lies below the function. Is the exponential function convex or concave or neither?\n"
+"5. A function is convex if we can draw a straight line between any two points on the function, and the line lies above the function everywhere between these two points. Similarly, a function is concave if a straight line between any two points lies below the function everywhere between these two points. Is the exponential function convex or concave or neither?\n"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "R6A4e5IxIWCu"
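The rewritten definition in question 5 pins down *where* the chord must lie above the function: only between its endpoints. For $e^x$, which is convex, the chord's height at the midpoint always exceeds the function value there; a numeric spot-check with arbitrary endpoints (illustration, not the notebook's answer):

```python
import math

a, b = -1.0, 2.0                                # arbitrary endpoints
mid = 0.5 * (a + b)
chord_mid = 0.5 * (math.exp(a) + math.exp(b))   # chord height at the midpoint
print(chord_mid > math.exp(mid))                # convexity: chord above curve
```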
@@ -373,11 +363,10 @@
 "ax.plot(x,y,'r-')\n",
 "ax.set_ylim([-5,5]);ax.set_xlim([0,5])\n",
 "ax.set_xlabel('x'); ax.set_ylabel('$\\log[x]$')\n",
-"plt.show"
+"plt.show()"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "yYWrL5AXIWCv"
@@ -397,8 +386,8 @@
 ],
 "metadata": {
 "colab": {
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3 (ipykernel)",
@@ -420,4 +409,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyOmndC0N7dFV7W3Mh5ljOLl",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "The purpose of this notebook is to explore the linear regression model discussed in Chapter 2 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -175,7 +174,7 @@
 {
 "cell_type": "code",
 "source": [
-"# TO DO -- Change the parameters manually to fit the model\n",
+"# TODO -- Change the parameters manually to fit the model\n",
 "# First fix phi1 and try changing phi0 until you can't make the loss go down any more\n",
 "# Then fix phi0 and try changing phi1 until you can't make the loss go down any more\n",
 "# Repeat this process until you find a set of parameters that fit the model as in figure 2.2d\n",
@@ -197,7 +196,7 @@
 "source": [
 "# Visualizing the loss function\n",
 "\n",
-"The above process is equivalent to to descending coordinate wise on the loss function<br>\n",
+"The above process is equivalent to descending coordinate wise on the loss function<br>\n",
 "\n",
 "Now let's plot that function"
 ],
@@ -235,8 +234,8 @@
 "levels = 40\n",
 "ax.contour(phi0_mesh, phi1_mesh, all_losses ,levels, colors=['#80808080'])\n",
 "ax.set_ylim([1,-1])\n",
-"ax.set_xlabel('Intercept, $\\phi_0$')\n",
-"ax.set_ylabel('Slope, $\\phi_1$')\n",
+"ax.set_xlabel(r'Intercept, $\\phi_0$')\n",
+"ax.set_ylabel(r'Slope, $\\phi_1$')\n",
 "\n",
 "# Plot the position of your best fitting line on the loss function\n",
 "# It should be close to the minimum\n",

File diff suppressed because one or more lines are too long
@@ -32,7 +32,7 @@
 "\n",
 "The purpose of this notebook is to gain some familiarity with shallow neural networks with 2D inputs. It works through an example similar to figure 3.8 and experiments with different activation functions. <br><br>\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ],
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyNioITtfAcfxEfM3UOfQyb9",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "The purpose of this notebook is to compute the maximum possible number of linear regions as seen in figure 3.9 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -62,7 +61,7 @@
 "source": [
 "The number of regions $N$ created by a shallow neural network with $D_i$ inputs and $D$ hidden units is given by Zaslavsky's formula:\n",
 "\n",
-"\\begin{equation}N = \\sum_{j=0}^{D_{i}}\\binom{D}{j}=\\sum_{j=0}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} <br>\n",
+"\\begin{equation}N = \\sum_{j=0}^{D_{i}}\\binom{D}{j}=\\sum_{j=0}^{D_{i}} \\frac{D!}{(D-j)!j!} \\end{equation} \n",
 "\n"
 ],
 "metadata": {
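Zaslavsky's formula quoted above is straightforward to evaluate directly with binomial coefficients; a minimal sketch (the notebook's own implementation may differ):

```python
from math import comb

def max_regions(D_i, D):
    """Maximum number of linear regions for D_i inputs and D hidden units."""
    return sum(comb(D, j) for j in range(D_i + 1))

print(max_regions(1, 3))  # 1D input, 3 hidden units: 1 + 3 = 4 regions
print(max_regions(2, 3))  # 2D input, 3 hidden units: 1 + 3 + 3 = 7 regions
```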
@@ -221,7 +220,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Now let's plot the graph from figure 3.9a (takes ~1min)\n",
+"# Now let's plot the graph from figure 3.9b (takes ~1min)\n",
 "dims = np.array([1,5,10,50,100])\n",
 "regions = np.zeros((dims.shape[0], 200))\n",
 "params = np.zeros((dims.shape[0], 200))\n",
@@ -22,7 +22,7 @@
 "\n",
 "The purpose of this practical is to experiment with different activation functions. <br>\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and write code to complete the functions. There are also questions interspersed in the text.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -28,11 +28,11 @@
 {
 "cell_type": "markdown",
 "source": [
-"#Notebook 4.1 -- Composing networks\n",
+"# Notebook 4.1 -- Composing networks\n",
 "\n",
 "The purpose of this notebook is to understand what happens when we feed one neural network into another. It works through an example similar to 4.1 and varies both networks\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions"
 ],
@@ -134,7 +134,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"Let's define two networks. We'll put the prefixes n1_ and n2_ before all the variables to make it clear which network is which. We'll just consider the inputs and outputs over the range [-1,1]. If you set the \"plot_all\" flat to True, you can see the details of how they were created."
+"Let's define two networks. We'll put the prefixes n1_ and n2_ before all the variables to make it clear which network is which. We'll just consider the inputs and outputs over the range [-1,1]."
 ],
 "metadata": {
 "id": "LxBJCObC-NTY"
@@ -343,7 +343,7 @@
 {
 "cell_type": "code",
 "source": [
-"# TO DO\n",
+"# TODO\n",
 "# How many linear regions would there be if we ran N copies of the first network, feeding the result of the first\n",
 "# into the second, the second into the third and so on, and then passed the result into the original second\n",
 "# network (blue curve above)\n",
@@ -358,4 +358,4 @@
 "outputs": []
 }
 ]
 }
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyPkFrjmRAUf0fxN07RC4xMI",
+"authorship_tag": "ABX9TyPZzptvvf7OPZai8erQ/0xT",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -29,11 +29,11 @@
 {
 "cell_type": "markdown",
 "source": [
-"#Notebook 4.2 -- Clipping functions\n",
+"# Notebook 4.2 -- Clipping functions\n",
 "\n",
 "The purpose of this notebook is to understand how a neural network with two hidden layers build more complicated functions by clipping and recombining the representations at the intermediate hidden variables.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions"
 ],
@@ -127,26 +127,26 @@
 " fig, ax = plt.subplots(3,3)\n",
 " fig.set_size_inches(8.5, 8.5)\n",
 " fig.tight_layout(pad=3.0)\n",
-" ax[0,0].plot(x,layer2_pre_1,'r-'); ax[0,0].set_ylabel('$\\psi_{10}+\\psi_{11}h_{1}+\\psi_{12}h_{2}+\\psi_{13}h_3$')\n",
-" ax[0,1].plot(x,layer2_pre_2,'b-'); ax[0,1].set_ylabel('$\\psi_{20}+\\psi_{21}h_{1}+\\psi_{22}h_{2}+\\psi_{23}h_3$')\n",
-" ax[0,2].plot(x,layer2_pre_3,'g-'); ax[0,2].set_ylabel('$\\psi_{30}+\\psi_{31}h_{1}+\\psi_{32}h_{2}+\\psi_{33}h_3$')\n",
-" ax[1,0].plot(x,h1_prime,'r-'); ax[1,0].set_ylabel(\"$h_{1}^{'}$\")\n",
-" ax[1,1].plot(x,h2_prime,'b-'); ax[1,1].set_ylabel(\"$h_{2}^{'}$\")\n",
-" ax[1,2].plot(x,h3_prime,'g-'); ax[1,2].set_ylabel(\"$h_{3}^{'}$\")\n",
-" ax[2,0].plot(x,phi1_h1_prime,'r-'); ax[2,0].set_ylabel(\"$\\phi_1 h_{1}^{'}$\")\n",
-" ax[2,1].plot(x,phi2_h2_prime,'b-'); ax[2,1].set_ylabel(\"$\\phi_2 h_{2}^{'}$\")\n",
-" ax[2,2].plot(x,phi3_h3_prime,'g-'); ax[2,2].set_ylabel(\"$\\phi_3 h_{3}^{'}$\")\n",
+" ax[0,0].plot(x,layer2_pre_1,'r-'); ax[0,0].set_ylabel(r'$\\psi_{10}+\\psi_{11}h_{1}+\\psi_{12}h_{2}+\\psi_{13}h_3$')\n",
+" ax[0,1].plot(x,layer2_pre_2,'b-'); ax[0,1].set_ylabel(r'$\\psi_{20}+\\psi_{21}h_{1}+\\psi_{22}h_{2}+\\psi_{23}h_3$')\n",
+" ax[0,2].plot(x,layer2_pre_3,'g-'); ax[0,2].set_ylabel(r'$\\psi_{30}+\\psi_{31}h_{1}+\\psi_{32}h_{2}+\\psi_{33}h_3$')\n",
+" ax[1,0].plot(x,h1_prime,'r-'); ax[1,0].set_ylabel(r\"$h_{1}^{'}$\")\n",
+" ax[1,1].plot(x,h2_prime,'b-'); ax[1,1].set_ylabel(r\"$h_{2}^{'}$\")\n",
+" ax[1,2].plot(x,h3_prime,'g-'); ax[1,2].set_ylabel(r\"$h_{3}^{'}$\")\n",
+" ax[2,0].plot(x,phi1_h1_prime,'r-'); ax[2,0].set_ylabel(r\"$\\phi_1 h_{1}^{'}$\")\n",
+" ax[2,1].plot(x,phi2_h2_prime,'b-'); ax[2,1].set_ylabel(r\"$\\phi_2 h_{2}^{'}$\")\n",
+" ax[2,2].plot(x,phi3_h3_prime,'g-'); ax[2,2].set_ylabel(r\"$\\phi_3 h_{3}^{'}$\")\n",
 "\n",
 " for plot_y in range(3):\n",
 " for plot_x in range(3):\n",
 " ax[plot_y,plot_x].set_xlim([0,1]);ax[plot_x,plot_y].set_ylim([-1,1])\n",
 " ax[plot_y,plot_x].set_aspect(0.5)\n",
-" ax[2,plot_y].set_xlabel('Input, $x$');\n",
+" ax[2,plot_y].set_xlabel(r'Input, $x$');\n",
 " plt.show()\n",
 "\n",
 " fig, ax = plt.subplots()\n",
 " ax.plot(x,y)\n",
-" ax.set_xlabel('Input, $x$'); ax.set_ylabel('Output, $y$')\n",
+" ax.set_xlabel(r'Input, $x$'); ax.set_ylabel(r'Output, $y$')\n",
 " ax.set_xlim([0,1]);ax.set_ylim([-1,1])\n",
 " ax.set_aspect(0.5)\n",
 " plt.show()"
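The `r` prefixes added throughout this hunk stop Python from interpreting backslash sequences inside the TeX labels before matplotlib ever sees them. Whether a plain string happens to work depends on whether the backslash starts a recognized escape; a standalone illustration (not the notebook's labels):

```python
# Without the r prefix, Python interprets backslash escapes first.
# '\p' is not a recognized escape, so '\psi' survives (newer Pythons warn),
# but '\t' IS an escape and silently becomes a tab character.
explicit = '$\\psi_1$'        # doubled backslash: always safe
raw = r'$\psi_1$'             # raw string: backslash kept verbatim
print(explicit == raw)        # True: both spell backslash-psi
print('\theta' == r'\theta')  # False: the left side begins with a tab
```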
@@ -169,7 +169,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Define parameters (note first dimension of theta and phi is padded to make indices match\n",
+"# Define parameters (note first dimension of theta and psi is padded to make indices match\n",
 "# notation in book)\n",
 "theta = np.zeros([4,2])\n",
 "psi = np.zeros([4,4])\n",
@@ -216,4 +216,4 @@
 }
 }
 ]
 }
@@ -4,7 +4,6 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyO2DaD75p+LGi7WgvTzjrk1",
|
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -31,9 +30,9 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"# **Notebook 4.3 Deep neural networks**\n",
|
"# **Notebook 4.3 Deep neural networks**\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This network investigates converting neural networks to matrix form.\n",
|
"This notebook investigates converting neural networks to matrix form.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
],
|
],
|
||||||
@@ -118,7 +117,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
"source": [
|
||||||
"Let's define a network. We'll just consider the inputs and outputs over the range [-1,1]. If you set the \"plot_all\" flat to True, you can see the details of how it was created."
|
"Let's define a network. We'll just consider the inputs and outputs over the range [-1,1]."
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "LxBJCObC-NTY"
|
"id": "LxBJCObC-NTY"
|
||||||
@@ -150,7 +149,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
"source": [
|
||||||
"Now we'll define the same neural network, but this time, we will use matrix form. When you get this right, it will draw the same plot as above."
|
"Now we'll define the same neural network, but this time, we will use matrix form as in equation 4.15. When you get this right, it will draw the same plot as above."
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "XCJqo_AjfAra"
|
"id": "XCJqo_AjfAra"
|
||||||
@@ -176,8 +175,8 @@
|
|||||||
"n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
|
"n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# This runs the network for ALL of the inputs, x at once so we can draw graph\n",
|
"# This runs the network for ALL of the inputs, x at once so we can draw graph\n",
|
||||||
"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,n1_in_mat))\n",
|
"h1 = ReLU(beta_0 + np.matmul(Omega_0,n1_in_mat))\n",
|
||||||
"n1_out = np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1)\n",
|
"n1_out = beta_1 + np.matmul(Omega_1,h1)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Draw the network and check that it looks the same as the non-matrix case\n",
|
"# Draw the network and check that it looks the same as the non-matrix case\n",
|
||||||
"plot_neural(n1_in, n1_out)"
|
"plot_neural(n1_in, n1_out)"
|
||||||
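The change above drops the explicit `np.matmul(beta_0, np.ones((1, n_data)))` tiling in favor of adding the bias directly, relying on NumPy broadcasting of a (D,1) column vector against a (D, n_data) matrix. A minimal sketch of why the two forms are equivalent (variable names here are illustrative, not from the notebook):

```python
import numpy as np

D, n_data = 3, 5
beta = np.arange(D, dtype=float).reshape(D, 1)  # bias column, shape (3, 1)
act = np.zeros((D, n_data))                     # pre-activations, shape (3, 5)

# Old style: tile the bias into a full (3, 5) matrix via an outer product
tiled = np.matmul(beta, np.ones((1, n_data))) + act

# New style: broadcasting stretches the (3, 1) bias across the 5 columns
broadcast = beta + act

assert np.array_equal(tiled, broadcast)
```

Broadcasting avoids materializing the ones matrix, so it is both shorter and cheaper.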
@@ -247,9 +246,9 @@
 "n1_in_mat = np.reshape(n1_in,(n_dim_in,n_data))\n",
 "\n",
 "# This runs the network for ALL of the inputs, x at once so we can draw graph (hence extra np.ones term)\n",
-"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,n1_in_mat))\n",
-"h2 = ReLU(np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1))\n",
-"n1_out = np.matmul(beta_2,np.ones((1,n_data))) + np.matmul(Omega_2,h2)\n",
+"h1 = ReLU(beta_0 + np.matmul(Omega_0,n1_in_mat))\n",
+"h2 = ReLU(beta_1 + np.matmul(Omega_1,h1))\n",
+"n1_out = beta_2 + np.matmul(Omega_2,h2)\n",
 "\n",
 "# Draw the network and check that it looks the same as the non-matrix version\n",
 "plot_neural(n1_in, n1_out)"
@@ -291,10 +290,10 @@
 "\n",
 "\n",
 "# If you set the parameters to the correct sizes, the following code will run\n",
-"h1 = ReLU(np.matmul(beta_0,np.ones((1,n_data))) + np.matmul(Omega_0,x));\n",
-"h2 = ReLU(np.matmul(beta_1,np.ones((1,n_data))) + np.matmul(Omega_1,h1));\n",
-"h3 = ReLU(np.matmul(beta_2,np.ones((1,n_data))) + np.matmul(Omega_2,h2));\n",
-"y = np.matmul(beta_3,np.ones((1,n_data))) + np.matmul(Omega_3,h3)\n",
+"h1 = ReLU(beta_0 + np.matmul(Omega_0,x));\n",
+"h2 = ReLU(beta_1 + np.matmul(Omega_1,h1));\n",
+"h3 = ReLU(beta_2 + np.matmul(Omega_2,h2));\n",
+"y = beta_3 + np.matmul(Omega_3,h3)\n",
 "\n",
 "if h1.shape[0] is not D_1 or h1.shape[1] is not n_data:\n",
 " print(\"h1 is wrong shape\")\n",
@@ -318,4 +317,4 @@
 "outputs": []
 }
 ]
 }
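One unchanged context line above, `h1.shape[0] is not D_1`, compares integers with identity (`is not`) rather than equality (`!=`). That only happens to work because CPython caches small integers; for larger dimensions the check can misfire. A short illustration of the safer pattern (the `shape_ok` helper is hypothetical, not part of the notebook):

```python
# "is"/"is not" test object identity, not value. CPython interns small
# ints (-5..256), so identity comparisons seem to work for small shapes
# but may fail once a dimension exceeds that cached range.
a = 1000 + 24
b = 1024
print(a != b)  # False -- the values are equal; "!=" is the correct check
# Whether (a is b) holds is an implementation detail -- never rely on it.

def shape_ok(arr_shape, expected):
    # Robust shape check using value equality
    return tuple(arr_shape) == tuple(expected)

print(shape_ok((3, 5), (3, 5)))  # True
```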
@@ -32,7 +32,7 @@
 "\n",
 "This notebook investigates the least squares loss and the equivalence of maximum likelihood and minimum negative log likelihood.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -118,7 +118,7 @@
 " ax.plot(x_model,y_model)\n",
 " if sigma_model is not None:\n",
 " ax.fill_between(x_model, y_model-2*sigma_model, y_model+2*sigma_model, color='lightgray')\n",
-" ax.set_xlabel('Input, $x$'); ax.set_ylabel('Output, $y$')\n",
+" ax.set_xlabel(r'Input, $x$'); ax.set_ylabel(r'Output, $y$')\n",
 " ax.set_xlim([0,1]);ax.set_ylim([-1,1])\n",
 " ax.set_aspect(0.5)\n",
 " if title is not None:\n",
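The axis-label changes above and below switch plain strings to raw strings (`r'...'`). The reason: backslash sequences like `\p` in a plain literal raise deprecation warnings, and recognized escapes like `\t` silently corrupt a LaTeX label before matplotlib ever sees it. A quick demonstration:

```python
# '\t' IS a recognized escape, so a plain string corrupts the label:
plain_theta = '$\theta$'    # the '\t' becomes a literal TAB character
raw_theta = r'$\theta$'     # backslash preserved for the math renderer

print(len(plain_theta))  # 7 -- "\t" collapsed into one tab character
print(len(raw_theta))    # 8 -- backslash and 't' kept separately
```

Raw strings cost nothing and are the safe default for any label containing TeX markup.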
@@ -222,7 +222,7 @@
 "gauss_prob = normal_distribution(y_gauss, mu, sigma)\n",
 "fig, ax = plt.subplots()\n",
 "ax.plot(y_gauss, gauss_prob)\n",
-"ax.set_xlabel('Input, $y$'); ax.set_ylabel('Probability $Pr(y)$')\n",
+"ax.set_xlabel(r'Input, $y$'); ax.set_ylabel(r'Probability $Pr(y)$')\n",
 "ax.set_xlim([-5,5]);ax.set_ylim([0,1.0])\n",
 "plt.show()\n",
 "\n",
@@ -590,4 +590,4 @@
 }
 }
 ]
 }
@@ -32,7 +32,7 @@
 "\n",
 "This notebook investigates the binary cross-entropy loss. It follows from applying the formula in section 5.2 to a loss function based on the Bernoulli distribution.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -119,12 +119,12 @@
 " fig.set_size_inches(7.0, 3.5)\n",
 " fig.tight_layout(pad=3.0)\n",
 " ax[0].plot(x_model,out_model)\n",
-" ax[0].set_xlabel('Input, $x$'); ax[0].set_ylabel('Model output')\n",
+" ax[0].set_xlabel(r'Input, $x$'); ax[0].set_ylabel(r'Model output')\n",
 " ax[0].set_xlim([0,1]);ax[0].set_ylim([-4,4])\n",
 " if title is not None:\n",
 " ax[0].set_title(title)\n",
 " ax[1].plot(x_model,lambda_model)\n",
-" ax[1].set_xlabel('Input, $x$'); ax[1].set_ylabel('$\\lambda$ or Pr(y=1|x)')\n",
+" ax[1].set_xlabel(r'Input, $x$'); ax[1].set_ylabel(r'$\\lambda$ or Pr(y=1|x)')\n",
 " ax[1].set_xlim([0,1]);ax[1].set_ylim([-0.05,1.05])\n",
 " if title is not None:\n",
 " ax[1].set_title(title)\n",
@@ -20,7 +20,7 @@
 "\n",
 "This notebook investigates the multi-class cross-entropy loss. It follows from applying the formula in section 5.2 to a loss function based on the Categorical distribution.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -211,7 +211,7 @@
 "id": "MvVX6tl9AEXF"
 },
 "source": [
-"The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood."
+"The left is the model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability that y=0 (red), 1 (green), or 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots. We'll compute the likelihood and the negative log likelihood."
 ]
 },
 {
@@ -236,11 +236,10 @@
 },
 "outputs": [],
 "source": [
-"# Let's double check we get the right answer before proceeding\n",
-"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.2,categorical_distribution(np.array([[0]]),np.array([[0.2],[0.5],[0.3]]))))\n",
-"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.5,categorical_distribution(np.array([[1]]),np.array([[0.2],[0.5],[0.3]]))))\n",
-"print(\"Correct answer = %3.3f, Your answer = %3.3f\"%(0.3,categorical_distribution(np.array([[2]]),np.array([[0.2],[0.5],[0.3]]))))\n",
-"\n"
+"# Here are three examples\n",
+"print(categorical_distribution(np.array([[0]]),np.array([[0.2],[0.5],[0.3]])))\n",
+"print(categorical_distribution(np.array([[1]]),np.array([[0.2],[0.5],[0.3]])))\n",
+"print(categorical_distribution(np.array([[2]]),np.array([[0.2],[0.5],[0.3]])))"
 ]
 },
 {
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyN4E9Vtuk6t2BhZ0Ajv5SW3",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook investigates how to find the minimum of a 1D function using line search as described in Figure 6.10.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ],
@@ -67,7 +66,7 @@
 " fig,ax = plt.subplots()\n",
 " ax.plot(phi_plot,loss_function(phi_plot),'r-')\n",
 " ax.set_xlim(0,1); ax.set_ylim(0,1)\n",
-" ax.set_xlabel('$\\phi$'); ax.set_ylabel('$L[\\phi]$')\n",
+" ax.set_xlabel(r'$\\phi$'); ax.set_ylabel(r'$L[\\phi]$')\n",
 " if a is not None and b is not None and c is not None and d is not None:\n",
 " plt.axvspan(a, d, facecolor='k', alpha=0.2)\n",
 " ax.plot([a,a],[0,1],'b-')\n",
@@ -131,7 +130,8 @@
 "\n",
 " print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
 "\n",
-" # Rule #1 If the HEIGHT at point A is less than the HEIGHT at points B, C, and D then halve values of B, C, and D\n",
+" # Rule #1 If the HEIGHT at point A is less than the HEIGHT at points B, C, and D then move them so they are half\n",
+" # as far from A as they started\n",
 " # i.e. bring them closer to the original point\n",
 " # TODO REPLACE THE BLOCK OF CODE BELOW WITH THIS RULE\n",
 " if (0):\n",
@@ -189,4 +189,4 @@
 "outputs": []
 }
 ]
 }
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_2_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "el8l05WQEO46"
@@ -22,7 +20,7 @@
 "\n",
 "This notebook recreates the gradient descent algorithm as shown in figure 6.1.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n"
@@ -111,7 +109,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "QU5mdGvpTtEG"
@@ -140,7 +137,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "eB5DQvU5hYNx"
@@ -162,7 +158,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "F3trnavPiHpH"
@@ -218,7 +213,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "s9Duf05WqqSC"
@@ -252,7 +246,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "RS1nEcYVuEAM"
@@ -265,7 +258,7 @@
 "\\frac{\\partial L}{\\partial \\phi_{1}}&\\approx & \\frac{L[\\phi_0, \\phi_1+\\delta]-L[\\phi_0, \\phi_1]}{\\delta}\n",
 "\\end{align}\n",
 "\n",
-"We can't do this when there are many parameters; for a million parameters, we would have to evaluate the loss function two million times, and usually computing the gradients directly is much more efficient."
+"We can't do this when there are many parameters; for a million parameters, we would have to evaluate the loss function one million plus one times, and usually computing the gradients directly is much more efficient."
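The corrected count above (one base evaluation plus one per parameter for forward differences) is easy to verify with a call counter. A minimal sketch using a toy quadratic loss (all names here are illustrative, not from the notebook):

```python
import numpy as np

calls = 0

def loss(phi):
    # Toy quadratic loss that counts how often it is evaluated
    global calls
    calls += 1
    return float(np.sum(phi ** 2))

def finite_diff_grad(phi, delta=1e-6):
    # Forward differences: one evaluation at phi, plus one per parameter
    base = loss(phi)
    grad = np.zeros_like(phi)
    for i in range(phi.size):
        nudged = phi.copy()
        nudged[i] += delta
        grad[i] = (loss(nudged) - base) / delta
    return grad

phi = np.array([1.0, -2.0, 0.5])
g = finite_diff_grad(phi)
print(calls)  # 4 -- D + 1 evaluations for D = 3 parameters
print(g)      # close to the analytic gradient 2 * phi
```

For a million parameters that is 1,000,001 evaluations, whereas backpropagation computes the exact gradient at roughly the cost of a few forward passes.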
 ]
 },
 {
@@ -290,7 +283,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "5EIjMM9Fw2eT"
@@ -309,7 +301,7 @@
 "source": [
 "def loss_function_1D(dist_prop, data, model, phi_start, search_direction):\n",
 " # Return the loss after moving this far\n",
-" return compute_loss(data[0,:], data[1,:], model, phi_start+ search_direction * dist_prop)\n",
+" return compute_loss(data[0,:], data[1,:], model, phi_start - search_direction * dist_prop)\n",
 "\n",
 "def line_search(data, model, phi, gradient, thresh=.00001, max_dist = 0.1, max_iter = 15, verbose=False):\n",
 " # Initialize four points along the range we are going to search\n",
@@ -324,20 +316,20 @@
 " # Increment iteration counter (just to prevent an infinite loop)\n",
 " n_iter = n_iter+1\n",
 " # Calculate all four points\n",
-" lossa = loss_function_1D(a, data, model, phi,gradient)\n",
-" lossb = loss_function_1D(b, data, model, phi,gradient)\n",
-" lossc = loss_function_1D(c, data, model, phi,gradient)\n",
-" lossd = loss_function_1D(d, data, model, phi,gradient)\n",
+" lossa = loss_function_1D(a, data, model, phi, gradient)\n",
+" lossb = loss_function_1D(b, data, model, phi, gradient)\n",
+" lossc = loss_function_1D(c, data, model, phi, gradient)\n",
+" lossd = loss_function_1D(d, data, model, phi, gradient)\n",
 "\n",
 " if verbose:\n",
 " print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
 " print('a %f, b%f, c%f, d%f'%(lossa,lossb,lossc,lossd))\n",
 "\n",
-" # Rule #1 If point A is less than points B, C, and D then halve points B,C, and D\n",
+" # Rule #1 If point A is less than points B, C, and D then halve the distance from A to points B, C, and D\n",
 " if np.argmin((lossa,lossb,lossc,lossd))==0:\n",
-" b = b/2\n",
-" c = c/2\n",
-" d = d/2\n",
+" b = a + (b-a)/2\n",
+" c = a + (c-a)/2\n",
+" d = a + (d-a)/2\n",
 " continue;\n",
 "\n",
 " # Rule #2 If point b is less than point c then\n",
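The fix above matters because `b = b/2` only halves the distance to `a` when `a` happens to be 0; `b = a + (b-a)/2` works for any bracket. A self-contained sketch of the same bracketing idea on an assumed 1D unimodal loss (this is a simplification, not the notebook's exact `line_search`):

```python
def line_search_1d(loss, a=0.0, d=1.0, tol=1e-5, max_iter=50):
    # Bracketing search over four ordered points a < b < c < d.
    b, c = a + (d - a) / 3, a + 2 * (d - a) / 3
    for _ in range(max_iter):
        if d - a < tol:
            break
        la, lb, lc, ld = loss(a), loss(b), loss(c), loss(d)
        if la == min(la, lb, lc, ld):
            # Rule #1: minimum hugs a -- pull b, c, d halfway toward a
            # (a + (x - a)/2, NOT x/2, so it works for any bracket start)
            b, c, d = a + (b - a) / 2, a + (c - a) / 2, a + (d - a) / 2
        elif lb < lc:
            # Minimum lies in [a, c]: discard d and re-space b, c
            d = c
            b, c = a + (d - a) / 3, a + 2 * (d - a) / 3
        else:
            # Minimum lies in [b, d]: discard a and re-space b, c
            a = b
            b, c = a + (d - a) / 3, a + 2 * (d - a) / 3
    return (a + d) / 2

print(line_search_1d(lambda x: (x - 0.3) ** 2))  # close to 0.3
```

Each iteration shrinks the bracket by a constant factor, so the cost is logarithmic in the desired precision.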
@@ -373,7 +365,7 @@
 "def gradient_descent_step(phi, data, model):\n",
 " # TODO -- update Phi with the gradient descent step (equation 6.3)\n",
 " # 1. Compute the gradient (you wrote this function above)\n",
-" # 2. Find the best step size alpha using line search function (above) -- use negative gradient as going downhill\n",
+" # 2. Find the best step size alpha using line search function (above)\n",
 " # 3. Update the parameters phi based on the gradient and the step size alpha.\n",
 "\n",
 " return phi"
@@ -412,8 +404,8 @@
 ],
 "metadata": {
 "colab": {
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -425,4 +417,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap06/6_3_Stochastic_Gradient_Descent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "el8l05WQEO46"
@@ -22,7 +20,7 @@
 "\n",
 "This notebook investigates gradient descent and stochastic gradient descent and recreates figure 6.5 from the book\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n",
@@ -122,7 +120,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "QU5mdGvpTtEG"
@@ -150,7 +147,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "eB5DQvU5hYNx"
@@ -172,7 +168,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "F3trnavPiHpH"
@@ -228,7 +223,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "s9Duf05WqqSC"
@@ -279,7 +273,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "RS1nEcYVuEAM"
@@ -316,7 +309,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "5EIjMM9Fw2eT"
@@ -359,11 +351,11 @@
 " print('Iter %d, a=%3.3f, b=%3.3f, c=%3.3f, d=%3.3f'%(n_iter, a,b,c,d))\n",
 " print('a %f, b%f, c%f, d%f'%(lossa,lossb,lossc,lossd))\n",
 "\n",
-" # Rule #1 If point A is less than points B, C, and D then halve points B,C, and D\n",
+" # Rule #1 If point A is less than points B, C, and D then change B, C, D so they are half their current distance from A\n",
 " if np.argmin((lossa,lossb,lossc,lossd))==0:\n",
-" b = b/2\n",
-" c = c/2\n",
-" d = d/2\n",
+" b = a + (b-a)/2\n",
+" c = a + (c-a)/2\n",
+" d = a + (d-a)/2\n",
 " continue;\n",
 "\n",
 " # Rule #2 If point b is less than point c then\n",
@@ -577,9 +569,8 @@
 ],
 "metadata": {
 "colab": {
-"authorship_tag": "ABX9TyNk5FN4qlw3pk8BwDVWw1jN",
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -591,4 +582,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
@@ -32,7 +32,7 @@
 "\n",
 "This notebook investigates the use of momentum as illustrated in figure 6.7 from the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n",
@@ -32,7 +32,7 @@
 "\n",
 "This notebook investigates the Adam algorithm as illustrated in figure 6.9 from the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -108,8 +108,8 @@
 " ax.contour(phi0mesh, phi1mesh, loss_function, 20, colors=['#80808080'])\n",
 " ax.plot(opt_path[0,:], opt_path[1,:],'-', color='#a0d9d3ff')\n",
 " ax.plot(opt_path[0,:], opt_path[1,:],'.', color='#a0d9d3ff',markersize=10)\n",
-" ax.set_xlabel(\"$\\phi_{0}$\")\n",
-" ax.set_ylabel(\"$\\phi_{1}$\")\n",
+" ax.set_xlabel(r\"$\\phi_{0}$\")\n",
+" ax.set_ylabel(r\"$\\phi_{1}$\")\n",
 " plt.show()"
 ],
 "metadata": {
@@ -185,11 +185,11 @@
 " for c_step in range(n_steps):\n",
 " # Measure the gradient as in equation 6.13 (first line)\n",
 " m = get_loss_gradient(grad_path[0,c_step], grad_path[1,c_step]);\n",
-" # TO DO -- compute the squared gradient as in equation 6.13 (second line)\n",
+" # TODO -- compute the squared gradient as in equation 6.13 (second line)\n",
 " # Replace this line:\n",
 " v = np.ones_like(grad_path[:,0])\n",
 "\n",
-" # TO DO -- apply the update rule (equation 6.14)\n",
+" # TODO -- apply the update rule (equation 6.14)\n",
 " # Replace this line:\n",
 " grad_path[:,c_step+1] = grad_path[:,c_step]\n",
 "\n",
@@ -221,7 +221,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"This moves towards the minimum at a sensible speed, but we never actually converge -- the solution just bounces back and forth between the last two points. To make it converge, we add momentum to both the estimates of the gradient and the pointwise squared gradient. We also modify the statistics by a factor that depends on the time to make sure the progress is now slow to start with."
+"This moves towards the minimum at a sensible speed, but we never actually converge -- the solution just bounces back and forth between the last two points. To make it converge, we add momentum to both the estimates of the gradient and the pointwise squared gradient. We also modify the statistics by a factor that depends on the time to make sure the progress is not slow to start with."
 ],
 "metadata": {
 "id": "_6KoKBJdGGI4"
@@ -254,7 +254,7 @@
|
|||||||
" v_tilde = v\n",
|
" v_tilde = v\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # TO DO -- apply the update rule (equation 6.17)\n",
|
" # TODO -- apply the update rule (equation 6.17)\n",
|
||||||
" # Replace this line:\n",
|
" # Replace this line:\n",
|
||||||
" grad_path[:,c_step+1] = grad_path[:,c_step]\n",
|
" grad_path[:,c_step+1] = grad_path[:,c_step]\n",
|
||||||
"\n",
|
"\n",
|
||||||
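The full update the TODO above asks for (equation 6.17: momentum on both statistics plus the time-dependent correction) might look like the following sketch. The function name and hyperparameter values here are illustrative assumptions, not the notebook's official solution:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=0.05, beta=0.9, gamma=0.999, epsilon=1e-8):
    # Momentum on the gradient and on the pointwise squared gradient
    m = beta * m + (1 - beta) * grad
    v = gamma * v + (1 - gamma) * grad ** 2
    # Correct the statistics so early steps are not too small
    m_tilde = m / (1 - beta ** (t + 1))
    v_tilde = v / (1 - gamma ** (t + 1))
    # Apply the update (eq 6.17)
    theta = theta - alpha * m_tilde / (np.sqrt(v_tilde) + epsilon)
    return theta, m, v
```

At `t = 0` the correction factors exactly undo the `(1 - beta)` and `(1 - gamma)` scaling, so the very first step is already roughly `alpha` in size rather than vanishingly small.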
@@ -284,4 +284,4 @@
 "outputs": []
 }
 ]
 }

@@ -22,7 +22,7 @@
 "\n",
 "This notebook computes the derivatives of the toy function discussed in section 7.3 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -131,7 +131,7 @@
 "source": [
 "beta0 = 1.0; beta1 = 2.0; beta2 = -3.0; beta3 = 0.4\n",
 "omega0 = 0.1; omega1 = -0.4; omega2 = 2.0; omega3 = 3.0\n",
-"x = 2.3; y =2.0\n",
+"x = 2.3; y = 2.0\n",
 "l_i_func = loss(x,y,beta0,beta1,beta2,beta3,omega0,omega1,omega2,omega3)\n",
 "print('l_i=%3.3f'%l_i_func)"
 ]
@@ -279,7 +279,7 @@
 "f2: true value = 7.137, your value = 0.000\n",
 "h3: true value = 0.657, your value = 0.000\n",
 "f3: true value = 2.372, your value = 0.000\n",
-"like original = 0.139, like from forward pass = 0.000\n"
+"l_i original = 0.139, l_i from forward pass = 0.000\n"
 ]
 }
 ],
@@ -292,7 +292,7 @@
 "print(\"f2: true value = %3.3f, your value = %3.3f\"%(7.137, f2))\n",
 "print(\"h3: true value = %3.3f, your value = %3.3f\"%(0.657, h3))\n",
 "print(\"f3: true value = %3.3f, your value = %3.3f\"%(2.372, f3))\n",
-"print(\"like original = %3.3f, like from forward pass = %3.3f\"%(l_i_func, l_i))\n"
+"print(\"l_i original = %3.3f, l_i from forward pass = %3.3f\"%(l_i_func, l_i))\n"
 ]
 },
 {

@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyM2kkHLr00J4Jeypw41sTkQ",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook runs the backpropagation algorithm on a deep neural network as described in section 7.4 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -68,7 +67,7 @@
 "# Set seed so we always get the same random numbers\n",
 "np.random.seed(0)\n",
 "\n",
-"# Number of layers\n",
+"# Number of hidden layers\n",
 "K = 5\n",
 "# Number of neurons per layer\n",
 "D = 6\n",
@@ -115,9 +114,9 @@
 {
 "cell_type": "markdown",
 "source": [
-"Now let's run our random network. The weight matrices $\\boldsymbol\\Omega_{1\\ldots K}$ are the entries of the list \"all_weights\" and the biases $\\boldsymbol\\beta_{1\\ldots k}$ are the entries of the list \"all_biases\"\n",
+"Now let's run our random network. The weight matrices $\\boldsymbol\\Omega_{0\\ldots K}$ are the entries of the list \"all_weights\" and the biases $\\boldsymbol\\beta_{0\\ldots K}$ are the entries of the list \"all_biases\"\n",
 "\n",
-"We know that we will need the activations $\\mathbf{f}_{0\\ldots K}$ and the activations $\\mathbf{h}_{1\\ldots K}$ for the forward pass of backpropagation, so we'll store and return these as well.\n"
+"We know that we will need the preactivations $\\mathbf{f}_{0\\ldots K}$ and the activations $\\mathbf{h}_{1\\ldots K}$ for the forward pass of backpropagation, so we'll store and return these as well.\n"
 ],
 "metadata": {
 "id": "5irtyxnLJSGX"
@@ -132,7 +131,7 @@
 " K = len(all_weights) -1\n",
 "\n",
 " # We'll store the pre-activations at each layer in a list \"all_f\"\n",
-" # and the activations in a second list[all_h].\n",
+" # and the activations in a second list \"all_h\".\n",
 " all_f = [None] * (K+1)\n",
 " all_h = [None] * (K+1)\n",
 "\n",
@@ -142,14 +141,14 @@
 "\n",
 " # Run through the layers, calculating all_f[0...K-1] and all_h[1...K]\n",
 " for layer in range(K):\n",
-" # Update preactivations and activations at this layer according to eqn 7.16\n",
-" # Remmember to use np.matmul for matrrix multiplications\n",
+" # Update preactivations and activations at this layer according to eqn 7.17\n",
+" # Remember to use np.matmul for matrix multiplications\n",
 " # TODO -- Replace the lines below\n",
 " all_f[layer] = all_h[layer]\n",
 " all_h[layer+1] = all_f[layer]\n",
 "\n",
 " # Compute the output from the last hidden layer\n",
-" # TO DO -- Replace the line below\n",
+" # TODO -- Replace the line below\n",
 " all_f[K] = np.zeros_like(all_biases[-1])\n",
 "\n",
 " # Retrieve the output\n",
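A plausible completion of the forward pass above (equation 7.17, assuming ReLU activations as in the chapter) is sketched here as a self-contained function; it follows the notebook's storage conventions but is not the official solution:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward_pass(net_input, all_weights, all_biases):
    # Number of hidden layers
    K = len(all_weights) - 1
    # Store preactivations f_0...K and activations h_1...K (h_0 is the input)
    all_f = [None] * (K + 1)
    all_h = [None] * (K + 1)
    all_h[0] = net_input
    for layer in range(K):
        # Preactivation then activation at this layer (eq 7.17)
        all_f[layer] = all_biases[layer] + np.matmul(all_weights[layer], all_h[layer])
        all_h[layer + 1] = relu(all_f[layer])
    # Output from the last hidden layer (no ReLU on the output)
    all_f[K] = all_biases[K] + np.matmul(all_weights[K], all_h[K])
    return all_f[K], all_f, all_h
```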
@@ -166,7 +165,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Define in input\n",
+"# Define input\n",
 "net_input = np.ones((D_i,1)) * 1.2\n",
 "# Compute network output\n",
 "net_output, all_f, all_h = compute_network_output(net_input,all_weights, all_biases)\n",
@@ -230,8 +229,8 @@
 "# We'll need the indicator function\n",
 "def indicator_function(x):\n",
 " x_in = np.array(x)\n",
-" x_in[x_in>=0] = 1\n",
-" x_in[x_in<0] = 0\n",
+" x_in[x_in>0] = 1\n",
+" x_in[x_in<=0] = 0\n",
 " return x_in\n",
 "\n",
 "# Main backward pass routine\n",
@@ -249,23 +248,23 @@
 "\n",
 " # Now work backwards through the network\n",
 " for layer in range(K,-1,-1):\n",
-" # TODO Calculate the derivatives of the loss with respect to the biases at layer this from all_dl_df[layer]. (eq 7.21)\n",
+" # TODO Calculate the derivatives of the loss with respect to the biases at layer from all_dl_df[layer]. (eq 7.22)\n",
 " # NOTE! To take a copy of matrix X, use Z=np.array(X)\n",
 " # REPLACE THIS LINE\n",
 " all_dl_dbiases[layer] = np.zeros_like(all_biases[layer])\n",
 "\n",
-" # TODO Calculate the derivatives of the loss with respect to the weights at layer from all_dl_df[layer] and all_h[layer] (eq 7.22)\n",
+" # TODO Calculate the derivatives of the loss with respect to the weights at layer from all_dl_df[layer] and all_h[layer] (eq 7.23)\n",
 " # Don't forget to use np.matmul\n",
 " # REPLACE THIS LINE\n",
 " all_dl_dweights[layer] = np.zeros_like(all_weights[layer])\n",
 "\n",
-" # TODO: calculate the derivatives of the loss with respect to the activations from weight and derivatives of next preactivations (second part of last line of eq 7.24)\n",
+" # TODO: calculate the derivatives of the loss with respect to the activations from weight and derivatives of next preactivations (second part of last line of eq 7.25)\n",
 " # REPLACE THIS LINE\n",
 " all_dl_dh[layer] = np.zeros_like(all_h[layer])\n",
 "\n",
 "\n",
 " if layer > 0:\n",
-" # TODO Calculate the derivatives of the loss with respect to the pre-activation f (use deriv of ReLu function, first part of last line of eq. 7.24)\n",
+" # TODO Calculate the derivatives of the loss with respect to the pre-activation f (use derivative of ReLu function, first part of last line of eq. 7.25)\n",
 " # REPLACE THIS LINE\n",
 " all_dl_df[layer-1] = np.zeros_like(all_f[layer-1])\n",
 "\n",
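Filling in the four TODOs above gives something like the sketch below (equations 7.22–7.25). The least-squares loss l = (f_K − y)² is an assumption made here so the example is self-contained; the notebook's actual loss and surrounding code may differ:

```python
import numpy as np

def indicator(x):
    # Derivative of ReLU: 1 where the preactivation was positive, else 0
    return (np.array(x) > 0).astype(float)

def backward_pass(all_weights, all_biases, all_f, all_h, y):
    K = len(all_weights) - 1
    all_dl_dweights = [None] * (K + 1)
    all_dl_dbiases = [None] * (K + 1)
    all_dl_df = [None] * (K + 1)
    all_dl_dh = [None] * (K + 1)
    # Derivative of the (assumed) least-squares loss w.r.t. the output
    all_dl_df[K] = 2 * (all_f[K] - y)
    # Work backwards through the network
    for layer in range(K, -1, -1):
        # Bias derivatives are a copy of dl/df at this layer (eq 7.22)
        all_dl_dbiases[layer] = np.array(all_dl_df[layer])
        # Weight derivatives from dl/df and the incoming activations (eq 7.23)
        all_dl_dweights[layer] = np.matmul(all_dl_df[layer], all_h[layer].T)
        # Backpropagate to activations (second part of last line of eq 7.25)
        all_dl_dh[layer] = np.matmul(all_weights[layer].T, all_dl_df[layer])
        if layer > 0:
            # Through the ReLU (first part of last line of eq 7.25)
            all_dl_df[layer - 1] = indicator(all_f[layer - 1]) * all_dl_dh[layer]
    return all_dl_dweights, all_dl_dbiases
```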
@@ -300,7 +299,7 @@
 "delta_fd = 0.000001\n",
 "\n",
 "# Test the dervatives of the bias vectors\n",
-"for layer in range(K):\n",
+"for layer in range(K+1):\n",
 " dl_dbias = np.zeros_like(all_dl_dbiases[layer])\n",
 " # For every element in the bias\n",
 " for row in range(all_biases[layer].shape[0]):\n",
@@ -324,7 +323,7 @@
 "\n",
 "\n",
 "# Test the derivatives of the weights matrices\n",
-"for layer in range(K):\n",
+"for layer in range(K+1):\n",
 " dl_dweight = np.zeros_like(all_dl_dweights[layer])\n",
 " # For every element in the bias\n",
 " for row in range(all_weights[layer].shape[0]):\n",
@@ -353,4 +352,4 @@
 "outputs": []
 }
 ]
 }

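The finite-difference test in the hunks above perturbs each parameter by `delta_fd` and compares the change in loss against the analytic derivative. In isolation, the idea can be sketched as a generic checker (names here are illustrative):

```python
import numpy as np

def finite_difference_check(loss_fn, params, analytic_grad, delta=1e-6):
    # Forward finite differences, one parameter at a time
    grad_fd = np.zeros_like(params)
    for idx in np.ndindex(*params.shape):
        bumped = np.array(params)
        bumped[idx] += delta
        grad_fd[idx] = (loss_fn(bumped) - loss_fn(params)) / delta
    # Return the worst absolute disagreement with the analytic gradient
    return np.max(np.abs(grad_fd - analytic_grad))
```

If the backward pass is correct, the returned discrepancy should be on the order of `delta` (the truncation error of forward differences), not order one.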
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyNHLXFpiSnUzAbzhtOk+bxu",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook explores weight initialization in deep neural networks as described in section 7.5 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -117,10 +116,10 @@
 "def compute_network_output(net_input, all_weights, all_biases):\n",
 "\n",
 " # Retrieve number of layers\n",
-" K = len(all_weights) -1\n",
+" K = len(all_weights)-1\n",
 "\n",
 " # We'll store the pre-activations at each layer in a list \"all_f\"\n",
-" # and the activations in a second list[all_h].\n",
+" # and the activations in a second list \"all_h\".\n",
 " all_f = [None] * (K+1)\n",
 " all_h = [None] * (K+1)\n",
 "\n",
@@ -151,7 +150,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"Now let's investigate how this the size of the outputs vary as we change the initialization variance:\n"
+"Now let's investigate how the size of the outputs vary as we change the initialization variance:\n"
 ],
 "metadata": {
 "id": "bIUrcXnOqChl"
@@ -164,7 +163,7 @@
 "K = 5\n",
 "# Number of neurons per layer\n",
 "D = 8\n",
-" # Input layer\n",
+"# Input layer\n",
 "D_i = 1\n",
 "# Output layer\n",
 "D_o = 1\n",
@@ -177,7 +176,7 @@
 "data_in = np.random.normal(size=(1,n_data))\n",
 "net_output, all_f, all_h = compute_network_output(data_in, all_weights, all_biases)\n",
 "\n",
-"for layer in range(K):\n",
+"for layer in range(1,K+1):\n",
 " print(\"Layer %d, std of hidden units = %3.3f\"%(layer, np.std(all_h[layer])))"
 ],
 "metadata": {
@@ -192,11 +191,11 @@
 "# You can see that the values of the hidden units are increasing on average (the variance is across all hidden units at the layer\n",
 "# and the 1000 training examples\n",
 "\n",
-"# TO DO\n",
+"# TODO\n",
 "# Change this to 50 layers with 80 hidden units per layer\n",
 "\n",
-"# TO DO\n",
-"# Now experiment with sigma_sq_omega to try to stop the variance of the forward computation explode"
+"# TODO\n",
+"# Now experiment with sigma_sq_omega to try to stop the variance of the forward computation exploding"
 ],
 "metadata": {
 "id": "VL_SO4tar3DC"
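The TODOs above ask you to pick `sigma_sq_omega` so the forward variance neither explodes nor vanishes. For ReLU layers the standard choice is He initialization, sigma_sq_omega = 2/D. A standalone sketch (random data standing in for the notebook's setup; variable names are assumptions):

```python
import numpy as np

np.random.seed(0)
D, K = 80, 50                     # 80 hidden units, 50 layers, as the TODO suggests
sigma_sq_omega = 2.0 / D          # He initialization for ReLU layers
h = np.random.normal(size=(D, 1000))
for layer in range(K):
    Omega = np.random.normal(size=(D, D)) * np.sqrt(sigma_sq_omega)
    h = np.maximum(0, Omega @ h)  # preactivation then ReLU
print(np.std(h))                  # stays O(1) rather than exploding or vanishing
```

The factor 2 compensates for ReLU zeroing half of each zero-mean preactivation, so the expected second moment of the activations is preserved layer to layer.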
@@ -249,6 +248,9 @@
 "\n",
 "# Main backward pass routine\n",
 "def backward_pass(all_weights, all_biases, all_f, all_h, y):\n",
+" # Retrieve number of layers\n",
+" K = len(all_weights) - 1\n",
+"\n",
 " # We'll store the derivatives dl_dweights and dl_dbiases in lists as well\n",
 " all_dl_dweights = [None] * (K+1)\n",
 " all_dl_dbiases = [None] * (K+1)\n",
@@ -297,7 +299,7 @@
 "K = 5\n",
 "# Number of neurons per layer\n",
 "D = 8\n",
-" # Input layer\n",
+"# Input layer\n",
 "D_i = 1\n",
 "# Output layer\n",
 "D_o = 1\n",
@@ -323,7 +325,7 @@
 " for layer in range(1,K):\n",
 " aggregate_dl_df[layer][:,c_data] = np.squeeze(all_dl_df[layer])\n",
 "\n",
-"for layer in range(1,K):\n",
+"for layer in reversed(range(1,K)):\n",
 " print(\"Layer %d, std of dl_dh = %3.3f\"%(layer, np.std(aggregate_dl_df[layer].ravel())))\n"
 ],
 "metadata": {
@@ -335,13 +337,13 @@
 {
 "cell_type": "code",
 "source": [
-"# You can see that the values of the hidden units are increasing on average (the variance is across all hidden units at the layer\n",
-"# and the 1000 training examples\n",
+"# You can see that the gradients of the hidden units are increasing on average (the standard deviation is across all hidden units at the layer\n",
+"# and the 100 training examples\n",
 "\n",
-"# TO DO\n",
+"# TODO\n",
 "# Change this to 50 layers with 80 hidden units per layer\n",
 "\n",
-"# TO DO\n",
+"# TODO\n",
 "# Now experiment with sigma_sq_omega to try to stop the variance of the gradients exploding\n"
 ],
 "metadata": {

@@ -1,28 +1,10 @@
 {
-"nbformat": 4,
-"nbformat_minor": 0,
-"metadata": {
-"colab": {
-"provenance": [],
-"gpuType": "T4",
-"authorship_tag": "ABX9TyOuKMUcKfOIhIL2qTX9jJCy",
-"include_colab_link": true
-},
-"kernelspec": {
-"name": "python3",
-"display_name": "Python 3"
-},
-"language_info": {
-"name": "python"
-},
-"accelerator": "GPU"
-},
 "cells": [
 {
 "cell_type": "markdown",
 "metadata": {
-"id": "view-in-github",
-"colab_type": "text"
+"colab_type": "text",
+"id": "view-in-github"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap08/8_1_MNIST_1D_Performance.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -30,33 +12,38 @@
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "L6chybAVFJW2"
+},
 "source": [
 "# **Notebook 8.1: MNIST_1D_Performance**\n",
 "\n",
 "This notebook runs a simple neural network on the MNIST1D dataset as in figure 8.2a. It uses code from https://github.com/greydanus/mnist1d to generate the data.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
-],
-"metadata": {
-"id": "L6chybAVFJW2"
-}
+]
 },
 {
 "cell_type": "code",
-"source": [
-"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
-"!git clone https://github.com/greydanus/mnist1d"
-],
+"execution_count": null,
 "metadata": {
 "id": "ifVjS4cTOqKz"
 },
-"execution_count": null,
-"outputs": []
+"outputs": [],
+"source": [
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"%pip install git+https://github.com/greydanus/mnist1d"
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "qyE7G1StPIqO"
+},
+"outputs": [],
 "source": [
 "import torch, torch.nn as nn\n",
 "from torch.utils.data import TensorDataset, DataLoader\n",
@@ -64,47 +51,47 @@
 "import numpy as np\n",
 "import matplotlib.pyplot as plt\n",
 "import mnist1d"
-],
-"metadata": {
-"id": "qyE7G1StPIqO"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
-"source": [
-"Let's generate a training and test dataset using the MNIST1D code. The dataset gets saved as a .pkl file so it doesn't have to be regenerated each time."
-],
 "metadata": {
 "id": "F7LNq72SP6jO"
-}
+},
+"source": [
+"Let's generate a training and test dataset using the MNIST1D code. The dataset gets saved as a .pkl file so it doesn't have to be regenerated each time."
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "YLxf7dJfPaqw"
+},
+"outputs": [],
 "source": [
 "args = mnist1d.data.get_dataset_args()\n",
-"data = mnist1d.data.get_dataset(args, path='./sample_data/mnist1d_data.pkl', download=False, regenerate=False)\n",
+"data = mnist1d.data.get_dataset(args, path='./mnist1d_data.pkl', download=False, regenerate=False)\n",
 "\n",
 "# The training and test input and outputs are in\n",
 "# data['x'], data['y'], data['x_test'], and data['y_test']\n",
 "print(\"Examples in training set: {}\".format(len(data['y'])))\n",
 "print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
 "print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
-],
-"metadata": {
-"id": "YLxf7dJfPaqw"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "FxaB5vc0uevl"
+},
+"outputs": [],
 "source": [
 "D_i = 40 # Input dimensions\n",
 "D_k = 100 # Hidden dimensions\n",
 "D_o = 10 # Output dimensions\n",
-"# TO DO:\n",
+"# TODO:\n",
 "# Define a model with two hidden layers of size 100\n",
 "# And ReLU activations between them\n",
 "# Replace this line (see Figure 7.8 of book for help):\n",
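One plausible completion of the model TODO above (two hidden layers of width `D_k` with ReLU activations between them, cf. figure 7.8 of the book); this is a sketch, not the notebook's official solution:

```python
import torch
import torch.nn as nn

D_i, D_k, D_o = 40, 100, 10   # dimensions from the cell above

# Two hidden layers of size D_k with ReLU between them
model = nn.Sequential(
    nn.Linear(D_i, D_k),
    nn.ReLU(),
    nn.Linear(D_k, D_k),
    nn.ReLU(),
    nn.Linear(D_k, D_o))
```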
@@ -112,7 +99,7 @@
 "\n",
 "\n",
 "def weights_init(layer_in):\n",
-" # TO DO:\n",
+" # TODO:\n",
 " # Initialize the parameters with He initialization\n",
 " # Replace this line (see figure 7.8 of book for help)\n",
 " print(\"Initializing layer\")\n",
@@ -120,15 +107,15 @@
 "\n",
 "# Call the function you just defined\n",
 "model.apply(weights_init)\n"
-],
-"metadata": {
-"id": "FxaB5vc0uevl"
-},
-"execution_count": null,
-"outputs": []
+]
 },
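A sketch of the He-initialization TODO above using PyTorch's built-in Kaiming initializer; this `weights_init` is an assumption about the intended solution, not the official one:

```python
import torch.nn as nn

def weights_init(layer_in):
    # Apply He (Kaiming) initialization to each Linear layer; zero the biases
    if isinstance(layer_in, nn.Linear):
        nn.init.kaiming_uniform_(layer_in.weight)
        layer_in.bias.data.fill_(0.0)
```

`model.apply(weights_init)` then walks the module tree and calls this function on every submodule, which is why the body guards on `isinstance(layer_in, nn.Linear)`.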
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "_rX6N3VyyQTY"
+},
+"outputs": [],
 "source": [
 "# choose cross entropy loss function (equation 5.24)\n",
 "loss_function = torch.nn.CrossEntropyLoss()\n",
@@ -136,11 +123,10 @@
 "optimizer = torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9)\n",
 "# object that decreases learning rate by half every 10 epochs\n",
 "scheduler = StepLR(optimizer, step_size=10, gamma=0.5)\n",
-"# create 100 dummy data points and store in data loader class\n",
 "x_train = torch.tensor(data['x'].astype('float32'))\n",
-"y_train = torch.tensor(data['y'].transpose().astype('long'))\n",
+"y_train = torch.tensor(data['y'].transpose().astype('int64'))\n",
 "x_test= torch.tensor(data['x_test'].astype('float32'))\n",
-"y_test = torch.tensor(data['y_test'].astype('long'))\n",
+"y_test = torch.tensor(data['y_test'].astype('int64'))\n",
 "\n",
 "# load the data into a class that creates the batches\n",
 "data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
@@ -185,15 +171,15 @@
 "\n",
 " # tell scheduler to consider updating learning rate\n",
 " scheduler.step()"
-],
-"metadata": {
-"id": "_rX6N3VyyQTY"
-},
-"execution_count": null,
-"outputs": []
+]
 },
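Putting the pieces of the cell above together, a minimal end-to-end training loop looks roughly like the following; the random tensors stand in for the MNIST-1D data loaded earlier, and the epoch count is trimmed for illustration:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
from torch.optim.lr_scheduler import StepLR

# Model, loss, optimizer, and scheduler as in the notebook's cells
model = nn.Sequential(nn.Linear(40, 100), nn.ReLU(), nn.Linear(100, 10))
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)

# Random stand-in data (100 examples of length 40, 10 classes)
x_train = torch.randn(100, 40)
y_train = torch.randint(0, 10, (100,))
data_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=25, shuffle=True)

for epoch in range(2):
    for x_batch, y_batch in data_loader:
        optimizer.zero_grad()                             # reset gradients
        loss = loss_function(model(x_batch), y_batch)     # forward pass
        loss.backward()                                   # backward pass
        optimizer.step()                                  # parameter update
    scheduler.step()                                      # consider decaying the learning rate
```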
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "yI-l6kA_EH9G"
+},
+"outputs": [],
 "source": [
 "# Plot the results\n",
 "fig, ax = plt.subplots()\n",
@@ -214,25 +200,38 @@
 "ax.set_title('Train loss %3.2f, Test loss %3.2f'%(losses_train[-1],losses_test[-1]))\n",
 "ax.legend()\n",
 "plt.show()"
-],
-"metadata": {
-"id": "yI-l6kA_EH9G"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "q-yT6re6GZS4"
+},
 "source": [
-"**TO DO**\n",
+"**TODO**\n",
 "\n",
 "Play with the model -- try changing the number of layers, hidden units, learning rate, batch size, momentum or anything else you like. See if you can improve the test results.\n",
 "\n",
 "Is it a good idea to optimize the hyperparameters in this way? Will the final result be a good estimate of the true test performance?"
-],
-"metadata": {
-"id": "q-yT6re6GZS4"
-}
+]
 }
-]
-}
+],
+"metadata": {
+"accelerator": "GPU",
+"colab": {
+"authorship_tag": "ABX9TyOuKMUcKfOIhIL2qTX9jJCy",
+"gpuType": "T4",
+"include_colab_link": true,
+"provenance": []
+},
+"kernelspec": {
+"display_name": "Python 3",
+"name": "python3"
+},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0
|
||||||
|
}
|
||||||
|
|||||||
@@ -32,7 +32,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates the bias-variance trade-off for the toy model used throughout chapter 8 and reproduces the bias/variance trade off curves seen in figure 8.9.\n",
|
"This notebook investigates the bias-variance trade-off for the toy model used throughout chapter 8 and reproduces the bias/variance trade-off curves seen in figure 8.9.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
],
|
],
|
||||||
@@ -92,7 +92,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Draw the fitted function, together win uncertainty used to generate points\n",
|
"# Draw the fitted function, together with uncertainty used to generate points\n",
|
||||||
"def plot_function(x_func, y_func, x_data=None,y_data=None, x_model = None, y_model =None, sigma_func = None, sigma_model=None):\n",
|
"def plot_function(x_func, y_func, x_data=None,y_data=None, x_model = None, y_model =None, sigma_func = None, sigma_model=None):\n",
|
||||||
"\n",
|
"\n",
|
||||||
" fig,ax = plt.subplots()\n",
|
" fig,ax = plt.subplots()\n",
|
||||||
@@ -203,7 +203,7 @@
|
|||||||
"# Closed form solution\n",
|
"# Closed form solution\n",
|
||||||
"beta, omega = fit_model_closed_form(x_data,y_data,n_hidden=3)\n",
|
"beta, omega = fit_model_closed_form(x_data,y_data,n_hidden=3)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Get prediction for model across graph grange\n",
|
"# Get prediction for model across graph range\n",
|
||||||
"x_model = np.linspace(0,1,100);\n",
|
"x_model = np.linspace(0,1,100);\n",
|
||||||
"y_model = network(x_model, beta, omega)\n",
|
"y_model = network(x_model, beta, omega)\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -293,7 +293,8 @@
|
|||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Plot the noise, bias and variance as a function of capacity\n",
|
"# Plot the noise, bias and variance as a function of capacity\n",
|
||||||
"hidden_variables = [1,2,3,4,5,6,7,8,9,10,11,12]\n",
|
"n_hidden = 12\n",
|
||||||
|
"hidden_variables = list(range(1, n_hidden + 1))\n",
|
||||||
"bias = np.zeros((len(hidden_variables),1)) ;\n",
|
"bias = np.zeros((len(hidden_variables),1)) ;\n",
|
||||||
"variance = np.zeros((len(hidden_variables),1)) ;\n",
|
"variance = np.zeros((len(hidden_variables),1)) ;\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -302,7 +303,7 @@
|
|||||||
"sigma_func = 0.3\n",
|
"sigma_func = 0.3\n",
|
||||||
"n_hidden = 5\n",
|
"n_hidden = 5\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# Set random seed so that get same result every time\n",
|
"# Set random seed so that we get the same result every time\n",
|
||||||
"np.random.seed(1)\n",
|
"np.random.seed(1)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"for c_hidden in range(len(hidden_variables)):\n",
|
"for c_hidden in range(len(hidden_variables)):\n",
|
||||||
@@ -321,7 +322,7 @@
|
|||||||
"ax.plot(hidden_variables, variance, 'k-')\n",
|
"ax.plot(hidden_variables, variance, 'k-')\n",
|
||||||
"ax.plot(hidden_variables, bias, 'r-')\n",
|
"ax.plot(hidden_variables, bias, 'r-')\n",
|
||||||
"ax.plot(hidden_variables, variance+bias, 'g-')\n",
|
"ax.plot(hidden_variables, variance+bias, 'g-')\n",
|
||||||
"ax.set_xlim(0,12)\n",
|
"ax.set_xlim(0,len(hidden_variables))\n",
|
||||||
"ax.set_ylim(0,0.5)\n",
|
"ax.set_ylim(0,0.5)\n",
|
||||||
"ax.set_xlabel(\"Model capacity\")\n",
|
"ax.set_xlabel(\"Model capacity\")\n",
|
||||||
"ax.set_ylabel(\"Variance\")\n",
|
"ax.set_ylabel(\"Variance\")\n",
|
||||||
@@ -333,15 +334,6 @@
|
|||||||
},
|
},
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
"outputs": []
|
"outputs": []
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"source": [],
|
|
||||||
"metadata": {
|
|
||||||
"id": "WKUyOAywL_b2"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
@@ -5,7 +5,6 @@
|
|||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"gpuType": "T4",
|
"gpuType": "T4",
|
||||||
"authorship_tag": "ABX9TyN/KUpEObCKnHZ/4Onp5sHG",
|
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -37,7 +36,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"It uses the MNIST-1D database which can be found at https://github.com/greydanus/mnist1d\n",
|
"It uses the MNIST-1D database which can be found at https://github.com/greydanus/mnist1d\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
],
|
],
|
||||||
@@ -48,8 +47,8 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
|
"# Run this if you're in a Colab to install the MNIST 1D package\n",
|
||||||
"!git clone https://github.com/greydanus/mnist1d"
|
"!pip install git+https://github.com/greydanus/mnist1d"
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "fn9BP5N5TguP"
|
"id": "fn9BP5N5TguP"
|
||||||
@@ -100,7 +99,7 @@
|
|||||||
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
|
"# data['x'], data['y'], data['x_test'], and data['y_test']\n",
|
||||||
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
|
"print(\"Examples in training set: {}\".format(len(data['y'])))\n",
|
||||||
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
|
"print(\"Examples in test set: {}\".format(len(data['y_test'])))\n",
|
||||||
"print(\"Length of each example: {}\".format(data['x'].shape[-1]))"
|
"print(\"Dimensionality of each example: {}\".format(data['x'].shape[-1]))"
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "PW2gyXL5UkLU"
|
"id": "PW2gyXL5UkLU"
|
||||||
@@ -124,7 +123,7 @@
|
|||||||
" D_k = n_hidden # Hidden dimensions\n",
|
" D_k = n_hidden # Hidden dimensions\n",
|
||||||
" D_o = 10 # Output dimensions\n",
|
" D_o = 10 # Output dimensions\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # Define a model with two hidden layers of size 100\n",
|
" # Define a model with two hidden layers\n",
|
||||||
" # And ReLU activations between them\n",
|
" # And ReLU activations between them\n",
|
||||||
" model = nn.Sequential(\n",
|
" model = nn.Sequential(\n",
|
||||||
" nn.Linear(D_i, D_k),\n",
|
" nn.Linear(D_i, D_k),\n",
|
||||||
@@ -148,7 +147,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"def fit_model(model, data):\n",
|
"def fit_model(model, data, n_epoch):\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # choose cross entropy loss function (equation 5.24)\n",
|
" # choose cross entropy loss function (equation 5.24)\n",
|
||||||
" loss_function = torch.nn.CrossEntropyLoss()\n",
|
" loss_function = torch.nn.CrossEntropyLoss()\n",
|
||||||
@@ -157,7 +156,6 @@
|
|||||||
" optimizer = torch.optim.SGD(model.parameters(), lr = 0.01, momentum=0.9)\n",
|
" optimizer = torch.optim.SGD(model.parameters(), lr = 0.01, momentum=0.9)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # create 100 dummy data points and store in data loader class\n",
|
|
||||||
" x_train = torch.tensor(data['x'].astype('float32'))\n",
|
" x_train = torch.tensor(data['x'].astype('float32'))\n",
|
||||||
" y_train = torch.tensor(data['y'].transpose().astype('long'))\n",
|
" y_train = torch.tensor(data['y'].transpose().astype('int64'))\n",
|
||||||
" x_test= torch.tensor(data['x_test'].astype('float32'))\n",
|
" x_test= torch.tensor(data['x_test'].astype('float32'))\n",
|
||||||
@@ -166,9 +164,6 @@
|
|||||||
" # load the data into a class that creates the batches\n",
|
" # load the data into a class that creates the batches\n",
|
||||||
" data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
|
" data_loader = DataLoader(TensorDataset(x_train,y_train), batch_size=100, shuffle=True, worker_init_fn=np.random.seed(1))\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # loop over the dataset n_epoch times\n",
|
|
||||||
" n_epoch = 1000\n",
|
|
||||||
"\n",
|
|
||||||
" for epoch in range(n_epoch):\n",
|
" for epoch in range(n_epoch):\n",
|
||||||
" # loop over batches\n",
|
" # loop over batches\n",
|
||||||
" for i, batch in enumerate(data_loader):\n",
|
" for i, batch in enumerate(data_loader):\n",
|
||||||
@@ -205,12 +200,24 @@
|
|||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
"outputs": []
|
"outputs": []
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [
|
||||||
|
"def count_parameters(model):\n",
|
||||||
|
" return sum(p.numel() for p in model.parameters() if p.requires_grad)"
|
||||||
|
],
|
||||||
|
"metadata": {
|
||||||
|
"id": "AQNCmFNV6JpV"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
"source": [
|
||||||
"The following code produces the double descent curve by training the model with different numbers of hidden units and plotting the test error.\n",
|
"The following code produces the double descent curve by training the model with different numbers of hidden units and plotting the test error.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"TO DO:\n",
|
"TODO:\n",
|
||||||
"\n",
|
"\n",
|
||||||
"*Before* you run the code, and considering that there are 4000 training examples predict:<br>\n",
|
"*Before* you run the code, and considering that there are 4000 training examples, predict:<br>\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -228,19 +235,27 @@
|
|||||||
"# This code will take a while (~30 mins on GPU) to run! Go and make a cup of coffee!\n",
|
"# This code will take a while (~30 mins on GPU) to run! Go and make a cup of coffee!\n",
|
||||||
"\n",
|
"\n",
|
||||||
"hidden_variables = np.array([2,4,6,8,10,14,18,22,26,30,35,40,45,50,55,60,70,80,90,100,120,140,160,180,200,250,300,400]) ;\n",
|
"hidden_variables = np.array([2,4,6,8,10,14,18,22,26,30,35,40,45,50,55,60,70,80,90,100,120,140,160,180,200,250,300,400]) ;\n",
|
||||||
|
"\n",
|
||||||
"errors_train_all = np.zeros_like(hidden_variables)\n",
|
"errors_train_all = np.zeros_like(hidden_variables)\n",
|
||||||
"errors_test_all = np.zeros_like(hidden_variables)\n",
|
"errors_test_all = np.zeros_like(hidden_variables)\n",
|
||||||
|
"total_weights_all = np.zeros_like(hidden_variables)\n",
|
||||||
|
"\n",
|
||||||
|
"# loop over the dataset n_epoch times\n",
|
||||||
|
"n_epoch = 1000\n",
|
||||||
"\n",
|
"\n",
|
||||||
"# For each hidden variable size\n",
|
"# For each hidden variable size\n",
|
||||||
"for c_hidden in range(len(hidden_variables)):\n",
|
"for c_hidden in range(len(hidden_variables)):\n",
|
||||||
" print(f'Training model with {hidden_variables[c_hidden]:3d} hidden variables')\n",
|
" print(f'Training model with {hidden_variables[c_hidden]:3d} hidden variables')\n",
|
||||||
" # Get a model\n",
|
" # Get a model\n",
|
||||||
" model = get_model(hidden_variables[c_hidden]) ;\n",
|
" model = get_model(hidden_variables[c_hidden]) ;\n",
|
||||||
|
" # Count and store number of weights\n",
|
||||||
|
" total_weights_all[c_hidden] = count_parameters(model)\n",
|
||||||
" # Train the model\n",
|
" # Train the model\n",
|
||||||
" errors_train, errors_test = fit_model(model, data)\n",
|
" errors_train, errors_test = fit_model(model, data, n_epoch)\n",
|
||||||
" # Store the results\n",
|
" # Store the results\n",
|
||||||
" errors_train_all[c_hidden] = errors_train\n",
|
" errors_train_all[c_hidden] = errors_train\n",
|
||||||
" errors_test_all[c_hidden]= errors_test"
|
" errors_test_all[c_hidden] = errors_test\n",
|
||||||
|
"\n"
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "K4OmBZGHWXpk"
|
"id": "K4OmBZGHWXpk"
|
||||||
@@ -251,12 +266,29 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
|
"import matplotlib.pyplot as plt\n",
|
||||||
|
"import numpy as np\n",
|
||||||
|
"\n",
|
||||||
|
"# Assuming data['y'] is available and contains the training examples\n",
|
||||||
|
"num_training_examples = len(data['y'])\n",
|
||||||
|
"\n",
|
||||||
|
"# Find the index where total_weights_all is closest to num_training_examples\n",
|
||||||
|
"closest_index = np.argmin(np.abs(np.array(total_weights_all) - num_training_examples))\n",
|
||||||
|
"\n",
|
||||||
|
"# Get the corresponding value of hidden variables\n",
|
||||||
|
"hidden_variable_at_num_training_examples = hidden_variables[closest_index]\n",
|
||||||
|
"\n",
|
||||||
"# Plot the results\n",
|
"# Plot the results\n",
|
||||||
"fig, ax = plt.subplots()\n",
|
"fig, ax = plt.subplots()\n",
|
||||||
"ax.plot(hidden_variables, errors_train_all,'r-',label='train')\n",
|
"ax.plot(hidden_variables, errors_train_all, 'r-', label='train')\n",
|
||||||
"ax.plot(hidden_variables, errors_test_all,'b-',label='test')\n",
|
"ax.plot(hidden_variables, errors_test_all, 'b-', label='test')\n",
|
||||||
"ax.set_ylim(0,100);\n",
|
"\n",
|
||||||
"ax.set_xlabel('No hidden variables'); ax.set_ylabel('Error')\n",
|
"# Add a vertical line at the point where total weights equal the number of training examples\n",
|
||||||
|
"ax.axvline(x=hidden_variable_at_num_training_examples, color='g', linestyle='--', label='N(weights) = N(train)')\n",
|
||||||
|
"\n",
|
||||||
|
"ax.set_ylim(0, 100)\n",
|
||||||
|
"ax.set_xlabel('No. hidden variables')\n",
|
||||||
|
"ax.set_ylabel('Error')\n",
|
||||||
"ax.legend()\n",
|
"ax.legend()\n",
|
||||||
"plt.show()\n"
|
"plt.show()\n"
|
||||||
],
|
],
|
||||||
@@ -265,6 +297,24 @@
|
|||||||
},
|
},
|
||||||
"execution_count": null,
|
"execution_count": null,
|
||||||
"outputs": []
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [],
|
||||||
|
"metadata": {
|
||||||
|
"id": "KT4X8_hE5NFb"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"cell_type": "code",
|
||||||
|
"source": [],
|
||||||
|
"metadata": {
|
||||||
|
"id": "iGKZSfVF2r4z"
|
||||||
|
},
|
||||||
|
"execution_count": null,
|
||||||
|
"outputs": []
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates the strange properties of high-dimensional spaces as discussed in the notes at the end of chapter 8.\n",
|
"This notebook investigates the strange properties of high-dimensional spaces as discussed in the notes at the end of chapter 8.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
],
|
],
|
||||||
@@ -134,7 +134,7 @@
|
|||||||
"source": [
|
"source": [
|
||||||
"# Volume of a hypersphere\n",
|
"# Volume of a hypersphere\n",
|
||||||
"\n",
|
"\n",
|
||||||
"In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
|
"In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the dimension. Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches."
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "b2FYKV1SL4Z7"
|
"id": "b2FYKV1SL4Z7"
|
||||||
@@ -224,7 +224,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
"source": [
|
||||||
"You should see see that by the time we get to 300 dimensions most of the volume is in the outer 1 percent. <br><br>\n",
|
"You should see that by the time we get to 300 dimensions most of the volume is in the outer 1 percent. <br><br>\n",
|
||||||
"\n",
|
"\n",
|
||||||
"The conclusion of all of this is that in high dimensions you should be sceptical of your intuitions about how things work. I have tried to visualize many things in one or two dimensions in the book, but you should also be sceptical about these visualizations!"
|
"The conclusion of all of this is that in high dimensions you should be sceptical of your intuitions about how things work. I have tried to visualize many things in one or two dimensions in the book, but you should also be sceptical about these visualizations!"
|
||||||
],
|
],
|
||||||
@@ -233,4 +233,4 @@
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -4,7 +4,6 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyPJzymRTuvoWggIskM2Kamc",
|
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -33,7 +32,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates adding L2 regularization to the loss function for the Gabor model as in figure 9.1.\n",
|
"This notebook investigates adding L2 regularization to the loss function for the Gabor model as in figure 9.1.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
],
|
],
|
||||||
@@ -178,7 +177,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"def draw_loss_function(compute_loss, data, model, my_colormap, phi_iters = None):\n",
|
"def draw_loss_function(compute_loss, data, model, my_colormap, phi_iters = None):\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # Make grid of intercept/slope values to plot\n",
|
" # Make grid of offset/frequency values to plot\n",
|
||||||
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
||||||
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
||||||
" # Compute loss for every set of parameters\n",
|
" # Compute loss for every set of parameters\n",
|
||||||
@@ -304,7 +303,7 @@
|
|||||||
"for c_step in range (n_steps):\n",
|
"for c_step in range (n_steps):\n",
|
||||||
" # Do gradient descent step\n",
|
" # Do gradient descent step\n",
|
||||||
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step(phi_all[:,c_step:c_step+1],data, model)\n",
|
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step(phi_all[:,c_step:c_step+1],data, model)\n",
|
||||||
" # Measure loss and draw model every 4th step\n",
|
" # Measure loss and draw model every 8th step\n",
|
||||||
" if c_step % 8 == 0:\n",
|
" if c_step % 8 == 0:\n",
|
||||||
" loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
" loss = compute_loss(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2])\n",
|
||||||
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
@@ -369,7 +368,7 @@
|
|||||||
"# Code to draw the regularization function\n",
|
"# Code to draw the regularization function\n",
|
||||||
"def draw_reg_function():\n",
|
"def draw_reg_function():\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # Make grid of intercept/slope values to plot\n",
|
" # Make grid of offset/frequency values to plot\n",
|
||||||
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
||||||
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
||||||
" # Compute loss for every set of parameters\n",
|
" # Compute loss for every set of parameters\n",
|
||||||
@@ -399,7 +398,7 @@
|
|||||||
"# Code to draw loss function with regularization\n",
|
"# Code to draw loss function with regularization\n",
|
||||||
"def draw_loss_function_reg(data, model, lambda_, my_colormap, phi_iters = None):\n",
|
"def draw_loss_function_reg(data, model, lambda_, my_colormap, phi_iters = None):\n",
|
||||||
"\n",
|
"\n",
|
||||||
" # Make grid of intercept/slope values to plot\n",
|
" # Make grid of offset/frequency values to plot\n",
|
||||||
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
" offsets_mesh, freqs_mesh = np.meshgrid(np.arange(-10,10.0,0.1), np.arange(2.5,22.5,0.1))\n",
|
||||||
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
" loss_mesh = np.zeros_like(freqs_mesh)\n",
|
||||||
" # Compute loss for every set of parameters\n",
|
" # Compute loss for every set of parameters\n",
|
||||||
@@ -458,14 +457,14 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"def dldphi0(phi, lambda_):\n",
|
"def dregdphi0(phi, lambda_):\n",
|
||||||
" # TODO compute the derivative with respect to phi0\n",
|
" # TODO compute the derivative with respect to phi0\n",
|
||||||
" # Replace this line:]\n",
|
" # Replace this line:\n",
|
||||||
" deriv = 0\n",
|
" deriv = 0\n",
|
||||||
"\n",
|
"\n",
|
||||||
" return deriv\n",
|
" return deriv\n",
|
||||||
"\n",
|
"\n",
|
||||||
"def dldphi1(phi, lambda_):\n",
|
"def dregdphi1(phi, lambda_):\n",
|
||||||
" # TODO compute the derivative with respect to phi1\n",
|
" # TODO compute the derivative with respect to phi1\n",
|
||||||
" # Replace this line:]\n",
|
" # Replace this line:\n",
|
||||||
" deriv = 0\n",
|
" deriv = 0\n",
|
||||||
@@ -475,8 +474,8 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"def compute_gradient2(data_x, data_y, phi, lambda_):\n",
|
"def compute_gradient2(data_x, data_y, phi, lambda_):\n",
|
||||||
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])+dldphi0(np.squeeze(phi), lambda_)\n",
|
" dl_dphi0 = gabor_deriv_phi0(data_x, data_y, phi[0],phi[1])+dregdphi0(np.squeeze(phi), lambda_)\n",
|
||||||
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])+dldphi1(np.squeeze(phi), lambda_)\n",
|
" dl_dphi1 = gabor_deriv_phi1(data_x, data_y, phi[0],phi[1])+dregdphi1(np.squeeze(phi), lambda_)\n",
|
||||||
" # Return the gradient\n",
|
" # Return the gradient\n",
|
||||||
" return np.array([[dl_dphi0],[dl_dphi1]])\n",
|
" return np.array([[dl_dphi0],[dl_dphi1]])\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -512,7 +511,7 @@
|
|||||||
"for c_step in range (n_steps):\n",
|
"for c_step in range (n_steps):\n",
|
||||||
" # Do gradient descent step\n",
|
" # Do gradient descent step\n",
|
||||||
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step2(phi_all[:,c_step:c_step+1],lambda_, data, model)\n",
|
" phi_all[:,c_step+1:c_step+2] = gradient_descent_step2(phi_all[:,c_step:c_step+1],lambda_, data, model)\n",
|
||||||
" # Measure loss and draw model every 4th step\n",
|
" # Measure loss and draw model every 8th step\n",
|
||||||
" if c_step % 8 == 0:\n",
|
" if c_step % 8 == 0:\n",
|
||||||
" loss = compute_loss2(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2], lambda_)\n",
|
" loss = compute_loss2(data[0,:], data[1,:], model, phi_all[:,c_step+1:c_step+2], lambda_)\n",
|
||||||
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
" draw_model(data,model,phi_all[:,c_step+1], \"Iteration %d, loss = %f\"%(c_step+1,loss))\n",
|
||||||
@@ -528,11 +527,11 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
"source": [
|
||||||
"You should see that the gradient descent algorithm now finds the correct minimum. By applying a tiny bit of domain knowledge (the parameter phi0 tends to be near zero and the parameters phi1 tends to be near 12.5), we get a better solution. However, the cost is that this solution is slightly biased towards this prior knowledge."
|
"You should see that the gradient descent algorithm now finds the correct minimum. By applying a tiny bit of domain knowledge (the parameter phi0 tends to be near zero and the parameter phi1 tends to be near 12.5), we get a better solution. However, the cost is that this solution is slightly biased towards this prior knowledge."
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "wrszSLrqZG4k"
|
"id": "wrszSLrqZG4k"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
@@ -4,7 +4,6 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyOR3WOJwfTlMD8eOLsPfPrz",
|
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -33,7 +32,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates how the finite step sizes in gradient descent cause the trajectory to deviate and how this can be explained by adding an implicit regularization term. It recreates figure 9.3 from the book.\n",
|
"This notebook investigates how the finite step sizes in gradient descent cause the trajectory to deviate and how this can be explained by adding an implicit regularization term. It recreates figure 9.3 from the book.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
],
|
],
|
||||||
@@ -140,7 +139,7 @@
|
|||||||
" fig.set_size_inches(7,7)\n",
|
" fig.set_size_inches(7,7)\n",
|
||||||
" ax.contourf(phi0mesh, phi1mesh, loss_function, 256, cmap=my_colormap);\n",
|
" ax.contourf(phi0mesh, phi1mesh, loss_function, 256, cmap=my_colormap);\n",
|
||||||
" ax.contour(phi0mesh, phi1mesh, loss_function, 20, colors=['#80808080'])\n",
|
" ax.contour(phi0mesh, phi1mesh, loss_function, 20, colors=['#80808080'])\n",
|
||||||
" ax.set_xlabel('$\\phi_{0}$'); ax.set_ylabel('$\\phi_{1}$')\n",
|
" ax.set_xlabel(r'$\\phi_{0}$'); ax.set_ylabel(r'$\\phi_{1}$')\n",
|
||||||
"\n",
|
"\n",
|
||||||
" if grad_path_typical_lr is not None:\n",
|
" if grad_path_typical_lr is not None:\n",
|
||||||
" ax.plot(grad_path_typical_lr[0,:], grad_path_typical_lr[1,:],'ro-')\n",
|
" ax.plot(grad_path_typical_lr[0,:], grad_path_typical_lr[1,:],'ro-')\n",
|
||||||
@@ -335,4 +334,4 @@
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates how ensembling can improve the performance of models. We'll work with the simplified neural network model (figure 8.4 of book) which we can fit in closed form, and so we can eliminate any errors due to not finding the global maximum.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ],
@@ -52,7 +52,7 @@
 "# import libraries\n",
 "import numpy as np\n",
 "import matplotlib.pyplot as plt\n",
-"# Define seed so get same results each time\n",
+"# Define seed to get same results each time\n",
 "np.random.seed(1)"
 ]
 },
@@ -80,7 +80,7 @@
 " for i in range(n_data):\n",
 " x[i] = np.random.uniform(i/n_data, (i+1)/n_data, 1)\n",
 "\n",
-" # y value from running through functoin and adding noise\n",
+" # y value from running through function and adding noise\n",
 " y = np.ones(n_data)\n",
 " for i in range(n_data):\n",
 " y[i] = true_function(x[i])\n",
@@ -96,7 +96,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Draw the fitted function, together win uncertainty used to generate points\n",
+"# Draw the fitted function, together with uncertainty used to generate points\n",
 "def plot_function(x_func, y_func, x_data=None,y_data=None, x_model = None, y_model =None, sigma_func = None, sigma_model=None):\n",
 "\n",
 " fig,ax = plt.subplots()\n",
@@ -137,7 +137,7 @@
 "n_data = 15\n",
 "x_data,y_data = generate_data(n_data, sigma_func)\n",
 "\n",
-"# Plot the functinon, data and uncertainty\n",
+"# Plot the function, data and uncertainty\n",
 "plot_function(x_func, y_func, x_data, y_data, sigma_func=sigma_func)"
 ],
 "metadata": {
@@ -216,7 +216,7 @@
 "# Closed form solution\n",
 "beta, omega = fit_model_closed_form(x_data,y_data,n_hidden=14)\n",
 "\n",
-"# Get prediction for model across graph grange\n",
+"# Get prediction for model across graph range\n",
 "x_model = np.linspace(0,1,100);\n",
 "y_model = network(x_model, beta, omega)\n",
 "\n",
@@ -297,7 +297,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Plot the median of the results\n",
+"# Plot the mean of the results\n",
 "# TODO -- find the mean prediction\n",
 "# Replace this line\n",
 "y_model_mean = all_y_model[0,:]\n",
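The mean-prediction TODO in the ensembling hunk above reduces to an axis-0 average; a minimal sketch, assuming (as in the notebook) that `all_y_model` stacks one ensemble member's predictions per row. The array values here are hypothetical placeholders:

```python
import numpy as np

# Hypothetical stand-in for the notebook's all_y_model:
# each row holds one ensemble member's predictions over the x grid
all_y_model = np.array([[1.0, 2.0, 3.0],
                        [3.0, 4.0, 5.0]])

# Mean prediction across ensemble members (axis 0 indexes the members)
y_model_mean = np.mean(all_y_model, axis=0)
print(y_model_mean)  # [2. 3. 4.]
```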
@@ -325,4 +325,4 @@
 }
 }
 ]
 }
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap09/9_4_Bayesian_Approach.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "el8l05WQEO46"
@@ -22,7 +20,7 @@
 "\n",
 "This notebook investigates the Bayesian approach to model fitting and reproduces figure 9.11 from the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ]
@@ -38,7 +36,7 @@
 "# import libraries\n",
 "import numpy as np\n",
 "import matplotlib.pyplot as plt\n",
-"# Define seed so get same results each time\n",
+"# Define seed to get same results each time\n",
 "np.random.seed(1)"
 ]
 },
@@ -87,7 +85,7 @@
 },
 "outputs": [],
 "source": [
-"# Draw the fitted function, together win uncertainty used to generate points\n",
+"# Draw the fitted function, together with uncertainty used to generate points\n",
 "def plot_function(x_func, y_func, x_data=None,y_data=None, x_model = None, y_model =None, sigma_func = None, sigma_model=None):\n",
 "\n",
 " fig,ax = plt.subplots()\n",
@@ -159,7 +157,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "i8T_QduzeBmM"
@@ -195,7 +192,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "JojV6ueRk49G"
@@ -211,7 +207,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "YX0O_Ciwp4W1"
@@ -225,7 +220,7 @@
 " &\\propto&\\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
 "\\end{align}\n",
 "\n",
-"In fact, since this already a normal distribution, the constant of proportionality must be one and we can write\n",
+"In fact, since this is already a normal distribution, the constant of proportionality must be one and we can write\n",
 "\n",
 "\\begin{align}\n",
 " Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) &=& \\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr].\n",
@@ -277,7 +272,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "GjPnlG4q0UFK"
@@ -334,7 +328,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "GiNg5EroUiUb"
@@ -343,17 +336,16 @@
 "Now we need to perform inference for a new data points $\\mathbf{x}^*$ with corresponding hidden values $\\mathbf{h}^*$. Instead of having a single estimate of the parameters, we have a distribution over the possible parameters. So we marginalize (integrate) over this distribution to account for all possible values:\n",
 "\n",
 "\\begin{align}\n",
-"Pr(y^*|\\mathbf{x}^*) &=& \\int Pr(y^{*}|\\mathbf{x}^*,\\boldsymbol\\phi)Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) d\\boldsymbol\\phi\\\\\n",
-"&=& \\int \\text{Norm}_{y^*}\\bigl[[\\mathbf{h}^{*T},1]\\boldsymbol\\phi,\\sigma^2\\bigr]\\cdot\\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr]d\\boldsymbol\\phi\\\\\n",
-"&=& \\text{Norm}_{y^*}\\biggl[\\frac{1}{\\sigma^2} [\\mathbf{h}^{*T},1]\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y}, [\\mathbf{h}^{*T},1]\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\n",
-"[\\mathbf{h}^*;1]\\biggr]\n",
+"Pr(y^*|\\mathbf{x}^*) &= \\int Pr(y^{*}|\\mathbf{x}^*,\\boldsymbol\\phi)Pr(\\boldsymbol\\phi|\\{\\mathbf{x}_{i},\\mathbf{y}_{i}\\}) d\\boldsymbol\\phi\\\\\n",
+"&= \\int \\text{Norm}_{y^*}\\bigl[[\\mathbf{h}^{*T},1]\\boldsymbol\\phi,\\sigma^2\\bigr]\\cdot\\text{Norm}_{\\boldsymbol\\phi}\\biggl[\\frac{1}{\\sigma^2}\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y},\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\biggr]d\\boldsymbol\\phi\\\\\n",
+"&= \\text{Norm}_{y^*}\\biggl[\\frac{1}{\\sigma^2} [\\mathbf{h}^{*T},1]\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\\mathbf{H}\\mathbf{y}, [\\mathbf{h}^{*T},1]\\left(\\frac{1}{\\sigma^2}\\mathbf{H}\\mathbf{H}^T+\\frac{1}{\\sigma_p^2}\\mathbf{I}\\right)^{-1}\n",
+"[\\mathbf{h}^*;1]\\biggr],\n",
 "\\end{align}\n",
 "\n",
+"where the notation $[\\mathbf{h}^{*T},1]$ is a row vector containing $\\mathbf{h}^{*T}$ with a one appended to the end and $[\\mathbf{h}^{*};1 ]$ is a column vector containing $\\mathbf{h}^{*}$ with a one appended to the end.\n",
 "\n",
 "\n",
-"\n",
-"To compute this, we reformulated the integrand using the relations from appendices\n",
-"C.3.3 and C.3.4 as the product of a normal distribution in $\\boldsymbol\\phi$ and a constant with respect\n",
+"To compute this, we reformulated the integrand using the relations from appendices C.3.3 and C.3.4 as the product of a normal distribution in $\\boldsymbol\\phi$ and a constant with respect\n",
 "to $\\boldsymbol\\phi$. The integral of the normal distribution must be one, and so the final result is just the constant. This constant is itself a normal distribution in $y^*$. <br>\n",
 "\n",
 "If you feel so inclined you can work through the math of this yourself.\n",
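The closed-form predictive distribution in the hunk above can be evaluated directly with a few matrix products; a minimal numpy sketch under assumed shapes (the function name and shapes are illustrative, not from the notebook): `H` is (D+1) x N with a row of ones appended, one column per training point; `h_star` is the (D+1,) vector [h*; 1] for the test point; `sigma_sq` and `sigma_p_sq` are the noise and prior variances. The variance returned is the h-tilde-A-inverse-h-tilde term exactly as the equation is written above.

```python
import numpy as np

def posterior_predictive(H, y, h_star, sigma_sq, sigma_p_sq):
    D1 = H.shape[0]
    # A = (1/sigma^2) H H^T + (1/sigma_p^2) I  (the repeated bracketed term)
    A = H @ H.T / sigma_sq + np.eye(D1) / sigma_p_sq
    A_inv = np.linalg.inv(A)
    mean = h_star @ A_inv @ H @ y / sigma_sq   # predictive mean
    var = h_star @ A_inv @ h_star              # predictive variance
    return mean, var

# Tiny check: one training point, one "hidden" dimension
mean, var = posterior_predictive(np.array([[1.0]]), np.array([2.0]),
                                 np.array([1.0]), 1.0, 1.0)
print(mean, var)  # 1.0 0.5
```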
@@ -404,7 +396,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "8Hcbe_16sK0F"
@@ -419,9 +410,8 @@
 ],
 "metadata": {
 "colab": {
-"authorship_tag": "ABX9TyMB8B4269DVmrcLoCWrhzKF",
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -433,4 +423,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyM38ZVBK4/xaHk5Ys5lF6dN",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook investigates data augmentation for the MNIST-1D model.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ],
@@ -44,8 +43,8 @@
 {
 "cell_type": "code",
 "source": [
-"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
-"!git clone https://github.com/greydanus/mnist1d"
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"!pip install git+https://github.com/greydanus/mnist1d"
 ],
 "metadata": {
 "id": "syvgxgRr3myY"
@@ -95,7 +94,7 @@
 "D_k = 200 # Hidden dimensions\n",
 "D_o = 10 # Output dimensions\n",
 "\n",
-"# Define a model with two hidden layers of size 100\n",
+"# Define a model with two hidden layers of size 200\n",
 "# And ReLU activations between them\n",
 "model = nn.Sequential(\n",
 "nn.Linear(D_i, D_k),\n",
@@ -108,10 +107,7 @@
 " # Initialize the parameters with He initialization\n",
 " if isinstance(layer_in, nn.Linear):\n",
 " nn.init.kaiming_uniform_(layer_in.weight)\n",
-" layer_in.bias.data.fill_(0.0)\n",
-"\n",
-"# Call the function you just defined\n",
-"model.apply(weights_init)"
+" layer_in.bias.data.fill_(0.0)\n"
 ],
 "metadata": {
 "id": "JfIFWFIL33eF"
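The hunk above initializes `nn.Linear` weights with He (Kaiming) initialization. A minimal numpy sketch of the same idea, assuming the ReLU gain of sqrt(2) and fan-in scaling, so samples are uniform in [-b, b] with b = sqrt(2) * sqrt(3 / fan_in); the function name and sizes here are illustrative:

```python
import numpy as np

def he_uniform(fan_in, fan_out, seed=0):
    # He uniform bound: gain * sqrt(3 / fan_in), with gain = sqrt(2) for ReLU
    rng = np.random.default_rng(seed)
    bound = np.sqrt(2.0) * np.sqrt(3.0 / fan_in)
    return rng.uniform(-bound, bound, size=(fan_out, fan_in))

W = he_uniform(fan_in=40, fan_out=200)
print(W.shape)  # (200, 40)
```

The bound keeps the per-layer activation variance roughly constant through ReLU layers, which is the motivation for He initialization.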
@@ -186,7 +182,7 @@
 "ax.plot(errors_test,'b-',label='test')\n",
 "ax.set_ylim(0,100); ax.set_xlim(0,n_epoch)\n",
 "ax.set_xlabel('Epoch'); ax.set_ylabel('Error')\n",
-"ax.set_title('TrainError %3.2f, Test Error %3.2f'%(errors_train[-1],errors_test[-1]))\n",
+"ax.set_title('Train Error %3.2f, Test Error %3.2f'%(errors_train[-1],errors_test[-1]))\n",
 "ax.legend()\n",
 "plt.show()"
 ],
@@ -233,7 +229,7 @@
 "cell_type": "code",
 "source": [
 "n_data_orig = data['x'].shape[0]\n",
-"# We'll double the amount o fdata\n",
+"# We'll double the amount of data\n",
 "n_data_augment = n_data_orig+4000\n",
 "augmented_x = np.zeros((n_data_augment, D_i))\n",
 "augmented_y = np.zeros(n_data_augment)\n",
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates 1D convolutional layers.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ],
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyNJodaaCLMRWL9vTl8B/iLI",
+"authorship_tag": "ABX9TyNb46PJB/CC1pcHGfjpUUZg",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates a 1D convolutional network for MNIST-1D as in figure 10.7 and 10.8a.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n"
@@ -45,8 +45,8 @@
 {
 "cell_type": "code",
 "source": [
-"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
-"!git clone https://github.com/greydanus/mnist1d"
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"!pip install git+https://github.com/greydanus/mnist1d"
 ],
 "metadata": {
 "id": "D5yLObtZCi9J"
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates the 2D convolution operation. It asks you to hand code the convolution so we can be sure that we are computing the same thing as in PyTorch. The next notebook uses the convolutional layers in PyTorch directly.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyMbSR8fzpXvO6TIQdO7bI0H",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -31,9 +30,9 @@
 "source": [
 "# **Notebook 10.4: Downsampling and Upsampling**\n",
 "\n",
-"This notebook investigates the down sampling and downsampling methods discussed in section 10.4 of the book.\n",
+"This notebook investigates the upsampling and downsampling methods discussed in section 10.4 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
 ],
@@ -71,9 +70,9 @@
 {
 "cell_type": "code",
 "source": [
-"def subsample(x_in):\n",
+"def downsample(x_in):\n",
 " x_out = np.zeros(( int(np.ceil(x_in.shape[0]/2)), int(np.ceil(x_in.shape[1]/2)) ))\n",
-" # TO DO -- write the subsampling routine\n",
+" # TODO -- write the downsampling routine\n",
 " # Replace this line\n",
 " x_out = x_out\n",
 "\n",
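One plausible completion of the `downsample` TODO in the hunk above: keep every second row and column, which matches the `np.ceil` output shape in the skeleton (this is a sketch, not the notebook's reference solution):

```python
import numpy as np

def downsample(x_in):
    # Keep every second row and column; output is ceil(n/2) per dimension
    return x_in[::2, ::2]

orig_4_4 = np.arange(16).reshape(4, 4)
print(downsample(orig_4_4))  # rows/columns 0 and 2 survive
```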
@@ -91,8 +90,8 @@
 "source": [
 "print(\"Original:\")\n",
 "print(orig_4_4)\n",
-"print(\"Subsampled:\")\n",
-"print(subsample(orig_4_4))"
+"print(\"Downsampled:\")\n",
+"print(downsample(orig_4_4))"
 ],
 "metadata": {
 "id": "O_i0y72_JwGZ"
@@ -127,24 +126,24 @@
 "image = Image.open('test_image.png')\n",
 "# convert image to numpy array\n",
 "data = asarray(image)\n",
-"data_subsample = subsample(data);\n",
+"data_downsample = downsample(data);\n",
 "\n",
 "plt.figure(figsize=(5,5))\n",
 "plt.imshow(data, cmap='gray')\n",
 "plt.show()\n",
 "\n",
 "plt.figure(figsize=(5,5))\n",
-"plt.imshow(data_subsample, cmap='gray')\n",
+"plt.imshow(data_downsample, cmap='gray')\n",
 "plt.show()\n",
 "\n",
-"data_subsample2 = subsample(data_subsample)\n",
+"data_downsample2 = downsample(data_downsample)\n",
 "plt.figure(figsize=(5,5))\n",
-"plt.imshow(data_subsample2, cmap='gray')\n",
+"plt.imshow(data_downsample2, cmap='gray')\n",
 "plt.show()\n",
 "\n",
-"data_subsample3 = subsample(data_subsample2)\n",
+"data_downsample3 = downsample(data_downsample2)\n",
 "plt.figure(figsize=(5,5))\n",
-"plt.imshow(data_subsample3, cmap='gray')\n",
+"plt.imshow(data_downsample3, cmap='gray')\n",
 "plt.show()"
 ],
 "metadata": {
@@ -159,7 +158,7 @@
 "# Now let's try max-pooling\n",
 "def maxpool(x_in):\n",
 " x_out = np.zeros(( int(np.floor(x_in.shape[0]/2)), int(np.floor(x_in.shape[1]/2)) ))\n",
-" # TO DO -- write the maxpool routine\n",
+" # TODO -- write the maxpool routine\n",
 " # Replace this line\n",
 " x_out = x_out\n",
 "\n",
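One plausible completion of the `maxpool` TODO above: non-overlapping 2x2 max pooling with stride 2, so the output is floor(n/2) per dimension as in the skeleton's `np.floor` shape (a sketch, not the notebook's reference solution):

```python
import numpy as np

def maxpool(x_in):
    # Group pixels into 2x2 blocks and take the maximum of each block
    h, w = x_in.shape[0] // 2, x_in.shape[1] // 2
    blocks = x_in[:h * 2, :w * 2].reshape(h, 2, w, 2)
    return blocks.max(axis=(1, 3))

orig = np.arange(16).reshape(4, 4)
print(maxpool(orig))  # [[ 5  7] [13 15]]
```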
@@ -230,7 +229,7 @@
 "# Finally, let's try mean pooling\n",
 "def meanpool(x_in):\n",
 " x_out = np.zeros(( int(np.floor(x_in.shape[0]/2)), int(np.floor(x_in.shape[1]/2)) ))\n",
-" # TO DO -- write the meanpool routine\n",
+" # TODO -- write the meanpool routine\n",
 " # Replace this line\n",
 " x_out = x_out\n",
 "\n",
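The matching `meanpool` TODO differs from max pooling only in the reduction: average each non-overlapping 2x2 block instead of taking its maximum (again a sketch under the skeleton's floor-shaped output):

```python
import numpy as np

def meanpool(x_in):
    # Group pixels into 2x2 blocks and average each block
    h, w = x_in.shape[0] // 2, x_in.shape[1] // 2
    blocks = x_in[:h * 2, :w * 2].reshape(h, 2, w, 2)
    return blocks.mean(axis=(1, 3))

print(meanpool(np.array([[1.0, 2.0], [3.0, 4.0]])))  # [[2.5]]
```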
@@ -301,7 +300,7 @@
 "cell_type": "code",
 "source": [
 "# Define 2 by 2 original patch\n",
-"orig_2_2 = np.array([[2, 4], [4,8]])\n",
+"orig_2_2 = np.array([[6, 8], [8,4]])\n",
 "print(orig_2_2)"
 ],
 "metadata": {
@@ -316,7 +315,7 @@
 "# Let's first use the duplication method\n",
 "def duplicate(x_in):\n",
 " x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
-" # TO DO -- write the duplication routine\n",
+" # TODO -- write the duplication routine\n",
 " # Replace this line\n",
 " x_out = x_out\n",
 "\n",
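One plausible completion of the `duplicate` TODO above: nearest-neighbour upsampling by a factor of two, copying each input pixel into a 2x2 block of the output (a sketch, not the notebook's reference solution):

```python
import numpy as np

def duplicate(x_in):
    # Repeat each pixel twice along both axes
    return np.repeat(np.repeat(x_in, 2, axis=0), 2, axis=1)

print(duplicate(np.array([[6, 8], [8, 4]])))
```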
@@ -345,11 +344,11 @@
 {
 "cell_type": "code",
 "source": [
-"# Let's re-upsample, sub-sampled rick\n",
-"data_duplicate = duplicate(data_subsample3);\n",
+"# Let's re-upsample, downsampled rick\n",
+"data_duplicate = duplicate(data_downsample3);\n",
 "\n",
 "plt.figure(figsize=(5,5))\n",
-"plt.imshow(data_subsample3, cmap='gray')\n",
+"plt.imshow(data_downsample3, cmap='gray')\n",
 "plt.show()\n",
 "\n",
 "plt.figure(figsize=(5,5))\n",
@@ -388,7 +387,7 @@
 "# The input x_high_res is the original high res image, from which you can deduce the position of the maximum index\n",
 "def max_unpool(x_in, x_high_res):\n",
 " x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
-" # TO DO -- write the subsampling routine\n",
+" # TODO -- write the unpooling routine\n",
 " # Replace this line\n",
 " x_out = x_out\n",
 "\n",
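One plausible completion of the `max_unpool` TODO above: write each pooled value back at the position of the maximum of the corresponding 2x2 block of `x_high_res`, leaving the remaining outputs at zero (a sketch, not the notebook's reference solution):

```python
import numpy as np

def max_unpool(x_in, x_high_res):
    x_out = np.zeros((x_in.shape[0] * 2, x_in.shape[1] * 2))
    for i in range(x_in.shape[0]):
        for j in range(x_in.shape[1]):
            # Locate the maximum within the matching 2x2 high-res block
            block = x_high_res[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            x_out[2 * i + r, 2 * j + c] = x_in[i, j]
    return x_out

print(max_unpool(np.array([[9.0]]), np.array([[1.0, 2.0], [3.0, 4.0]])))
```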
@@ -417,7 +416,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Let's re-upsample, sub-sampled rick\n",
|
"# Let's re-upsample, down-sampled rick\n",
|
||||||
"data_max_unpool= max_unpool(data_maxpool3,data_maxpool2);\n",
|
"data_max_unpool= max_unpool(data_maxpool3,data_maxpool2);\n",
|
||||||
"\n",
|
"\n",
|
||||||
"plt.figure(figsize=(5,5))\n",
|
"plt.figure(figsize=(5,5))\n",
|
||||||
@@ -460,7 +459,7 @@
|
|||||||
" x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
|
" x_out = np.zeros(( x_in.shape[0]*2, x_in.shape[1]*2 ))\n",
|
||||||
" x_in_pad = np.zeros((x_in.shape[0]+1, x_in.shape[1]+1))\n",
|
" x_in_pad = np.zeros((x_in.shape[0]+1, x_in.shape[1]+1))\n",
|
||||||
" x_in_pad[0:x_in.shape[0],0:x_in.shape[1]] = x_in\n",
|
" x_in_pad[0:x_in.shape[0],0:x_in.shape[1]] = x_in\n",
|
||||||
" # TO DO -- write the duplication routine\n",
|
" # TODO -- write the duplication routine\n",
|
||||||
" # Replace this line\n",
|
" # Replace this line\n",
|
||||||
" x_out = x_out\n",
|
" x_out = x_out\n",
|
||||||
"\n",
|
"\n",
|
||||||
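The bilinear TODO above already supplies a zero-padded copy of the input; one possible completion (the even/odd placement scheme is an assumption, not the notebook's official answer) copies pixels to even positions and averages neighbours at odd positions:

```python
import numpy as np

# Possible completion of the bilinear upsampling cell, using the zero padding
# that the notebook already constructs for the bottom/right border.
def bilinear(x_in):
    x_out = np.zeros((x_in.shape[0]*2, x_in.shape[1]*2))
    x_in_pad = np.zeros((x_in.shape[0]+1, x_in.shape[1]+1))
    x_in_pad[0:x_in.shape[0], 0:x_in.shape[1]] = x_in
    for i in range(x_in.shape[0]):
        for j in range(x_in.shape[1]):
            p = x_in_pad
            x_out[2*i,   2*j]   = p[i, j]                          # copy pixel
            x_out[2*i+1, 2*j]   = (p[i, j] + p[i+1, j]) / 2        # vertical avg
            x_out[2*i,   2*j+1] = (p[i, j] + p[i, j+1]) / 2        # horizontal avg
            x_out[2*i+1, 2*j+1] = (p[i, j] + p[i+1, j] +
                                   p[i, j+1] + p[i+1, j+1]) / 4    # 4-way avg
    return x_out
```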
@@ -489,7 +488,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Let's re-upsample, sub-sampled rick\n",
|
"# Let's re-upsample the downsampled rick\n",
|
||||||
"data_bilinear = bilinear(data_meanpool3);\n",
|
"data_bilinear = bilinear(data_meanpool3);\n",
|
||||||
"\n",
|
"\n",
|
||||||
"plt.figure(figsize=(5,5))\n",
|
"plt.figure(figsize=(5,5))\n",
|
||||||
|
|||||||
@@ -1,26 +1,10 @@
|
|||||||
{
|
{
|
||||||
"nbformat": 4,
|
|
||||||
"nbformat_minor": 0,
|
|
||||||
"metadata": {
|
|
||||||
"colab": {
|
|
||||||
"provenance": [],
|
|
||||||
"authorship_tag": "ABX9TyNAcc98STMeyQgh9SbVHWG+",
|
|
||||||
"include_colab_link": true
|
|
||||||
},
|
|
||||||
"kernelspec": {
|
|
||||||
"name": "python3",
|
|
||||||
"display_name": "Python 3"
|
|
||||||
},
|
|
||||||
"language_info": {
|
|
||||||
"name": "python"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"cells": [
|
"cells": [
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "view-in-github",
|
"colab_type": "text",
|
||||||
"colab_type": "text"
|
"id": "view-in-github"
|
||||||
},
|
},
|
||||||
"source": [
|
"source": [
|
||||||
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_5_Convolution_For_MNIST.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
"<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap10/10_5_Convolution_For_MNIST.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||||
@@ -28,6 +12,9 @@
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
|
"metadata": {
|
||||||
|
"id": "t9vk9Elugvmi"
|
||||||
|
},
|
||||||
"source": [
|
"source": [
|
||||||
"# **Notebook 10.5: Convolution for MNIST**\n",
|
"# **Notebook 10.5: Convolution for MNIST**\n",
|
||||||
"\n",
|
"\n",
|
||||||
@@ -35,16 +22,20 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"The code is adapted from https://nextjournal.com/gkoehler/pytorch-mnist\n",
|
"The code is adapted from https://nextjournal.com/gkoehler/pytorch-mnist\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
|
"\n",
|
||||||
|
"If you are using Google Colab, you can change your runtime to an instance with GPU support to speed up training, e.g. a T4 GPU. If you do this, the cell below should output ``device(type='cuda')``\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "t9vk9Elugvmi"
|
|
||||||
}
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "YrXWAH7sUWvU"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"import torch\n",
|
"import torch\n",
|
||||||
"import torchvision\n",
|
"import torchvision\n",
|
||||||
@@ -52,23 +43,34 @@
|
|||||||
"import torch.nn.functional as F\n",
|
"import torch.nn.functional as F\n",
|
||||||
"import torch.optim as optim\n",
|
"import torch.optim as optim\n",
|
||||||
"import matplotlib.pyplot as plt\n",
|
"import matplotlib.pyplot as plt\n",
|
||||||
"import random"
|
"import random\n",
|
||||||
],
|
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
|
||||||
"metadata": {
|
"device"
|
||||||
"id": "YrXWAH7sUWvU"
|
]
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "wScBGXXFVadm"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Run this once to load the train and test data straight into a dataloader class\n",
|
"# Run this once to load the train and test data straight into a dataloader class\n",
|
||||||
"# that will provide the batches\n",
|
"# that will provide the batches\n",
|
||||||
|
"\n",
|
||||||
|
"# (It may complain that some files are missing because the files seem to have been\n",
|
||||||
|
"# reorganized on the underlying website, but it still seems to work). If everything is working\n",
|
||||||
|
"# properly, then the whole notebook should run to the end without further problems\n",
|
||||||
|
"# even before you make changes.\n",
|
||||||
"batch_size_train = 64\n",
|
"batch_size_train = 64\n",
|
||||||
"batch_size_test = 1000\n",
|
"batch_size_test = 1000\n",
|
||||||
|
"\n",
|
||||||
|
"# TODO Change this directory to point towards an existing directory (No change needed if using Google Colab)\n",
|
||||||
|
"myDir = '/files/'\n",
|
||||||
|
"\n",
|
||||||
"train_loader = torch.utils.data.DataLoader(\n",
|
"train_loader = torch.utils.data.DataLoader(\n",
|
||||||
" torchvision.datasets.MNIST('/files/', train=True, download=True,\n",
|
" torchvision.datasets.MNIST(myDir, train=True, download=True,\n",
|
||||||
" transform=torchvision.transforms.Compose([\n",
|
" transform=torchvision.transforms.Compose([\n",
|
||||||
" torchvision.transforms.ToTensor(),\n",
|
" torchvision.transforms.ToTensor(),\n",
|
||||||
" torchvision.transforms.Normalize(\n",
|
" torchvision.transforms.Normalize(\n",
|
||||||
@@ -77,22 +79,22 @@
|
|||||||
" batch_size=batch_size_train, shuffle=True)\n",
|
" batch_size=batch_size_train, shuffle=True)\n",
|
||||||
"\n",
|
"\n",
|
||||||
"test_loader = torch.utils.data.DataLoader(\n",
|
"test_loader = torch.utils.data.DataLoader(\n",
|
||||||
" torchvision.datasets.MNIST('/files/', train=False, download=True,\n",
|
" torchvision.datasets.MNIST(myDir, train=False, download=True,\n",
|
||||||
" transform=torchvision.transforms.Compose([\n",
|
" transform=torchvision.transforms.Compose([\n",
|
||||||
" torchvision.transforms.ToTensor(),\n",
|
" torchvision.transforms.ToTensor(),\n",
|
||||||
" torchvision.transforms.Normalize(\n",
|
" torchvision.transforms.Normalize(\n",
|
||||||
" (0.1307,), (0.3081,))\n",
|
" (0.1307,), (0.3081,))\n",
|
||||||
" ])),\n",
|
" ])),\n",
|
||||||
" batch_size=batch_size_test, shuffle=True)"
|
" batch_size=batch_size_test, shuffle=True)"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "wScBGXXFVadm"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "8bKADvLHbiV5"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Let's draw some of the training data\n",
|
"# Let's draw some of the training data\n",
|
||||||
"examples = enumerate(test_loader)\n",
|
"examples = enumerate(test_loader)\n",
|
||||||
@@ -107,24 +109,24 @@
|
|||||||
" plt.xticks([])\n",
|
" plt.xticks([])\n",
|
||||||
" plt.yticks([])\n",
|
" plt.yticks([])\n",
|
||||||
"plt.show()"
|
"plt.show()"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "8bKADvLHbiV5"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
|
||||||
"Define the network. This is a more typical way to define a network than the sequential structure. We define a class for the network, and define the parameters in the constructor. Then we use a function called forward to actually run the network. It's easy to see how you might use residual connections in this format."
|
|
||||||
],
|
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "_sFvRDGrl4qe"
|
"id": "_sFvRDGrl4qe"
|
||||||
}
|
},
|
||||||
|
"source": [
|
||||||
|
"Define the network. This is a more typical way to define a network than the sequential structure. We define a class for the network, and define the parameters in the constructor. Then we use a function called forward to actually run the network. It's easy to see how you might use residual connections in this format."
|
||||||
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "EQkvw2KOPVl7"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"from os import X_OK\n",
|
"from os import X_OK\n",
|
||||||
"# TODO Change this class to implement\n",
|
"# TODO Change this class to implement\n",
|
||||||
@@ -165,52 +167,54 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"\n",
|
"\n",
|
||||||
"\n"
|
"\n"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "EQkvw2KOPVl7"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "qWZtkCZcU_dg"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# He initialization of weights\n",
|
"# He initialization of weights\n",
|
||||||
"def weights_init(layer_in):\n",
|
"def weights_init(layer_in):\n",
|
||||||
" if isinstance(layer_in, nn.Linear):\n",
|
" if isinstance(layer_in, nn.Linear):\n",
|
||||||
" nn.init.kaiming_uniform_(layer_in.weight)\n",
|
" nn.init.kaiming_uniform_(layer_in.weight)\n",
|
||||||
" layer_in.bias.data.fill_(0.0)"
|
" layer_in.bias.data.fill_(0.0)"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "qWZtkCZcU_dg"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "FslroPJJffrh"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Create network\n",
|
"# Create network\n",
|
||||||
"model = Net()\n",
|
"model = Net().to(device)\n",
|
||||||
"# Initialize model weights\n",
|
"# Initialize model weights\n",
|
||||||
"model.apply(weights_init)\n",
|
"model.apply(weights_init)\n",
|
||||||
"# Define optimizer\n",
|
"# Define optimizer\n",
|
||||||
"optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)"
|
"optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "FslroPJJffrh"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "xKQd9PzkQ766"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Main training routine\n",
|
"# Main training routine\n",
|
||||||
"def train(epoch):\n",
|
"def train(epoch):\n",
|
||||||
" model.train()\n",
|
" model.train()\n",
|
||||||
" # Get each\n",
|
" # Get each\n",
|
||||||
" for batch_idx, (data, target) in enumerate(train_loader):\n",
|
" for batch_idx, (data, target) in enumerate(train_loader):\n",
|
||||||
|
" data = data.to(device)\n",
|
||||||
|
" target = target.to(device)\n",
|
||||||
" optimizer.zero_grad()\n",
|
" optimizer.zero_grad()\n",
|
||||||
" output = model(data)\n",
|
" output = model(data)\n",
|
||||||
" loss = F.nll_loss(output, target)\n",
|
" loss = F.nll_loss(output, target)\n",
|
||||||
@@ -220,15 +224,15 @@
|
|||||||
" if batch_idx % 10 == 0:\n",
|
" if batch_idx % 10 == 0:\n",
|
||||||
" print('Train Epoch: {} [{}/{}]\\tLoss: {:.6f}'.format(\n",
|
" print('Train Epoch: {} [{}/{}]\\tLoss: {:.6f}'.format(\n",
|
||||||
" epoch, batch_idx * len(data), len(train_loader.dataset), loss.item()))"
|
" epoch, batch_idx * len(data), len(train_loader.dataset), loss.item()))"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "xKQd9PzkQ766"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "Byn-f7qWRLxX"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Run on test data\n",
|
"# Run on test data\n",
|
||||||
"def test():\n",
|
"def test():\n",
|
||||||
@@ -237,6 +241,8 @@
|
|||||||
" correct = 0\n",
|
" correct = 0\n",
|
||||||
" with torch.no_grad():\n",
|
" with torch.no_grad():\n",
|
||||||
" for data, target in test_loader:\n",
|
" for data, target in test_loader:\n",
|
||||||
|
" data = data.to(device)\n",
|
||||||
|
" target = target.to(device)\n",
|
||||||
" output = model(data)\n",
|
" output = model(data)\n",
|
||||||
" test_loss += F.nll_loss(output, target, size_average=False).item()\n",
|
" test_loss += F.nll_loss(output, target, size_average=False).item()\n",
|
||||||
" pred = output.data.max(1, keepdim=True)[1]\n",
|
" pred = output.data.max(1, keepdim=True)[1]\n",
|
||||||
@@ -245,15 +251,15 @@
|
|||||||
" print('\\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n",
|
" print('\\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n",
|
||||||
" test_loss, correct, len(test_loader.dataset),\n",
|
" test_loss, correct, len(test_loader.dataset),\n",
|
||||||
" 100. * correct / len(test_loader.dataset)))"
|
" 100. * correct / len(test_loader.dataset)))"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "Byn-f7qWRLxX"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "YgLaex1pfhqz"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Get initial performance\n",
|
"# Get initial performance\n",
|
||||||
"test()\n",
|
"test()\n",
|
||||||
@@ -262,15 +268,15 @@
|
|||||||
"for epoch in range(1, n_epochs + 1):\n",
|
"for epoch in range(1, n_epochs + 1):\n",
|
||||||
" train(epoch)\n",
|
" train(epoch)\n",
|
||||||
" test()"
|
" test()"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "YgLaex1pfhqz"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
|
"execution_count": null,
|
||||||
|
"metadata": {
|
||||||
|
"id": "o7fRUAy9Se1B"
|
||||||
|
},
|
||||||
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# Run network on data we got before and show predictions\n",
|
"# Run network on data we got before and show predictions\n",
|
||||||
"output = model(example_data)\n",
|
"output = model(example_data)\n",
|
||||||
@@ -285,12 +291,23 @@
|
|||||||
" plt.xticks([])\n",
|
" plt.xticks([])\n",
|
||||||
" plt.yticks([])\n",
|
" plt.yticks([])\n",
|
||||||
"plt.show()"
|
"plt.show()"
|
||||||
],
|
]
|
||||||
"metadata": {
|
|
||||||
"id": "o7fRUAy9Se1B"
|
|
||||||
},
|
|
||||||
"execution_count": null,
|
|
||||||
"outputs": []
|
|
||||||
}
|
}
|
||||||
]
|
],
|
||||||
}
|
"metadata": {
|
||||||
|
"colab": {
|
||||||
|
"authorship_tag": "ABX9TyORZF8xy4X1yf4oRhRq8Rtm",
|
||||||
|
"include_colab_link": true,
|
||||||
|
"provenance": []
|
||||||
|
},
|
||||||
|
"kernelspec": {
|
||||||
|
"display_name": "Python 3",
|
||||||
|
"name": "python3"
|
||||||
|
},
|
||||||
|
"language_info": {
|
||||||
|
"name": "python"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"nbformat": 4,
|
||||||
|
"nbformat_minor": 0
|
||||||
|
}
|
||||||
|
|||||||
@@ -4,7 +4,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyMLKg5ZmXqojcVrZD5BGm9g",
|
"authorship_tag": "ABX9TyP3VmRg51U+7NCfSYjRRrgv",
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates the phenomenon of shattered gradients as discussed in section 11.1.1. It replicates some of the experiments in [Balduzzi et al. (2017)](https://arxiv.org/abs/1702.08591).\n",
|
"This notebook investigates the phenomenon of shattered gradients as discussed in section 11.1.1. It replicates some of the experiments in [Balduzzi et al. (2017)](https://arxiv.org/abs/1702.08591).\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
],
|
],
|
||||||
@@ -65,7 +65,7 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# K is width, D is number of hidden units in each layer\n",
|
"# K is depth, D is number of hidden units in each layer\n",
|
||||||
"def init_params(K, D):\n",
|
"def init_params(K, D):\n",
|
||||||
" # Set seed so we always get the same random numbers\n",
|
" # Set seed so we always get the same random numbers\n",
|
||||||
" np.random.seed(1)\n",
|
" np.random.seed(1)\n",
|
||||||
@@ -267,8 +267,8 @@
|
|||||||
" fig,ax = plt.subplots()\n",
|
" fig,ax = plt.subplots()\n",
|
||||||
" ax.plot(np.squeeze(x_in), np.squeeze(dydx), 'b-')\n",
|
" ax.plot(np.squeeze(x_in), np.squeeze(dydx), 'b-')\n",
|
||||||
" ax.set_xlim(-2,2)\n",
|
" ax.set_xlim(-2,2)\n",
|
||||||
" ax.set_xlabel('Input, $x$')\n",
|
" ax.set_xlabel(r'Input, $x$')\n",
|
||||||
" ax.set_ylabel('Gradient, $dy/dx$')\n",
|
" ax.set_ylabel(r'Gradient, $dy/dx$')\n",
|
||||||
" ax.set_title('No layers = %d'%(K))\n",
|
" ax.set_title('No layers = %d'%(K))\n",
|
||||||
" plt.show()"
|
" plt.show()"
|
||||||
],
|
],
|
||||||
@@ -389,4 +389,4 @@
|
|||||||
"outputs": []
|
"outputs": []
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -4,7 +4,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyMXS3SPB4cS/4qxix0lH/Hq",
|
"authorship_tag": "ABX9TyNIY8tswL9e48d5D53aSmHO",
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook adapts the networks for MNIST1D to use residual connections.\n",
|
"This notebook adapts the networks for MNIST1D to use residual connections.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
"\n"
|
"\n"
|
||||||
@@ -45,8 +45,8 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
|
"# Run this if you're in a Colab to install the MNIST 1D repository\n",
|
||||||
"!git clone https://github.com/greydanus/mnist1d"
|
"!pip install git+https://github.com/greydanus/mnist1d"
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "D5yLObtZCi9J"
|
"id": "D5yLObtZCi9J"
|
||||||
|
|||||||
@@ -4,7 +4,7 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyPVeAd3eDpEOCFh8CVyr1zz",
|
"authorship_tag": "ABX9TyPx2mM2zTHmDJeKeiE1RymT",
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates the use of batch normalization in residual networks.\n",
|
"This notebook investigates the use of batch normalization in residual networks.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
"\n"
|
"\n"
|
||||||
@@ -45,8 +45,8 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "code",
|
"cell_type": "code",
|
||||||
"source": [
|
"source": [
|
||||||
"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
|
"# Run this if you're in a Colab to install the MNIST 1D repository\n",
|
||||||
"!git clone https://github.com/greydanus/mnist1d"
|
"!pip install git+https://github.com/greydanus/mnist1d"
|
||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"id": "D5yLObtZCi9J"
|
"id": "D5yLObtZCi9J"
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook builds a self-attention mechanism from scratch, as discussed in section 12.2 of the book.\n",
|
"This notebook builds a self-attention mechanism from scratch, as discussed in section 12.2 of the book.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
"\n"
|
"\n"
|
||||||
|
|||||||
@@ -4,7 +4,6 @@
|
|||||||
"metadata": {
|
"metadata": {
|
||||||
"colab": {
|
"colab": {
|
||||||
"provenance": [],
|
"provenance": [],
|
||||||
"authorship_tag": "ABX9TyMSk8qTqDYqFnRJVZKlsue0",
|
|
||||||
"include_colab_link": true
|
"include_colab_link": true
|
||||||
},
|
},
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
@@ -29,11 +28,11 @@
|
|||||||
{
|
{
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"source": [
|
"source": [
|
||||||
"# **Notebook 12.1: Multhead Self-Attention**\n",
|
"# **Notebook 12.2: Multihead Self-Attention**\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This notebook builds a multihead self-attention mechanism as in figure 12.6\n",
|
"This notebook builds a multihead self-attention mechanism as in figure 12.6\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
"\n"
|
"\n"
|
||||||
@@ -147,9 +146,7 @@
|
|||||||
" exp_values = np.exp(data_in) ;\n",
|
" exp_values = np.exp(data_in) ;\n",
|
||||||
" # Sum over columns\n",
|
" # Sum over columns\n",
|
||||||
" denom = np.sum(exp_values, axis = 0);\n",
|
" denom = np.sum(exp_values, axis = 0);\n",
|
||||||
" # Replicate denominator to N rows\n",
|
" # Compute softmax (numpy broadcasts denominator to all rows automatically)\n",
|
||||||
" denom = np.matmul(np.ones((data_in.shape[0],1)), denom[np.newaxis,:])\n",
|
|
||||||
" # Compute softmax\n",
|
|
||||||
" softmax = exp_values / denom\n",
|
" softmax = exp_values / denom\n",
|
||||||
" # return the answer\n",
|
" # return the answer\n",
|
||||||
" return softmax"
|
" return softmax"
|
||||||
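The hunk above replaces the explicit `np.matmul` replication of the denominator with a plain division; a small check (illustrative, not from the notebook) confirms NumPy broadcasting replicates the row of column sums across all rows exactly as the removed code did:

```python
import numpy as np

# Softmax over columns as in the new version of the cell: broadcasting
# stretches the (cols,) denominator across every row automatically.
def softmax_cols(data_in):
    exp_values = np.exp(data_in)
    denom = np.sum(exp_values, axis=0)  # shape (cols,): one sum per column
    return exp_values / denom           # broadcasts denom across rows

x = np.arange(12.0).reshape(4, 3)
# the old explicit replication, kept here only for comparison
explicit_denom = np.matmul(np.ones((x.shape[0], 1)),
                           np.sum(np.exp(x), axis=0)[np.newaxis, :])
assert np.allclose(softmax_cols(x), np.exp(x) / explicit_denom)
```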
@@ -209,4 +206,4 @@
|
|||||||
"outputs": []
|
"outputs": []
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook builds a set of tokens from a text string as in figure 12.8 of the book.\n",
|
"This notebook builds a set of tokens from a text string as in figure 12.8 of the book.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"I adapted this code from *SOMEWHERE*. If anyone recognizes it, can you let me know and I will give the proper attribution or rewrite if the license is not permissive.\n",
|
"I adapted this code from *SOMEWHERE*. If anyone recognizes it, can you let me know and I will give the proper attribution or rewrite if the license is not permissive.\n",
|
||||||
"\n",
|
"\n",
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This practical investigates neural decoding from transformer models. \n",
|
"This practical investigates neural decoding from transformer models. \n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
|
||||||
],
|
],
|
||||||
|
|||||||
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates representing graphs with matrices as illustrated in figure 13.4 from the book.\n",
|
"This notebook investigates representing graphs with matrices as illustrated in figure 13.4 from the book.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
|
||||||
"\n"
|
"\n"
|
||||||

@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyOMSGUFWT+YN0fwYHpMmHJM",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook investigates representing graphs with matrices as illustrated in figure 13.4 from the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -99,7 +98,7 @@
 "\n",
 "# TODO -- Define node matrix\n",
 "# There will be 9 nodes and 118 possible chemical elements\n",
-"# so we'll define a 9x118 matrix. Each column represents one\n",
+"# so we'll define a 118x9 matrix. Each column represents one\n",
 "# node and is a one-hot vector (i.e. all zeros, except a single one at the\n",
 "# chemical number of the element).\n",
 "# Chemical numbers: Hydrogen-->1, Carbon-->6, Oxygen-->8\n",
@@ -241,4 +240,4 @@
 }
 }
 ]
 }
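The shape fix above matters: with 9 nodes and 118 possible chemical elements, the node matrix is 118x9, one column per node, with a single one at the element's chemical number. A minimal sketch of building such a matrix; the molecule chosen and the 0-based row indexing are illustrative assumptions, not taken from the notebook:

```python
import numpy as np

# Atomic numbers of the 9 atoms in a hypothetical example molecule:
# Hydrogen --> 1, Carbon --> 6, Oxygen --> 8
atomic_numbers = [6, 6, 8, 1, 1, 1, 1, 1, 1]

# 118 possible elements x 9 nodes; each column is a one-hot vector
X = np.zeros((118, 9))
for node, z in enumerate(atomic_numbers):
    X[z - 1, node] = 1  # row z-1: rows are 0-indexed, chemical numbers start at 1

print(X.shape)        # (118, 9)
print(X.sum(axis=0))  # exactly one non-zero entry per column
```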
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates neighborhood sampling of graphs as in figure 13.10 from the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],

@@ -33,7 +33,7 @@
 "\n",
 "This notebook builds a graph attention mechanism from scratch, as discussed in section 13.8.6 of the book and illustrated in figure 13.12c\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n"
@@ -109,7 +109,7 @@
 "# Choose random values for the parameters\n",
 "omega = np.random.normal(size=(D,D))\n",
 "beta = np.random.normal(size=(D,1))\n",
-"phi = np.random.normal(size=(1,2*D))"
+"phi = np.random.normal(size=(2*D,1))"
 ],
 "metadata": {
 "id": "79TSK7oLMobe"
@@ -210,4 +210,4 @@
 }
 }
 ]
 }
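The `phi` reshape above is consistent with the usual graph-attention formulation: the attention score for a node pair is computed from the concatenation of two transformed D-dimensional embeddings, so `phi` must map a 2D-vector to a scalar. A rough shape-check sketch under that assumption; every name here is illustrative, not the notebook's actual code:

```python
import numpy as np

D, N = 4, 3                        # embedding size, number of nodes (arbitrary)
rng = np.random.default_rng(0)
X = rng.normal(size=(D, N))        # node embeddings, one per column
omega = rng.normal(size=(D, D))
beta = rng.normal(size=(D, 1))
phi = rng.normal(size=(2 * D, 1))  # corrected shape: one weight per concatenated entry

Xp = beta + omega @ X              # transformed embeddings
# attention logit between nodes m and n: phi^T [x'_m; x'_n]
m, n = 0, 1
s_mn = phi.T @ np.concatenate([Xp[:, m:m+1], Xp[:, n:n+1]], axis=0)
print(s_mn.shape)  # (1, 1) -- a scalar score
```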
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyM0StKV3FIZ3MZqfflqC0Rv",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook investigates the GAN toy example as illustrated in figure 15.1 in the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -339,7 +338,7 @@
 " print(\"Initial generator loss = \", compute_generator_loss(z, theta, phi0, phi1))\n",
 " for iter in range(n_iter):\n",
 " # Get gradient\n",
-" dl_dtheta = compute_generator_gradient(x_real, x_syn, phi0, phi1)\n",
+" dl_dtheta = compute_generator_gradient(z, theta, phi0, phi1)\n",
 " # Take a gradient step (uphill, since we are trying to make synthesized data less well classified by discriminator)\n",
 " theta = theta + alpha * dl_dtheta ;\n",
 "\n",

@@ -32,7 +32,7 @@
 "\n",
 "This notebook investigates the GAN toy example as illustrated in figure 15.1 in the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -86,6 +86,7 @@
 "cell_type": "code",
 "source": [
 "# TODO Define the distance matrix from figure 15.8d\n",
+"# The index should be normalized before being used in the distance calculation.\n",
 "# Replace this line\n",
 "dist_mat = np.zeros((10,10))\n",
 "\n",
@@ -128,7 +129,7 @@
 {
 "cell_type": "code",
 "source": [
-"draw_2D_heatmap(dist_mat,'Distance $|i-j|$', my_colormap)"
+"draw_2D_heatmap(dist_mat,r'Distance $|i-j|$', my_colormap)"
 ],
 "metadata": {
 "id": "G0HFPBXyHT6V"
@@ -197,7 +198,7 @@
 "cell_type": "code",
 "source": [
 "TP = np.array(opt.x).reshape(10,10)\n",
-"draw_2D_heatmap(TP,'Transport plan $\\mathbf{P}$', my_colormap)"
+"draw_2D_heatmap(TP,r'Transport plan $\\mathbf{P}$', my_colormap)"
 ],
 "metadata": {
 "id": "nZGfkrbRV_D0"
@@ -218,7 +219,8 @@
 "cell_type": "code",
 "source": [
 "was = np.sum(TP * dist_mat)\n",
-"print(\"Wasserstein distance = \", was)"
+"print(\"Your Wasserstein distance = \", was)\n",
+"print(\"Correct answer = 0.15148578811369506\")"
 ],
 "metadata": {
 "id": "yiQ_8j-Raq3c"
@@ -242,4 +244,4 @@
 }
 }
 ]
 }
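The last hunk above computes the Wasserstein distance as the sum of the element-wise product of the transport plan and the distance matrix. A toy sketch of that final step, using the normalized index distance suggested by the added comment and a deliberately trivial identity transport plan rather than the notebook's optimal one:

```python
import numpy as np

n = 10
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dist_mat = np.abs(i - j) / n   # normalized index distance |i-j|/n, per the comment above

# An illustrative (not optimal) transport plan: each mass stays where it is
TP = np.eye(n) / n
was = np.sum(TP * dist_mat)
print(was)  # 0.0 -- identical distributions cost nothing to transport
```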
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates a 1D normalizing flows example similar to that illustrated in figures 16.1 to 16.3 in the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],

@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates a 1D normalizing flows example similar to that illustrated in figure 16.7 in the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],

@@ -22,7 +22,7 @@
 "\n",
 "This notebook investigates a 1D normalizing flows example similar to that illustrated in figure 16.9 in the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_1_Latent_Variable_Models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "t9vk9Elugvmi"
@@ -22,7 +20,7 @@
 "\n",
 "This notebook investigates a non-linear latent variable model similar to that in figures 17.2 and 17.3 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -43,7 +41,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "IyVn-Gi-p7wf"
@@ -55,7 +52,7 @@
 "Pr(z) = \\text{Norm}_{z}[0,1]\n",
 "\\end{equation}\n",
 "\n",
-"As in figure 17.2, we'll assume that the output is two dimensional, we we need to define a function that maps from the 1D latent variable to two dimensions. Usually, we would use a neural network, but in this case, we'll just define an arbitrary relationship.\n",
+"As in figure 17.2, we'll assume that the output is two dimensional, we need to define a function that maps from the 1D latent variable to two dimensions. Usually, we would use a neural network, but in this case, we'll just define an arbitrary relationship.\n",
 "\n",
 "\\begin{align}\n",
 "x_{1} &=& 0.5\\cdot\\exp\\Bigl[\\sin\\bigl[2+ 3.675 z \\bigr]\\Bigr]\\\\\n",
@@ -79,7 +76,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "KB9FU34onW1j"
@@ -145,7 +141,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "sQg2gKR5zMrF"
@@ -223,7 +218,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "0X4NwixzqxtZ"
@@ -254,7 +248,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "25xqXnmFo-PH"
@@ -281,7 +274,7 @@
 "# We can't integrate this function in closed form\n",
 "# So let's approximate it as a sum over the z values (z = np.arange(-3,3,0.01))\n",
 "# You will need the functions get_likelihood() and get_prior()\n",
-"# To make this a valid probability distribution, you need to divide\n",
+"# To make this a valid probability distribution, you need to multiply\n",
 "# By the z-increment (0.01)\n",
 "# Replace this line\n",
 "pr_x1_x2 = np.zeros_like(x1_mesh)\n",
@@ -292,7 +285,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "W264N7By_h9y"
@@ -320,7 +312,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "D7N7oqLe-eJO"
@@ -388,9 +379,8 @@
 ],
 "metadata": {
 "colab": {
-"authorship_tag": "ABX9TyOSEQVqxE5KrXmsZVh9M3gq",
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -402,4 +392,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
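The divide-to-multiply fix above matters: approximating Pr(x) = ∫ Pr(x|z) Pr(z) dz as a sum over z = np.arange(-3,3,0.01) requires multiplying each term by the z-increment 0.01. A self-contained sketch with a toy Gaussian likelihood; `gauss()` stands in for the notebook's `get_likelihood()`/`get_prior()`, which are not shown here:

```python
import numpy as np

def gauss(x, mu, sigma):
    # Normal density, used here as a stand-in for the notebook's helpers
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

z = np.arange(-3, 3, 0.01)
prior = gauss(z, 0.0, 1.0)               # Pr(z) = Norm[0,1]
likelihood = gauss(1.3, np.sin(z), 0.5)  # toy Pr(x|z) evaluated at a fixed x

# Pr(x) ~ sum over z of Pr(x|z) Pr(z), *multiplied* by the z-increment
pr_x = np.sum(likelihood * prior) * 0.01
print(pr_x)
```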
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_2_Reparameterization_Trick.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "t9vk9Elugvmi"
@@ -22,7 +20,7 @@
 "\n",
 "This notebook investigates the reparameterization trick as described in section 17.7 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -40,7 +38,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "paLz5RukZP1J"
@@ -114,7 +111,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "r5Hl2QkimWx9"
@@ -139,13 +135,12 @@
 "\n",
 "fig,ax = plt.subplots()\n",
 "ax.plot(phi_vals, expected_vals,'r-')\n",
-"ax.set_xlabel('Parameter $\\phi$')\n",
-"ax.set_ylabel('$\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
+"ax.set_xlabel(r'Parameter $\\phi$')\n",
+"ax.set_ylabel(r'$\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
 "plt.show()"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "zTCykVeWqj_O"
@@ -253,13 +248,12 @@
 "\n",
 "fig,ax = plt.subplots()\n",
 "ax.plot(phi_vals, deriv_vals,'r-')\n",
-"ax.set_xlabel('Parameter $\\phi$')\n",
-"ax.set_ylabel('$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
+"ax.set_xlabel(r'Parameter $\\phi$')\n",
+"ax.set_ylabel(r'$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
 "plt.show()"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "ASu4yKSwAEYI"
@@ -269,7 +263,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "xoFR1wifc8-b"
@@ -366,13 +359,12 @@
 "\n",
 "fig,ax = plt.subplots()\n",
 "ax.plot(phi_vals, deriv_vals,'r-')\n",
-"ax.set_xlabel('Parameter $\\phi$')\n",
-"ax.set_ylabel('$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
+"ax.set_xlabel(r'Parameter $\\phi$')\n",
+"ax.set_ylabel(r'$\\partial/\\partial\\phi\\mathbb{E}_{Pr(x|\\phi)}[f[x]]$')\n",
 "plt.show()"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "1TWBiUC7bQSw"
@@ -403,7 +395,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "d-0tntSYdKPR"
@@ -415,9 +406,8 @@
 ],
 "metadata": {
 "colab": {
-"authorship_tag": "ABX9TyOxO2/0DTH4n4zhC97qbagY",
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -429,4 +419,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
@@ -1,18 +1,16 @@
 {
 "cells": [
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
-"colab_type": "text",
-"id": "view-in-github"
+"id": "view-in-github",
+"colab_type": "text"
 },
 "source": [
 "<a href=\"https://colab.research.google.com/github/udlbook/udlbook/blob/main/Notebooks/Chap17/17_3_Importance_Sampling.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "t9vk9Elugvmi"
@@ -22,7 +20,7 @@
 "\n",
 "This notebook investigates importance sampling as described in section 17.8.1 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -40,7 +38,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "f7a6xqKjkmvT"
@@ -61,7 +58,7 @@
 "by drawing $I$ samples $y_i$ and using the formula:\n",
 "\n",
 "\\begin{equation}\n",
-"\\mathbb{E}_{y}\\Bigl[\\exp\\bigl[- (y-1)^4\\bigr]\\Bigr] \\approx \\frac{1}{I} \\sum_{i=1}^I \\exp\\bigl[-(y-1)^4 \\bigr]\n",
+"\\mathbb{E}_{y}\\Bigl[\\exp\\bigl[- (y-1)^4\\bigr]\\Bigr] \\approx \\frac{1}{I} \\sum_{i=1}^I \\exp\\bigl[-(y_i-1)^4 \\bigr]\n",
 "\\end{equation}"
 ]
 },
@@ -126,7 +123,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "Jr4UPcqmnXCS"
@@ -166,8 +162,8 @@
 "mean_all = np.zeros_like(n_sample_all)\n",
 "variance_all = np.zeros_like(n_sample_all)\n",
 "for i in range(len(n_sample_all)):\n",
-" print(\"Computing mean and variance for expectation with %d samples\"%(n_sample_all[i]))\n",
-" mean_all[i],variance_all[i] = compute_mean_variance(n_sample_all[i])"
+" mean_all[i],variance_all[i] = compute_mean_variance(n_sample_all[i])\n",
+" print(\"No samples: \", n_sample_all[i], \", Mean: \", mean_all[i], \", Variance: \", variance_all[i])"
 ]
 },
 {
@@ -189,7 +185,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "XTUpxFlSuOl7"
@@ -199,7 +194,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "6hxsl3Pxo1TT"
@@ -234,7 +228,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "G9Xxo0OJsIqD"
@@ -283,7 +276,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "2sVDqP0BvxqM"
@@ -313,8 +305,8 @@
 "mean_all2 = np.zeros_like(n_sample_all)\n",
 "variance_all2 = np.zeros_like(n_sample_all)\n",
 "for i in range(len(n_sample_all)):\n",
-" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
-" mean_all2[i], variance_all2[i] = compute_mean_variance2(n_sample_all[i])"
+" mean_all2[i], variance_all2[i] = compute_mean_variance2(n_sample_all[i])\n",
+" print(\"No samples: \", n_sample_all[i], \", Mean: \", mean_all2[i], \", Variance: \", variance_all2[i])"
 ]
 },
 {
@@ -348,7 +340,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "EtBP6NeLwZqz"
@@ -360,7 +351,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "_wuF-NoQu1--"
@@ -432,8 +422,8 @@
 "mean_all2b = np.zeros_like(n_sample_all)\n",
 "variance_all2b = np.zeros_like(n_sample_all)\n",
 "for i in range(len(n_sample_all)):\n",
-" print(\"Computing variance for expectation with %d samples\"%(n_sample_all[i]))\n",
-" mean_all2b[i], variance_all2b[i] = compute_mean_variance2b(n_sample_all[i])"
+" mean_all2b[i], variance_all2b[i] = compute_mean_variance2b(n_sample_all[i])\n",
+" print(\"No samples: \", n_sample_all[i], \", Mean: \", mean_all2b[i], \", Variance: \", variance_all2b[i])"
 ]
 },
 {
@@ -478,7 +468,6 @@
 ]
 },
 {
-"attachments": {},
 "cell_type": "markdown",
 "metadata": {
 "id": "y8rgge9MNiOc"
@@ -490,9 +479,8 @@
 ],
 "metadata": {
 "colab": {
-"authorship_tag": "ABX9TyNecz9/CDOggPSmy1LjT/Dv",
-"include_colab_link": true,
-"provenance": []
+"provenance": [],
+"include_colab_link": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -504,4 +492,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
 }
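The corrected formula above draws each sample inside the sum: E_y[exp(-(y-1)^4)] ≈ (1/I) Σ exp(-(y_i-1)^4). A plain Monte Carlo sketch of that estimator; the standard-normal choice for Pr(y) is an assumption made here for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(y):
    # The integrand from the formula above
    return np.exp(-(y - 1.0) ** 4)

# E_y[f(y)] ~ (1/I) * sum_i f(y_i), with each y_i drawn from Pr(y)
I = 100000
y = rng.standard_normal(I)   # assuming Pr(y) is a standard normal
estimate = np.mean(f(y))
print(estimate)
```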
@@ -20,7 +20,7 @@
 "\n",
 "This notebook investigates the diffusion encoder as described in section 18.2 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -22,7 +22,7 @@
 "\n",
 "This notebook investigates the diffusion encoder as described in section 18.3 and 18.4 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -22,7 +22,7 @@
 "\n",
 "This notebook investigates the reparameterized model as described in section 18.5 of the book and implements algorithms 18.1 and 18.2.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -22,7 +22,7 @@
 "\n",
 "This notebook investigates the reparameterized model as described in section 18.5 of the book and computers the results shown in figure 18.10c-f. These models are based on the paper \"Denoising diffusion implicit models\" which can be found [here](https://arxiv.org/pdf/2010.02502.pdf).\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates Markov decision processes as described in section 19.1 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -4,7 +4,6 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyOlD6kmCxX3SKKuh3oJikKA",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +32,7 @@
 "\n",
 "This notebook investigates the dynamic programming approach to tabular reinforcement learning as described in figure 19.10 of the book.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -393,7 +392,7 @@
 {
 "cell_type": "code",
 "source": [
-"# Update the state values for the current policy, by making the values at at adjacent\n",
+"# Update the state values for the current policy, by making the values at adjacent\n",
 "# states compatible with the Bellman equation (equation 19.11)\n",
 "def policy_evaluation(policy, state_values, rewards, transition_probabilities_given_action, gamma):\n",
 "\n",
@@ -406,6 +405,10 @@
 " state_values_new[state] = 3.0\n",
 " break\n",
 "\n",
+" # TODO -- Write this function (from equation 19.11, but bear in mind policy is deterministic here)\n",
+" # Replace this line\n",
+" state_values_new[state] = 0\n",
+"\n",
 " return state_values_new\n",
 "\n",
 "# Greedily choose the action that maximizes the value for each state.\n",
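The TODO inserted above asks the reader to implement equation 19.11 for a deterministic policy. A hedged numpy sketch of that update — the `[next_state, state, action]` indexing follows `transition_probabilities_given_action[:,state,action]` as used elsewhere in this diff, the reward-on-arrival convention follows the notebook's comments, and the two-state MDP below is invented for illustration:

```python
import numpy as np

def policy_evaluation_step(policy, state_values, rewards, transition_probabilities_given_action, gamma):
    """One sweep of the deterministic-policy Bellman update:
    v(s) <- sum over s' of Pr(s'|s, policy(s)) * (r(s') + gamma * v(s'))."""
    n_states = state_values.shape[0]
    state_values_new = np.zeros_like(state_values)
    for state in range(n_states):
        action = policy[state]
        # Distribution over next states for the action the policy picks
        p_next = transition_probabilities_given_action[:, state, action]
        state_values_new[state] = p_next @ (rewards + gamma * state_values)
    return state_values_new

# Invented two-state, one-action MDP for illustration
P = np.zeros((2, 2, 1))
P[1, 0, 0] = 1.0  # state 0 always moves to state 1
P[1, 1, 0] = 1.0  # state 1 stays in state 1
rewards = np.array([0.0, 1.0])
policy = np.array([0, 0])
v = policy_evaluation_step(policy, np.zeros(2), rewards, P, gamma=0.9)
print(v)  # [1. 1.]
```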
@@ -22,7 +22,7 @@
 "\n",
 "NOTE! There is a mistake in Figure 19.11 in the first printing of the book, so check the errata to avoid becoming confused. Apologies!\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n",
@@ -437,7 +437,7 @@
 " new_state = np.random.choice(a=np.arange(0,transition_probabilities_given_action.shape[0]),p = transition_probabilities_given_action[:,state,action])\n",
 " # Return the reward\n",
 " reward = reward_structure[new_state]\n",
-" is_terminal = new_state in [terminal_states]\n",
+" is_terminal = new_state in terminal_states\n",
 "\n",
 " return new_state, reward, action, is_terminal"
 ]
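The `is_terminal` change above fixes a classic Python slip: wrapping the list in another pair of brackets tests membership in a one-element list whose sole element is the original list, so no individual state ever matches. A self-contained illustration with invented state numbers:

```python
# Hypothetical terminal states for illustration
terminal_states = [5, 11]
new_state = 5

# Buggy form: [terminal_states] is a one-element list containing the list itself,
# so this compares 5 against the whole list [5, 11] and always fails
print(new_state in [terminal_states])  # False

# Fixed form: tests membership in the list directly
print(new_state in terminal_states)  # True
```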
@@ -20,7 +20,7 @@
 "\n",
 "This notebook investigates temporal difference methods for tabular reinforcement learning as described in section 19.3.3 of the book\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n",
@@ -265,7 +265,7 @@
 "\n",
 "In this icy environment the penguin is at one of the discrete cells in the gridworld. The agent starts each episode on a randomly chosen cell. The environment state dynamics are captured by the transition probabilities $Pr(s_{t+1} |s_t, a_t)$ where $s_t$ is the current state, $a_t$ is the action chosen, and $s_{t+1}$ is the next state at decision stage t. At each decision stage, the penguin can move in one of four directions: $a=0$ means try to go upward, $a=1$, right, $a=2$ down and $a=3$ left.\n",
 "\n",
-"However, the ice is slippery, so we don't always go the direction we want to: every time the agent chooses an action, with 0.25 probability, the environment changes the action taken to a differenct action, which is uniformly sampled from the other available actions.\n",
+"However, the ice is slippery, so we don't always go the direction we want to: every time the agent chooses an action, with 0.25 probability, the environment changes the action taken to a different action, which is uniformly sampled from the other available actions.\n",
 "\n",
 "The rewards are deterministic; the penguin will receive a reward of +3 if it reaches the fish, -2 if it slips into a hole and 0 otherwise.\n",
 "\n",
@@ -470,7 +470,7 @@
 "\n",
 " # Return the reward -- here the reward is for arriving at the state\n",
 " reward = reward_structure[new_state]\n",
-" is_terminal = new_state in [terminal_states]\n",
+" is_terminal = new_state in terminal_states\n",
 "\n",
 " return new_state, reward, action, is_terminal"
 ]
@@ -34,7 +34,7 @@
 "This notebook investigates the method of control variates as described in figure 19.16\n",
 "\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyPkSYbEjOcEmLt8tU6HxNuR",
+"authorship_tag": "ABX9TyNgBRvfIlngVobKuLE6leM+",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates training the network with random data, as illustrated in figure 20.1.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n"
@@ -45,8 +45,8 @@
 {
 "cell_type": "code",
 "source": [
-"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
-"!git clone https://github.com/greydanus/mnist1d"
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"!pip install git+https://github.com/greydanus/mnist1d"
 ],
 "metadata": {
 "id": "D5yLObtZCi9J"
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyOo4vm4MXcIvAzVlMCaLikH",
+"authorship_tag": "ABX9TyO6xuszaG4nNAcWy/3juLkn",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -33,7 +33,7 @@
 "\n",
 "This notebook investigates training a network with full batch gradient descent as in figure 20.2. There is also a version (notebook takes a long time to run), but this didn't speed it up much for me. If you run out of CoLab time, you'll need to download the Python file and run locally.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ],
@@ -44,8 +44,8 @@
 {
 "cell_type": "code",
 "source": [
-"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
-"!git clone https://github.com/greydanus/mnist1d"
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"!pip install git+https://github.com/greydanus/mnist1d"
 ],
 "metadata": {
 "id": "D5yLObtZCi9J"
@@ -5,7 +5,7 @@
 "colab": {
 "provenance": [],
 "gpuType": "T4",
-"authorship_tag": "ABX9TyMjPBfDONmjqTSyEQDP2gjY",
+"authorship_tag": "ABX9TyOG/5A+P053/x1IfFg52z4V",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -35,7 +35,7 @@
 "\n",
 "This notebook investigates training a network with full batch gradient descent as in figure 20.2. This is the GPU version (notebook takes a long time to run). If you are using Colab then you need to go change the runtime type to GPU on the Runtime menu. Even then, you may run out of time. If that's the case, you'll need to download the Python file and run locally.\n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n",
 "\n"
@@ -47,8 +47,8 @@
 {
 "cell_type": "code",
 "source": [
-"# Run this if you're in a Colab to make a local copy of the MNIST 1D repository\n",
-"!git clone https://github.com/greydanus/mnist1d"
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"!pip install git+https://github.com/greydanus/mnist1d"
 ],
 "metadata": {
 "id": "D5yLObtZCi9J"
@@ -32,7 +32,7 @@
 "\n",
 "This notebook investigates the phenomenon of lottery tickets as discussed in section 20.2.7. This notebook is highly derivative of the MNIST-1D code hosted by Sam Greydanus at https://github.com/greydanus/mnist1d. \n",
 "\n",
-"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
+"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
 "\n",
 "Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions."
 ]
@@ -43,7 +43,8 @@
 "id": "Sg2i1QmhKW5d"
 },
 "source": [
-"# Run this if you're in a Colab\n",
+"# Run this if you're in a Colab to install MNIST 1D repository\n",
+"!pip install git+https://github.com/greydanus/mnist1d\n",
 "!git clone https://github.com/greydanus/mnist1d"
 ],
 "execution_count": null,
@@ -95,6 +96,12 @@
 "id": "I-vm_gh5xTJs"
 },
 "source": [
+"from mnist1d.data import get_dataset, get_dataset_args\n",
+"from mnist1d.utils import set_seed, to_pickle, from_pickle\n",
+"\n",
+"import sys ; sys.path.append('./mnist1d/notebooks')\n",
+"from train import get_model_args, train_model\n",
+"\n",
 "args = mnist1d.get_dataset_args()\n",
 "data = mnist1d.get_dataset(args=args) # by default, this will download a pre-made dataset from the GitHub repo\n",
 "\n",
@@ -210,7 +217,7 @@
 " # we would return [1,1,0,0,1]\n",
 " # Remember that these are torch tensors and not numpy arrays\n",
 " # Replace this function:\n",
-" mask = torch.ones_like(scores)\n",
+" mask = torch.ones_like(absolute_weights)\n",
 "\n",
 "\n",
 " return mask"
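The patched mask function above scores weights by magnitude and, per the hunk's own comment, should return a binary mask such as [1,1,0,0,1]. A numpy sketch of that magnitude-pruning idea (the notebook works with torch tensors; numpy is used here only to keep the example self-contained, and `keep_fraction` is an invented parameter):

```python
import numpy as np

def magnitude_mask(absolute_weights, keep_fraction):
    """Return a binary mask keeping the largest-magnitude weights;
    the smallest (1 - keep_fraction) of entries are zeroed out."""
    n_keep = int(round(keep_fraction * absolute_weights.size))
    # Indices of the n_keep largest magnitudes
    keep_idx = np.argsort(absolute_weights)[-n_keep:]
    mask = np.zeros_like(absolute_weights)
    mask[keep_idx] = 1
    return mask

w = np.array([0.9, 0.5, 0.1, 0.2, 0.7])
print(magnitude_mask(w, keep_fraction=0.6))  # [1. 1. 0. 0. 1.]
```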
@@ -237,7 +244,6 @@
 "def find_lottery_ticket(model, dataset, args, sparsity_schedule, criteria_fn=None, **kwargs):\n",
 "\n",
 " criteria_fn = lambda init_params, final_params: final_params.abs()\n",
-"\n",
 " init_params = model.get_layer_vecs()\n",
 " stats = {'train_losses':[], 'test_losses':[], 'train_accs':[], 'test_accs':[]}\n",
 " models = []\n",
@@ -253,7 +259,7 @@
 " model.set_layer_masks(masks)\n",
 "\n",
 " # training process\n",
-" results = mnist1d.train_model(dataset, model, args)\n",
+" results = train_model(dataset, model, args)\n",
 " model = results['checkpoints'][-1]\n",
 "\n",
 " # store stats\n",
@@ -291,7 +297,8 @@
 },
 "source": [
 "# train settings\n",
-"model_args = mnist1d.get_model_args()\n",
+"from train import get_model_args, train_model\n",
+"model_args = get_model_args()\n",
 "model_args.total_steps = 1501\n",
 "model_args.hidden_size = 500\n",
 "model_args.print_every = 5000 # print never\n",
@@ -33,7 +33,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook builds uses the network for classification of MNIST from Notebook 10.5. The code is adapted from https://nextjournal.com/gkoehler/pytorch-mnist, and uses the fast gradient sign attack of [Goodfellow et al. (2015)](https://arxiv.org/abs/1412.6572). Having trained, the network, we search for adversarial examples -- inputs which look very similar to class A, but are mistakenly classified as class B. We do this by starting with a correctly classified example and perturbing it according to the gradients of the network so that the output changes.\n",
|
"This notebook builds uses the network for classification of MNIST from Notebook 10.5. The code is adapted from https://nextjournal.com/gkoehler/pytorch-mnist, and uses the fast gradient sign attack of [Goodfellow et al. (2015)](https://arxiv.org/abs/1412.6572). Having trained, the network, we search for adversarial examples -- inputs which look very similar to class A, but are mistakenly classified as class B. We do this by starting with a correctly classified example and perturbing it according to the gradients of the network so that the output changes.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
],
|
],
|
||||||
|
|||||||
@@ -22,7 +22,7 @@
|
|||||||
"\n",
|
"\n",
|
||||||
"This notebook investigates a post-processing method for bias mitigation (see figure 21.2 in the book). It based on this [blog](https://www.borealisai.com/research-blogs/tutorial1-bias-and-fairness-ai/) that I wrote for Borealis AI in 2019, which itself was derived from [this blog](https://research.google.com/bigpicture/attacking-discrimination-in-ml/) by Wattenberg, Viégas, and Hardt.\n",
|
"This notebook investigates a post-processing method for bias mitigation (see figure 21.2 in the book). It based on this [blog](https://www.borealisai.com/research-blogs/tutorial1-bias-and-fairness-ai/) that I wrote for Borealis AI in 2019, which itself was derived from [this blog](https://research.google.com/bigpicture/attacking-discrimination-in-ml/) by Wattenberg, Viégas, and Hardt.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
]
|
]
|
||||||
@@ -137,7 +137,7 @@
|
|||||||
"id": "CfZ-srQtmff2"
|
"id": "CfZ-srQtmff2"
|
||||||
},
|
},
|
||||||
"source": [
|
"source": [
|
||||||
"Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that that the populations genuinely behave differently. In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n",
|
"Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that the populations genuinely behave differently. In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Let’s assume that we can’t retrain the credit score prediction algorithm; our job is to adjudicate whether each individual is refused the loan ($\\hat{y}=0$)\n",
|
"Let’s assume that we can’t retrain the credit score prediction algorithm; our job is to adjudicate whether each individual is refused the loan ($\\hat{y}=0$)\n",
|
||||||
" or granted it ($\\hat{y}=1$). Since we only have the credit score\n",
|
" or granted it ($\\hat{y}=1$). Since we only have the credit score\n",
|
||||||
@@ -328,7 +328,7 @@
|
|||||||
},
|
},
|
||||||
"outputs": [],
|
"outputs": [],
|
||||||
"source": [
|
"source": [
|
||||||
"# TO DO -- try to change the two thresholds so the overall probability of getting the loan is 0.6 for each group\n",
|
"# TODO -- try to change the two thresholds so the overall probability of getting the loan is 0.6 for each group\n",
|
||||||
"# Change the values in these lines\n",
|
"# Change the values in these lines\n",
|
||||||
"tau0 = 0.3\n",
|
"tau0 = 0.3\n",
|
||||||
"tau1 = -0.1\n",
|
"tau1 = -0.1\n",
|
||||||
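Rather than tuning the thresholds by hand, the target of a 0.6 grant probability per group can be hit directly: if 60% of scores should exceed the threshold, the threshold is the 0.4 quantile of that group's scores. A sketch with hypothetical data (not the notebook's solution):

```python
import numpy as np

rng = np.random.default_rng(1)
scores0 = rng.normal(0.0, 1.0, 10000)  # hypothetical group-0 scores
scores1 = rng.normal(0.5, 1.0, 10000)  # hypothetical group-1 scores

# P(score > tau) = 0.6  <=>  tau is the 0.4 quantile of the group's score distribution
tau0 = np.quantile(scores0, 0.4)
tau1 = np.quantile(scores1, 0.4)

print((scores0 > tau0).mean())  # ~0.6
print((scores1 > tau1).mean())  # ~0.6
```

This criterion (equal overall grant rates) is usually called demographic parity; note it says nothing about whether the people granted loans actually repay them.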
@@ -382,7 +382,7 @@
"source": [
"# Equal opportunity:\n",
"\n",
"The thresholds are chosen so that so that the true positive rate is is the same for both population. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position."
|
"The thresholds are chosen so that so that the true positive rate is the same for both population. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position."
|
||||||
]
},
{
@@ -393,7 +393,7 @@
},
"outputs": [],
"source": [
"# TO DO -- try to change the two thresholds so the true positive are 0.8 for each group\n",
|
"# TODO --try to change the two thresholds so the true positive are 0.8 for each group\n",
|
||||||
"# Change the values in these lines so that both points on the curves have a height of 0.8\n",
|
"# Change the values in these lines so that both points on the curves have a height of 0.8\n",
|
||||||
"tau0 = -0.1\n",
|
"tau0 = -0.1\n",
|
||||||
"tau1 = -0.7\n",
|
"tau1 = -0.7\n",
|
||||||
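The equal-opportunity target can also be computed directly: the true positive rate is the fraction of repayers whose score exceeds the threshold, so a TPR of 0.8 means placing each group's threshold at the 0.2 quantile of the scores of people who repaid. A sketch under a hypothetical generative setup (the notebook uses its own data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
# Hypothetical: whether each person repays, and a noisy score correlated with repayment
repay0 = rng.random(n) < 0.7
repay1 = rng.random(n) < 0.6
scores0 = repay0 + rng.normal(0, 1, n)
scores1 = 0.5 * repay1 + rng.normal(0, 1, n)

# TPR = P(score > tau | repay); for TPR = 0.8, take the 0.2 quantile of repayers' scores
tau0 = np.quantile(scores0[repay0], 0.2)
tau1 = np.quantile(scores1[repay1], 0.2)

print((scores0[repay0] > tau0).mean())  # ~0.8
print((scores1[repay1] > tau1).mean())  # ~0.8
```

Matching the vertical (TPR) position on both ROC curves this way generally leaves the horizontal (false positive rate) positions different, which is exactly the trade-off the text describes.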
@@ -33,7 +33,7 @@
"\n",
"This notebook investigates the LIME explainability method as depicted in figure 21.3 of the book.\n",
"\n",
"Work through the cells below, running each cell in turn. In various places you will see the words \"TO DO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
"Work through the cells below, running each cell in turn. In various places you will see the words \"TODO\". Follow the instructions at these places and make predictions about what is going to happen or write code to complete the functions.\n",
|
||||||
"\n",
|
"\n",
|
||||||
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
"Contact me at udlbookmail@gmail.com if you find any mistakes or have any suggestions.\n"
|
||||||
],
|
],
|
||||||
|
|||||||
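For orientation, LIME explains one prediction of a black-box model by fitting an interpretable (here linear) model to the black box's outputs on perturbed copies of the input, weighting samples by their proximity to the input of interest. A minimal sketch with a hypothetical black-box function, not the notebook's implementation:

```python
import numpy as np

def black_box(X):
    # Hypothetical model to be explained: nonlinear in its three features
    return np.sin(X[:, 0]) + 2.0 * X[:, 1] ** 2 - X[:, 2]

def lime_explain(x, predict, n_samples=5000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb the input of interest
    X = x + rng.normal(0.0, width, (n_samples, x.size))
    y = predict(X)
    # 2. Weight samples by proximity to x (Gaussian kernel)
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * width ** 2))
    # 3. Weighted least squares: fit y ~ a + X b by scaling rows with sqrt(w)
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # local feature weights explaining the prediction at x

weights = lime_explain(np.array([0.0, 1.0, 2.0]), black_box)
print(weights)
```

The returned weights approximate the local slopes of the black box around `x`, which is what a LIME-style explanation reports for each feature.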
7  Notebooks/LICENSE (MIT)  Normal file
@@ -0,0 +1,7 @@
Copyright 2023 Simon Prince

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
326   Trees/LinearRegression_FitModel.ipynb              Normal file
357   Trees/LinearRegression_FitModel_Answers.ipynb      Normal file
343   Trees/LinearRegression_FitModel_Quadratic.ipynb    Normal file
277   Trees/LinearRegression_LossFunction.ipynb          Normal file
325   Trees/LinearRegression_LossFunction_Answers.ipynb  Normal file
489   Trees/SAT_Construction.ipynb                       Normal file
271   Trees/SAT_Construction2.ipynb                      Normal file
261   Trees/SAT_Construction2_Answers.ipynb              Normal file
570   Trees/SAT_Construction_Answers.ipynb               Normal file
1061  Trees/SAT_Crossword.ipynb                          Normal file
911   Trees/SAT_Crossword_Answers.ipynb                  Normal file
248   Trees/SAT_Exhaustive.ipynb                         Normal file
250   Trees/SAT_Exhaustive_Answers.ipynb                 Normal file
File diffs suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff