From 6b2f25101e19aae53d49b1d05cf0b061156820a5 Mon Sep 17 00:00:00 2001
From: Pietro Monticone <38562595+pitmonticone@users.noreply.github.com>
Date: Thu, 30 Nov 2023 16:36:35 +0100
Subject: [PATCH] Update CM20315_Gradients_II.ipynb

---
 CM20315/CM20315_Gradients_II.ipynb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/CM20315/CM20315_Gradients_II.ipynb b/CM20315/CM20315_Gradients_II.ipynb
index ebefa76..d98b315 100644
--- a/CM20315/CM20315_Gradients_II.ipynb
+++ b/CM20315/CM20315_Gradients_II.ipynb
@@ -32,7 +32,7 @@
       "source": [
         "# Gradients II: Backpropagation algorithm\n",
         "\n",
-        "In this practical, we'll investigate the backpropagation algoritithm. This computes the gradients of the loss with respect to all of the parameters (weights and biases) in the network. We'll use these gradients when we run stochastic gradient descent."
+        "In this practical, we'll investigate the backpropagation algorithm. This computes the gradients of the loss with respect to all of the parameters (weights and biases) in the network. We'll use these gradients when we run stochastic gradient descent."
       ],
       "metadata": {
         "id": "L6chybAVFJW2"
@@ -53,7 +53,7 @@
     {
       "cell_type": "markdown",
       "source": [
-        "First let's define a neural network. We'll just choose the weights and biaes randomly for now"
+        "First let's define a neural network. We'll just choose the weights and biases randomly for now"
       ],
       "metadata": {
         "id": "nnUoI0m6GyjC"
@@ -178,7 +178,7 @@
     {
       "cell_type": "markdown",
       "source": [
-        "Now let's define a loss function. We'll just use the least squaures loss function. We'll also write a function to compute dloss_doutpu"
+        "Now let's define a loss function. We'll just use the least squares loss function. We'll also write a function to compute dloss_doutpu"
       ],
       "metadata": {
         "id": "SxVTKp3IcoBF"