From c5fafbca972a91a36daf861f508ce1f619f426e9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Jan=20Hr=C4=8Dek?= Date: Tue, 30 Jul 2024 11:16:30 +0200 Subject: [PATCH] Fix duplicate word occurrences in notebooks --- Blogs/BorealisGradientFlow.ipynb | 4 ++-- Blogs/BorealisNTK.ipynb | 4 ++-- CM20315/CM20315_Coursework_IV.ipynb | 2 +- CM20315/CM20315_Loss_II.ipynb | 2 +- CM20315/CM20315_Loss_III.ipynb | 2 +- CM20315_2023/CM20315_Coursework_IV.ipynb | 4 ++-- CM20315_2023/CM20315_Coursework_V_2023.ipynb | 4 ++-- Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb | 4 ++-- Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb | 2 +- Notebooks/Chap19/19_2_Dynamic_Programming.ipynb | 4 ++-- Notebooks/Chap21/21_1_Bias_Mitigation.ipynb | 4 ++-- 11 files changed, 18 insertions(+), 18 deletions(-) diff --git a/Blogs/BorealisGradientFlow.ipynb b/Blogs/BorealisGradientFlow.ipynb index 483bd0e..53cb3cb 100644 --- a/Blogs/BorealisGradientFlow.ipynb +++ b/Blogs/BorealisGradientFlow.ipynb @@ -31,7 +31,7 @@ "source": [ "# Gradient flow\n", "\n", - "This notebook replicates some of the results in the the Borealis AI [blog](https://www.borealisai.com/research-blogs/gradient-flow/) on gradient flow. \n" + "This notebook replicates some of the results in the Borealis AI [blog](https://www.borealisai.com/research-blogs/gradient-flow/) on gradient flow. \n" ], "metadata": { "id": "ucrRRJ4dq8_d" @@ -398,4 +398,4 @@ "outputs": [] } ] -} \ No newline at end of file +} diff --git a/Blogs/BorealisNTK.ipynb b/Blogs/BorealisNTK.ipynb index d2062a8..ee697d8 100644 --- a/Blogs/BorealisNTK.ipynb +++ b/Blogs/BorealisNTK.ipynb @@ -166,7 +166,7 @@ { "cell_type": "markdown", "source": [ - "Routines to calculate the empirical and analytical NTK (i.e. the NTK with infinite hidden units) for the the shallow network" + "Routines to calculate the empirical and analytical NTK (i.e. 
the NTK with infinite hidden units) for the shallow network" ], "metadata": { "id": "mxW8E5kYIzlj" @@ -1106,4 +1106,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} \ No newline at end of file +} diff --git a/CM20315/CM20315_Coursework_IV.ipynb b/CM20315/CM20315_Coursework_IV.ipynb index dfdb5e9..e3e759e 100644 --- a/CM20315/CM20315_Coursework_IV.ipynb +++ b/CM20315/CM20315_Coursework_IV.ipynb @@ -128,7 +128,7 @@ "\n", "In part (b) of the practical we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. You will find that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1). Double weird.\n", "\n", - "Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches." + "Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches." ], "metadata": { "id": "b2FYKV1SL4Z7" diff --git a/CM20315/CM20315_Loss_II.ipynb b/CM20315/CM20315_Loss_II.ipynb index 9846d61..d51cd70 100644 --- a/CM20315/CM20315_Loss_II.ipynb +++ b/CM20315/CM20315_Loss_II.ipynb @@ -199,7 +199,7 @@ { "cell_type": "markdown", "source": [ - "The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the the likelihood and the negative log likelihood." + "The left is model output and the right is the model output after the sigmoid has been applied, so it now lies in the range [0,1] and represents the probability, that y=1. The black dots show the training data. We'll compute the likelihood and the negative log likelihood." 
], "metadata": { "id": "MvVX6tl9AEXF" diff --git a/CM20315/CM20315_Loss_III.ipynb b/CM20315/CM20315_Loss_III.ipynb index ef914c9..dae3ae4 100644 --- a/CM20315/CM20315_Loss_III.ipynb +++ b/CM20315/CM20315_Loss_III.ipynb @@ -218,7 +218,7 @@ { "cell_type": "markdown", "source": [ - "The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue) The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dotsmand the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood." + "The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots. We'll compute the likelihood and the negative log likelihood." ], "metadata": { "id": "MvVX6tl9AEXF" diff --git a/CM20315_2023/CM20315_Coursework_IV.ipynb b/CM20315_2023/CM20315_Coursework_IV.ipynb index 8f98ad5..e44f330 100644 --- a/CM20315_2023/CM20315_Coursework_IV.ipynb +++ b/CM20315_2023/CM20315_Coursework_IV.ipynb @@ -128,7 +128,7 @@ "\n", "In part (b) of the practical we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. You will find that the volume decreases to almost nothing in high dimensions. All of the volume is in the corners of the unit hypercube (which always has volume 1). 
Double weird.\n", "\n", - "Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches." + "Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches." ], "metadata": { "id": "b2FYKV1SL4Z7" @@ -209,4 +209,4 @@ "outputs": [] } ] -} \ No newline at end of file +} diff --git a/CM20315_2023/CM20315_Coursework_V_2023.ipynb b/CM20315_2023/CM20315_Coursework_V_2023.ipynb index 483ea94..fb1dfde 100644 --- a/CM20315_2023/CM20315_Coursework_V_2023.ipynb +++ b/CM20315_2023/CM20315_Coursework_V_2023.ipynb @@ -214,7 +214,7 @@ { "cell_type": "code", "source": [ - "# Compute the derivative of the the loss with respect to the function output f_val\n", + "# Compute the derivative of the loss with respect to the function output f_val\n", "def dl_df(f_val,y):\n", " # Compute sigmoid of network output\n", " sig_f_val = sig(f_val)\n", @@ -522,4 +522,4 @@ "outputs": [] } ] -} \ No newline at end of file +} diff --git a/Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb b/Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb index dbab0a1..1021d74 100644 --- a/Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb +++ b/Notebooks/Chap05/5_3_Multiclass_Cross_entropy_Loss.ipynb @@ -211,7 +211,7 @@ "id": "MvVX6tl9AEXF" }, "source": [ - "The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability, that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots We'll compute the the likelihood and the negative log likelihood." 
+ "The left is model output and the right is the model output after the softmax has been applied, so it now lies in the range [0,1] and represents the probability that y=0 (red), 1 (green) and 2 (blue). The dots at the bottom show the training data with the same color scheme. So we want the red curve to be high where there are red dots, the green curve to be high where there are green dots, and the blue curve to be high where there are blue dots. We'll compute the likelihood and the negative log likelihood." ] }, { @@ -460,4 +460,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} \ No newline at end of file +} diff --git a/Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb b/Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb index 80b27f0..f9f914d 100644 --- a/Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb +++ b/Notebooks/Chap08/8_4_High_Dimensional_Spaces.ipynb @@ -134,7 +134,7 @@ "source": [ "# Volume of a hypersphere\n", "\n", - "In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. Note that you you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches." + "In the second part of this notebook we calculate the volume of a hypersphere of radius 0.5 (i.e., of diameter 1) as a function of the radius. Note that you can check your answer by doing the calculation for 2D using the standard formula for the area of a circle and making sure it matches." 
], "metadata": { "id": "b2FYKV1SL4Z7" diff --git a/Notebooks/Chap19/19_2_Dynamic_Programming.ipynb b/Notebooks/Chap19/19_2_Dynamic_Programming.ipynb index a1a2859..62a1275 100644 --- a/Notebooks/Chap19/19_2_Dynamic_Programming.ipynb +++ b/Notebooks/Chap19/19_2_Dynamic_Programming.ipynb @@ -393,7 +393,7 @@ { "cell_type": "code", "source": [ - "# Update the state values for the current policy, by making the values at at adjacent\n", + "# Update the state values for the current policy, by making the values at adjacent\n", "# states compatible with the Bellman equation (equation 19.11)\n", "def policy_evaluation(policy, state_values, rewards, transition_probabilities_given_action, gamma):\n", "\n", @@ -527,4 +527,4 @@ } } ] -} \ No newline at end of file +} diff --git a/Notebooks/Chap21/21_1_Bias_Mitigation.ipynb b/Notebooks/Chap21/21_1_Bias_Mitigation.ipynb index dba4b1a..2235dd0 100644 --- a/Notebooks/Chap21/21_1_Bias_Mitigation.ipynb +++ b/Notebooks/Chap21/21_1_Bias_Mitigation.ipynb @@ -137,7 +137,7 @@ "id": "CfZ-srQtmff2" }, "source": [ - "Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that that the populations genuinely behave differently. In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n", + "Why might the distributions for blue and yellow populations be different? It could be that the behaviour of the populations is identical, but the credit rating algorithm is biased; it may favor one population over another or simply be more noisy for one group. Alternatively, it could be that the populations genuinely behave differently. 
In practice, the differences in blue and yellow distributions are probably attributable to a combination of these factors.\n", "\n", "Let’s assume that we can’t retrain the credit score prediction algorithm; our job is to adjudicate whether each individual is refused the loan ($\hat{y}=0$)\n", " or granted it ($\hat{y}=1$). Since we only have the credit score\n", @@ -382,7 +382,7 @@ "source": [ "# Equal opportunity:\n", "\n", - "The thresholds are chosen so that so that the true positive rate is is the same for both population. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position." + "The thresholds are chosen so that the true positive rate is the same for both populations. Of the people who pay back the loan, the same proportion are offered credit in each group. In terms of the two ROC curves, it means choosing thresholds so that the vertical position on each curve is the same without regard for the horizontal position." ] }, {