From 89305a0c6b6941acf8cda5beca2c6c549ba7e767 Mon Sep 17 00:00:00 2001
From: Kareim Tarek AbdelAzeem <49312818+KareimGazer@users.noreply.github.com>
Date: Wed, 10 Jul 2024 08:18:29 +0300
Subject: [PATCH] Fix typo in "Stepping With a Learning Rate" section

We adjust our parameters in the *opposite* direction of the slope: a line
with a positive slope rises from left to right (and one with a negative
slope falls), so stepping in the direction of the slope would move us away
from the minimum.
---
 04_mnist_basics.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/04_mnist_basics.ipynb b/04_mnist_basics.ipynb
index 675bb5b36..64a1e2c89 100644
--- a/04_mnist_basics.ipynb
+++ b/04_mnist_basics.ipynb
@@ -2870,7 +2870,7 @@
     "w -= gradient(w) * lr\n",
     "```\n",
     "\n",
-    "This is known as *stepping* your parameters, using an *optimizer step*. Notice how we _subtract_ the `gradient * lr` from the parameter to update it. This allows us to adjust the parameter in the direction of the slope by increasing the parameter when the slope is negative and decreasing the parameter when the slope is positive. We want to adjust our parameters in the direction of the slope because our goal in deep learning is to _minimize_ the loss.\n",
+    "This is known as *stepping* your parameters, using an *optimizer step*. Notice how we _subtract_ the `gradient * lr` from the parameter to update it. This allows us to adjust the parameter in the opposite direction of the slope by increasing the parameter when the slope is negative and decreasing the parameter when the slope is positive. We want to adjust our parameters in the opposite direction of the slope because our goal in deep learning is to _minimize_ the loss.\n",
     "\n",
     "If you pick a learning rate that's too low, it can mean having to do a lot of steps. <> illustrates that."
    ]
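
Note: a minimal sketch, not part of the patch, of why subtracting
`gradient * lr` steps toward the minimum. The quadratic loss and starting
point here are assumed purely for illustration:

    # Gradient descent on loss(w) = (w - 3)**2, which is minimized at w = 3.
    # (Hypothetical example; the patched notebook uses a real model's loss.)

    def gradient(w):
        return 2 * (w - 3)  # derivative of (w - 3)**2

    lr = 0.1
    w = 0.0  # start left of the minimum: the slope is negative, so w increases

    for _ in range(50):
        w -= gradient(w) * lr  # step in the opposite direction of the slope

    print(w)  # converges toward 3.0

Starting to the right of the minimum (say w = 6.0) gives a positive slope,
so the same update decreases w, again moving toward the minimum.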