Commit 18caae7

docs: fix typos (#1)

Found via `codespell -S data`

Authored May 22, 2023 · 1 parent bf9d047 · commit 18caae7

15 files changed: +34 -34 lines changed

03_gradient/gradient_descend.livemd (+3 -3)

@@ -99,7 +99,7 @@ end
 ```elixir
 alias VegaLite, as: Vl

-# Generate a sequence that will be used as `weigth`
+# Generate a sequence that will be used as `weight`
 # From -1 to -4, step 0.01
 weights = Enum.map(-100..400, &(&1 / 100))


@@ -149,7 +149,7 @@ defmodule C3.LinearRegressionWithoutBias do
   end

   @doc """
-  Returns the derivate of the loss curve
+  Returns the derivative of the loss curve
   """
   def gradient(x, y, weight) do
     predictions = predict(x, weight, 0)

@@ -213,7 +213,7 @@ defmodule C3.LinearRegressionWithBias do
   end

   @doc """
-  Returns the derivate of the loss curve
+  Returns the derivative of the loss curve
   """
   def gradient(x, y, weight, bias) do
     predictions = predict(x, weight, bias)
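For context on what these hunks document: `gradient/3` and `gradient/4` return the derivative of the book's mean-squared-error loss with respect to the weight (and bias). A minimal sketch of the idea, assuming that loss; the module and the `mean/1` helper are hypothetical and not part of the commit:

```elixir
defmodule C3.GradientSketch do
  # Hypothetical illustration, assuming the mean-squared-error loss:
  # loss(w, b) = mean((w * x + b - y)^2)
  def predict(x, weight, bias), do: Enum.map(x, &(&1 * weight + bias))

  # d(loss)/dw = 2 * mean(x * error)   d(loss)/db = 2 * mean(error)
  def gradient(x, y, weight, bias) do
    errors = Enum.zip_with(predict(x, weight, bias), y, &(&1 - &2))
    {2 * mean(Enum.zip_with(x, errors, &(&1 * &2))), 2 * mean(errors)}
  end

  defp mean(list), do: Enum.sum(list) / length(list)
end
```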

04_hyperspace/multiple_regression.livemd (+1 -1)

@@ -112,7 +112,7 @@ defmodule C4.MultipleLinearRegression do
   end

   @doc """
-  Returns the derivate of the loss curve.
+  Returns the derivative of the loss curve.
   """
   defn gradient(x, y, weight) do
     # in python:

05_discerning/classifier.livemd (+1 -1)

@@ -102,7 +102,7 @@ defmodule C5.Classifier do
   end

   @doc """
-  Returns the derivate of the loss curve.
+  Returns the derivative of the loss curve.
   """
   defn gradient(x, y, weight) do
     # in python:

06_real/digit_classifier.livemd (+1 -1)

@@ -183,7 +183,7 @@ defmodule C5.Classifier do
   end

   @doc """
-  Returns the derivate of the loss curve.
+  Returns the derivative of the loss curve.
   """
   defn gradient(x, y, weight) do
     # in python:

07_final/multiclass_classifier.livemd (+1 -1)

@@ -186,7 +186,7 @@ defmodule C7.Classifier do
   end

   @doc """
-  Returns the derivate of the loss curve.
+  Returns the derivative of the loss curve.
   """
   defn gradient(x, y, weight) do
     # in python:

07_final/sonar_classifier.livemd (+1 -1)

@@ -198,7 +198,7 @@ defmodule C7.Classifier do
   end

   @doc """
-  Returns the derivate of the loss curve.
+  Returns the derivative of the loss curve.
   """
   @spec gradient(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t()) :: Nx.Tensor.t()
   defn gradient(x, y, weight) do

07_final/sonar_seed_comparison.ex (+1 -1)

@@ -183,7 +183,7 @@ defmodule C7.Classifier do
   end

   @doc """
-  Returns the derivate of the loss curve.
+  Returns the derivative of the loss curve.
   """
   @spec gradient(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t()) :: Nx.Tensor.t()
   defn gradient(x, y, weight) do

11_training/neural_network.livemd (+1 -1)

@@ -207,7 +207,7 @@ defmodule C11.Classifier do

 Each element in the tensors is now constrained between 0 and 1,
 but the activation functions used for `h` and `y_hat` are
-differents:
+different:
 - `sigmoid` for the hidden layer `h`
 - `softmax` for the prediction tensor `y_hat`

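The hunk above names the two activations this network uses. As a standalone illustration (not part of the commit; the module name is hypothetical), both can be written with `Nx.Defn`:

```elixir
defmodule C11.ActivationsSketch do
  import Nx.Defn

  # Squashes each element independently into the open interval (0, 1)
  defn sigmoid(z), do: 1 / (1 + Nx.exp(-z))

  # Turns each row into a probability distribution summing to 1;
  # subtracting the row max keeps Nx.exp/1 numerically stable
  defn softmax(z) do
    exp = Nx.exp(z - Nx.reduce_max(z, axes: [-1], keep_axes: true))
    exp / Nx.sum(exp, axes: [-1], keep_axes: true)
  end
end
```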

12_classifiers/12_classifiers_01.livemd (+3 -3)

@@ -185,7 +185,7 @@ weight = C12.Perceptron.train(x_train, y_train, x_test, y_test, iterations, lr)

 The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
 * Classify each point using the weight computed before with the initial dataset
 * Plot the result highlighting the "decision boundary"


@@ -200,7 +200,7 @@ y =
   x_train
   |> Nx.slice_along_axis(2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
 x_min =
   x
   |> Nx.to_flat_list()

@@ -223,7 +223,7 @@ y_max =

 padding = 0.05

-bounderies = %{
+boundaries = %{
   x_min: x_min - abs(x_min * padding),
   x_max: x_max + abs(x_max * padding),
   y_min: y_min - abs(y_min * padding),
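The bullet list in this file's first hunk describes the decision-boundary plot that the `boundaries` map feeds into. A rough sketch of the grid-generation step, for illustration only (the `resolution` value and the anonymous helper are assumptions, not part of the commit):

```elixir
# Mesh the padded bounding box into a grid of points to classify
resolution = 200

interpolate = fn min, max ->
  Enum.map(0..resolution, fn i -> min + (max - min) * i / resolution end)
end

grid =
  for gx <- interpolate.(boundaries.x_min, boundaries.x_max),
      gy <- interpolate.(boundaries.y_min, boundaries.y_max),
      do: {gx, gy}

# Each {gx, gy} point is then classified with the previously trained
# weight and plotted, colouring the plane to reveal the decision boundary
```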

12_classifiers/12_classifiers_02.livemd (+7 -7)

@@ -191,7 +191,7 @@ weight = C12.Perceptron.train(x_train, y_train, x_test, y_test, iterations, lr)

 The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
 * Classify each point using the weight computed before with the initial dataset
 * Plot the result highlighting the "decision boundary"


@@ -206,7 +206,7 @@ y =
   x_train
   |> Nx.slice_along_axis(2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
 x_min =
   x
   |> Nx.to_flat_list()

@@ -229,7 +229,7 @@ y_max =

 padding = 0.05

-bounderies = %{
+boundaries = %{
   x_min: x_min - abs(x_min * padding),
   x_max: x_max + abs(x_max * padding),
   y_min: y_min - abs(y_min * padding),

@@ -477,12 +477,12 @@ _Same steps used with the perceptron_

 The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
 * Classify each point using the weight computed before with the initial dataset
 * Plot the result highlighting the "decision boundary"

 ```elixir
-# Get x from the tensor (this time `x` is not pre-pended by the bias column)
+# Get x from the tensor (this time `x` is not prepended by the bias column)
 x =
   x_train
   |> Nx.slice_along_axis(0, 1, axis: 1)

@@ -492,7 +492,7 @@ y =
   x_train
   |> Nx.slice_along_axis(1, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
 x_min =
   x
   |> Nx.to_flat_list()

@@ -515,7 +515,7 @@ y_max =

 padding = 0.05

-bounderies = %{
+boundaries = %{
   x_min: x_min - abs(x_min * padding),
   x_max: x_max + abs(x_max * padding),
   y_min: y_min - abs(y_min * padding),

12_classifiers/circles_data.livemd (+7 -7)

@@ -189,7 +189,7 @@ weight = C12.Perceptron.train(x_train, y_train, x_test, y_test, iterations, lr)

 The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
 * Classify each point using the weight computed before with the initial dataset
 * Plot the result highlighting the "decision boundary"


@@ -204,7 +204,7 @@ y =
   x_train
   |> Nx.slice_along_axis(2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
 x_min =
   x
   |> Nx.to_flat_list()

@@ -227,7 +227,7 @@ y_max =

 padding = 0.05

-bounderies = %{
+boundaries = %{
   x_min: x_min - abs(x_min * padding),
   x_max: x_max + abs(x_max * padding),
   y_min: y_min - abs(y_min * padding),

@@ -465,12 +465,12 @@ _Same steps used with the perceptron_

 The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
 * Classify each point using the weight computed before with the initial dataset
 * Plot the result highlighting the "decision boundary"

 ```elixir
-# Get x from the tensor (this time `x` is not pre-pended by the bias column)
+# Get x from the tensor (this time `x` is not prepended by the bias column)
 x =
   x_train
   |> Nx.slice_along_axis(0, 1, axis: 1)

@@ -480,7 +480,7 @@ y =
   x_train
   |> Nx.slice_along_axis(1, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
 x_min =
   x
   |> Nx.to_flat_list()

@@ -503,7 +503,7 @@ y_max =

 padding = 0.05

-bounderies = %{
+boundaries = %{
   x_min: x_min - abs(x_min * padding),
   x_max: x_max + abs(x_max * padding),
   y_min: y_min - abs(y_min * padding),

16_deeper/16_deeper.livemd (+3 -3)

@@ -80,7 +80,7 @@ defmodule C16.EchidnaDataset do

     # After MinMaxScaling, the distributions are not centered
     # at zero and the standard deviation is not 1.
-    # Thefore, subtract 0.5 to rescale data between -0.5 and 0.5
+    # Therefore, subtract 0.5 to rescale data between -0.5 and 0.5
     (x_raw - min) / (max - min) - 0.5
   end


@@ -287,7 +287,7 @@ defmodule C16.Plotter do
     # Get y from the tensor
     y = Nx.slice_along_axis(inputs, 2, 1, axis: 1)

-    # Compute the grid bounderies
+    # Compute the grid boundaries
     x_min = x |> Nx.to_flat_list() |> Enum.min()
     x_max = x |> Nx.to_flat_list() |> Enum.max()
     y_min = y |> Nx.to_flat_list() |> Enum.min()

@@ -354,7 +354,7 @@ Axon.Display.as_graph(new_model, template)
 # epochs are defined in the previous model's training

 # Set `eps` option in the RMSprop to prevent division by zero (NaN)
-# By deafult in Axon is 1.0e-8, I tried with 1.0e-7 (Keras default) and
+# By default in Axon is 1.0e-8, I tried with 1.0e-7 (Keras default) and
 # it was still returning NaN.
 epsilon = 1.0e-4

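The first hunk in this file documents min-max scaling shifted by 0.5, which maps each feature into [-0.5, 0.5]. A worked example, for illustration only (the sample values are made up):

```elixir
# With min = 0.0 and max = 10.0, a raw value of 7.5 maps to 0.25,
# and the whole range [0.0, 10.0] maps onto [-0.5, 0.5]
x_raw = 7.5
min = 0.0
max = 10.0
(x_raw - min) / (max - min) - 0.5
#=> 0.25
```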

17_overfitting/17_overfitting.livemd (+2 -2)

@@ -83,7 +83,7 @@ defmodule C17.EchidnaDataset do

     # After MinMaxScaling, the distributions are not centered
     # at zero and the standard deviation is not 1.
-    # Thefore, subtract 0.5 to rescale data between -0.5 and 0.5
+    # Therefore, subtract 0.5 to rescale data between -0.5 and 0.5
     (x_raw - min) / (max - min) - 0.5
   end


@@ -132,7 +132,7 @@ validation_data = [{x_validation, y_validation}]
 epochs = 30_000

 # Set `eps` option in the RMSprop to prevent division by zero (NaN)
-# By deafult in Axon is 1.0e-8, I tried with 1.0e-7 (Keras default) and
+# By default in Axon is 1.0e-8, I tried with 1.0e-7 (Keras default) and
 # it was still returning NaN.
 epsilon = 1.0e-4

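Both this file and the previous one pass the larger `epsilon` to RMSprop. A hedged sketch of how that could look with the `Axon.Optimizers` API of the time; the learning rate and the loss are placeholders, not taken from the diff:

```elixir
# Assumption: Axon.Optimizers.rmsprop/2 accepts an :eps option
# (its default is the 1.0e-8 mentioned in the comment above)
epsilon = 1.0e-4
optimizer = Axon.Optimizers.rmsprop(0.001, eps: epsilon)

# The optimizer is then handed to the training loop, e.g.
# Axon.Loop.trainer(model, :categorical_cross_entropy, optimizer)
```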

19_beyond/beyond_vanilla_networks.livemd (+1 -1)

@@ -44,7 +44,7 @@ rows = 4

 key = Nx.Random.key(42)

-# Compute random indeces
+# Compute random indices
 indices =
   {elem(images_shape, 0) - 1}
   |> Nx.iota()
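The hunk above prepares random sampling of images. A hypothetical continuation under the obvious reading (shuffle the index tensor and keep the first few; `images` and the exact pipeline are assumptions, not shown in the diff):

```elixir
# Shuffle the row indices, keep the first `rows`, gather those images
{shuffled, _new_key} =
  {elem(images_shape, 0) - 1}
  |> Nx.iota()
  |> then(&Nx.Random.shuffle(key, &1))

random_indices = Nx.slice_along_axis(shuffled, 0, rows)
sample = Nx.take(images, random_indices)
```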

README.md (+1 -1)

@@ -52,7 +52,7 @@ If you want to launch Livebook via Docker:

 ### Differences between Livebook an Jupyter books

-* I could replicate all the different Jupyter books in Elixir with Livebook/Nx/Axon, apart from the 2nd section of Chapter 17, where the book introduces L1/L2 regularization tecniques and these are not supported by [Axon](https://github.com/elixir-nx/axon) out of the box (more details in the corresponding Livebook).
+* I could replicate all the different Jupyter books in Elixir with Livebook/Nx/Axon, apart from the 2nd section of Chapter 17, where the book introduces L1/L2 regularization techniques and these are not supported by [Axon](https://github.com/elixir-nx/axon) out of the box (more details in the corresponding Livebook).

 ### Code Style

