6 changes: 3 additions & 3 deletions 03_gradient/gradient_descend.livemd
@@ -99,7 +99,7 @@ end
```elixir
alias VegaLite, as: Vl

-# Generate a sequence that will be used as `weigth`
+# Generate a sequence that will be used as `weight`
# From -1 to 4, step 0.01
weights = Enum.map(-100..400, &(&1 / 100))

@@ -149,7 +149,7 @@ defmodule C3.LinearRegressionWithoutBias do
end

@doc """
-Returns the derivate of the loss curve
+Returns the derivative of the loss curve
"""
def gradient(x, y, weight) do
predictions = predict(x, weight, 0)
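The hunk is collapsed after the first line of the body. For orientation, a minimal sketch of what the complete function plausibly looks like — assuming the list-based style of this chapter and the module's own `predict/3` — computing the derivative of the mean-squared-error loss with respect to `weight`:

```elixir
# Sketch only (assumed, not part of this diff): for the loss
# L(w) = 1/m * Σ (x_i * w - y_i)^2 the derivative is
# dL/dw = 2/m * Σ x_i * (x_i * w - y_i)
def gradient(x, y, weight) do
  predictions = predict(x, weight, 0)

  errors = Enum.zip_with(predictions, y, fn prediction, label -> prediction - label end)

  x
  |> Enum.zip_with(errors, fn input, error -> input * error end)
  |> Enum.sum()
  |> Kernel.*(2 / length(x))
end
```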
@@ -213,7 +213,7 @@ defmodule C3.LinearRegressionWithBias do
end

@doc """
-Returns the derivate of the loss curve
+Returns the derivative of the loss curve
"""
def gradient(x, y, weight, bias) do
predictions = predict(x, weight, bias)
2 changes: 1 addition & 1 deletion 04_hyperspace/multiple_regression.livemd
@@ -112,7 +112,7 @@ defmodule C4.MultipleLinearRegression do
end

@doc """
-Returns the derivate of the loss curve.
+Returns the derivative of the loss curve.
"""
defn gradient(x, y, weight) do
# in python:
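The body is collapsed right after the `# in python:` comment. As a rough sketch (assumed, not shown in this diff), the matrix form of that derivative in `defn` would be:

```elixir
# Sketch only: for L(w) = 1/m * Σ (X·w - y)² the gradient is
# ∇L = 2/m * Xᵀ · (X·w - y)
defn gradient(x, y, weight) do
  predictions = Nx.dot(x, weight)
  errors = predictions - y
  m = Nx.axis_size(x, 0)

  x
  |> Nx.transpose()
  |> Nx.dot(errors)
  |> Nx.multiply(2 / m)
end
```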
2 changes: 1 addition & 1 deletion 05_discerning/classifier.livemd
@@ -102,7 +102,7 @@ defmodule C5.Classifier do
end

@doc """
-Returns the derivate of the loss curve.
+Returns the derivative of the loss curve.
"""
defn gradient(x, y, weight) do
# in python:
2 changes: 1 addition & 1 deletion 06_real/digit_classifier.livemd
@@ -183,7 +183,7 @@ defmodule C5.Classifier do
end

@doc """
-Returns the derivate of the loss curve.
+Returns the derivative of the loss curve.
"""
defn gradient(x, y, weight) do
# in python:
2 changes: 1 addition & 1 deletion 07_final/multiclass_classifier.livemd
@@ -186,7 +186,7 @@ defmodule C7.Classifier do
end

@doc """
-Returns the derivate of the loss curve.
+Returns the derivative of the loss curve.
"""
defn gradient(x, y, weight) do
# in python:
2 changes: 1 addition & 1 deletion 07_final/sonar_classifier.livemd
@@ -198,7 +198,7 @@ defmodule C7.Classifier do
end

@doc """
-Returns the derivate of the loss curve.
+Returns the derivative of the loss curve.
"""
@spec gradient(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t()) :: Nx.Tensor.t()
defn gradient(x, y, weight) do
2 changes: 1 addition & 1 deletion 07_final/sonar_seed_comparison.ex
@@ -183,7 +183,7 @@ defmodule C7.Classifier do
end

@doc """
-Returns the derivate of the loss curve.
+Returns the derivative of the loss curve.
"""
@spec gradient(Nx.Tensor.t(), Nx.Tensor.t(), Nx.Tensor.t()) :: Nx.Tensor.t()
defn gradient(x, y, weight) do
2 changes: 1 addition & 1 deletion 11_training/neural_network.livemd
@@ -207,7 +207,7 @@ defmodule C11.Classifier do

Each element in the tensors is now constrained between 0 and 1,
but the activation functions used for `h` and `y_hat` are
-differents:
+different:
- `sigmoid` for the hidden layer `h`
- `softmax` for the prediction tensor `y_hat`
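To make the contrast concrete, a small sketch in plain Nx (not taken from this module): `sigmoid` maps each element into (0, 1) independently, while `softmax` also normalizes each row to sum to 1, so it can be read as a probability distribution over classes.

```elixir
defmodule ActivationsSketch do
  import Nx.Defn

  # Element-wise: every value squashed into (0, 1) on its own
  defn sigmoid(z), do: Nx.sigmoid(z)

  # Row-wise: outputs are positive and sum to 1 along the class axis.
  # Subtracting the row max first is the usual numerical-stability trick.
  defn softmax(z) do
    exps = Nx.exp(z - Nx.reduce_max(z, axes: [-1], keep_axes: true))
    exps / Nx.sum(exps, axes: [-1], keep_axes: true)
  end
end
```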

6 changes: 3 additions & 3 deletions 12_classifiers/12_classifiers_01.livemd
@@ -185,7 +185,7 @@ weight = C12.Perceptron.train(x_train, y_train, x_test, y_test, iterations, lr)

The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
* Classify each point using the weight computed before with the initial dataset
* Plot the result highlighting the "decision boundary"

@@ -200,7 +200,7 @@ y =
x_train
|> Nx.slice_along_axis(2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
x_min =
x
|> Nx.to_flat_list()
@@ -223,7 +223,7 @@ y_max =

padding = 0.05

-bounderies = %{
+boundaries = %{
x_min: x_min - abs(x_min * padding),
x_max: x_max + abs(x_max * padding),
y_min: y_min - abs(y_min * padding),
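For context, the next step the bullet list describes — expanding the `boundaries` map into the grid of points to classify — might look roughly like this (a sketch; `steps` is an assumed resolution, not part of the diff):

```elixir
# Sketch only: build a (steps + 1) x (steps + 1) mesh inside the boundaries
steps = 100
step_x = (boundaries.x_max - boundaries.x_min) / steps
step_y = (boundaries.y_max - boundaries.y_min) / steps

grid =
  for i <- 0..steps, j <- 0..steps do
    %{x: boundaries.x_min + i * step_x, y: boundaries.y_min + j * step_y}
  end
```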
14 changes: 7 additions & 7 deletions 12_classifiers/12_classifiers_02.livemd
@@ -191,7 +191,7 @@ weight = C12.Perceptron.train(x_train, y_train, x_test, y_test, iterations, lr)

The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
* Classify each point using the weight computed before with the initial dataset
* Plot the result highlighting the "decision boundary"

@@ -206,7 +206,7 @@ y =
x_train
|> Nx.slice_along_axis(2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
x_min =
x
|> Nx.to_flat_list()
@@ -229,7 +229,7 @@ y_max =

padding = 0.05

-bounderies = %{
+boundaries = %{
x_min: x_min - abs(x_min * padding),
x_max: x_max + abs(x_max * padding),
y_min: y_min - abs(y_min * padding),
@@ -477,12 +477,12 @@ _Same steps used with the perceptron_

The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
* Classify each point using the weight computed before with the initial dataset
* Plot the result highlighting the "decision boundary"

```elixir
-# Get x from the tensor (this time `x` is not pre-pended by the bias column)
+# Get x from the tensor (this time `x` is not prepended by the bias column)
x =
x_train
|> Nx.slice_along_axis(0, 1, axis: 1)
@@ -492,7 +492,7 @@ y =
x_train
|> Nx.slice_along_axis(1, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
x_min =
x
|> Nx.to_flat_list()
@@ -515,7 +515,7 @@ y_max =

padding = 0.05

-bounderies = %{
+boundaries = %{
x_min: x_min - abs(x_min * padding),
x_max: x_max + abs(x_max * padding),
y_min: y_min - abs(y_min * padding),
14 changes: 7 additions & 7 deletions 12_classifiers/circles_data.livemd
@@ -189,7 +189,7 @@ weight = C12.Perceptron.train(x_train, y_train, x_test, y_test, iterations, lr)

The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
* Classify each point using the weight computed before with the initial dataset
* Plot the result highlighting the "decision boundary"

@@ -204,7 +204,7 @@ y =
x_train
|> Nx.slice_along_axis(2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
x_min =
x
|> Nx.to_flat_list()
@@ -227,7 +227,7 @@ y_max =

padding = 0.05

-bounderies = %{
+boundaries = %{
x_min: x_min - abs(x_min * padding),
x_max: x_max + abs(x_max * padding),
y_min: y_min - abs(y_min * padding),
@@ -465,12 +465,12 @@ _Same steps used with the perceptron_

The idea:

-* Generate a grid of points and use the min/max values from the inital dataset to compute the boundaries.
+* Generate a grid of points and use the min/max values from the initial dataset to compute the boundaries.
* Classify each point using the weight computed before with the initial dataset
* Plot the result highlighting the "decision boundary"

```elixir
-# Get x from the tensor (this time `x` is not pre-pended by the bias column)
+# Get x from the tensor (this time `x` is not prepended by the bias column)
x =
x_train
|> Nx.slice_along_axis(0, 1, axis: 1)
@@ -480,7 +480,7 @@ y =
x_train
|> Nx.slice_along_axis(1, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
x_min =
x
|> Nx.to_flat_list()
@@ -503,7 +503,7 @@ y_max =

padding = 0.05

-bounderies = %{
+boundaries = %{
x_min: x_min - abs(x_min * padding),
x_max: x_max + abs(x_max * padding),
y_min: y_min - abs(y_min * padding),
6 changes: 3 additions & 3 deletions 16_deeper/16_deeper.livemd
@@ -80,7 +80,7 @@ defmodule C16.EchidnaDataset do

# After MinMaxScaling, the distributions are not centered
# at zero and the standard deviation is not 1.
-# Thefore, subtract 0.5 to rescale data between -0.5 and 0.5
+# Therefore, subtract 0.5 to rescale data between -0.5 and 0.5
(x_raw - min) / (max - min) - 0.5
end
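A quick worked example of that rescaling (values assumed): with `min = 10` and `max = 30`, the endpoints land exactly on ±0.5.

```elixir
# Sketch with made-up values: min-max scale into [0, 1], then shift by 0.5
x_raw = Nx.tensor([10.0, 20.0, 30.0])
min = Nx.reduce_min(x_raw)
max = Nx.reduce_max(x_raw)

x_raw
|> Nx.subtract(min)
|> Nx.divide(Nx.subtract(max, min))
|> Nx.subtract(0.5)
#=> [-0.5, 0.0, 0.5]
```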

@@ -287,7 +287,7 @@ defmodule C16.Plotter do
# Get y from the tensor
y = Nx.slice_along_axis(inputs, 2, 1, axis: 1)

-# Compute the grid bounderies
+# Compute the grid boundaries
x_min = x |> Nx.to_flat_list() |> Enum.min()
x_max = x |> Nx.to_flat_list() |> Enum.max()
y_min = y |> Nx.to_flat_list() |> Enum.min()
@@ -354,7 +354,7 @@ Axon.Display.as_graph(new_model, template)
# epochs are defined in the previous model's training

# Set `eps` option in the RMSprop to prevent division by zero (NaN)
-# By deafult in Axon is 1.0e-8, I tried with 1.0e-7 (Keras default) and
+# By default, Axon uses 1.0e-8; I tried 1.0e-7 (the Keras default) and
# it was still returning NaN.
epsilon = 1.0e-4
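For orientation, how this `epsilon` might be threaded into training — a sketch only: `model`, `train_data`, and the loss are assumed from earlier cells, and `Axon.Optimizers.rmsprop/2` is the pre-Polaris API, so treat the exact calls as assumptions.

```elixir
# Sketch (assumed API and names): a larger eps keeps RMSprop's
# sqrt(moving average of squared gradients) + eps denominator from underflowing
optimizer = Axon.Optimizers.rmsprop(0.001, eps: epsilon)

model
|> Axon.Loop.trainer(:categorical_cross_entropy, optimizer)
|> Axon.Loop.run(train_data, %{}, epochs: epochs)
```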

4 changes: 2 additions & 2 deletions 17_overfitting/17_overfitting.livemd
@@ -83,7 +83,7 @@ defmodule C17.EchidnaDataset do

# After MinMaxScaling, the distributions are not centered
# at zero and the standard deviation is not 1.
-# Thefore, subtract 0.5 to rescale data between -0.5 and 0.5
+# Therefore, subtract 0.5 to rescale data between -0.5 and 0.5
(x_raw - min) / (max - min) - 0.5
end

@@ -132,7 +132,7 @@ validation_data = [{x_validation, y_validation}]
epochs = 30_000

# Set `eps` option in the RMSprop to prevent division by zero (NaN)
-# By deafult in Axon is 1.0e-8, I tried with 1.0e-7 (Keras default) and
+# By default, Axon uses 1.0e-8; I tried 1.0e-7 (the Keras default) and
# it was still returning NaN.
epsilon = 1.0e-4

2 changes: 1 addition & 1 deletion 19_beyond/beyond_vanilla_networks.livemd
@@ -44,7 +44,7 @@ rows = 4

key = Nx.Random.key(42)

-# Compute random indeces
+# Compute random indices
indices =
{elem(images_shape, 0) - 1}
|> Nx.iota()
2 changes: 1 addition & 1 deletion README.md
@@ -52,7 +52,7 @@ If you want to launch Livebook via Docker:

### Differences between Livebook and Jupyter notebooks

-* I could replicate all the different Jupyter books in Elixir with Livebook/Nx/Axon, apart from the 2nd section of Chapter 17, where the book introduces L1/L2 regularization tecniques and these are not supported by [Axon](https://github.com/elixir-nx/axon) out of the box (more details in the corresponding Livebook).
+* I could replicate all the different Jupyter notebooks in Elixir with Livebook/Nx/Axon, apart from the 2nd section of Chapter 17, where the book introduces L1/L2 regularization techniques, which are not supported by [Axon](https://github.com/elixir-nx/axon) out of the box (more details in the corresponding Livebook).

### Code Style
