
Commit c39df02

awgu and Svetlana Karslioglu authored
Minor math render fix (#1977)
Co-authored-by: Svetlana Karslioglu <[email protected]>
1 parent 72b0ca7 commit c39df02

File tree: 1 file changed (+2 −2 lines)


intermediate_source/autograd_saved_tensors_hooks_tutorial.py (+2 −2)
@@ -45,7 +45,7 @@
 
 
 ######################################################################
-# We start with a simple example: :math: `y = a \mapsto \cdot b` , for which
+# We start with a simple example: :math:`y = a \cdot b` , for which
 # we know the gradients of :math:`y` with respect to :math:`a` and
 # :math:`b`:
 #
@@ -108,7 +108,7 @@ def f(x):
 ######################################################################
 # In the example above, executing without grad would only have kept ``x``
 # and ``y`` in the scope, But the graph additionnally stores ``f(x)`` and
-# ``f(f(x)``. Hence, running a forward pass during training will be more
+# ``f(f(x))``. Hence, running a forward pass during training will be more
 # costly in memory usage than during evaluation (more precisely, when
 # autograd is not required).
 #
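For context, the two corrected comments make concrete, checkable claims: the fixed formula :math:`y = a \cdot b` has gradients :math:`\partial y/\partial a = b` and :math:`\partial y/\partial b = a`, and with autograd enabled the graph keeps intermediates such as ``f(x)`` and ``f(f(x))`` alive for the backward pass. A minimal sketch of both points, assuming PyTorch is installed (the quadratic ``f`` below is an illustrative stand-in, not the tutorial's exact function):

    import torch

    # Gradients of y = a * b: dy/da = b and dy/db = a.
    a = torch.tensor(2.0, requires_grad=True)
    b = torch.tensor(3.0, requires_grad=True)
    y = a * b
    y.backward()
    print(a.grad, b.grad)  # tensor(3.) tensor(2.)

    def f(x):
        # Illustrative stand-in; pow saves its input for backward.
        return x ** 2

    x = torch.randn(1000, requires_grad=True)

    # Training-mode forward: the graph additionally stores f(x) and
    # f(f(x)) so backward can use them, costing extra memory.
    out_train = f(f(x))

    # Evaluation-mode forward: no graph is built, so the intermediate
    # f(x) is freed as soon as it goes out of scope.
    with torch.no_grad():
        out_eval = f(f(x))

This forward-pass memory asymmetry is the behavior the tutorial's saved-tensors hooks are meant to let you control.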
