1 file changed, +2 −2 lines changed

@@ -45,7 +45,7 @@
######################################################################
- # We start with a simple example: :math: `y = a \mapsto \cdot b` , for which
+ # We start with a simple example: :math:`y = a \cdot b` , for which
# we know the gradients of :math:`y` with respect to :math:`a` and
# :math:`b`:
#
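The corrected sentence introduces the toy example the tutorial builds on: y = a * b, for which dy/da = b and dy/db = a. As a rough illustration of that claim (this is not the tutorial's own code, just a minimal PyTorch sketch):

```python
import torch

# Two scalar leaves that require gradients.
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)

y = a * b          # autograd records the multiplication
y.backward()       # populates a.grad and b.grad

print(a.grad)      # tensor(3.) == b, i.e. dy/da = b
print(b.grad)      # tensor(2.) == a, i.e. dy/db = a
```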
@@ -108,7 +108,7 @@ def f(x):
######################################################################
# In the example above, executing without grad would only have kept ``x``
# and ``y`` in the scope, But the graph additionnally stores ``f(x)`` and
- # ``f(f(x)``. Hence, running a forward pass during training will be more
+ # ``f(f(x))``. Hence, running a forward pass during training will be more
# costly in memory usage than during evaluation (more precisely, when
# autograd is not required).
#
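The paragraph touched by this hunk makes a concrete memory claim: with autograd enabled, the recorded graph keeps the intermediates ``f(x)`` and ``f(f(x))`` alive, so a forward pass in grad mode retains more memory than one under ``torch.no_grad()``. A minimal sketch of how one might observe the difference (the ``f`` below is a hypothetical stand-in, not the tutorial's definition):

```python
import torch

def f(x):
    # Hypothetical stand-in for the tutorial's f; any differentiable op works.
    return x.sin()

x = torch.randn(1000, requires_grad=True)

# Grad mode: autograd records the graph, which keeps references to the
# intermediate results f(x) and f(f(x)) for use in the backward pass.
y = f(f(f(x)))
print(y.grad_fn is not None)   # True: a graph (and its saved tensors) exists

# no_grad mode: no graph is built, so intermediates can be freed as soon
# as they fall out of scope.
with torch.no_grad():
    y_eval = f(f(f(x)))
print(y_eval.grad_fn is None)  # True: nothing extra is retained
```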