Commit 0aff72f

DeepNorm (labmlai#114)
1 parent 3aaae6a commit 0aff72f

15 files changed: +1490 -25 lines

.gitignore (+2 -1)

@@ -13,4 +13,5 @@ labml_samples
 data
 logs
 html/
-diagrams/
+diagrams/
+.comet.config

.labml.yaml (+21)

@@ -0,0 +1,21 @@
+indicators:
+  - class_name: Scalar
+    is_print: false
+    name: param.*
+    options:
+      comet: false
+  - class_name: Scalar
+    is_print: false
+    name: grad.*
+    options:
+      comet: false
+  - class_name: Scalar
+    is_print: false
+    name: module.*
+    options:
+      comet: false
+  - class_name: Scalar
+    is_print: false
+    name: optim.*
+    options:
+      comet: false
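The new config declares four wildcard Scalar indicators (param.*, grad.*, module.*, optim.*) that labml will track but neither print to the console nor forward to Comet. A minimal sketch of the equivalent programmatic setup, assuming labml's tracker.set_scalar helper accepts an is_print flag matching the YAML key; the comet option appears to be config-only and is omitted here:

    # Hypothetical in-code equivalent of the .labml.yaml rules above,
    # written against labml's tracker API (the is_print keyword is an
    # assumption based on the YAML key of the same name).
    from labml import tracker

    # Track these metric families as scalars, but keep them out of the
    # per-step console summary.
    for pattern in ['param.*', 'grad.*', 'module.*', 'optim.*']:
        tracker.set_scalar(pattern, is_print=False)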

docs/index.html (+10 -9)

@@ -69,7 +69,7 @@
 </div>
 <h1><a href="index.html">labml.ai Annotated PyTorch Paper Implementations</a></h1>
 <p>This is a collection of simple PyTorch implementations of neural networks and related algorithms. <a href="https://github.com/labmlai/annotated_deep_learning_paper_implementations">These implementations</a> are documented with explanations, and the <a href="index.html">website</a> renders these as side-by-side formatted notes. We believe these would help you understand these algorithms better.</p>
-<p><img alt="Screenshot" src="https://nn.labml.ai/dqn-light.png"></p>
+<p><img alt="Screenshot" src="dqn-light.png"></p>
 <p>We are actively maintaining this repo and adding new implementations. <a href="https://twitter.com/labmlai"><img alt="Twitter" src="https://img.shields.io/twitter/follow/labmlai?style=social"></a> for updates.</p>
 <h2>Modules</h2>
 <h4><a href="transformers/index.html">Transformers</a></h4>
@@ -126,13 +126,14 @@ <h4>✨ <a href="optimizers/index.html">Optimizers</a></h4>
 <li><a href="optimizers/noam.html">Noam Optimizer</a> </li>
 <li><a href="optimizers/radam.html">Rectified Adam Optimizer</a> </li>
 <li><a href="optimizers/ada_belief.html">AdaBelief Optimizer</a></li></ul>
-<h4><a href="https://nn.labml.ai/normalization/index.html">Normalization Layers</a></h4>
-<ul><li><a href="https://nn.labml.ai/normalization/batch_norm/index.html">Batch Normalization</a> </li>
-<li><a href="https://nn.labml.ai/normalization/layer_norm/index.html">Layer Normalization</a> </li>
-<li><a href="https://nn.labml.ai/normalization/instance_norm/index.html">Instance Normalization</a> </li>
-<li><a href="https://nn.labml.ai/normalization/group_norm/index.html">Group Normalization</a> </li>
-<li><a href="https://nn.labml.ai/normalization/weight_standardization/index.html">Weight Standardization</a> </li>
-<li><a href="https://nn.labml.ai/normalization/batch_channel_norm/index.html">Batch-Channel Normalization</a></li></ul>
+<h4><a href="normalization/index.html">Normalization Layers</a></h4>
+<ul><li><a href="normalization/batch_norm/index.html">Batch Normalization</a> </li>
+<li><a href="normalization/layer_norm/index.html">Layer Normalization</a> </li>
+<li><a href="normalization/instance_norm/index.html">Instance Normalization</a> </li>
+<li><a href="normalization/group_norm/index.html">Group Normalization</a> </li>
+<li><a href="normalization/weight_standardization/index.html">Weight Standardization</a> </li>
+<li><a href="normalization/batch_channel_norm/index.html">Batch-Channel Normalization</a> </li>
+<li><a href="normalization/deep_norm/index.html">DeepNorm</a></li></ul>
 <h4><a href="distillation/index.html">Distillation</a></h4>
 <h4><a href="adaptive_computation/index.html">Adaptive Computation</a></h4>
 <ul><li><a href="adaptive_computation/ponder_net/index.html">PonderNet</a></li></ul>
@@ -146,7 +147,7 @@ <h3>Citing LabML</h3>
 <span class="w"> </span><span class="na">author</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="s">{Varuna Jayasiri, Nipun Wijerathne}</span><span class="p">,</span><span class="w"></span>
 <span class="w"> </span><span class="na">title</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="s">{labml.ai Annotated Paper Implementations}</span><span class="p">,</span><span class="w"></span>
 <span class="w"> </span><span class="na">year</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="s">{2020}</span><span class="p">,</span><span class="w"></span>
-<span class="w"> </span><span class="na">url</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="s">{https://nn.labml.ai/}</span><span class="p">,</span><span class="w"></span>
+<span class="w"> </span><span class="na">url</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="s">{}</span><span class="p">,</span><span class="w"></span>
 <span class="p">}</span><span class="w"></span></code></pre>
 
 </div>
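The headline change is the DeepNorm implementation now linked above. DeepNorm, from "DeepNet: Scaling Transformers to 1,000 Layers" (Wang et al., 2022), stabilizes very deep post-LN transformers by scaling the residual branch: x_{l+1} = LayerNorm(alpha * x_l + G_l(x_l)) with a constant alpha > 1 chosen from the network depth, alongside a matching down-scaling of sub-layer weight initializations. A minimal PyTorch sketch of the residual connection (the class name here is illustrative, not the repo's annotated implementation):

    import torch
    import torch.nn as nn

    class DeepNormResidual(nn.Module):
        """Post-LayerNorm residual scaled by a depth-dependent constant.

        Computes x_{l+1} = LayerNorm(alpha * x + sublayer(x)).
        """

        def __init__(self, sublayer: nn.Module, d_model: int, alpha: float):
            super().__init__()
            self.sublayer = sublayer          # e.g. an attention or FFN block
            self.alpha = alpha                # constant > 1, grows with depth
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.norm(self.alpha * x + self.sublayer(x))

    # For an encoder-only model with n_layers layers, the paper sets
    # alpha = (2 * n_layers) ** 0.25 and initializes some sub-layer
    # weights with gain beta = (8 * n_layers) ** -0.25.
    n_layers, d_model = 12, 512
    block = DeepNormResidual(nn.Linear(d_model, d_model), d_model,
                             alpha=(2 * n_layers) ** 0.25)
    out = block(torch.randn(4, d_model))  # -> shape (4, 512)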
