Commit a3a3eaa

New titles
1 parent b049a7c commit a3a3eaa

3 files changed (+16, -16 lines)

src/app/blog/lets-write-a-dht-1/page.mdx (1 addition, 1 deletion)

@@ -7,7 +7,7 @@ export const post = {
   date: '2025-09-18',
   title: 'A DHT for iroh',
   description:
-    "Let's write a DHT for iroh - part 1",
+    "Let's write a DHT for iroh - protocol",
 }
 
 export const metadata = {

src/app/blog/lets-write-a-dht-2/page.mdx (1 addition, 1 deletion)

@@ -7,7 +7,7 @@ export const post = {
   date: '2025-09-19',
   title: 'A DHT for iroh',
   description:
-    "Let's write a DHT for iroh - part 2",
+    "Let's write a DHT for iroh - implementation",
 }
 
 export const metadata = {

src/app/blog/lets-write-a-dht-3/page.mdx (14 additions, 14 deletions)

@@ -7,7 +7,7 @@ export const post = {
   date: '2025-09-20',
   title: 'A DHT for iroh',
   description:
-    "Let's write a DHT for iroh - part 3",
+    "Let's write a DHT for iroh - tests",
 }
 
 export const metadata = {
@@ -136,27 +136,27 @@ We are using the [textplots] crate to get nice plots in the console, with some o
 ```
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/perfect_routing_tables_1k_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/perfect_routing_tables_1k_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 So far, so good. We get a 100% overlap with the perfect set of ids.
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/perfect_routing_tables_1k_-_storage_usage_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/perfect_routing_tables_1k_-_storage_usage_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/perfect_routing_tables_1k_-_histogram_-_storage_usage_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/perfect_routing_tables_1k_-_histogram_-_storage_usage_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 Data is evenly distributed over the DHT nodes. The histogram also looks reasonable. Since we have 100% overlap with the perfect set of nodes, the little bump at the end is just a blip, provided our XOR metric works.
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/perfect_routing_tables_1k_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/perfect_routing_tables_1k_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/perfect_routing_tables_1k_-_histogram_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/perfect_routing_tables_1k_-_histogram_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 All routing tables have roughly the same size. Not surprising, since we have initialized them all with a randomized sequence of all node ids.
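The context in the hunk above leans on the XOR metric and on a "perfect set" of the 20 ids closest to a target. As a minimal sketch of both ideas (not code from the post: ids are shortened to 4 bytes for brevity, iroh node ids are 32 bytes, and the function names are made up):

```rust
/// Toy 4-byte ids to keep the example short; the XOR metric works
/// identically on any fixed-size byte array.
type Id = [u8; 4];

/// XOR distance: bytewise XOR, compared lexicographically.
fn xor_distance(a: &Id, b: &Id) -> Id {
    let mut d = [0u8; 4];
    for i in 0..4 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// The "perfect set": the k ids closest to `target` under the XOR metric,
/// computed with global knowledge of all node ids (only possible in a test).
fn k_closest(target: &Id, ids: &[Id], k: usize) -> Vec<Id> {
    let mut sorted = ids.to_vec();
    sorted.sort_by_key(|id| xor_distance(target, id));
    sorted.truncate(k);
    sorted
}
```

In a simulation this global `k_closest` gives the ground truth that each node's actual lookup answers are measured against.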
@@ -176,21 +176,21 @@ So let's see how bad it is.
 ```
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/just_bootstrap_1k_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/just_bootstrap_1k_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 Pretty bad. The routing gives between 0 and 11 correct nodes. Note that even with this very suboptimal routing table setup, the lookup would work 100% of the time if you use the same node for storage and retrieval, since it always gives the same wrong answer. If you were to use different nodes, there is still a decent chance of an overlap.
 
 Let's look at more stats:
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/just_bootstrap_1k_-_storage_usage_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/just_bootstrap_1k_-_storage_usage_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 We interact with the node with index 500, and since each node only knows about nodes further right on the ring, the nodes to the left of our initial node are not used at all.
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/just_bootstrap_1k_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/just_bootstrap_1k_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 We have only interacted with node 500, so it has learned a bit about the network. The other nodes only have the initial 20 bootstrap nodes. We have run all nodes in transient mode, so they don't learn about other nodes unless they actively perform a query, which in this case only node 500 has done.
@@ -206,19 +206,19 @@ cargo test --release self_lookup_strategy -- --nocapture
 Initially, things look just as bad as with just bootstrap nodes:
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/self_lookup_strategy-0_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/self_lookup_strategy-0_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 But after just a few random lookups, routing results are close to perfect for this small DHT:
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/self_lookup_strategy-9_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/self_lookup_strategy-9_-_histogram_-_commonality_with_perfect_set_of_20_ids.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 The node that is being probed can still be clearly seen, but after just a few self lookups, at least all nodes have reasonably sized routing tables:
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/self_lookup_strategy-9_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/self_lookup_strategy-9_-_routing_table_size_per_node.png" width={1000} alt="Removal of 100 nodes" />
 </div>
 
 ## How big can you go?
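The self-lookup strategy exercised by this test can be sketched with a toy synchronous model: each node iteratively looks up its own id and merges everything it hears into its routing table. All names and the crawl-everything simplification below are assumptions for illustration, not the post's actual implementation:

```rust
use std::collections::{BTreeMap, BTreeSet};

type Id = u16; // toy ids; real node ids are 32 bytes

/// Toy network model: every node's routing table is just a set of known ids.
struct Network {
    tables: BTreeMap<Id, BTreeSet<Id>>,
}

impl Network {
    /// Iterative lookup of the node's own id: repeatedly query the closest
    /// known unqueried node and merge its neighbors into our table. This toy
    /// version crawls every reachable node instead of stopping at the k
    /// closest, but the table-filling effect is the same.
    fn self_lookup(&mut self, origin: Id) {
        let mut queried = BTreeSet::new();
        loop {
            let next = self.tables[&origin]
                .iter()
                .copied()
                .filter(|n| *n != origin && !queried.contains(n))
                .min_by_key(|n| n ^ origin);
            let Some(next) = next else { break };
            queried.insert(next);
            // the queried node answers with its current neighbors...
            let learned: Vec<Id> = self.tables[&next].iter().copied().collect();
            // ...and, having been queried, also learns about the origin
            self.tables.get_mut(&next).unwrap().insert(origin);
            self.tables.get_mut(&origin).unwrap().extend(learned);
        }
    }
}
```

Even starting from a single bootstrap entry, one self lookup lets a node discover the bootstrap node's neighbors, and lets those neighbors learn about the new node in turn.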
@@ -291,7 +291,7 @@ But now we need to additionally show evolution of this bitmap over time. Fortuna
 Here it is:
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/partition_1k.gif" width={1000} height={1000} alt="Addition of 100 partitioned nodes" />
+  <img src="/blog/lets-write-a-dht-3/partition_1k.gif" width={1000} height={1000} alt="Addition of 100 partitioned nodes" />
 </div>
 
 The big diagonal bar is the bootstrap nodes. As you can see, they wrap around at 900. The lowest 100 rows are the routing tables of the initially disconnected nodes, and the rightmost 100 pixels are what the connected nodes know about the initially disconnected ones. Both are initially empty. As soon as the 100 partitioned nodes are connected, the new nodes very quickly learn about the main swarm, and the main swarm somewhat more slowly learns about the 100 new nodes. So everything works as designed.
@@ -336,7 +336,7 @@ async fn remove_1k() -> TestResult<()> {
 And here is the resulting gif:
 
 <div className="not-prose">
-  <img src="/blog/lets-write-a-dht/remove_1k.gif" width={1000} height={1000} alt="Removal of 100 nodes" />
+  <img src="/blog/lets-write-a-dht-3/remove_1k.gif" width={1000} height={1000} alt="Removal of 100 nodes" />
 </div>
 
 You can clearly see all nodes quickly forgetting about the dead nodes (the last 100 pixels in each row). So removal of dead nodes works in principle. You could of course accelerate this by explicitly pinging all routing table entries at regular intervals, but that would be costly, and only gradually forgetting about dead nodes might even have some advantages: there is a grace period during which nodes can come back.
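The gradual forgetting described in that context can be sketched as a per-entry failure counter: an entry survives a few failed contacts (the grace period) and is evicted only after repeated consecutive failures. The threshold and all names here are illustrative assumptions, not the post's actual implementation:

```rust
use std::collections::BTreeMap;

/// Illustrative threshold: evict an entry after this many consecutive
/// failed contacts (the real value would be a tuning decision).
const MAX_FAILURES: u32 = 3;

#[derive(Default)]
struct RoutingTable {
    /// node id -> consecutive failed contact attempts
    failures: BTreeMap<u64, u32>,
}

impl RoutingTable {
    fn insert(&mut self, id: u64) {
        self.failures.entry(id).or_insert(0);
    }

    fn contains(&self, id: u64) -> bool {
        self.failures.contains_key(&id)
    }

    /// Record the outcome of contacting `id`. A success resets the counter
    /// (a node that comes back within the grace period is rehabilitated);
    /// enough consecutive failures evict the entry.
    fn record(&mut self, id: u64, ok: bool) {
        let Some(f) = self.failures.get_mut(&id) else { return };
        if ok {
            *f = 0;
        } else {
            *f += 1;
            if *f >= MAX_FAILURES {
                self.failures.remove(&id);
            }
        }
    }
}
```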
