182 changes: 86 additions & 96 deletions src/app/docs/quickstart/page.mdx
If you're ever confused about what to import, take a look at the imports in the finished example.

We'll assume you've set up [rust](https://www.rust-lang.org/) and [cargo](https://doc.rust-lang.org/cargo/) on your machine.

Initialize a new project by running `cargo init file-transfer`, then `cd file-transfer` and install all the packages we're going to use: `cargo add iroh iroh-blobs tokio anyhow`.

From here on we'll be working inside the `src/main.rs` file.
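To save some guesswork, here is roughly the set of `use` statements the finished program ends up needing. The exact module paths are a sketch and can shift between iroh-blobs releases, so treat this as a starting point and check the finished example if something doesn't resolve:

```rust
use std::path::PathBuf;

use anyhow::Result;
use iroh::{protocol::Router, Endpoint};
// Module paths below are our best guess for the iroh-blobs version used here:
use iroh_blobs::{net_protocol::Blobs, store::mem::MemStore, ticket::BlobTicket};
```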

```rust
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create an endpoint, it allows creating and accepting
    // connections in the iroh p2p world
    let endpoint = Endpoint::builder().discovery_n0().bind().await?;

    // We initialize an in-memory backing store for iroh-blobs
    let store = MemStore::new();
    // Then we initialize a struct that can accept blobs requests over iroh connections
    let blobs = Blobs::new(&store, endpoint.clone(), None);

    // ...

    Ok(())
}
```

Learn more about what we mean by "protocol" on the [protocol documentation page](/docs/concepts/protocol).

With these few lines, we've initialized iroh-blobs and given it access to our `Endpoint`.

At this point, what we want to do depends on whether we want to accept incoming iroh connections from the network or create outbound iroh connections to other nodes.
Which of the two depends on whether the executable was called with `send` or `receive` as an argument, so let's parse these two options out of the CLI arguments and match on them:


```rust

// Grab all passed in arguments, the first one is the binary itself, so we skip it.
let args: Vec<String> = std::env::args().skip(1).collect();
// Convert to &str, so we can pattern-match easily:
let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();

match arg_refs.as_slice() {
    ["send", filename] => {
        todo!();
    }
    ["receive", ticket, filename] => {
        todo!();
    }
    _ => {
        println!("Couldn't parse command line arguments: {args:?}");
        println!("Usage:");
        println!("    # to send:");
        println!("    cargo run -- send [FILE]");
        println!("    # to receive:");
        println!("    cargo run -- receive [TICKET] [FILE]");
    }
}
```

We also print some simple help text when there are no arguments or we can't parse them.

What's left to do now is fill in the two `todo!()`s!


### Getting ready to send

If we want to make a file available over the network with iroh-blobs, we first need to import it into the blobs store.
<Note>
What does this step do?

It hashes the file using [BLAKE3](https://en.wikipedia.org/wiki/BLAKE_(hash_function)) and remembers a so-called ["outboard"](https://github.com/oconnor663/bao?tab=readme-ov-file#outboard-mode) for that file.
This outboard contains information about hashes of parts of this file.
All of this enables some extra features with iroh-blobs like automatically verifying the integrity of the file *while it's streaming*, verified range downloads and download resumption.
</Note>

```rust
let abs_path = std::path::absolute(&filename)?;

println!("Hashing file.");

// When we import a blob, we get back a "tag" that refers to said blob in the store
// and allows us to control when/if it gets garbage-collected
let tag = store.blobs().add_path(abs_path).await?;
```

<Note>
For other use cases, there are other ways of importing blobs into iroh-blobs; you're not restricted to pulling them from the file system!
You can see other options available, such as [`add_slice`](https://docs.rs/iroh-blobs/latest/iroh_blobs/api/blobs/struct.Blobs.html#method.add_slice).
Make sure to also check out the options you can pass and their documentation for some interesting tidbits on performance.
</Note>
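For instance, importing a blob straight from bytes in memory might look like this sketch, assuming the `add_slice` method linked above (the byte string is just a stand-in):

```rust
// Import raw bytes instead of a file; just like with `add_path`,
// we get back a tag carrying the blob's hash.
let tag = store.blobs().add_slice(b"hello iroh").await?;
println!("stored blob with hash {}", tag.hash);
```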

The return value `tag` contains the final piece of information needed for another node to fetch the blob from us.

We'll use a `BlobTicket` to put the file's BLAKE3 hash and our endpoint's `NodeId` into a single copyable string:

```rust
let node_id = endpoint.node_id();
let ticket = BlobTicket::new(node_id.into(), tag.hash, tag.format);

println!("File hashed. Fetch this file by running:");
println!(
"cargo run -- receive {ticket} {}",
filename.display()
);
```

Now we've imported the file and produced instructions for how to fetch it, but we're not actually listening for incoming connections yet! (iroh-blobs won't do so unless you specifically tell it to.)

For that we'll use iroh's `Router`.
Similar to routers in webserver libraries, it runs a loop accepting incoming connections and routes them to the specific handler.
However, instead of handlers being organized by HTTP paths, it routes based on "ALPNs".
Read more about ALPNs and the router on the [protocol](/docs/concepts/protocol#alpns) and [router](/docs/concepts/router) documentation pages.

In our case, we only need a single protocol, but constructing a router also takes care of running the accept loop, so that makes our life easier:

```rust
// For sending files we build a router that accepts blobs connections & routes them
// to the blobs protocol.
let router = Router::builder(endpoint)
.accept(iroh_blobs::ALPN, blobs)
.spawn();

tokio::signal::ctrl_c().await?;

// Gracefully shut down the node
println!("Shutting down.");
router.shutdown().await?;
```

And as you can see, as a final step we wait for the user to stop the file-providing side by hitting `Ctrl+C` in the console; once they do, we shut down the router gracefully.


### Connecting to the other side to receive

On the receiving side, we get the `ticket` and the `path` from the CLI arguments and parse them into their `struct` versions.

With them parsed,
- we first construct a `Downloader` (which can help us coordinate multiple downloads from multiple peers if we wanted to),
- and then call `.download` with the information contained in the ticket and wait for the download to finish:

<Note>
Reusing the same downloader across multiple downloads can be more efficient, e.g. by reusing existing connections.
In this example we don't see this, but it might come in handy for your use case.
</Note>

```rust
let filename: PathBuf = filename.parse()?;
let abs_path = std::path::absolute(filename)?;
let ticket: BlobTicket = ticket.parse()?;

// For receiving files, we create a "downloader" that allows us to fetch files
// from other nodes via iroh connections
let downloader = store.downloader(&endpoint);

println!("Starting download.");

downloader
.download(ticket.hash(), Some(ticket.node_addr().node_id))
.await?;

println!("Finished download.");
```

<Note>
The return value of `.download()` is [`DownloadProgress`](https://docs.rs/iroh-blobs/latest/iroh_blobs/api/downloader/struct.DownloadProgress.html).
You can either `.await` it to wait for the download to finish, or you can stream out progress events instead, e.g. if you wanted to use this for showing a nice progress bar!
</Note>

As a final step, we'll export the file we just downloaded (so far it only lives in our in-memory blob store) to the desired file path:

```rust
println!("Copying to destination.");

store.blobs().export(ticket.hash(), abs_path).await?;

println!("Finished copying.");
```

<Note>
This first downloads the file completely into memory, then copies it from memory to file in a second step.

There are ways to make this work without having to store the whole file in memory!
This would involve setting up an `FsStore` instead of a `MemStore` and using `.export_with_opts` with `ExportMode::TryReference`.
Something similar can be done on the sending side!
We'll leave these changes as an exercise to the reader 😉
</Note>
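As a starting point for that exercise, a persistent store could be set up roughly like this (the directory path and the `FsStore::load` call are assumptions — double-check the iroh-blobs docs for the exact API):

```rust
// A persistent store keeps blob data on disk, so exports can reference
// the on-disk data instead of copying everything through memory.
let store = iroh_blobs::store::fs::FsStore::load("./blobs-data").await?;
```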

Before we leave, we'll gracefully shut down our endpoint in the receive branch, too:

```rust
// Gracefully shut down the node
println!("Shutting down.");
endpoint.close().await;
```


## That's it!

2 changes: 1 addition & 1 deletion src/components/GithubStars.jsx
export default function GithubStars(props) {
return (
<Link href="https://github.com/n0-computer/iroh" className='p-2 -mt-2 flex text-sm leading-5 fill-zinc-400 text-zinc-600 transition hover:text-zinc-900 dark:text-zinc-400 dark:hover:text-zinc-600 dark:hover:fill-zinc-600 hover:bg-black/10 rounded'>
<GithubIcon className="h-5 w-5" />
<span className='ml-2 mt-0'>5.5k</span>
</Link>
)
}