Autoconfigure network without DHCP #64
base: main
Conversation
```diff
@@ -15,7 +15,8 @@ use krun::utils::env::find_in_path;
 use log::debug;
 use rustix::process::{getrlimit, setrlimit, Resource};

+#[tokio::main]
 fn main() -> Result<()> {
```
If we go multi-threaded, some of our safety assumptions would no longer hold, e.g.
Using tokio is a requirement of the rtnetlink crate. I think we can avoid the safety issues with something like this:
```rust
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
    if let Err(err) = configure_network().await {
        eprintln!("Failed to configure network, continuing without it: {err}");
    }
});
```
Could we use tokio's current-thread scheduler?
Apparently it could be selected like this:

```rust
#[tokio::main(flavor = "current_thread")]
```

But it looks like we will still need to make sure nothing uses `spawn_blocking`: https://users.rust-lang.org/t/why-does-tokios-current-thread-flavor-not-be-single-threaded/85129

How? Probably by using `Handle::block_on`? But hmm... there's not really any point in choosing the current-thread scheduler then? lol
Oh, apparently this can't work:
> When this is used on a current_thread runtime, only the Runtime::block_on method can drive the IO and timer drivers, but the Handle::block_on method cannot drive them. This means that, when using this method on a current_thread runtime, anything that relies on IO or timers will not work unless there is another thread currently calling Runtime::block_on on the same runtime.
If we aren't planning on using tokio anywhere else, I think I feel more confident isolating its use to `configure_network`.
Force-pushed from e79250b to 2cf1578
```diff
@@ -42,7 +42,12 @@ fn main() -> Result<()> {

     setup_fex()?;

-    configure_network()?;
+    let rt = tokio::runtime::Runtime::new().unwrap();
+    rt.block_on(async {
```
If I understand the docs correctly, this will NOT have the desired effect of ensuring the safety of the `env::set_var` calls.

> This runs the given future on the current thread, blocking until it is complete, and yielding its resolved result. Any tasks or timers which the future spawns internally will be executed on the runtime.
> When the multi thread scheduler is used this will allow futures to run within the io driver and timer context of the overall runtime.
> Any spawned tasks will continue running after block_on returns.

https://docs.rs/tokio/latest/tokio/runtime/struct.Runtime.html#method.block_on
I think you're right. Since I don't think any of the other tasks `krun-guest` puts in motion depends on the network, another approach could be to simply move the network configuration to `krun-server`, which already uses tokio and doesn't use `env::set_var`.
crates/muvm/src/guest/net.rs
Outdated
```rust
    }
    handle.route().add().v4().gateway(router).execute().await?;
    fs::write("/etc/resolv.conf", format!("nameserver {}", router))
```
You are assuming that the default gateway and the DNS server are the same. While true on most residential networks, it is not a correct assumption in general, and even there, some of us very intentionally override the ISP DNS server to something else.
This is inside the VM, so `passt` is doing its thing. In our current usage (`dhclient`), `/etc/resolv.conf` inside the VM would contain a `nameserver` entry pointing to the host's default gateway... So as far as I can tell, the code here is correct.
> /etc/resolv.conf inside the VM would contain a nameserver entry pointing to the host's default gateway.

That is the problem that I am pointing out: in general, nameserver != default gateway.
More correct explanation here: #17 (comment)
@WhatAmISupposedToPutHere Uhh, but I think I get what you mean now... The code here is only correct in the case of `passt` doing such DNS forwarding... If the resolver on the host is not a loopback address, this code is wrong.
Good catch. I've updated the PR to actually pick up all the DNS configuration from passt, which was the right thing to do.
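As a rough illustration of that direction, writing out every DNS server reported by passt (instead of assuming the gateway is the resolver) could look like the following sketch; the helper name is hypothetical, not the one the PR uses:

```rust
use std::fmt::Write as _;
use std::net::IpAddr;

// Hypothetical helper: render an /etc/resolv.conf body from the DNS
// servers reported by passt, rather than assuming the default
// gateway is also the resolver.
fn render_resolv_conf(servers: &[IpAddr]) -> String {
    let mut out = String::new();
    for server in servers {
        // writeln! into a String cannot fail; ignore the Result.
        let _ = writeln!(out, "nameserver {server}");
    }
    out
}
```

The result would then be written to `/etc/resolv.conf` with `fs::write`, as in the diff above.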
Passt includes a DHCP server to allow guests to easily configure the network without being aware passt is on the other side. But we _are_ aware of that, so we can take advantage of it.

Instead of using DHCP, read the configuration output from passt, marshall it into some environment variables, pass it to the guest, and have krun-guest read it and apply it using rtnetlink.

By doing that, we cut the startup time in half, from...

```
$ time krun /bin/false
(...)
real 0m4,301s
```

... to ...

```
$ time krun /bin/false
(...)
real 0m1,966s
```

In addition to reducing the boot time, this potentially will prevent some of the DHCP issues we've seen in the past.

Signed-off-by: Sergio Lopez <[email protected]>
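To illustrate the "marshall it into some environment variables" step, here is a sketch of the guest-side read-back; the variable names and struct fields are hypothetical, not the ones the PR actually uses:

```rust
use std::net::Ipv4Addr;

// Hypothetical variable names and fields, for illustration only; the
// actual names used by the PR may differ.
#[derive(Debug, PartialEq)]
struct NetConfig {
    address: Ipv4Addr,
    prefix_len: u8,
    gateway: Ipv4Addr,
}

// Parse the configuration the launcher marshalled into environment
// variables. `lookup` abstracts std::env::var so the parsing is easy
// to test; production code would pass |k| std::env::var(k).ok().
// Returns None if any variable is missing or malformed, in which
// case the guest could fall back to DHCP.
fn net_config_from(lookup: impl Fn(&str) -> Option<String>) -> Option<NetConfig> {
    Some(NetConfig {
        address: lookup("KRUN_NET_ADDR")?.parse().ok()?,
        prefix_len: lookup("KRUN_NET_PREFIX")?.parse().ok()?,
        gateway: lookup("KRUN_NET_GATEWAY")?.parse().ok()?,
    })
}
```

krun-guest would then hand the parsed values to rtnetlink to configure the interface, as the commit message describes.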
Even though we're unlikely to break the output scraping you're adding here, I wonder if we could avoid that by carrying a minimal script for the ISC's

In passt's test suite, we need to bring up guests fast, and at the same time check that DHCP works, so we use this script in the guest.

By the way, I wonder if you need to make this fast for IPv6 as well. There, you could disable neighbour solicitations in the guest, configure the address via DHCPv6 (or via NDP and SLAAC, bringing up the interface) with the

Side note: I don't know if it works for you, but passt doesn't actually care about what address and route you're using. If the address is not the same you have on the host, it will just NAT things. You could happily use link-local addresses (even for IPv4, say, 169.254.1.1). But sure, you'll need to convey DNS information somehow.

Anyway, regardless of all this, the current patch looks fine to me.
Actually, even for that, you could hardcode things in a way similar to what Podman does with pasta(1): a single link-local address in

this way, you would lose network transparency (making the guest look like the host in terms of addresses and routes), which might be important for some applications.

I wanted to try out the tricks I suggested above to make DHCP fast, but I'm currently hitting:
while trying to build libkrun. I might get back to it in a few days unless there's an obvious solution.
From r/Showerthoughts: netlink over vsock
You're probably running
Oops, of course.
Right, it works, thanks! I'll try things out in a bit.
...currently driving me mad (yes, I have libkrun and libkrunfw installed):
@sbrivio-rh I think it's a cache problem. Try doing

EDIT: Ohhhhh... #84 (comment)
Thanks, I would have never found that!
```
$ strace -f ./muvm true
[...]
[pid 4013940] execve("/usr/local/bin/passt", ["passt", "-q", "-f", "-t", "3334:3334", "--fd", "6"], 0x7ffd61f5fc80 /* 23 vars */) = -1 ENOENT (No such file or directory)
[pid 4013940] mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 <unfinished ...>
```
So, at least on my setup, this whole delay is actually caused by dhcpcd(8):
because, by default (without
I get:
For some reason I couldn't get IPv6 working in the guest so I didn't try that (at least, not yet).
Oh, it's simply disabled in the kernel configuration from libkrunfw (config-libkrunfw_x86_64):
and this is there starting from the very beginning, libkrunfw commit 443c03426a73. Is there a specific reason for it? Maybe because TSI didn't/doesn't support that? Or you don't need it at all in libkrun/muvm? Does it make sense that I spend a moment trying to "fix" that? @slp?
That was disabled as part of the work to slim down the guest kernel. I'm okay with enabling it if there's a user demand for the feature, but so far I haven't heard of anyone requesting it.
It just feels ethically wrong to be contributing to holding back IPv6 adoption. 🙈
Oof, libkrunfw doesn't have
There's one problem with that and passt: if passt doesn't find an IPv4 interface on the host, it disables IPv4, and only smoke signals remain at that point. This is by default at least: you can still enable IPv4 even on an IPv6-only host, but then you need to give explicit addresses and routes. I can try to enable