Tor integration #755
base: master
Conversation
Running P2P over Tor will impact block discovery for other peers due to slow performance. This could have negative effects on the anticone count of the DAG and, in general, be penalizing in terms of network connectivity. I am wondering whether this has been considered and, if so, what mitigations are in place.
In short: I think it is just about proper segregation of the Tor network from the clear-net, and limiting its exposure over the clear-net so it cannot dominate latency there; the clear-net will remain the part where blocks are actually mined and communicated to other miners.
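To make the segregation idea above concrete, here is a minimal, hypothetical sketch of one possible mitigation: capping the share of outbound slots that onion peers may occupy, so Tor latency cannot dominate clear-net propagation. The function name, the cap fraction, and the slot counts are all illustrative, not from the PR.

```rust
/// Hypothetical policy: onion peers may occupy at most `onion_share` of the
/// node's outbound slots; the remainder is reserved for clear-net peers.
fn max_onion_outbound(total_outbound_slots: usize, onion_share: f64) -> usize {
    ((total_outbound_slots as f64) * onion_share).floor() as usize
}

fn main() {
    // e.g. with 8 outbound slots and a 25% cap, at most 2 may be onion peers
    assert_eq!(max_onion_outbound(8, 0.25), 2);
    // with a 50% cap, at most 4
    assert_eq!(max_onion_outbound(8, 0.5), 4);
    println!("ok");
}
```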
Btw, when I have time I'll try to review this; Tor has also been on my p2p bucket list. Just some overall input from my side on how I see it:

> Is the security/maintenance cost of native Tor support acceptable for upstream Kaspa?

It depends what you mean by native. I think the maintenance cost of supporting connecting to, and proxying over, Tor should be minimal: it is mostly about expanding the address handling to support Tor addresses, keeping some other types of sockets up, and ensuring segregation of Tor and clear-net addresses. I wouldn't directly support in-process spawned Tor, i.e. something like arti. That puts the maintenance burden back on us; if things end up breaking with them, or there is some versioning misalignment etc., we get the brunt of it. If it is their client we are proxying over, then they at least get to share some of it. Also, I think the people who wish to use Tor are probably the same people who don't mind reading a set-up guide, or probably don't even need one once they know how to point kaspad in the right direction, and don't need some all-encompassing

> Should I finish the DNS/name-proxy refactor and align listener defaults before even asking for mergeability?

I'll be honest, I didn't understand the issue with DNS seeders. Is the problem simply that we also need to PR the DNS seeders so they hand out Tor addresses, or do we need a .onion DNS seeder?

> How should we document limitations (bandwidth, Tor restarts, lack of bundled daemon) so operators don’t misinterpret the feature?

Personally I would just include a set-up guide in our README.md, probably over or under the

That being said, I am not sure what the overall enthusiasm for Tor is here. I have fewer qualms about Tor peers dragging down the network as a whole, and more about them being so slow that we cannot support them into the future. Imo it needs to be benchmarked under high-load testnet conditions; it is a very real possibility that Tor might simply be too slow to tango with Kaspa's throughput. Maintaining and putting effort into something that cannot catch up anyway probably isn't worth the effort.
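As a wire-level illustration of what "proxying over Tor" means in practice: per RFC 1928, a SOCKS5 client sends the target as a domain name (ATYP 0x03), which lets the local Tor daemon resolve the `.onion` host itself. The sketch below only builds the CONNECT request bytes; the function name, host, and port are illustrative, not from the PR.

```rust
/// Build a SOCKS5 CONNECT request for a domain-name target (ATYP = 0x03).
/// This is how a client asks a local Tor daemon (e.g. at 127.0.0.1:9050)
/// to dial a .onion host without resolving it over clear-net DNS.
fn socks5_connect_request(host: &str, port: u16) -> Vec<u8> {
    assert!(host.len() <= 255, "SOCKS5 encodes the domain length in one byte");
    let mut buf = vec![
        0x05, // protocol version 5
        0x01, // CMD = CONNECT
        0x00, // reserved
        0x03, // ATYP = domain name (Tor performs the resolution)
    ];
    buf.push(host.len() as u8);
    buf.extend_from_slice(host.as_bytes());
    buf.extend_from_slice(&port.to_be_bytes()); // port in network byte order
    buf
}

fn main() {
    let req = socks5_connect_request("example.onion", 16111);
    // 4 header bytes + 1 length byte + host + 2 port bytes
    assert_eq!(req.len(), 4 + 1 + "example.onion".len() + 2);
    assert_eq!(req[..4], [0x05, 0x01, 0x00, 0x03]);
    println!("request is {} bytes", req.len());
}
```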
So, I think I want larger structural changes, so I'm not really requesting individual changes per se (even though I kind of am); the changes reflect more of a general architectural overhaul of how I think the addresses should be managed and integrated into kaspad. The finer-grained changes will come as they are implemented, and I probably cannot account for all required changes in a single review. I refrained from reviewing the Tor set-up code, the code involved with proxying over Tor, and the Tor things added to the flow context, as I do not think I am familiar enough with actual Tor configurations. The review is more of a guide on the route I think this PR should take regarding integrating onion addresses into kaspad, less so about the actual Tor proxying and set-up. I believe the integration of onion address handling should be done first; the actual set-up details can follow afterwards.
```diff
     return Err(Error::custom("Please specify peer IP address"));
 }
-let ip: RpcIpAddress = argv.remove(0).parse()?;
+let ip: RpcIpAddress = argv.remove(0).parse().map_err(NetAddressError::from)?;
```
can `NetAddressError` be emitted from the `parse()` call?
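One way to read this question: if the type's `FromStr` implementation uses `NetAddressError` as its associated `Err`, then `parse()` emits it directly and the `map_err` at the call site becomes unnecessary. A minimal sketch, with hypothetical stand-ins for the kaspad types:

```rust
use std::str::FromStr;

// Hypothetical stand-ins for the types discussed above.
#[derive(Debug, PartialEq)]
struct NetAddressError(String);

#[derive(Debug, PartialEq)]
struct RpcIpAddress(std::net::IpAddr);

// With Err = NetAddressError, `"...".parse::<RpcIpAddress>()` already
// returns Result<RpcIpAddress, NetAddressError>, no map_err needed.
impl FromStr for RpcIpAddress {
    type Err = NetAddressError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        s.parse::<std::net::IpAddr>()
            .map(RpcIpAddress)
            .map_err(|e| NetAddressError(e.to_string()))
    }
}

fn main() {
    assert!("127.0.0.1".parse::<RpcIpAddress>().is_ok());
    assert!("not-an-ip".parse::<RpcIpAddress>().is_err());
    println!("ok");
}
```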
```rust
    self.address_store.set(address, 0);
}

fn prune_onion_addresses(&mut self) {
```
see comment at line 85
```rust
    }
}

/// A network address, equivalent of a [SocketAddr].
```
the comment here is no longer accurate.
```rust
}

impl NetAddress {
    pub fn new(ip: IpAddress, port: u16) -> Self {
```
instead of `new`, `new_onion` and `from_kind`, I'd rather have this just accept an `AddressKind` explicitly in `new`, i.e. remove `new` and `new_onion`, and rename `from_kind` to `new`. Refactor accordingly.
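A minimal sketch of the suggested constructor shape, assuming a kind-tagged address type; `AddressKind`, the onion payload, and the field names are illustrative stand-ins for the PR's actual types:

```rust
use std::net::{IpAddr, Ipv4Addr};

// Hypothetical stand-in: the address kind carried explicitly.
#[derive(Debug, Clone, PartialEq)]
enum AddressKind {
    Ip(IpAddr),
    Onion([u8; 32]), // e.g. the 32-byte v3 onion service public key
}

#[derive(Debug, Clone, PartialEq)]
struct NetAddress {
    kind: AddressKind,
    port: u16,
}

impl NetAddress {
    // The single `new`: callers state the kind explicitly, so no separate
    // `new_onion` / `from_kind` constructors are needed.
    pub fn new(kind: AddressKind, port: u16) -> Self {
        Self { kind, port }
    }
}

fn main() {
    let a = NetAddress::new(AddressKind::Ip(IpAddr::V4(Ipv4Addr::LOCALHOST)), 16111);
    assert_eq!(a.port, 16111);
    let o = NetAddress::new(AddressKind::Onion([0u8; 32]), 16111);
    assert_ne!(a, o);
    println!("ok");
}
```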
```rust
        Self { ip: kind, port }
    }

    pub fn is_ip(&self) -> bool {
```
I think if we have a use-case for it somewhere in the code-base, which we probably do, we should add `is_onion` as well. In cases where we explicitly want to check whether an address is onion, we can then use that call, rather than having to infer that the inverse of `is_ip` is an onion.
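A small sketch of the paired predicates, using a hypothetical `AddressKind` stand-in; the point is that `is_onion` stays correct even if a third address kind is ever added, whereas `!is_ip()` would silently break:

```rust
// Illustrative stand-in for the kind-tagged address discussed above.
enum AddressKind {
    Ip(std::net::IpAddr),
    Onion([u8; 32]),
}

impl AddressKind {
    fn is_ip(&self) -> bool {
        matches!(self, AddressKind::Ip(_))
    }

    // Explicit companion predicate: callers never infer "onion" from !is_ip().
    fn is_onion(&self) -> bool {
        matches!(self, AddressKind::Onion(_))
    }
}

fn main() {
    let onion = AddressKind::Onion([0u8; 32]);
    assert!(onion.is_onion());
    assert!(!onion.is_ip());
    println!("ok");
}
```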
```diff
 try_from!(&protowire::BanResponseMessage, RpcResult<kaspa_rpc_core::BanResponse>);

-try_from!(item: &protowire::UnbanRequestMessage, kaspa_rpc_core::UnbanRequest, { Self { ip: RpcIpAddress::from_str(&item.ip)? } });
+try_from!(item: &protowire::UnbanRequestMessage, kaspa_rpc_core::UnbanRequest, {
```
same here
```diff
-try_from!(item: &protowire::GetPeerAddressesKnownAddressMessage, kaspa_rpc_core::RpcPeerAddress, { Self::from_str(&item.addr)? });
-try_from!(item: &protowire::GetPeerAddressesKnownAddressMessage, kaspa_rpc_core::RpcIpAddress, { Self::from_str(&item.addr)? });
+try_from!(item: &protowire::GetPeerAddressesKnownAddressMessage, kaspa_rpc_core::RpcPeerAddress, {
```
same as in message.rs, this shouldn't error, but be defined in rpc.proto.
```rust
try_from!(item: &protowire::GetPeerAddressesKnownAddressMessage, kaspa_rpc_core::RpcPeerAddress, {
    Self::from_str(&item.addr).map_err(RpcError::from)?
});
try_from!(item: &protowire::GetPeerAddressesKnownAddressMessage, kaspa_rpc_core::RpcIpAddress, {
```
same here
```rust
use std::str::FromStr;

#[derive(Clone, Debug, Deserialize)]
#[serde(rename = "lowercase")]
```
not too familiar with wRPC, maybe something for coderofstuff or IzioDev to go over, but that should probably happen after potential changes to `NetAddress` in core.
```diff
 Servers.toml
 release
 package-sizes.js
+CLAUDE.md
```
I understand this is something to do with AI assistance, and it probably helps avoid accidentally uploading this. Consider PR-ing it separately, or put a big "TODO: remove this, and CLAUDE.md, before merging" over it, as I think it is independent of the actual PR.
I'll just leave one comment for this file. I think we should start with:

```rust
#[derive(Clone)]
pub struct DbAddressesStore {
    db: Arc<DB>,
    // a cached db access for standard "clear-net" addresses
    std_access: CachedDbAccess<DbStdAddressKey, StdEntry>,
    // a cached db access for tor overlay addresses
    onion_access: CachedDbAccess<DbOnionAddressKey, OnionEntry>,
}
```

This implies separate serialization of the keys and entries, setting a new database prefix for the new store, etc. For keys I would go for SocketAddr, or the current NetAddress (see comments regarding it in utils/networking), for StdEntry. For OnionEntry use the appropriate OnionAddress.

This also means renaming methods to set_std / set_onion, get_std / get_onion etc.. you get the drift.
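The renaming suggestion can be sketched as follows. This is a hypothetical in-memory model using `HashMap` stand-ins; the real store would use two `CachedDbAccess` instances with distinct DB prefixes, and `OnionAddress`, `StdEntry`, and `OnionEntry` are illustrative placeholder types:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Illustrative stand-ins for the store's key and entry types.
type OnionAddress = [u8; 32];

#[derive(Debug, Clone, PartialEq)]
struct StdEntry { fail_count: u64 }

#[derive(Debug, Clone, PartialEq)]
struct OnionEntry { fail_count: u64 }

#[derive(Default)]
struct AddressesStore {
    std_access: HashMap<SocketAddr, StdEntry>,
    onion_access: HashMap<OnionAddress, OnionEntry>,
}

impl AddressesStore {
    // Parallel, explicitly named accessors: callers always state which
    // address space they are touching, so the two spaces can never mix.
    fn set_std(&mut self, key: SocketAddr, entry: StdEntry) {
        self.std_access.insert(key, entry);
    }
    fn set_onion(&mut self, key: OnionAddress, entry: OnionEntry) {
        self.onion_access.insert(key, entry);
    }
    fn get_std(&self, key: &SocketAddr) -> Option<&StdEntry> {
        self.std_access.get(key)
    }
    fn get_onion(&self, key: &OnionAddress) -> Option<&OnionEntry> {
        self.onion_access.get(key)
    }
}

fn main() {
    let mut store = AddressesStore::default();
    let addr: SocketAddr = "127.0.0.1:16111".parse().unwrap();
    store.set_std(addr, StdEntry { fail_count: 0 });
    store.set_onion([7u8; 32], OnionEntry { fail_count: 1 });
    assert!(store.get_std(&addr).is_some());
    assert_eq!(store.get_onion(&[7u8; 32]).unwrap().fail_count, 1);
    println!("ok");
}
```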
Tor + Kaspa

….onion address, and forces outbound P2P through Tor.

Inspiration & Approach

Intent & Expectations

Proof-of-Concept Status

```shell
kaspad --tor-control=… --tor-cookie=… --proxy-net=onion=127.0.0.1:9050 --listen-onion --tor-only
```

routes outbound traffic exclusively through SOCKS and advertises a persistent hidden service.

What Landed

- (`kaspad/src/tor_manager.rs`, `kaspad/src/daemon.rs`): `tor-interface`, negotiate password/cookie auth, stream bootstrap events, publish/remove hidden services, and persist Ed25519 onion keys under `~/.rusty-kaspa/<net>/tor/p2p_onion.key`.
- (`kaspad/src/args.rs`, `kaspad/src/daemon.rs`): `ProxySettings::resolve` now emits `SocksProxyParams` with auth metadata and auto-promotes Tor proxies to randomized credentials.
- (`protocol/p2p/src/core/connection_handler.rs`, `protocol/flows/src/service.rs`): `SocksProxyConfig`, pick default/IPv4/IPv6/onion endpoints, and generate per-stream username/password pairs whenever Tor isolation is requested.
- (`protocol/flows/src/flow_context.rs`, `protocol/flows/src/v5/address.rs`, `protocol/p2p/proto/p2p.proto`, `utils/src/networking.rs`, `components/addressmanager/*`): `--onlynet`/`--tor-only` filtering.
- (`kaspad/src/daemon.rs`, `protocol/flows/src/service.rs`)

Known Gaps / Risks

- (`kaspad/src/args.rs`, `components/connectionmanager/src/lib.rs`): full parity requires host-preserving proxy targets plus a name-proxy pipeline.
- `--listen`/`--onlynet` handling.
- (`lsof`, Tor control output): there's no integration harness that spins up a Tor stub yet.

Call for Discussion