
wallet: avoid premature elevation of auto fee #9767

Open · wants to merge 1 commit into master

Conversation

@nahuhh commented Feb 4, 2025

The intent of this PR is:

  1. If blocks are full, to also require a backlog in the tx pool (previously full blocks alone triggered elevation). This avoids elevating the fee (it will fall back to low) when the txpool is empty.
  2. If blocks are NOT full, to only elevate if the tx pool has more than one block's worth of backlog (previously a one-block backlog would elevate). This helps avoid raising the fee every time someone consolidates or there is a slow block.

src/wallet/wallet2.cpp — review thread (outdated, resolved)
@moneromooo-monero (Collaborator) commented

iamamyth is correct: returning 0 would make adjust_priority skip its job, which is, in part, to map 0 (default) to one of the 1..3 priorities. It might work in practice, since several other bits understand 0 to mean default or fall back to low, but it's safer to avoid returning 0 here.

About the intent of the patch: it seems sensible overall, but we might not want to keep priority 1 with full blocks and an empty txpool, since that misses (1) the case of an asshole spamming full blocks without first broadcasting the txes (this needs high hash rate, though); (2) the case of starting monerod, failing to sync the txpool (which would otherwise be large), then sending a tx; or (3) using flush_txpool to recover from a wallet issue, then sending.

@nahuhh (Author) commented Feb 6, 2025

I spoke to moo in DM, but I'm posting here for clarity.

> iamamyth is correct, returning 0 would make adjust_priority skip its job, which is, in part, to map 0 (default) to one of the 1..3 priorities. It might work in practice since several other bits understand 0 to be default or low as a fallback, but it's safer to avoid returning 0 here.

Done.

> About the intent of the patch, the intent seems sensible overall, but we might not want to keep priority 1 with full blocks and empty txpool since (1) it misses the case of an asshole spamming full blocks without first broadcasting txes (needs high hash rate though)

Regular users' txpools should be normal, and their txs would be confirmed as usual. There are pools that mine from a delayed txpool, and it's possible to have pools like mara that mine txs which haven't been relayed. If "mara" had enough hashrate to prevent other miners from producing blocks, then the txpool would become backlogged (leading to an elevation).

> or the case of starting monerod, failing to sync the txpool (which would otherwise be large), then sending a tx. Or even using flush_txpool to recover from a wallet issue, then sending.

Imo this is an extreme edge case; you would have to be very unlucky to connect to a node after it flushed/dropped its pool, and before it re-downloaded more than one block's worth of txs into its pool.

On stressnet, with a 100 MB txpool, we often flushed the pool before shutdown, which took 20+ minutes. Without the flush, startup took up to an hour. After startup, the txpool would quickly grow beyond the block size.

I also don't think it's a good idea to keep fees elevated unless necessary for scaling (0.0001 is low value now, but it won't always be). Meaning: if there were 8/10 full blocks but the txpool has since cleared, then fees should drop back down.

Another edge case is using a malicious node that limits its txpool to 1 block. (Imo we should disallow this in our daemon by default / set sane minimums for the txpool. The current default is 648 MB, but with no min and no max. We should have a min.)

tl;dr: I think the changes made here are a safe improvement over the current logic.

@nahuhh (Author) commented Feb 6, 2025

Squashed.
