do not try another peer on a local error #440
freemans13 wants to merge 3 commits into bsv-blockchain:main from
Conversation
🤖 Claude Code Review
Status: Complete
Current Review: Found a format string bug in 4 error messages (lines 572, 578, 609, 620) where each has one %s placeholder but passes two arguments.
History:
```go
// Local errors include context cancellation, semaphore exhaustion, and storage errors
// that are caused by local resource constraints rather than peer failures.
// These errors should not trigger peer failover since trying another peer won't help.
func isLocalError(err error) bool {
}
```

```go
// Check if error is local (not peer-related) - don't retry with other peers
if errors.IsLocalError(err) {
	return errors.NewServiceError("[catchup:fetchAndStoreSubtreeAndSubtreeData] Local error fetching subtreeData for %s (not retrying with other peers)", subtreeHash.String(), err)
```
Format string bug: the message has one %s placeholder but passes two arguments (hash and err). The error will be ignored. Need to add a %v placeholder for the error.
Not a bug — this is by design in the teranode errors package. The errors.New* functions (e.g. NewServiceError, NewStorageError) accept an optional error as the last variadic parameter for error wrapping, not for formatting. See errors/errors.go:327-337:
```go
if len(params) > 0 {
	lastParam := params[len(params)-1]
	switch err := lastParam.(type) {
	case *Error:
		wErr = err
		params = params[:len(params)-1]
	case error:
		wErr = &Error{message: err.Error()}
		params = params[:len(params)-1]
	}
}
```

The last err param is extracted and set as the wrapped error (accessible via Unwrap()), then removed from the params slice before fmt.Errorf is called with the remaining args. So errors.NewServiceError("msg %s", hash, err) correctly formats %s with hash and wraps err; the error is not "ignored", it's wrapped.
This is a standard pattern used throughout the codebase.


While running teratestnet and syncing, I could see semaphore errors. blockvalidation was deciding the problem was with the peer and started over with a new peer. The problem in this instance was in my own teranode, so it shouldn't pick another peer and possibly downgrade the 'bad' peer.