Closes #508 - chores: implemented webhooks for onchain events #776
KodeSage wants to merge 5 commits into SolFoundry:main from
Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID:
📒 Files selected for processing (2)
📝 Walkthrough

Adds on-chain webhook ingestion and batched delivery: new internal POST /api/webhooks/internal/chain-events with indexer authentication; a ChainWebhookBatcher that groups events on 5s windows and is started/stopped with the app; new DB migration and SQLAlchemy model for webhook delivery attempts; expanded webhook models (batch payloads, delivery-attempt public view, on-chain event set); new contributor webhook endpoints (delivery-stats, test); service changes to build/deliver batches, retry with per-attempt logging, and dashboard aggregation; tests for ingestion/batching/delivery; docs (WEBHOOK_EVENT_CATALOG); and CHAIN_WEBHOOK_INDEXER_SECRET added to config examples.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 3
❌ Failed checks (1 warning, 2 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/contributor_webhooks.py`:
- Around line 146-163: Add explicit handling and documentation for errors and
the response model in the webhook_test_delivery endpoint: update the endpoint to
declare a response_model (e.g., a pydantic model with status and
endpoints_notified) and add the ValueError to the responses mapping (e.g., 400:
{"model": ErrorResponse, "description": "..."}), or wrap the call to
ContributorWebhookService.dispatch_test_event in a try/except that converts
ValueError into an HTTPException(status_code=400) so OpenAPI and runtime
behavior are explicit; reference the webhook_test_delivery function and the
dispatch_test_event method when making these edits.
In `@backend/app/main.py`:
- Around line 176-177: Wrap the await chain_webhook_batcher.stop() call in a
try/except so any exception from chain_webhook_batcher.stop() is caught and
logged (use logger.exception or process_logger.error with the exception) and
does not abort the rest of shutdown; ensure task cancellation and calls to
close_redis and close_db still run (move them into a finally block or simply run
them after the except) so resources are always released even if
chain_webhook_batcher.stop() fails.
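The shutdown ordering this prompt asks for can be sketched with the standard library alone. FlakyBatcher and the close_redis/close_db markers below are stand-ins, not the app's actual code; the point is that a failing stop() is logged and cleanup still runs.

```python
# Sketch: stop() failures are logged but never prevent the remaining cleanup.
import asyncio
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("shutdown")

class FlakyBatcher:
    async def stop(self) -> None:
        raise RuntimeError("stop failed")

cleaned_up: list[str] = []

async def shutdown(batcher: FlakyBatcher) -> None:
    try:
        await batcher.stop()
    except Exception:
        # logger.exception records the traceback without aborting shutdown.
        logger.exception("chain_webhook_batcher shutdown error")
    finally:
        # These always run, even if stop() raised.
        cleaned_up.append("close_redis")
        cleaned_up.append("close_db")

asyncio.run(shutdown(FlakyBatcher()))
print(cleaned_up)  # ['close_redis', 'close_db']
```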
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 50dbbf99-5d4d-40b2-b898-2347fd4d19fc
⛔ Files ignored due to path filters (1)
contracts/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
.env.example
backend/alembic/env.py
backend/alembic/versions/006_webhook_delivery_attempts.py
backend/app/api/chain_webhook_indexer.py
backend/app/api/contributor_webhooks.py
backend/app/database.py
backend/app/main.py
backend/app/models/contributor_webhook.py
backend/app/models/webhook_delivery.py
backend/app/services/chain_webhook_batcher.py
backend/app/services/config_validator.py
backend/app/services/contributor_webhook_service.py
backend/tests/test_chain_webhooks.py
backend/tests/test_contributor_webhooks.py
docs/WEBHOOK_EVENT_CATALOG.md
frontend/src/components/ContributorDashboard.tsx
test.md
```python
@router.post(
    "/test",
    summary="Send a test webhook",
    description=(
        "Dispatches a signed ``webhook.test`` payload immediately to every active "
        "webhook registered by the caller (not batched)."
    ),
    responses={
        401: {"model": ErrorResponse, "description": "Authentication required"},
    },
)
async def webhook_test_delivery(
    user_id: str = Depends(get_current_user_id),
    db: AsyncSession = Depends(get_db),
) -> dict[str, str | int]:
    service = ContributorWebhookService(db)
    n = await service.dispatch_test_event(user_id)
    return {"status": "completed", "endpoints_notified": n}
```
🧹 Nitpick | 🔵 Trivial
Consider adding explicit error handling for dispatch_test_event.
The service's dispatch_test_event method raises ValueError if "webhook.test" is not configured in WEBHOOK_EVENTS. While the global value_error_handler in main.py catches this and returns 400, consider documenting this in the responses dict or handling it explicitly for clarity.
Additionally, consider adding a response_model for the return type to ensure consistent API documentation in OpenAPI.
💡 Optional improvement
```diff
 @router.post(
     "/test",
     summary="Send a test webhook",
     description=(
         "Dispatches a signed ``webhook.test`` payload immediately to every active "
         "webhook registered by the caller (not batched)."
     ),
     responses={
+        400: {"model": ErrorResponse, "description": "webhook.test event not configured"},
     },
 )
```
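A stdlib-only sketch of the suggested behavior: dispatch_test_event's ValueError becomes an explicit 400 instead of relying on a global handler. HTTPError here is a stand-in for FastAPI's HTTPException, and the stub service always fails so the error path is visible.

```python
# Sketch: convert a service-level ValueError into an explicit 400 response.
class HTTPError(Exception):
    """Stand-in for fastapi.HTTPException."""
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail

async def dispatch_test_event(user_id: str) -> int:
    # Stand-in for ContributorWebhookService.dispatch_test_event when the
    # webhook.test event is missing from WEBHOOK_EVENTS.
    raise ValueError("webhook.test event not configured")

async def webhook_test_delivery(user_id: str) -> dict:
    try:
        n = await dispatch_test_event(user_id)
    except ValueError as exc:
        # Explicit mapping makes the 400 visible in both OpenAPI and runtime.
        raise HTTPError(400, str(exc)) from exc
    return {"status": "completed", "endpoints_notified": n}

import asyncio
try:
    asyncio.run(webhook_test_delivery("demo"))
except HTTPError as e:
    print(e.status_code, e.detail)  # 400 webhook.test event not configured
```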
```python
await chain_webhook_batcher.stop()
```
🧹 Nitpick | 🔵 Trivial
Consider wrapping stop() in try/except for graceful shutdown.
If chain_webhook_batcher.stop() raises an exception, subsequent cleanup steps (task cancellations, close_redis, close_db) will not execute, potentially leaving resources unreleased.
💡 Suggested improvement
```diff
-    await chain_webhook_batcher.stop()
+    try:
+        await chain_webhook_batcher.stop()
+    except Exception as exc:
+        logger.error("chain_webhook_batcher shutdown error: %s", exc)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
try:
    await chain_webhook_batcher.stop()
except Exception as exc:
    logger.error("chain_webhook_batcher shutdown error: %s", exc)
```
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/chain_webhook_indexer.py`:
- Around line 51-57: The indexer credential is currently treated as a single
bearer secret and notify_user_id (Field notify_user_id) is optional, allowing
any holder of the indexer secret to broadcast to all active webhooks; fix this
by requiring scoping and enforcing authorization: make notify_user_id required
in the indexer payload or, if optional, validate the indexer credential against
an allowlist of user UUIDs and reject requests with no notify_user_id; update
the chain_webhook_indexer.py logic that parses the credential (the
bearer/indexer auth handling around the code that reads notify_user_id) to
enforce the requirement and add explicit authorization checks, and modify
contributor_webhook_service.py (the code path in the function that queries
active webhooks around Lines 238-245) so that it never performs an unscoped
query when notify_user_id is missing—return 400/403 instead or scope the query
to allowed user_ids tied to the indexer credential.
In `@backend/app/models/webhook_delivery.py`:
- Around line 42-52: The model's event_types Column uses generic JSON but the
Alembic migration uses postgresql.JSONB, causing schema drift; update the
WebhookDelivery model's event_types Column to use PostgreSQL JSONB (matching the
migration 006_webhook_delivery_attempts.py) and add the corresponding import
from sqlalchemy.dialects.postgresql (ensure any astext/text type handling
matches the migration) so Base.metadata.create_all() produces the same JSONB
column as Alembic.
In `@backend/app/services/contributor_webhook_service.py`:
- Around line 187-193: The current sequential loop that awaits
self._deliver_with_retry for each webhook (seen in the fan-out loop around the
for wh in webhooks at lines ~187-193 and the similar loop at ~247-250) causes
one slow/dead endpoint to block all subsequent deliveries; change the
implementation to fire deliveries concurrently instead of awaiting each call
inline: create asyncio Tasks (or use asyncio.gather / an asyncio.TaskGroup) for
each call to self._deliver_with_retry(wh, event, payload_bytes,
event_types=[event]) and then await them as a group; to avoid unbounded
concurrency, wrap individual delivery calls with a concurrency limiter (e.g., an
asyncio.Semaphore or bounded worker pool) so you still limit parallelism while
ensuring a single slow endpoint cannot stall the entire fan-out. Ensure error
handling/logging is preserved for each task.
- Around line 277-338: The loop currently lets DB write failures from
_log_attempt()/_record_delivery() propagate into the outer retry flow and cause
re-sends after a 2xx; to fix, ensure a successful HTTP 2xx response NEVER
triggers a retry even if DB writes fail by isolating those writes in their own
try/except and swallowing/logging DB errors without re-raising them, then
immediately return; specifically, in the send loop around the response-success
branch that references webhook.id, batch_id, attempt, resp.status, call
_log_attempt and _record_delivery inside a nested try/except (or perform the
return before propagating errors) so exceptions from
_log_attempt/_record_delivery do not set last_exc or fall through to the
retry/backoff (MAX_ATTEMPTS, BACKOFF_BASE_SECONDS) path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: cd85debc-9560-482e-ae99-73017ab19ddb
📒 Files selected for processing (5)
.github/workflows/anchor.yml
backend/alembic/versions/006_webhook_delivery_attempts.py
backend/app/api/chain_webhook_indexer.py
backend/app/models/webhook_delivery.py
backend/app/services/contributor_webhook_service.py
```python
block_time: Optional[str] = Field(
    None,
    description="ISO-8601 UTC timestamp from the block (optional).",
)
accounts: dict[str, Any] = Field(
    default_factory=dict,
    description="Relevant account pubkeys and parsed fields (indexer-specific).",
)
bounty_id: str = Field(
    default="",
    description="Optional bounty correlation id when applicable.",
)
extra: dict[str, Any] = Field(
    default_factory=dict,
    description="Additional fields merged into the webhook ``data`` object.",
)
notify_user_id: Optional[str] = Field(
    None,
    description=(
        "If set, only webhooks owned by this user UUID receive the event. "
        "If omitted, all active subscriber URLs receive batched deliveries."
    ),
)
```
The ingest schema is not enforced for several structured fields.
Lines 35-57 accept block_time and notify_user_id as plain strings, so malformed timestamps and non-UUID values pass validation. Lines 101-106 then forward block_time verbatim into outbound payloads, and backend/app/services/contributor_webhook_service.py Lines 241-242 only coerce notify_user_id to UUID during batch delivery, after this endpoint has already returned 202. The merge at Lines 101-102 also lets extra.accounts overwrite the reserved data.accounts object, which breaks the documented event shape.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Input validation and SQL injection vectors; Error handling and edge case coverage; API contract consistency with spec.
Also applies to: 101-111
```python
notify_user_id: Optional[str] = Field(
    None,
    description=(
        "If set, only webhooks owned by this user UUID receive the event. "
        "If omitted, all active subscriber URLs receive batched deliveries."
    ),
)
```
One shared indexer key can broadcast arbitrary events to every subscriber.
Lines 69-81 treat the indexer credential as a single bearer secret, and Lines 51-57 make recipient scoping optional. In backend/app/services/contributor_webhook_service.py Lines 238-245, a missing notify_user_id becomes an unscoped query over every active webhook. That means any caller holding the indexer secret can synthesize arbitrary on-chain events and fan them out platform-wide, rather than being constrained to a specific source or subscriber set.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Authentication/authorization gaps.
Also applies to: 69-81, 89-112
```python
event_types = Column(JSON, nullable=False)
attempt_number = Column(Integer, nullable=False)
success = Column(Boolean, nullable=False)
http_status = Column(Integer, nullable=True)
error_message = Column(Text, nullable=True)
created_at = Column(
    DateTime(timezone=True), nullable=False, default=_utcnow, index=True
)

__table_args__ = (
    Index("ix_webhook_delivery_webhook_created", "webhook_id", "created_at"),
```
🧩 Analysis chain
🏁 Scripts executed:

```shell
# First, find and examine the webhook_delivery model file
find . -name "webhook_delivery.py" -type f

# Find the migration file mentioned
find . -path "*alembic/versions*" -name "*webhook*" -type f

# Also check for any Alembic version files that might create webhook_delivery_attempts
find . -path "*alembic/versions*" -name "*.py" -type f | head -20

# Read the webhook_delivery model file to see the actual column definition
wc -l backend/app/models/webhook_delivery.py

# Read around line 42 in the model
cat -n backend/app/models/webhook_delivery.py | sed -n '30,55p'

# Read the migration file to see the column definition at lines 32-34
cat -n backend/alembic/versions/006_webhook_delivery_attempts.py | sed -n '25,45p'

# Check if the model's metadata is used for runtime schema creation
rg -n "create_all|metadata.create_all" backend/app --type py -A 2 -B 2

# Check imports in the model file to understand what JSON type is being used
head -30 backend/app/models/webhook_delivery.py

# Check the database initialization code
find backend -name "*.py" -type f | xargs rg -l "create_all|metadata" | head -10

# Verify how the JSON type behaves - check if there are any compatibility layers
rg -n "JSON|JSONB" backend/app/models/webhook_delivery.py

# Check if there are other models using JSONB to see the pattern
rg -n "postgresql.JSONB" backend/app/models/ -A 1 -B 1
```

Repository: SolFoundry/solfoundry
event_types column uses generic JSON type in model but postgresql.JSONB in migration.
Line 42 of backend/app/models/webhook_delivery.py declares event_types as Column(JSON, nullable=False) (generic SQLAlchemy type). However, backend/alembic/versions/006_webhook_delivery_attempts.py lines 32-34 create the same column as postgresql.JSONB(astext_type=sa.Text()).
This mismatch causes schema divergence: databases bootstrapped via Base.metadata.create_all() (called in backend/app/database.py:183) will create a generic JSON column, while Alembic-migrated databases will have JSONB. The types differ in PostgreSQL behavior—JSONB supports indexing and has distinct operator semantics. This inconsistency violates API contract consistency with the Alembic spec and will trigger schema autogeneration churn.
Align the model to use postgresql.JSONB to match the migration.
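One way to express the alignment, sketched here with with_variant so non-PostgreSQL test databases keep working. The class and table names are illustrative, not the PR's actual model; importing JSONB and using it unconditionally, as the migration does, is the stricter alternative.

```python
# Sketch: generic JSON everywhere, but JSONB whenever the dialect is
# PostgreSQL, matching migration 006_webhook_delivery_attempts.py.
from sqlalchemy import JSON, Column, Integer, Text
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class WebhookDeliveryAttemptExample(Base):
    __tablename__ = "webhook_delivery_attempts_example"  # illustrative name

    id = Column(Integer, primary_key=True)
    event_types = Column(
        JSON().with_variant(JSONB(astext_type=Text()), "postgresql"),
        nullable=False,
    )
```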
```python
for wh in webhooks:
    await self._deliver_with_retry(
        wh,
        event,
        payload_bytes,
        event_types=[event],
    )
```
Sequential fan-out defeats the 5-second batching window under partial failure.
Lines 187-193 and 247-250 await each webhook delivery one endpoint at a time. With the current 10-second timeout plus 2s/4s backoff, one dead endpoint can hold the loop for roughly 36 seconds before later subscribers are attempted. For global events or users near the 10-webhook cap, delivery latency can balloon from seconds to minutes.
Also applies to: 247-250
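The "roughly 36 seconds" figure can be checked directly from the constants cited in this review (10s request timeout, backoff base 2, 3 attempts), assuming each attempt runs to the full timeout:

```python
# Quick worst-case arithmetic for one dead endpoint, per the review comment.
TIMEOUT_SECONDS = 10
BACKOFF_BASE_SECONDS = 2
MAX_ATTEMPTS = 3

total = 0
for attempt in range(1, MAX_ATTEMPTS + 1):
    total += TIMEOUT_SECONDS  # each POST runs to the timeout
    if attempt < MAX_ATTEMPTS:
        total += BACKOFF_BASE_SECONDS ** attempt  # sleep 2s, then 4s
print(total)  # 36
```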
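The concurrent fan-out this comment suggests can be sketched with asyncio.gather plus a semaphore. deliver() below is a stand-in for the real HTTP delivery call with retries; the names are illustrative.

```python
# Sketch: bounded concurrent fan-out so one slow endpoint cannot stall the rest.
import asyncio
import time

async def deliver(url: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for the HTTP POST + retries
    return url

async def fan_out(targets: list[tuple[str, float]], limit: int = 5) -> list:
    sem = asyncio.Semaphore(limit)

    async def one(url: str, delay: float):
        async with sem:  # bound parallelism
            return await deliver(url, delay)

    # return_exceptions=True keeps per-delivery failures isolated, preserving
    # the per-endpoint error handling the review asks to retain.
    return await asyncio.gather(
        *(one(u, d) for u, d in targets), return_exceptions=True
    )

start = time.monotonic()
results = asyncio.run(fan_out([("a", 0.2), ("b", 0.2), ("c", 0.2)]))
elapsed = time.monotonic() - start
print(results, round(elapsed, 1))  # runs concurrently: ~0.2s, not 0.6s
```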
```python
                if 200 <= resp.status < 300:
                    await self._log_attempt(
                        webhook.id,
                        batch_id,
                        "batch",
                        event_types,
                        attempt,
                        True,
                        resp.status,
                        None,
                    )
                    await self._record_delivery(webhook.id, success=True)
                    logger.info(
                        "Webhook batch delivered: id=%s attempt=%d status=%d",
                        webhook.id,
                        attempt,
                        resp.status,
                    )
                    return
                last_exc = RuntimeError(
                    f"HTTP {resp.status} from {webhook.url}"
                )
                await self._log_attempt(
                    webhook.id,
                    batch_id,
                    "batch",
                    event_types,
                    attempt,
                    False,
                    resp.status,
                    str(last_exc),
                )
                logger.warning(
                    "Webhook batch non-2xx: id=%s attempt=%d status=%d",
                    webhook.id,
                    attempt,
                    resp.status,
                )
            except Exception as exc:
                last_exc = exc
                await self._log_attempt(
                    webhook.id,
                    batch_id,
                    "batch",
                    event_types,
                    attempt,
                    False,
                    None,
                    str(exc),
                )
                logger.warning(
                    "Webhook batch error: id=%s attempt=%d error=%s",
                    webhook.id,
                    attempt,
                    exc,
                )

            if attempt < MAX_ATTEMPTS:
                delay = BACKOFF_BASE_SECONDS**attempt
                await asyncio.sleep(delay)

        await self._record_delivery(webhook.id, success=False)
```
A database write failure after a 2xx response causes duplicate webhook deliveries.
Lines 277-288 and 377-387 keep _log_attempt() / _record_delivery() inside the same try as the outbound POST, while _log_attempt() commits immediately at Lines 460-471. If either DB write raises after the subscriber has already returned 2xx, control drops into the retry path at Lines 315-336 / 416-438 and the same payload is POSTed again. That turns a local persistence issue into duplicated downstream events, including replaying whole batch payloads.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: Error handling and edge case coverage.
Also applies to: 377-440, 449-472
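The invariant this comment asks for, that a 2xx response is terminal even if bookkeeping fails, can be sketched as follows. All names here are stand-ins for the service's methods; the DB write deliberately fails to show that the payload is still posted exactly once.

```python
# Sketch: once the subscriber returns 2xx, bookkeeping failures are swallowed
# and logged so the payload is never re-sent.
import asyncio
import logging

logger = logging.getLogger("delivery")
sent: list[int] = []

async def post_payload(attempt: int) -> int:
    sent.append(attempt)
    return 200  # pretend the subscriber accepted the payload

async def log_attempt_to_db() -> None:
    raise RuntimeError("db unavailable")  # simulated persistence failure

async def deliver_with_retry(max_attempts: int = 3) -> None:
    for attempt in range(1, max_attempts + 1):
        status = await post_payload(attempt)
        if 200 <= status < 300:
            try:
                await log_attempt_to_db()
            except Exception:
                # A 2xx must never be retried because bookkeeping failed.
                logger.exception("attempt logging failed after success")
            return  # success path always exits the retry loop

asyncio.run(deliver_with_retry())
print(sent)  # [1] -- the payload was posted exactly once
```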
```python
    *,
    event_types: list[str],
    delivery_mode: str = "single",
    batch_id: UUID | None = None,
) -> None:
```
Single-event attempts are being recorded with synthetic batch IDs.
Lines 353-365 generate a UUID even when delivery_mode remains "single", and Lines 573-575 expose that batch_id through the dashboard payload. The field stops meaning “correlation id for an actual batched delivery” and instead becomes non-null for every single-event attempt, which weakens the dashboard/API semantics around batch history.
As per coding guidelines, backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec.
Also applies to: 365-365, 573-575
Description
This pull request extends the contributor outbound webhook system so subscribers can receive on-chain–sourced events (escrow, reputation, staking) in addition to existing bounty lifecycle events.
- New events: escrow.locked, escrow.released, reputation.updated, stake.deposited, stake.withdrawn, plus webhook.test for integration checks.
- POST /api/webhooks/internal/chain-events accepts normalized payloads from Helius, Shyft, or any worker, authenticated via X-Chain-Indexer-Key / CHAIN_WEBHOOK_INDEXER_SECRET. Optional notify_user_id limits delivery to one user's registered URLs.
- Batched deliveries use X-SolFoundry-Event: batch and a documented JSON envelope (delivery_mode, batch_id, events[]) with transaction_signature, slot, timestamp, and data.accounts (and optional bounty_id/extra).
- transaction_signature/slot are included when relevant.
- A webhook_delivery_attempts table logs each HTTP attempt (retries, status, errors).
- GET /api/webhooks/delivery-stats exposes the 7-day failure rate and recent attempts.
- POST /api/webhooks/test sends a signed test event to the caller's endpoints.
- docs/WEBHOOK_EVENT_CATALOG.md: full event catalog and payload schemas.
- Tests: backend/tests/test_chain_webhooks.py and updates to test_contributor_webhooks.py.

Closes #508
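A hedged sketch of what a batched envelope and its signature might look like, built only from the fields named in this description (delivery_mode, batch_id, events[], transaction_signature, slot, timestamp, data.accounts). The signing scheme (HMAC-SHA256 over the raw body) and the X-SolFoundry-Signature header name are assumptions; all values are placeholders.

```python
# Illustrative batch envelope plus an HMAC-SHA256 signature over the raw body.
import hashlib
import hmac
import json

WEBHOOK_SECRET = b"subscriber-secret"  # assumed per-webhook signing secret

envelope = {
    "delivery_mode": "batch",
    "batch_id": "6f1c2e1a-0000-4000-8000-000000000000",
    "events": [
        {
            "event": "escrow.locked",
            "transaction_signature": "ExampleTxSignature111",
            "slot": 250_000_000,
            "timestamp": "2025-01-01T00:00:05Z",
            "data": {"accounts": {"escrow": "ExamplePubkey111"}, "bounty_id": "42"},
        }
    ],
}

body = json.dumps(envelope, separators=(",", ":")).encode()
signature = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
headers = {
    "X-SolFoundry-Event": "batch",        # header documented in this PR
    "X-SolFoundry-Signature": signature,  # hypothetical header name
}
print(headers["X-SolFoundry-Event"], len(signature))  # batch 64
```

Subscribers would recompute the HMAC over the received body and compare digests before trusting the payload.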
Solana Wallet for Payout
Wallet: EwWiRi5zkynTYN9pvgjqCEiWKuFwR7SLdgFox9R3GmyS
Type of Change
Checklist
No console.log or debugging code left behind

Testing
Details: Ran pytest tests/test_chain_webhooks.py tests/test_contributor_webhooks.py. Verified FastAPI app imports (from app.main import app). Indexer route behavior covered with httpx AsyncClient against a minimal app; batcher grouping and batch JSON schema covered in unit tests.

Additional Notes

- Run alembic upgrade head (revision 006_webhook_delivery_attempts) in environments that use Alembic; init_db/create_all picks up the model in dev.
- Set CHAIN_WEBHOOK_INDEXER_SECRET before enabling indexer traffic; if unset, chain ingest returns 503 (fail closed).
alembic upgrade head(revision006_webhook_delivery_attempts) in environments that use Alembic;init_db/create_allpicks up the model in dev.CHAIN_WEBHOOK_INDEXER_SECRETbefore enabling indexer traffic; if unset, chain ingest returns 503 (fail closed).