Update service-level-indicators/spark-retrieval-success-rate.md to describe how Spark v1.5 ingests DDO deals #238 #13
Thank you @Goddhi for the pull request. It goes in the right direction.

The patch is difficult to review because it contains a lot of unrelated changes, most notably changing `’` and `“` to `'` and `"`. Please revert those changes.
> The `spark-deal-observer` works by monitoring the Filecoin network for new DDO deals and automatically adding them to the Spark Eligible Deal database. This process involves the following steps:
>
> 1. **Event Listening**: The `spark-deal-observer` listens for events related to DDO deals on the Filecoin blockchain.
Let's be more specific here and explain what event we listen for (`claim`) and how we calculate deal expiration time.
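For illustration, the expiration calculation could be sketched as follows. This is a minimal sketch, not the actual spark-deal-observer code: the `termStart`/`termMax` field names mirror the verified-registry claim schema, and the constants are Filecoin mainnet values.

```javascript
// Filecoin mainnet constants
const GENESIS_UNIX_SECONDS = 1598306400 // mainnet genesis: 2020-08-24T22:00:00Z
const EPOCH_SECONDS = 30                // one Filecoin epoch lasts 30 seconds

// Convert a chain epoch number to a JavaScript Date
function epochToDate (epoch) {
  return new Date((GENESIS_UNIX_SECONDS + epoch * EPOCH_SECONDS) * 1000)
}

// A claim is valid from termStart for at most termMax epochs,
// so the deal expires at epoch termStart + termMax.
// Field names here are illustrative, taken from the claim schema.
function claimExpiration (claim) {
  return epochToDate(claim.termStart + claim.termMax)
}

// Example: a claim starting at epoch 4,000,000 with a ~180-day max term
const expiresAt = claimExpiration({ termStart: 4_000_000, termMax: 518_400 })
```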
> 2. **Data Extraction**: When a new DDO deal is detected, the observer extracts relevant information, including the payload CID and the associated storage provider.
We need to be more specific here and explain how we leverage IPNI Advertisement metadata to discover payload CIDs advertised for a given PieceCID. The documentation should also explain how we convert a multihash found in advertised entries to a CID we can retrieve. (Answer: we assume Raw (0x55) codec and create a CID in v1 format.)
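For illustration, that conversion could be sketched as below. The helper names are hypothetical; the byte layout follows the CIDv1 format (version varint, codec varint, multihash bytes) with a lower-case base32 multibase encoding.

```javascript
const BASE32 = 'abcdefghijklmnopqrstuvwxyz234567' // RFC 4648 lower-case alphabet, no padding

// Encode bytes as base32 by emitting 5 bits at a time
function base32Encode (bytes) {
  let bits = 0
  let value = 0
  let out = ''
  for (const byte of bytes) {
    value = (value << 8) | byte
    bits += 8
    while (bits >= 5) {
      out += BASE32[(value >>> (bits - 5)) & 31]
      bits -= 5
    }
  }
  if (bits > 0) out += BASE32[(value << (5 - bits)) & 31]
  return out
}

// Wrap a bare multihash (as found in IPNI advertisement entries) in a CIDv1,
// assuming the Raw (0x55) codec. Both 0x01 and 0x55 fit in a single varint byte.
function multihashToRawCid (multihashBytes) {
  const cidBytes = new Uint8Array(2 + multihashBytes.length)
  cidBytes[0] = 0x01 // CID version 1
  cidBytes[1] = 0x55 // Raw codec
  cidBytes.set(multihashBytes, 2)
  // Multibase prefix 'b' marks a base32(lower) string encoding
  return 'b' + base32Encode(cidBytes)
}
```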
> 3. **Database Update**: The extracted data is then formatted and inserted into the Spark Eligible Deal database, marking the deal as eligible for retrieval testing.
> 4. **Real-Time Updates**: This process allows for immediate updates to the Spark system, meaning that new storage providers can be evaluated for their Spark scores without the previous delays associated with nightly batch processes.
The updates are not real-time:
- We are processing only events older than 8 hours to get high confidence in chain finality.
- The code updating the database of eligible deals picks only deals older than 48 hours (IIRC).
It's important to explain that in this document.
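For illustration, the two age filters described above could be sketched like this. The threshold constants come from the points above; the helper and field names are hypothetical, not the actual spark-deal-observer code.

```javascript
const FINALITY_LAG_HOURS = 8      // only process chain events at least 8 hours old
const ELIGIBILITY_AGE_HOURS = 48  // only pick deals at least 48 hours old

// Hypothetical helper: the point in time `h` hours before `now`
const hoursAgo = (h, now = Date.now()) => new Date(now - h * 3600 * 1000)

// Keep only events old enough to be considered final with high confidence
function selectFinalizedEvents (events, now = Date.now()) {
  const cutoff = hoursAgo(FINALITY_LAG_HOURS, now)
  return events.filter((e) => e.timestamp <= cutoff)
}

// Keep only deals old enough to enter the eligible-deals database
function selectEligibleDeals (deals, now = Date.now()) {
  const cutoff = hoursAgo(ELIGIBILITY_AGE_HOURS, now)
  return deals.filter((d) => d.observedAt <= cutoff)
}
```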
This implementation introduces the spark-deal-observer pipeline for real-time ingestion of Direct Data Onboarding (DDO) deals in Spark v1.5. Key changes include: