alcor_exchange/alcor-exchange-docs.md
In this guide, we'll set up a complete Alcor Exchange data pipeline from API credentials to your first data load in just 10 minutes. You'll end up with a fully declarative Python pipeline based on dlt's REST API connector, as in the partial example below:

```python-outcome
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def alcor_exchange_migration_source(access_token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://alcor.exchange/api/v2/",
            "auth": {
                "type": "bearer",
                "token": access_token,
            },
        },
        "resources": [
            "pairs", "tokens", "markets"
        ],
    }
    [...]
    yield from rest_api_resources(config)


def get_data() -> None:
    # Connect to destination
    pipeline = dlt.pipeline(
        pipeline_name='alcor_exchange_migration_pipeline',
        destination='duckdb',
        dataset_name='alcor_exchange_migration_data',
    )
    # Load the data
    load_info = pipeline.run(alcor_exchange_migration_source())
    print(load_info)
```

### Why use dltHub Workspace with LLM Context to generate Python pipelines?

- Accelerate pipeline development with AI-native context
- Debug pipelines, validate schemas and data with the integrated **Pipeline Dashboard**
- Build Python notebooks for end users of your data
- **Low maintenance** thanks to schema evolution with type inference, resilience, and self-documenting REST API connectors. A shallow learning curve makes the pipeline easy to extend by any team member
- dlt is the tool of choice for Pythonic Iceberg Lakehouses, bringing mature data loading to Pythonic Iceberg with or without catalogs

## What you’ll do

We’ll show you how to generate a readable and easily maintainable Python script that fetches data from the Alcor Exchange API and loads it into Iceberg, DataFrames, files, or a database of your choice. Here are some of the endpoints you can load (a resource sketch follows the list):

- Market Data: Provides information about trading pairs, tokens, and market activity.
- Orders: Contains endpoints for placing buy and sell orders, as well as managing order books.
- Swaps: Offers functionality for interacting with swap pools and retrieving routing information.
- Account Management: Allows users to view account-specific information such as deals and positions.
- Analytics: Provides global analytics related to the exchange's performance.
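
For illustration, here is a minimal sketch of how string entries like `"pairs"` in the `resources` list of the `RESTAPIConfig` above can be expanded into explicit resource dicts. The paths come from the YAML spec further down; the `limit` value is a hypothetical page size you should verify against real responses:

```python
# Hedged sketch: explicit resource entries for the RESTAPIConfig "resources" list.
pairs = {
    "name": "pairs",
    "endpoint": {
        "path": "/pairs",
        "method": "GET",
    },
}

latest_trades = {
    "name": "latest_trades",
    "endpoint": {
        # ticker id taken from the example in the YAML spec below
        "path": "/tickers/pgl-prospectorsw_wax-eosio.token/latest_trades",
        "params": {"limit": 100},  # hypothetical page size
    },
}
```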

You will then debug the Alcor Exchange pipeline using our Pipeline Dashboard tool to ensure it is copying the data correctly, before building a Notebook to explore your data and build reports.

## Setup & steps to follow

```default
Before getting started, let's make sure Cursor is set up correctly:
- We suggest using a model like Claude 3.7 Sonnet or better
- Index the REST API Source tutorial: https://dlthub.com/docs/dlt-ecosystem/verified-sources/rest_api/ and add it to context as **@dlt rest api**
- [Read our full steps on setting up Cursor](https://dlthub.com/docs/dlt-ecosystem/llm-tooling/cursor-restapi#23-configuring-cursor-with-documentation)
```

Now you're ready to get started!

1. ⚙️ **Set up `dlt` Workspace**

Install dlt with workspace support, which includes the duckdb destination used in this guide:
```shell
pip install "dlt[workspace]"
```

Initialize a dlt pipeline with Alcor Exchange support.
```shell
dlt init dlthub:alcor_exchange_migration duckdb
```

The `init` command will set up the necessary files and folders for the next step.

2. 🤠 **Start LLM-assisted coding**

Here’s a prompt to get you started:

```prompt
Please generate a REST API Source for Alcor Exchange API, as specified in @alcor_exchange_migration-docs.yaml
Start with the pairs endpoint and skip incremental loading for now.
Place the code in alcor_exchange_migration_pipeline.py and name the pipeline alcor_exchange_migration_pipeline.
If the file exists, use it as a starting point.
Do not add or modify any other files.
Use @dlt rest api as a tutorial.
After adding the endpoints, allow the user to run the pipeline with python alcor_exchange_migration_pipeline.py and await further instructions.
```


3. 🔒 **Set up credentials**

Authentication uses an API key, which must be sent as a bearer token with each request to access the endpoints.

To get the appropriate API keys, please visit the original source at https://alcor.exchange/.
If you want to protect your environment secrets in a production environment, look into [setting up credentials with dlt](https://dlthub.com/docs/walkthroughs/add_credentials).
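
For local development, here is a sketch of what `.dlt/secrets.toml` might look like, assuming the generated source reads `access_token` from `dlt.secrets` as in the example at the top; the exact section name depends on how your generated source is named:

```toml
# .dlt/secrets.toml -- hedged sketch; the section name must match your source
[sources.alcor_exchange_migration]
access_token = "your-api-key-here"
```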

4. 🏃‍♀️ **Run the pipeline in the Python terminal in Cursor**

```shell
python alcor_exchange_migration_pipeline.py
```

If your pipeline runs correctly, you’ll see something like the following:

```shell
Pipeline alcor_exchange_migration load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset alcor_exchange_migration_data
The duckdb destination used duckdb:/alcor_exchange_migration.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
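
You can also re-inspect the last run from the command line; `dlt pipeline <name> trace` prints the trace of the most recent run:

```shell
dlt pipeline alcor_exchange_migration_pipeline trace
```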

5. 📈 **Debug your pipeline and data with the Pipeline Dashboard**

Now that you have a running pipeline, you need to make sure it’s correct, so that you don’t introduce silent failures like misconfigured pagination or incremental loading errors. Launching the dlt Workspace Pipeline Dashboard lets you inspect the pipeline and test it. Here you can see:
- Pipeline overview: state and load metrics
- Data schema: tables, columns, types, hints
- The data itself, which you can query directly

```shell
dlt pipeline alcor_exchange_migration_pipeline show --dashboard
```
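
If you prefer to sanity-check the load from Python instead, here is a small sketch using the pipeline's SQL client; the table name `pairs` is an assumption based on the endpoints loaded above:

```python
import dlt

# Attach to the existing pipeline by name
pipeline = dlt.pipeline("alcor_exchange_migration_pipeline")
with pipeline.sql_client() as client:
    # "pairs" assumes that endpoint was loaded; adjust to your resources
    rows = client.execute_sql("SELECT COUNT(*) FROM pairs")
    print(f"pairs rows loaded: {rows[0][0]}")
```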

6. 🐍 **Build a Notebook with data explorations and reports**

With the pipeline and data partially validated, you can continue with custom data explorations and reports. To get started, paste the snippet below into a new marimo Notebook and ask your LLM to go from there. Jupyter Notebooks and regular Python scripts are supported as well.


```python
import dlt

data = dlt.pipeline("alcor_exchange_migration_pipeline").dataset()
# preview the pairs table as a Pandas DataFrame
data.pairs.df().head()
```
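
From there, a typical first exploration is plain pandas. The snippet below is hypothetical: the column names depend on the actual API schema, so check the `.head()` output first and adjust:

```python
# Hypothetical exploration -- verify column names against the real schema
pairs = data.pairs.df()
print(pairs.shape)   # how many pairs were loaded
print(pairs.dtypes)  # inferred column types
# e.g., if a "chain" column exists, count pairs per chain:
# pairs.groupby("chain").size().sort_values(ascending=False)
```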

## Running into errors?

Notably, orders cannot be updated directly; to change an order, cancel the existing one and place a new one. The API also returns standard HTTP error responses, such as 401 Unauthorized for a wrong API key and 400 Bad Request for an invalid request (see the errors list in the YAML spec below).
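
When a resource fails, it can help to probe the endpoint by hand before touching the pipeline. Here is a minimal sketch using dlt's `requests` helper, a drop-in replacement for `requests` with built-in retries and backoff; it assumes the endpoint returns a JSON list:

```python
from dlt.sources.helpers import requests  # retries 429/5xx automatically

resp = requests.get("https://alcor.exchange/api/v2/pairs")
resp.raise_for_status()
print(resp.json()[:1])  # peek at the first record, assuming a JSON array response
```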

### Extra resources

- [Learn more with our 1h LLM-assisted coding course!](https://www.youtube.com/watch?v=GGid70rnJuM)

## Next steps

- [How to deploy a pipeline](https://dlthub.com/docs/walkthroughs/deploy-a-pipeline)
- [How to explore your data in marimo Notebooks](https://dlthub.com/docs/general-usage/dataset-access/marimo)
- [How to query your data in Python with dataset](https://dlthub.com/docs/general-usage/dataset-access/dataset)
- [How to create REST API Sources with Cursor](https://dlthub.com/docs/dlt-ecosystem/llm-tooling/cursor-restapi)
alcor_exchange/alcor-exchange-docs.yaml
resources:
- name: tokens
  endpoint:
    path: /tokens
    method: GET
    data_selector: ''
    params: {}
- name: global_data
  endpoint:
    path: /analytics/global
    method: GET
    data_selector: ''
    params: {}
- name: trading_pairs
  endpoint:
    path: /pairs
    method: GET
    data_selector: ''
    params: {}
- name: tickers
  endpoint:
    path: /tickers
    method: GET
    data_selector: ''
    params: {}
- name: orderbook
  endpoint:
    path: /orderbook
    method: GET
    data_selector: ''
    params: {}
- name: latest_trades
  endpoint:
    path: /tickers/pgl-prospectorsw_wax-eosio.token/latest_trades
    method: GET
    data_selector: ''
    params:
      limit: '2'
- name: historical_trades
  endpoint:
    path: /tickers/pgl-prospectorsw_wax-eosio.token/historical_trades
    method: GET
    data_selector: ''
    params:
      limit: '2'
- name: pool
  endpoint:
    path: /swap/pools/{pool_id}
    method: GET
    data_selector: ''
    params: {}
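# Editor's note (sketch): {pool_id} and {account} below are path placeholders.
# In a dlt REST API config they are typically filled via "resolve" params from a
# parent resource (e.g. pools) or passed explicitly before the request is made.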
- name: pools
  endpoint:
    path: /swap/pools
    method: GET
    data_selector: ''
    params: {}
- name: pool_swaps
  endpoint:
    path: /swap/pools/{pool_id}/swaps
    method: GET
    data_selector: ''
    params: {}
- name: pool_positions
  endpoint:
    path: /swap/pools/{pool_id}/positions
    method: GET
    data_selector: ''
    params: {}
- name: swap_route
  endpoint:
    path: /swapRouter/getRoute
    method: GET
    data_selector: ''
    params:
      trade_type: EXACT_INPUT
      input: wax-eosio.token
      output: tlm-alien.worlds
      amount: '1.00000000'
      slippage: '0.30'
      receiver: alcordexfund
      maxHops: '2'
- name: swap_history
  endpoint:
    path: /account/{account}/swap-history
    method: GET
    data_selector: records
    params:
      account: string
- name: positions
  endpoint:
    path: /account/{account}/positions
    method: GET
    data_selector: records
    params:
      account: string
- name: positions_history
  endpoint:
    path: /account/{account}/positions-history
    method: GET
    data_selector: records
    params:
      account: string
- name: deals
  endpoint:
    path: /account/{account}/deals
    method: GET
    data_selector: records
    params:
      account: string
- name: markets
  endpoint:
    path: /markets
    method: GET
    data_selector: rows
- name: buy_orders
  endpoint:
    path: /buyorder
    method: GET
    data_selector: rows
- name: sell_orders
  endpoint:
    path: /sellorder
    method: GET
    data_selector: rows
notes:
- Orders cannot be updated; to change one, cancel the current order and place a new one.
errors:
- '400: Bad Request -- Your request is invalid.'
- '401: Unauthorized -- Your API key is wrong.'
- '403: Forbidden -- The requested resource is restricted to administrators.'
- '404: Not Found -- The specified resource could not be found.'
- '405: Method Not Allowed -- You tried to access a resource with an invalid method.'
- '406: Not Acceptable -- You requested a format that isn''t JSON.'
- '410: Gone -- The requested resource has been removed from our servers.'
- '418: I''m a teapot.'
- '429: Too Many Requests -- You''re sending too many requests! Slow down!'
- '500: Internal Server Error -- We had a problem with our server. Try again later.'
- '503: Service Unavailable -- We''re temporarily offline for maintenance. Please
  try again later.'
auth_info:
  mentioned_objects: []
client:
  base_url: https://alcor.exchange/api/v2/
source_metadata: null