diff --git a/README.md b/README.md index fda9abe242..dde417cb1b 100644 --- a/README.md +++ b/README.md @@ -1,138 +1,327 @@ -# hummingbot-deploy +# Condor Deploy -Welcome to the Hummingbot Deploy project. This guide will walk you through the steps to deploy multiple trading bots using a centralized dashboard powered by the Hummingbot API and comprehensive backend services. +Welcome to the Hummingbot Deploy repo. It contains the bash setup script that automates installing Condor and, optionally, the Hummingbot API. ## Prerequisites -- Docker must be installed on your machine. If you do not have Docker installed, you can download and install it from [Docker's official site](https://www.docker.com/products/docker-desktop). -- If you are on Windows, you'll need to setup WSL2 and a Linux terminal like Ubuntu. Make sure to run the commands below in a Linux terminal and not in the Windows command prompt or Powershell. +- **Linux/macOS** with a terminal (Windows users need WSL2 with Ubuntu) +- **Docker** and **Docker Compose** (the installer will install these if missing) +- **Make** and **Git** (the installer will install these if missing) +- **Conda/Anaconda** (required only for Hummingbot API; installer will offer to install if needed) ## Architecture This deployment includes: -- **Dashboard** (port 8501): Streamlit-based web UI for bot management and monitoring -- **Hummingbot API** (port 8000): FastAPI backend service for bot operations and data management -- **PostgreSQL Database** (port 5432): Persistent storage for bot configurations and performance data -- **EMQX Broker** (port 1883): MQTT broker for real-time bot communication and telemetry - -All services are orchestrated using Docker Compose for seamless deployment and management. - -## Installation - -1. **Clone the repository:** - ```bash - git clone https://github.com/hummingbot/deploy.git - cd deploy - ``` - -## Running the Application - -1. **Start and configure the Application** - - Run the following command to download and start the app. - - ```bash - bash setup.sh - ``` -2. **Access the services:** - - **Dashboard**: Open your web browser and go to `localhost:8501`. Replace `localhost` with the IP of your server if using a cloud server. - - **API Documentation**: Access the Hummingbot API docs at `localhost:8000/docs` - - **EMQX Dashboard**: Monitor MQTT broker at `localhost:18083` (admin/public) - -3. **API Keys and Credentials:** - - Go to the credentials page - - You add credentials to the master account by picking the exchange and adding the API key and secret. This will encrypt the keys and store them in the master account folder. - - If you are managing multiple accounts you can create a new one and start adding new credentials there. - -4. **Create a config for PMM Simple** - - Go to the tab PMM Simple and create a new configuration. Soon will be released a video explaining how the strategy works. - -5. **Deploy the configuration** - - Go to the Deploy tab, select a name for your bot, the image hummingbot/hummingbot:latest and the configuration you just created. - - Press the button to create a new instance. - -6. **Check the status of the bot** - - Go to the Instances tab and check the status of the bot. - - If it's not available is because the bot is starting, wait a few seconds and refresh the page. - - If it's running, you can check the performance of it in the graph, refresh to see the latest data. - - If it's stopped, probably the bot had an error, you can check the logs in the container to understand what happened. - -7. 
**[Optional] Monitor Services** - - **Hummingbot API**: Access full API documentation at `localhost:8000/docs` - - **Database**: PostgreSQL running on `localhost:5432` (hbot/hummingbot-api) - - **MQTT Broker**: EMQX dashboard at `localhost:18083` for real-time bot communication monitoring - -## Authentication - -Authentication is disabled by default. To enable Dashboard Authentication please follow the steps below: - -**Set Credentials (Optional):** - -The dashboard uses `admin` and `abc` as the default username and password respectively. It's strongly recommended to change these credentials for enhanced security.: - -- Navigate to the `deploy` folder and open the `credentials.yml` file. -- Add or modify the current username / password and save the changes afterward - - ``` - credentials: - usernames: - admin: - email: admin@gmail.com - name: John Doe - logged_in: False - password: abc - cookie: - expiry_days: 0 - key: some_signature_key # Must be string - name: some_cookie_name - pre-authorized: - emails: - - admin@admin.com - ``` -### Enable Authentication - -- Ensure the dashboard container is not running. -- Open the `docker-compose.yml` file within the `deploy` folder using a text editor. -- Locate the environment variable `AUTH_SYSTEM_ENABLED` under the dashboard service configuration. - - ``` - services: - dashboard: - container_name: dashboard - image: hummingbot/dashboard:latest - ports: - - "8501:8501" - environment: - - AUTH_SYSTEM_ENABLED=True - - BACKEND_API_HOST=hummingbot-api - - BACKEND_API_PORT=8000 - ``` -- Change the value of `AUTH_SYSTEM_ENABLED` from `False` to `True`. -- Save the changes to the `docker-compose.yml` file. -- Relaunch Dashboard by running `bash setup.sh` - -### Known Issues -- Refreshing the browser window may log you out and display the login screen again. This is a known issue that might be addressed in future updates. - - -## Dashboard Functionalities - -- **Config Generator:** - - Create and select configurations for different v2 strategies. - - Backtest and deploy the selected configurations. - -- **Bot Management:** - - Visualize bot performance in real-time. - - Stop and archive running bots. - -## Tutorial - -To get started with deploying your first bot, follow these step-by-step instructions: - -1. **Prepare your bot configurations:** - - Select a controller and backtest your controller configs. - -2. **Deploy a bot:** - - Use the dashboard UI to select and deploy your configurations. - -3. **Monitor and Manage:** - - Track bot performance and make adjustments as needed through the dashboard. +- **Condor Bot** (required): Telegram bot for managing and monitoring Hummingbot trading bots +- **Hummingbot API** (optional, port 8000): FastAPI backend service for bot operations and data management +- **PostgreSQL Database** (port 5432, if API installed): Persistent storage for bot configurations and performance data +- **EMQX Broker** (port 1883, if API installed): MQTT broker for real-time bot communication and telemetry + +Each repository manages its own Docker Compose configuration and setup process via Makefile. The installer orchestrates the complete deployment workflow. 
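+
+For orientation, the per-repository workflow that the installer automates looks roughly like this. This is a sketch only: the repository URLs are assumptions, and the authoritative steps are the `make setup` and `make deploy` targets described under "What the Installer Does" below.
+
+```bash
+# Condor (required). Repository URL assumed; the installer normally clones it for you.
+git clone https://github.com/hummingbot/condor.git
+cd condor
+make setup     # initializes .env and config.yml (Telegram bot token, admin user ID, ...)
+make deploy    # starts the Condor service via its own docker-compose.yml
+
+# Hummingbot API (optional). Repository URL assumed; requires Conda/Anaconda.
+cd ..
+git clone https://github.com/hummingbot/hummingbot-api.git
+cd hummingbot-api
+make setup     # configures the API environment (.env)
+make deploy    # starts the API, PostgreSQL, and EMQX via its own docker-compose.yml
+```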
+ +## Quick Install + +Run this single command to download and launch the installer: + +```bash +curl -fsSL https://raw.githubusercontent.com/hummingbot/deploy/refs/heads/main/setup.sh | bash +``` + +### Installation Options + +```bash +# Fresh installation (installs Condor first, then optionally API) +curl -fsSL https://raw.githubusercontent.com/hummingbot/deploy/refs/heads/main/setup.sh | bash + +# Upgrade existing installation +curl -fsSL https://raw.githubusercontent.com/hummingbot/deploy/refs/heads/main/setup.sh | bash -s -- --upgrade + +``` + +### Command Line Options + +| Option | Description | +|--------|-------------| +| `--upgrade` | Upgrade existing installation or install missing components | +| `-h, --help` | Show help message and usage examples | + +## What the Installer Does + +The setup script will: + +1. **Detect OS & Architecture** - Identifies your operating system (Linux/macOS) and CPU architecture (x86_64/ARM64) +2. **Install Dependencies** - Checks and installs missing tools: + - Git + - Docker + - Docker Compose + - Make (build-essentials) +3. **Clone Condor Repository** - Downloads the Condor bot source code +4. **Setup Condor** - Runs `make setup` to initialize Condor environment variables and configurations +5. **Deploy Condor** - Runs `make deploy` to start the Condor service +6. **Prompt for API Installation** - Asks if you want to install Hummingbot API + - If yes: + - Checks for Conda (offers to install Anaconda if missing) + - Clones Hummingbot API repository + - Runs `make setup` to configure API environment + - Runs `make deploy` to start the API service + +## Installation Flow + +### Fresh Installation + +``` +1. Start setup script + ↓ +2. Check & install dependencies + ↓ +3. Clone Condor repository + ↓ +4. Run: make setup (Condor) + ↓ +5. Run: make deploy (Condor) + ↓ +6. Prompt: Install Hummingbot API? + ├─ No → Installation complete + └─ Yes → Check for Conda + ├─ If missing → Install Anaconda (with auto-shell restart) + ├─ Clone Hummingbot API + ├─ Run: make setup (API) + └─ Run: make deploy (API) → Installation complete +``` + +### Upgrade Installation + +``` +1. Start with --upgrade flag + ↓ +2. Check & install dependencies (if needed) + ↓ +3. If Condor exists → git pull (update code) + ↓ +4. If API exists → git pull (update code) + ├─ If API doesn't exist → Prompt to install + └─ If yes → Clone, setup, and deploy + ↓ +5. Pull latest Docker images (Condor & API only) + ↓ +6. Restart services + ↓ +7. Display status +``` + +## Directory Structure + +After installation, your deployment directory will contain: + +``` +. 
+├── condor/ # Condor bot repository +│ ├── docker-compose.yml # Condor's service configuration +│ ├── .env # Environment variables (managed by Condor) +│ ├── config.yml # Condor configuration +│ └── routines/ # Custom bot routines +│ +└── hummingbot-api/ # Hummingbot API repository (if installed) + ├── docker-compose.yml # API's service configuration + ├── .env # Environment variables (managed by API) + └── bots/ # Bot instances and configurations +``` + +## Accessing Your Services + +After installation, access your services at: + +| Service | URL | Default Credentials | +|---------|-----|---------------------| +| Condor Bot | Your Telegram Bot | Send `/start` to your bot | +| Hummingbot API | http://localhost:8000/docs | Set during installation | +| PostgreSQL | localhost:5432 | Set during installation | +| EMQX Dashboard | http://localhost:18083 | admin / public | + +## Managing Your Installation + +Since each repository manages its own Docker Compose, use these commands: + +### Condor Bot + +```bash +cd condor + +# View running Condor service +docker compose ps + +# View logs +docker compose logs -f + +# Stop Condor +docker compose down + +# Start Condor +docker compose up -d + +# Upgrade Condor +docker compose pull && docker compose up -d +``` + +### Hummingbot API (if installed) + +```bash +cd hummingbot-api + +# View running API services +docker compose ps + +# View logs +docker compose logs -f + +# Stop all API services (API, PostgreSQL, EMQX) +docker compose down + +# Start all API services +docker compose up -d + +# Upgrade API services +docker compose pull && docker compose up -d +``` + +## Getting Started with Trading + +1. **Add Exchange API Credentials** + - Send `/credentials` command to Condor bot, or + - Access the API directly at http://localhost:8000/docs + - Add your exchange API keys (they will be encrypted) + +2. **Create a Trading Configuration** + - Use Condor bot's `/config` command, or + - Use the API's configuration endpoints + - Define your trading strategy and parameters + +3. **Deploy a Bot** + - Send `/deploy` command to Condor bot + - Select your configuration + - Monitor bot status in real-time + +4. 
**Monitor Performance** + - Check bot status via Condor bot + - View API logs for detailed information + - Access raw metrics via API endpoints + +## Upgrading + +To upgrade your installation to the latest versions: + +### Option 1: Re-run the installer (Recommended) + +```bash +curl -fsSL https://raw.githubusercontent.com/hummingbot/deploy/refs/heads/main/setup.sh | bash -s -- --upgrade +``` + +### Option 2: Manual upgrade + +```bash +# Upgrade Condor +cd condor +git pull +docker compose pull +docker compose up -d + +# Upgrade Hummingbot API (if installed) +cd ../hummingbot-api +git pull +docker compose pull +docker compose up -d +``` + +## Troubleshooting + +### Services not starting + +```bash +# Check Condor logs +cd condor && docker compose logs -f + +# Check API logs (if installed) +cd ../hummingbot-api && docker compose logs -f +``` + +### Condor bot not responding + +- Verify your Telegram Bot Token is correct +- Check `ADMIN_USER_ID` matches your Telegram user ID +- Ensure Condor container is running: `cd condor && docker compose ps` + +### API connection issues + +- Verify API container is running: `cd hummingbot-api && docker compose ps` +- Check PostgreSQL and EMQX are healthy +- Review API logs: `cd hummingbot-api && docker compose logs hummingbot-api` + +### Port conflicts + +If you have other services using the default ports, edit the respective `docker-compose.yml`: + +- **Condor**: Uses host network (no port conflicts) +- **API**: Default ports are 8000 (API), 5432 (PostgreSQL), 1883 (EMQX) + +### Conda installation issues + +If Anaconda installation fails: + +1. Install manually from https://www.anaconda.com/download +2. Ensure conda is in your PATH: `conda --version` +3. Re-run the installer with `--upgrade` flag + +## Support + +- **Documentation**: [Hummingbot Docs](https://docs.hummingbot.org) +- **Discord**: [Hummingbot Discord](https://discord.hummingbot.io) +- **GitHub Issues**: [Report bugs](https://github.com/hummingbot/deploy/issues) + +## Advanced Configuration + +### Modifying Environment Variables + +Each repository manages its own `.env` file. To modify settings: + +```bash +# Edit Condor environment +cd condor +nano .env +docker compose restart + +# Edit API environment (if installed) +cd ../hummingbot-api +nano .env +docker compose restart +``` + +### Customizing Condor Routines + +Add custom trading routines to `condor/routines/`: + +```bash +cd condor/routines +# Add your .py files here +docker compose restart +``` + +### Managing PostgreSQL Data + +API data is stored in Docker volumes. To backup: + +```bash +cd hummingbot-api +docker compose exec postgres pg_dump -U hbot hummingbot_api > backup.sql +``` + +To restore: + +```bash +cd hummingbot-api +cat backup.sql | docker compose exec -T postgres psql -U hbot hummingbot_api +``` + +## License + +Hummingbot Deploy is licensed under the HUMMINGBOT OPEN SOURCE LICENSE AGREEMENT. See LICENSE file for details. 
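+
+### Scheduling PostgreSQL Backups
+
+As a follow-up to the `pg_dump` command shown under "Managing PostgreSQL Data", backups can be automated with cron. This is an illustrative sketch: the install path and schedule are placeholders, and `-T` is added because cron runs without a TTY.
+
+```bash
+# Add with `crontab -e`: run a backup every day at 02:00 (adjust the path to your install directory).
+0 2 * * * cd /path/to/hummingbot-api && docker compose exec -T postgres pg_dump -U hbot hummingbot_api > backup_$(date +\%F).sql
+```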
diff --git a/bots/__init__.py b/bots/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/archived/.gitignore b/bots/archived/.gitignore deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/conf/controllers/.gitignore b/bots/conf/controllers/.gitignore deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/conf/scripts/.gitignore b/bots/conf/scripts/.gitignore deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/controllers/__init__.py b/bots/controllers/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/controllers/directional_trading/__init__.py b/bots/controllers/directional_trading/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/controllers/directional_trading/bollinger_v1.py b/bots/controllers/directional_trading/bollinger_v1.py deleted file mode 100644 index bfb476b136..0000000000 --- a/bots/controllers/directional_trading/bollinger_v1.py +++ /dev/null @@ -1,87 +0,0 @@ -from typing import List - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.directional_trading_controller_base import ( - DirectionalTradingControllerBase, - DirectionalTradingControllerConfigBase, -) - - -class BollingerV1ControllerConfig(DirectionalTradingControllerConfigBase): - controller_name: str = "bollinger_v1" - candles_config: List[CandlesConfig] = [] - candles_connector: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the connector for the candles data, leave empty to use the same exchange as the connector: ", - "prompt_on_new": True}) - candles_trading_pair: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the trading pair for the candles data, leave empty to use the same trading pair as the connector: ", - "prompt_on_new": True}) - interval: str = Field( - default="3m", - json_schema_extra={ - "prompt": "Enter the candle interval (e.g., 1m, 5m, 1h, 1d): ", - "prompt_on_new": True}) - bb_length: int = Field( - default=100, - json_schema_extra={"prompt": "Enter the Bollinger Bands length: ", "prompt_on_new": True}) - bb_std: float = Field(default=2.0) - bb_long_threshold: float = Field(default=0.0) - bb_short_threshold: float = Field(default=1.0) - - @field_validator("candles_connector", mode="before") - @classmethod - def set_candles_connector(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("connector_name") - return v - - @field_validator("candles_trading_pair", mode="before") - @classmethod - def set_candles_trading_pair(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("trading_pair") - return v - - -class BollingerV1Controller(DirectionalTradingControllerBase): - def __init__(self, config: BollingerV1ControllerConfig, *args, **kwargs): - self.config = config - self.max_records = self.config.bb_length - if len(self.config.candles_config) == 0: - self.config.candles_config = [CandlesConfig( - connector=config.candles_connector, - trading_pair=config.candles_trading_pair, - interval=config.interval, - max_records=self.max_records - )] - super().__init__(config, *args, **kwargs) - - async def update_processed_data(self): - df = 
self.market_data_provider.get_candles_df(connector_name=self.config.candles_connector, - trading_pair=self.config.candles_trading_pair, - interval=self.config.interval, - max_records=self.max_records) - # Add indicators - df.ta.bbands(length=self.config.bb_length, std=self.config.bb_std, append=True) - bbp = df[f"BBP_{self.config.bb_length}_{self.config.bb_std}"] - - # Generate signal - long_condition = bbp < self.config.bb_long_threshold - short_condition = bbp > self.config.bb_short_threshold - - # Generate signal - df["signal"] = 0 - df.loc[long_condition, "signal"] = 1 - df.loc[short_condition, "signal"] = -1 - - # Update processed data - self.processed_data["signal"] = df["signal"].iloc[-1] - self.processed_data["features"] = df diff --git a/bots/controllers/directional_trading/dman_v3.py b/bots/controllers/directional_trading/dman_v3.py deleted file mode 100644 index 8e4ee07e90..0000000000 --- a/bots/controllers/directional_trading/dman_v3.py +++ /dev/null @@ -1,218 +0,0 @@ -import time -from decimal import Decimal -from typing import List, Optional, Tuple - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.core.data_type.common import TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.directional_trading_controller_base import ( - DirectionalTradingControllerBase, - DirectionalTradingControllerConfigBase, -) -from hummingbot.strategy_v2.executors.dca_executor.data_types import DCAExecutorConfig, DCAMode -from hummingbot.strategy_v2.executors.position_executor.data_types import TrailingStop - - -class DManV3ControllerConfig(DirectionalTradingControllerConfigBase): - controller_name: str = "dman_v3" - candles_config: List[CandlesConfig] = [] - candles_connector: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the connector for the candles data, leave empty to use the same exchange as the connector: ", - "prompt_on_new": True}) - candles_trading_pair: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the trading pair for the candles data, leave empty to use the same trading pair as the connector: ", - "prompt_on_new": True}) - interval: str = Field( - default="3m", - json_schema_extra={ - "prompt": "Enter the candle interval (e.g., 1m, 5m, 1h, 1d): ", - "prompt_on_new": True}) - bb_length: int = Field( - default=100, - json_schema_extra={"prompt": "Enter the Bollinger Bands length: ", "prompt_on_new": True}) - bb_std: float = Field(default=2.0) - bb_long_threshold: float = Field(default=0.0) - bb_short_threshold: float = Field(default=1.0) - trailing_stop: Optional[TrailingStop] = Field( - default="0.015,0.005", - json_schema_extra={ - "prompt": "Enter the trailing stop parameters (activation_price, trailing_delta) as a comma-separated list: ", - "prompt_on_new": True, - } - ) - dca_spreads: List[Decimal] = Field( - default="0.001,0.018,0.15,0.25", - json_schema_extra={ - "prompt": "Enter the spreads for each DCA level (comma-separated) if dynamic_spread=True this value " - "will multiply the Bollinger Bands width, e.g. 
if the Bollinger Bands width is 0.1 (10%)" - "and the spread is 0.2, the distance of the order to the current price will be 0.02 (2%) ", - "prompt_on_new": True}, - ) - dca_amounts_pct: List[Decimal] = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the amounts for each DCA level (as a percentage of the total balance, " - "comma-separated). Don't worry about the final sum, it will be normalized. ", - "prompt_on_new": True}, - ) - dynamic_order_spread: bool = Field( - default=None, - json_schema_extra={"prompt": "Do you want to make the spread dynamic? (Yes/No) ", "prompt_on_new": True}) - dynamic_target: bool = Field( - default=None, - json_schema_extra={"prompt": "Do you want to make the target dynamic? (Yes/No) ", "prompt_on_new": True}) - activation_bounds: Optional[List[Decimal]] = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the activation bounds for the orders (e.g., 0.01 activates the next order when the price is closer than 1%): ", - "prompt_on_new": True, - } - ) - - @field_validator("activation_bounds", mode="before") - @classmethod - def parse_activation_bounds(cls, v): - if isinstance(v, str): - if v == "": - return None - return [Decimal(val) for val in v.split(",")] - if isinstance(v, list): - return [Decimal(val) for val in v] - return v - - @field_validator('dca_spreads', mode="before") - @classmethod - def validate_spreads(cls, v): - if isinstance(v, str): - return [Decimal(val) for val in v.split(",")] - return v - - @field_validator('dca_amounts_pct', mode="before") - @classmethod - def validate_amounts(cls, v, validation_info: ValidationInfo): - spreads = validation_info.data.get("dca_spreads") - if isinstance(v, str): - if v == "": - return [Decimal('1.0') / len(spreads) for _ in spreads] - amounts = [Decimal(val) for val in v.split(",")] - if len(amounts) != len(spreads): - raise ValueError("Amounts and spreads must have the same length") - return amounts - if v is None: - return [Decimal('1.0') / len(spreads) for _ in spreads] - return v - - @field_validator("candles_connector", mode="before") - @classmethod - def set_candles_connector(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("connector_name") - return v - - @field_validator("candles_trading_pair", mode="before") - @classmethod - def set_candles_trading_pair(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("trading_pair") - return v - - def get_spreads_and_amounts_in_quote(self, trade_type: TradeType, total_amount_quote: Decimal) -> Tuple[List[Decimal], List[Decimal]]: - amounts_pct = self.dca_amounts_pct - if amounts_pct is None: - # Equally distribute if amounts_pct is not set - spreads = self.dca_spreads - normalized_amounts_pct = [Decimal('1.0') / len(spreads) for _ in spreads] - else: - if trade_type == TradeType.BUY: - normalized_amounts_pct = [amt_pct / sum(amounts_pct) for amt_pct in amounts_pct] - else: # TradeType.SELL - normalized_amounts_pct = [amt_pct / sum(amounts_pct) for amt_pct in amounts_pct] - - return self.dca_spreads, [amt_pct * total_amount_quote for amt_pct in normalized_amounts_pct] - - -class DManV3Controller(DirectionalTradingControllerBase): - """ - Mean reversion strategy with Grid execution making use of Bollinger Bands indicator to make spreads dynamic - and shift the mid-price. 
- """ - def __init__(self, config: DManV3ControllerConfig, *args, **kwargs): - self.config = config - self.max_records = config.bb_length - if len(self.config.candles_config) == 0: - self.config.candles_config = [CandlesConfig( - connector=config.candles_connector, - trading_pair=config.candles_trading_pair, - interval=config.interval, - max_records=self.max_records - )] - super().__init__(config, *args, **kwargs) - - async def update_processed_data(self): - df = self.market_data_provider.get_candles_df(connector_name=self.config.candles_connector, - trading_pair=self.config.candles_trading_pair, - interval=self.config.interval, - max_records=self.max_records) - # Add indicators - df.ta.bbands(length=self.config.bb_length, std=self.config.bb_std, append=True) - - # Generate signal - long_condition = df[f"BBP_{self.config.bb_length}_{self.config.bb_std}"] < self.config.bb_long_threshold - short_condition = df[f"BBP_{self.config.bb_length}_{self.config.bb_std}"] > self.config.bb_short_threshold - - # Generate signal - df["signal"] = 0 - df.loc[long_condition, "signal"] = 1 - df.loc[short_condition, "signal"] = -1 - - # Update processed data - self.processed_data["signal"] = df["signal"].iloc[-1] - self.processed_data["features"] = df - - def get_spread_multiplier(self) -> Decimal: - if self.config.dynamic_order_spread: - df = self.processed_data["features"] - bb_width = df[f"BBB_{self.config.bb_length}_{self.config.bb_std}"].iloc[-1] - return Decimal(bb_width / 200) - else: - return Decimal("1.0") - - def get_executor_config(self, trade_type: TradeType, price: Decimal, amount: Decimal) -> DCAExecutorConfig: - spread, amounts_quote = self.config.get_spreads_and_amounts_in_quote(trade_type, amount * price) - spread_multiplier = self.get_spread_multiplier() - if trade_type == TradeType.BUY: - prices = [price * (1 - spread * spread_multiplier) for spread in spread] - else: - prices = [price * (1 + spread * spread_multiplier) for spread in spread] - if self.config.dynamic_target: - stop_loss = self.config.stop_loss * spread_multiplier - if self.config.trailing_stop: - trailing_stop = TrailingStop( - activation_price=self.config.trailing_stop.activation_price * spread_multiplier, - trailing_delta=self.config.trailing_stop.trailing_delta * spread_multiplier) - else: - trailing_stop = None - else: - stop_loss = self.config.stop_loss - trailing_stop = self.config.trailing_stop - return DCAExecutorConfig( - timestamp=time.time(), - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - side=trade_type, - mode=DCAMode.MAKER, - prices=prices, - amounts_quote=amounts_quote, - time_limit=self.config.time_limit, - stop_loss=stop_loss, - trailing_stop=trailing_stop, - leverage=self.config.leverage, - activation_bounds=self.config.activation_bounds, - ) diff --git a/bots/controllers/directional_trading/macd_bb_v1.py b/bots/controllers/directional_trading/macd_bb_v1.py deleted file mode 100644 index f792ecf8aa..0000000000 --- a/bots/controllers/directional_trading/macd_bb_v1.py +++ /dev/null @@ -1,100 +0,0 @@ -from typing import List - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.directional_trading_controller_base import ( - DirectionalTradingControllerBase, - DirectionalTradingControllerConfigBase, -) - - -class 
MACDBBV1ControllerConfig(DirectionalTradingControllerConfigBase): - controller_name: str = "macd_bb_v1" - candles_config: List[CandlesConfig] = [] - candles_connector: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the connector for the candles data, leave empty to use the same exchange as the connector: ", - "prompt_on_new": True}) - candles_trading_pair: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the trading pair for the candles data, leave empty to use the same trading pair as the connector: ", - "prompt_on_new": True}) - interval: str = Field( - default="3m", - json_schema_extra={ - "prompt": "Enter the candle interval (e.g., 1m, 5m, 1h, 1d): ", - "prompt_on_new": True}) - bb_length: int = Field( - default=100, - json_schema_extra={"prompt": "Enter the Bollinger Bands length: ", "prompt_on_new": True}) - bb_std: float = Field(default=2.0) - bb_long_threshold: float = Field(default=0.0) - bb_short_threshold: float = Field(default=1.0) - macd_fast: int = Field( - default=21, - json_schema_extra={"prompt": "Enter the MACD fast period: ", "prompt_on_new": True}) - macd_slow: int = Field( - default=42, - json_schema_extra={"prompt": "Enter the MACD slow period: ", "prompt_on_new": True}) - macd_signal: int = Field( - default=9, - json_schema_extra={"prompt": "Enter the MACD signal period: ", "prompt_on_new": True}) - - @field_validator("candles_connector", mode="before") - @classmethod - def set_candles_connector(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("connector_name") - return v - - @field_validator("candles_trading_pair", mode="before") - @classmethod - def set_candles_trading_pair(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("trading_pair") - return v - - -class MACDBBV1Controller(DirectionalTradingControllerBase): - - def __init__(self, config: MACDBBV1ControllerConfig, *args, **kwargs): - self.config = config - self.max_records = max(config.macd_slow, config.macd_fast, config.macd_signal, config.bb_length) + 20 - if len(self.config.candles_config) == 0: - self.config.candles_config = [CandlesConfig( - connector=config.candles_connector, - trading_pair=config.candles_trading_pair, - interval=config.interval, - max_records=self.max_records - )] - super().__init__(config, *args, **kwargs) - - async def update_processed_data(self): - df = self.market_data_provider.get_candles_df(connector_name=self.config.candles_connector, - trading_pair=self.config.candles_trading_pair, - interval=self.config.interval, - max_records=self.max_records) - # Add indicators - df.ta.bbands(length=self.config.bb_length, std=self.config.bb_std, append=True) - df.ta.macd(fast=self.config.macd_fast, slow=self.config.macd_slow, signal=self.config.macd_signal, append=True) - - bbp = df[f"BBP_{self.config.bb_length}_{self.config.bb_std}"] - macdh = df[f"MACDh_{self.config.macd_fast}_{self.config.macd_slow}_{self.config.macd_signal}"] - macd = df[f"MACD_{self.config.macd_fast}_{self.config.macd_slow}_{self.config.macd_signal}"] - - # Generate signal - long_condition = (bbp < self.config.bb_long_threshold) & (macdh > 0) & (macd < 0) - short_condition = (bbp > self.config.bb_short_threshold) & (macdh < 0) & (macd > 0) - - df["signal"] = 0 - df.loc[long_condition, "signal"] = 1 - df.loc[short_condition, "signal"] = -1 - - # Update processed data - self.processed_data["signal"] = df["signal"].iloc[-1] - self.processed_data["features"] = df diff --git 
a/bots/controllers/directional_trading/supertrend_v1.py b/bots/controllers/directional_trading/supertrend_v1.py deleted file mode 100644 index 10f3ea84f9..0000000000 --- a/bots/controllers/directional_trading/supertrend_v1.py +++ /dev/null @@ -1,88 +0,0 @@ -from typing import List - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.directional_trading_controller_base import ( - DirectionalTradingControllerBase, - DirectionalTradingControllerConfigBase, -) - - -class SuperTrendConfig(DirectionalTradingControllerConfigBase): - controller_name: str = "supertrend_v1" - candles_config: List[CandlesConfig] = [] - candles_connector: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the connector for the candles data, leave empty to use the same exchange as the connector: ", - "prompt_on_new": True}) - candles_trading_pair: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the trading pair for the candles data, leave empty to use the same trading pair as the connector: ", - "prompt_on_new": True}) - interval: str = Field( - default="3m", - json_schema_extra={"prompt": "Enter the candle interval (e.g., 1m, 5m, 1h, 1d): ", "prompt_on_new": True}) - length: int = Field( - default=20, - json_schema_extra={"prompt": "Enter the supertrend length: ", "prompt_on_new": True}) - multiplier: float = Field( - default=4.0, - json_schema_extra={"prompt": "Enter the supertrend multiplier: ", "prompt_on_new": True}) - percentage_threshold: float = Field( - default=0.01, - json_schema_extra={"prompt": "Enter the percentage threshold: ", "prompt_on_new": True}) - - @field_validator("candles_connector", mode="before") - @classmethod - def set_candles_connector(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("connector_name") - return v - - @field_validator("candles_trading_pair", mode="before") - @classmethod - def set_candles_trading_pair(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("trading_pair") - return v - - -class SuperTrend(DirectionalTradingControllerBase): - def __init__(self, config: SuperTrendConfig, *args, **kwargs): - self.config = config - self.max_records = config.length + 10 - if len(self.config.candles_config) == 0: - self.config.candles_config = [CandlesConfig( - connector=config.candles_connector, - trading_pair=config.candles_trading_pair, - interval=config.interval, - max_records=self.max_records - )] - super().__init__(config, *args, **kwargs) - - async def update_processed_data(self): - df = self.market_data_provider.get_candles_df(connector_name=self.config.candles_connector, - trading_pair=self.config.candles_trading_pair, - interval=self.config.interval, - max_records=self.max_records) - # Add indicators - df.ta.supertrend(length=self.config.length, multiplier=self.config.multiplier, append=True) - df["percentage_distance"] = abs(df["close"] - df[f"SUPERT_{self.config.length}_{self.config.multiplier}"]) / df["close"] - - # Generate long and short conditions - long_condition = (df[f"SUPERTd_{self.config.length}_{self.config.multiplier}"] == 1) & (df["percentage_distance"] < self.config.percentage_threshold) - short_condition = (df[f"SUPERTd_{self.config.length}_{self.config.multiplier}"] == -1) & (df["percentage_distance"] < 
self.config.percentage_threshold) - - # Choose side - df['signal'] = 0 - df.loc[long_condition, 'signal'] = 1 - df.loc[short_condition, 'signal'] = -1 - - # Update processed data - self.processed_data["signal"] = df["signal"].iloc[-1] - self.processed_data["features"] = df diff --git a/bots/controllers/generic/__init__.py b/bots/controllers/generic/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/controllers/generic/arbitrage_controller.py b/bots/controllers/generic/arbitrage_controller.py deleted file mode 100644 index 825a866323..0000000000 --- a/bots/controllers/generic/arbitrage_controller.py +++ /dev/null @@ -1,148 +0,0 @@ -from decimal import Decimal -from typing import List - -import pandas as pd - -from hummingbot.client.ui.interface_utils import format_df_for_printout -from hummingbot.core.data_type.common import MarketDict -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.controller_base import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.arbitrage_executor.data_types import ArbitrageExecutorConfig -from hummingbot.strategy_v2.executors.data_types import ConnectorPair -from hummingbot.strategy_v2.models.base import RunnableStatus -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, ExecutorAction - - -class ArbitrageControllerConfig(ControllerConfigBase): - controller_name: str = "arbitrage_controller" - candles_config: List[CandlesConfig] = [] - exchange_pair_1: ConnectorPair = ConnectorPair(connector_name="binance", trading_pair="PENGU-USDT") - exchange_pair_2: ConnectorPair = ConnectorPair(connector_name="solana_jupiter_mainnet-beta", trading_pair="PENGU-USDC") - min_profitability: Decimal = Decimal("0.01") - delay_between_executors: int = 10 # in seconds - max_executors_imbalance: int = 1 - rate_connector: str = "binance" - quote_conversion_asset: str = "USDT" - - def update_markets(self, markets: MarketDict) -> MarketDict: - return [markets.add_or_update(cp.connector_name, cp.trading_pair) for cp in [self.exchange_pair_1, self.exchange_pair_2]][-1] - - -class ArbitrageController(ControllerBase): - gas_token_by_network = { - "ethereum": "ETH", - "solana": "SOL", - "binance-smart-chain": "BNB", - "polygon": "POL", - "avalanche": "AVAX", - "dexalot": "AVAX" - } - - def __init__(self, config: ArbitrageControllerConfig, *args, **kwargs): - self.config = config - super().__init__(config, *args, **kwargs) - self._imbalance = 0 - self._last_buy_closed_timestamp = 0 - self._last_sell_closed_timestamp = 0 - self._len_active_buy_arbitrages = 0 - self._len_active_sell_arbitrages = 0 - self.base_asset = self.config.exchange_pair_1.trading_pair.split("-")[0] - self.initialize_rate_sources() - - def initialize_rate_sources(self): - rates_required = [] - for connector_pair in [self.config.exchange_pair_1, self.config.exchange_pair_2]: - base, quote = connector_pair.trading_pair.split("-") - # Add rate source for gas token - if connector_pair.is_amm_connector(): - gas_token = self.get_gas_token(connector_pair.connector_name) - if gas_token != quote: - rates_required.append(ConnectorPair(connector_name=self.config.rate_connector, - trading_pair=f"{gas_token}-{quote}")) - - # Add rate source for quote conversion asset - if quote != self.config.quote_conversion_asset: - rates_required.append(ConnectorPair(connector_name=self.config.rate_connector, - trading_pair=f"{quote}-{self.config.quote_conversion_asset}")) - - # Add rate source for 
trading pairs - rates_required.append(ConnectorPair(connector_name=connector_pair.connector_name, - trading_pair=connector_pair.trading_pair)) - if len(rates_required) > 0: - self.market_data_provider.initialize_rate_sources(rates_required) - - def get_gas_token(self, connector_name: str) -> str: - _, chain, _ = connector_name.split("_") - return self.gas_token_by_network[chain] - - async def update_processed_data(self): - pass - - def determine_executor_actions(self) -> List[ExecutorAction]: - self.update_arbitrage_stats() - executor_actions = [] - current_time = self.market_data_provider.time() - if (abs(self._imbalance) >= self.config.max_executors_imbalance or - self._last_buy_closed_timestamp + self.config.delay_between_executors > current_time or - self._last_sell_closed_timestamp + self.config.delay_between_executors > current_time): - return executor_actions - if self._len_active_buy_arbitrages == 0: - executor_actions.append(self.create_arbitrage_executor_action(self.config.exchange_pair_1, - self.config.exchange_pair_2)) - if self._len_active_sell_arbitrages == 0: - executor_actions.append(self.create_arbitrage_executor_action(self.config.exchange_pair_2, - self.config.exchange_pair_1)) - return executor_actions - - def create_arbitrage_executor_action(self, buying_exchange_pair: ConnectorPair, - selling_exchange_pair: ConnectorPair): - try: - if buying_exchange_pair.is_amm_connector(): - gas_token = self.get_gas_token(buying_exchange_pair.connector_name) - pair = buying_exchange_pair.trading_pair.split("-")[0] + "-" + gas_token - gas_conversion_price = self.market_data_provider.get_rate(pair) - elif selling_exchange_pair.is_amm_connector(): - gas_token = self.get_gas_token(selling_exchange_pair.connector_name) - pair = selling_exchange_pair.trading_pair.split("-")[0] + "-" + gas_token - gas_conversion_price = self.market_data_provider.get_rate(pair) - else: - gas_conversion_price = None - rate = self.market_data_provider.get_rate(self.base_asset + "-" + self.config.quote_conversion_asset) - amount_quantized = self.market_data_provider.quantize_order_amount( - buying_exchange_pair.connector_name, buying_exchange_pair.trading_pair, - self.config.total_amount_quote / rate) - arbitrage_config = ArbitrageExecutorConfig( - timestamp=self.market_data_provider.time(), - buying_market=buying_exchange_pair, - selling_market=selling_exchange_pair, - order_amount=amount_quantized, - min_profitability=self.config.min_profitability, - gas_conversion_price=gas_conversion_price, - ) - return CreateExecutorAction( - executor_config=arbitrage_config, - controller_id=self.config.id) - except Exception as e: - self.logger().error( - f"Error creating executor to buy on {buying_exchange_pair.connector_name} and sell on {selling_exchange_pair.connector_name}, {e}") - - def update_arbitrage_stats(self): - closed_executors = [e for e in self.executors_info if e.status == RunnableStatus.TERMINATED] - active_executors = [e for e in self.executors_info if e.status != RunnableStatus.TERMINATED] - buy_arbitrages = [arbitrage for arbitrage in closed_executors if - arbitrage.config.buying_market == self.config.exchange_pair_1] - sell_arbitrages = [arbitrage for arbitrage in closed_executors if - arbitrage.config.buying_market == self.config.exchange_pair_2] - self._imbalance = len(buy_arbitrages) - len(sell_arbitrages) - self._last_buy_closed_timestamp = max([arbitrage.close_timestamp for arbitrage in buy_arbitrages]) if len( - buy_arbitrages) > 0 else 0 - self._last_sell_closed_timestamp = 
max([arbitrage.close_timestamp for arbitrage in sell_arbitrages]) if len( - sell_arbitrages) > 0 else 0 - self._len_active_buy_arbitrages = len([arbitrage for arbitrage in active_executors if - arbitrage.config.buying_market == self.config.exchange_pair_1]) - self._len_active_sell_arbitrages = len([arbitrage for arbitrage in active_executors if - arbitrage.config.buying_market == self.config.exchange_pair_2]) - - def to_format_status(self) -> List[str]: - all_executors_custom_info = pd.DataFrame(e.custom_info for e in self.executors_info) - return [format_df_for_printout(all_executors_custom_info, table_format="psql", )] diff --git a/bots/controllers/generic/grid_strike.py b/bots/controllers/generic/grid_strike.py deleted file mode 100644 index 825082c4ca..0000000000 --- a/bots/controllers/generic/grid_strike.py +++ /dev/null @@ -1,196 +0,0 @@ -from decimal import Decimal -from typing import List, Optional - -from pydantic import Field - -from hummingbot.core.data_type.common import MarketDict, OrderType, PositionMode, PriceType, TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.data_types import ConnectorPair -from hummingbot.strategy_v2.executors.grid_executor.data_types import GridExecutorConfig -from hummingbot.strategy_v2.executors.position_executor.data_types import TripleBarrierConfig -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, ExecutorAction -from hummingbot.strategy_v2.models.executors_info import ExecutorInfo - - -class GridStrikeConfig(ControllerConfigBase): - """ - Configuration required to run the GridStrike strategy for one connector and trading pair. 
- """ - controller_type: str = "generic" - controller_name: str = "grid_strike" - candles_config: List[CandlesConfig] = [] - - # Account configuration - leverage: int = 20 - position_mode: PositionMode = PositionMode.HEDGE - - # Boundaries - connector_name: str = "binance_perpetual" - trading_pair: str = "WLD-USDT" - side: TradeType = TradeType.BUY - start_price: Decimal = Field(default=Decimal("0.58"), json_schema_extra={"is_updatable": True}) - end_price: Decimal = Field(default=Decimal("0.95"), json_schema_extra={"is_updatable": True}) - limit_price: Decimal = Field(default=Decimal("0.55"), json_schema_extra={"is_updatable": True}) - - # Profiling - total_amount_quote: Decimal = Field(default=Decimal("1000"), json_schema_extra={"is_updatable": True}) - min_spread_between_orders: Optional[Decimal] = Field(default=Decimal("0.001"), json_schema_extra={"is_updatable": True}) - min_order_amount_quote: Optional[Decimal] = Field(default=Decimal("5"), json_schema_extra={"is_updatable": True}) - - # Execution - max_open_orders: int = Field(default=2, json_schema_extra={"is_updatable": True}) - max_orders_per_batch: Optional[int] = Field(default=1, json_schema_extra={"is_updatable": True}) - order_frequency: int = Field(default=3, json_schema_extra={"is_updatable": True}) - activation_bounds: Optional[Decimal] = Field(default=None, json_schema_extra={"is_updatable": True}) - keep_position: bool = Field(default=False, json_schema_extra={"is_updatable": True}) - - # Risk Management - triple_barrier_config: TripleBarrierConfig = TripleBarrierConfig( - take_profit=Decimal("0.001"), - open_order_type=OrderType.LIMIT_MAKER, - take_profit_order_type=OrderType.LIMIT_MAKER, - ) - - def update_markets(self, markets: MarketDict) -> MarketDict: - return markets.add_or_update(self.connector_name, self.trading_pair) - - -class GridStrike(ControllerBase): - def __init__(self, config: GridStrikeConfig, *args, **kwargs): - super().__init__(config, *args, **kwargs) - self.config = config - self._last_grid_levels_update = 0 - self.trading_rules = None - self.grid_levels = [] - self.initialize_rate_sources() - - def initialize_rate_sources(self): - self.market_data_provider.initialize_rate_sources([ConnectorPair(connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair)]) - - def active_executors(self) -> List[ExecutorInfo]: - return [ - executor for executor in self.executors_info - if executor.is_active - ] - - def is_inside_bounds(self, price: Decimal) -> bool: - return self.config.start_price <= price <= self.config.end_price - - def determine_executor_actions(self) -> List[ExecutorAction]: - mid_price = self.market_data_provider.get_price_by_type( - self.config.connector_name, self.config.trading_pair, PriceType.MidPrice) - if len(self.active_executors()) == 0 and self.is_inside_bounds(mid_price): - return [CreateExecutorAction( - controller_id=self.config.id, - executor_config=GridExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - start_price=self.config.start_price, - end_price=self.config.end_price, - leverage=self.config.leverage, - limit_price=self.config.limit_price, - side=self.config.side, - total_amount_quote=self.config.total_amount_quote, - min_spread_between_orders=self.config.min_spread_between_orders, - min_order_amount_quote=self.config.min_order_amount_quote, - max_open_orders=self.config.max_open_orders, - max_orders_per_batch=self.config.max_orders_per_batch, - 
order_frequency=self.config.order_frequency, - activation_bounds=self.config.activation_bounds, - triple_barrier_config=self.config.triple_barrier_config, - level_id=None, - keep_position=self.config.keep_position, - ))] - return [] - - async def update_processed_data(self): - pass - - def to_format_status(self) -> List[str]: - status = [] - mid_price = self.market_data_provider.get_price_by_type( - self.config.connector_name, self.config.trading_pair, PriceType.MidPrice) - # Define standard box width for consistency - box_width = 114 - # Top Grid Configuration box with simple borders - status.append("┌" + "─" * box_width + "┐") - # First line: Grid Configuration and Mid Price - left_section = "Grid Configuration:" - padding = box_width - len(left_section) - 4 # -4 for the border characters and spacing - config_line1 = f"│ {left_section}{' ' * padding}" - padding2 = box_width - len(config_line1) + 1 # +1 for correct right border alignment - config_line1 += " " * padding2 + "│" - status.append(config_line1) - # Second line: Configuration parameters - config_line2 = f"│ Start: {self.config.start_price:.4f} │ End: {self.config.end_price:.4f} │ Side: {self.config.side} │ Limit: {self.config.limit_price:.4f} │ Mid Price: {mid_price:.4f} │" - padding = box_width - len(config_line2) + 1 # +1 for correct right border alignment - config_line2 += " " * padding + "│" - status.append(config_line2) - # Third line: Max orders and Inside bounds - config_line3 = f"│ Max Orders: {self.config.max_open_orders} │ Inside bounds: {1 if self.is_inside_bounds(mid_price) else 0}" - padding = box_width - len(config_line3) + 1 # +1 for correct right border alignment - config_line3 += " " * padding + "│" - status.append(config_line3) - status.append("└" + "─" * box_width + "┘") - for level in self.active_executors(): - # Define column widths for perfect alignment - col_width = box_width // 3 # Dividing the total width by 3 for equal columns - total_width = box_width - # Grid Status header - use long line and running status - status_header = f"Grid Status: {level.id} (RunnableStatus.RUNNING)" - status_line = f"┌ {status_header}" + "─" * (total_width - len(status_header) - 2) + "┐" - status.append(status_line) - # Calculate exact column widths for perfect alignment - col1_end = col_width - # Column headers - header_line = "│ Level Distribution" + " " * (col1_end - 20) + "│" - header_line += " Order Statistics" + " " * (col_width - 18) + "│" - header_line += " Performance Metrics" + " " * (col_width - 21) + "│" - status.append(header_line) - # Data for the three columns - level_dist_data = [ - f"NOT_ACTIVE: {len(level.custom_info['levels_by_state'].get('NOT_ACTIVE', []))}", - f"OPEN_ORDER_PLACED: {len(level.custom_info['levels_by_state'].get('OPEN_ORDER_PLACED', []))}", - f"OPEN_ORDER_FILLED: {len(level.custom_info['levels_by_state'].get('OPEN_ORDER_FILLED', []))}", - f"CLOSE_ORDER_PLACED: {len(level.custom_info['levels_by_state'].get('CLOSE_ORDER_PLACED', []))}", - f"COMPLETE: {len(level.custom_info['levels_by_state'].get('COMPLETE', []))}" - ] - order_stats_data = [ - f"Total: {sum(len(level.custom_info[k]) for k in ['filled_orders', 'failed_orders', 'canceled_orders'])}", - f"Filled: {len(level.custom_info['filled_orders'])}", - f"Failed: {len(level.custom_info['failed_orders'])}", - f"Canceled: {len(level.custom_info['canceled_orders'])}" - ] - perf_metrics_data = [ - f"Buy Vol: {level.custom_info['realized_buy_size_quote']:.4f}", - f"Sell Vol: {level.custom_info['realized_sell_size_quote']:.4f}", - f"R. 
PnL: {level.custom_info['realized_pnl_quote']:.4f}", - f"R. Fees: {level.custom_info['realized_fees_quote']:.4f}", - f"P. PnL: {level.custom_info['position_pnl_quote']:.4f}", - f"Position: {level.custom_info['position_size_quote']:.4f}" - ] - # Build rows with perfect alignment - max_rows = max(len(level_dist_data), len(order_stats_data), len(perf_metrics_data)) - for i in range(max_rows): - col1 = level_dist_data[i] if i < len(level_dist_data) else "" - col2 = order_stats_data[i] if i < len(order_stats_data) else "" - col3 = perf_metrics_data[i] if i < len(perf_metrics_data) else "" - row = "│ " + col1 - row += " " * (col1_end - len(col1) - 2) # -2 for the "│ " at the start - row += "│ " + col2 - row += " " * (col_width - len(col2) - 2) # -2 for the "│ " before col2 - row += "│ " + col3 - row += " " * (col_width - len(col3) - 2) # -2 for the "│ " before col3 - row += "│" - status.append(row) - # Liquidity line with perfect alignment - status.append("├" + "─" * total_width + "┤") - liquidity_line = f"│ Open Liquidity: {level.custom_info['open_liquidity_placed']:.4f} │ Close Liquidity: {level.custom_info['close_liquidity_placed']:.4f} │" - liquidity_line += " " * (total_width - len(liquidity_line) + 1) # +1 for correct right border alignment - liquidity_line += "│" - status.append(liquidity_line) - status.append("└" + "─" * total_width + "┘") - return status diff --git a/bots/controllers/generic/pmm.py b/bots/controllers/generic/pmm.py deleted file mode 100644 index 97e5513565..0000000000 --- a/bots/controllers/generic/pmm.py +++ /dev/null @@ -1,647 +0,0 @@ -from decimal import Decimal -from typing import List, Optional, Tuple, Union - -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.core.data_type.common import MarketDict, OrderType, PositionMode, PriceType, TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.controller_base import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.data_types import ConnectorPair -from hummingbot.strategy_v2.executors.order_executor.data_types import ExecutionStrategy, OrderExecutorConfig -from hummingbot.strategy_v2.executors.position_executor.data_types import PositionExecutorConfig, TripleBarrierConfig -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, ExecutorAction, StopExecutorAction -from hummingbot.strategy_v2.models.executors import CloseType - - -class PMMConfig(ControllerConfigBase): - """ - This class represents the base configuration for a market making controller. 
- """ - controller_type: str = "generic" - controller_name: str = "pmm" - candles_config: List[CandlesConfig] = [] - connector_name: str = Field( - default="binance", - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the name of the connector to use (e.g., binance):", - } - ) - trading_pair: str = Field( - default="BTC-FDUSD", - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the trading pair to trade on (e.g., BTC-FDUSD):", - } - ) - portfolio_allocation: Decimal = Field( - default=Decimal("0.05"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the maximum quote exposure percentage around mid price (e.g., 0.05 for 5% of total quote allocation):", - } - ) - target_base_pct: Decimal = Field( - default=Decimal("0.2"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the target base percentage (e.g., 0.2 for 20%):", - } - ) - min_base_pct: Decimal = Field( - default=Decimal("0.1"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the minimum base percentage (e.g., 0.1 for 10%):", - } - ) - max_base_pct: Decimal = Field( - default=Decimal("0.4"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the maximum base percentage (e.g., 0.4 for 40%):", - } - ) - buy_spreads: List[float] = Field( - default="0.01,0.02", - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of buy spreads (e.g., '0.01, 0.02'):", - } - ) - sell_spreads: List[float] = Field( - default="0.01,0.02", - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of sell spreads (e.g., '0.01, 0.02'):", - } - ) - buy_amounts_pct: Union[List[Decimal], None] = Field( - default=None, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of buy amounts as percentages (e.g., '50, 50'), or leave blank to distribute equally:", - } - ) - sell_amounts_pct: Union[List[Decimal], None] = Field( - default=None, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of sell amounts as percentages (e.g., '50, 50'), or leave blank to distribute equally:", - } - ) - executor_refresh_time: int = Field( - default=60 * 5, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the refresh time in seconds for executors (e.g., 300 for 5 minutes):", - } - ) - cooldown_time: int = Field( - default=15, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the cooldown time in seconds between after replacing an executor that traded (e.g., 15):", - } - ) - leverage: int = Field( - default=20, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the leverage to use for trading (e.g., 20 for 20x leverage). 
Set it to 1 for spot trading:", - } - ) - position_mode: PositionMode = Field(default="HEDGE") - take_profit: Optional[Decimal] = Field( - default=Decimal("0.02"), gt=0, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the take profit as a decimal (e.g., 0.02 for 2%):", - } - ) - take_profit_order_type: Optional[OrderType] = Field( - default="LIMIT_MAKER", - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the order type for take profit (e.g., LIMIT_MAKER):", - } - ) - max_skew: Decimal = Field( - default=Decimal("1.0"), - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the maximum skew factor (e.g., 1.0):", - } - ) - global_take_profit: Decimal = Decimal("0.02") - global_stop_loss: Decimal = Decimal("0.05") - - @field_validator("take_profit", mode="before") - @classmethod - def validate_target(cls, v): - if isinstance(v, str): - if v == "": - return None - return Decimal(v) - return v - - @field_validator('take_profit_order_type', mode="before") - @classmethod - def validate_order_type(cls, v) -> OrderType: - if isinstance(v, OrderType): - return v - elif v is None: - return OrderType.MARKET - elif isinstance(v, str): - if v.upper() in OrderType.__members__: - return OrderType[v.upper()] - elif isinstance(v, int): - try: - return OrderType(v) - except ValueError: - pass - raise ValueError(f"Invalid order type: {v}. Valid options are: {', '.join(OrderType.__members__)}") - - @field_validator('buy_spreads', 'sell_spreads', mode="before") - @classmethod - def parse_spreads(cls, v): - if v is None: - return [] - if isinstance(v, str): - if v == "": - return [] - return [float(x.strip()) for x in v.split(',')] - return v - - @field_validator('buy_amounts_pct', 'sell_amounts_pct', mode="before") - @classmethod - def parse_and_validate_amounts(cls, v, validation_info: ValidationInfo): - field_name = validation_info.field_name - if v is None or v == "": - spread_field = field_name.replace('amounts_pct', 'spreads') - return [1 for _ in validation_info.data[spread_field]] - if isinstance(v, str): - return [float(x.strip()) for x in v.split(',')] - elif isinstance(v, list) and len(v) != len(validation_info.data[field_name.replace('amounts_pct', 'spreads')]): - raise ValueError( - f"The number of {field_name} must match the number of {field_name.replace('amounts_pct', 'spreads')}.") - return v - - @field_validator('position_mode', mode="before") - @classmethod - def validate_position_mode(cls, v) -> PositionMode: - if isinstance(v, str): - if v.upper() in PositionMode.__members__: - return PositionMode[v.upper()] - raise ValueError(f"Invalid position mode: {v}. 
Valid options are: {', '.join(PositionMode.__members__)}") - return v - - @property - def triple_barrier_config(self) -> TripleBarrierConfig: - return TripleBarrierConfig( - take_profit=self.take_profit, - trailing_stop=None, - open_order_type=OrderType.LIMIT_MAKER, # Defaulting to LIMIT as is a Maker Controller - take_profit_order_type=self.take_profit_order_type, - stop_loss_order_type=OrderType.MARKET, # Defaulting to MARKET as per requirement - time_limit_order_type=OrderType.MARKET # Defaulting to MARKET as per requirement - ) - - def update_parameters(self, trade_type: TradeType, new_spreads: Union[List[float], str], new_amounts_pct: Optional[Union[List[int], str]] = None): - spreads_field = 'buy_spreads' if trade_type == TradeType.BUY else 'sell_spreads' - amounts_pct_field = 'buy_amounts_pct' if trade_type == TradeType.BUY else 'sell_amounts_pct' - - setattr(self, spreads_field, self.parse_spreads(new_spreads)) - if new_amounts_pct is not None: - setattr(self, amounts_pct_field, self.parse_and_validate_amounts(new_amounts_pct, self.__dict__, self.__fields__[amounts_pct_field])) - else: - setattr(self, amounts_pct_field, [1 for _ in getattr(self, spreads_field)]) - - def get_spreads_and_amounts_in_quote(self, trade_type: TradeType) -> Tuple[List[float], List[float]]: - buy_amounts_pct = getattr(self, 'buy_amounts_pct') - sell_amounts_pct = getattr(self, 'sell_amounts_pct') - - # Calculate total percentages across buys and sells - total_pct = sum(buy_amounts_pct) + sum(sell_amounts_pct) - - # Normalize amounts_pct based on total percentages - if trade_type == TradeType.BUY: - normalized_amounts_pct = [amt_pct / total_pct for amt_pct in buy_amounts_pct] - else: # TradeType.SELL - normalized_amounts_pct = [amt_pct / total_pct for amt_pct in sell_amounts_pct] - - spreads = getattr(self, f'{trade_type.name.lower()}_spreads') - return spreads, [amt_pct * self.total_amount_quote * self.portfolio_allocation for amt_pct in normalized_amounts_pct] - - def update_markets(self, markets: MarketDict) -> MarketDict: - return markets.add_or_update(self.connector_name, self.trading_pair) - - -class PMM(ControllerBase): - """ - This class represents the base class for a market making controller. - """ - - def __init__(self, config: PMMConfig, *args, **kwargs): - super().__init__(config, *args, **kwargs) - self.config = config - self.market_data_provider.initialize_rate_sources([ConnectorPair( - connector_name=config.connector_name, trading_pair=config.trading_pair)]) - - def determine_executor_actions(self) -> List[ExecutorAction]: - """ - Determine actions based on the provided executor handler report. - """ - actions = [] - actions.extend(self.create_actions_proposal()) - actions.extend(self.stop_actions_proposal()) - return actions - - def create_actions_proposal(self) -> List[ExecutorAction]: - """ - Create actions proposal based on the current state of the controller. 
- """ - create_actions = [] - - # Check if a position reduction executor for TP/SL is already sent - reduction_executor_exists = any( - executor.is_active and - executor.custom_info.get("level_id") == "global_tp_sl" - for executor in self.executors_info - ) - - if (not reduction_executor_exists and - self.processed_data["current_base_pct"] > self.config.target_base_pct and - (self.processed_data["unrealized_pnl_pct"] > self.config.global_take_profit or - self.processed_data["unrealized_pnl_pct"] < -self.config.global_stop_loss)): - - # Create a global take profit or stop loss executor - create_actions.append(CreateExecutorAction( - controller_id=self.config.id, - executor_config=OrderExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - side=TradeType.SELL, - amount=self.processed_data["position_amount"], - execution_strategy=ExecutionStrategy.MARKET, - price=self.processed_data["reference_price"], - level_id="global_tp_sl" # Use a specific level_id to identify this as a TP/SL executor - ) - )) - return create_actions - levels_to_execute = self.get_levels_to_execute() - # Pre-calculate all spreads and amounts for buy and sell sides - buy_spreads, buy_amounts_quote = self.config.get_spreads_and_amounts_in_quote(TradeType.BUY) - sell_spreads, sell_amounts_quote = self.config.get_spreads_and_amounts_in_quote(TradeType.SELL) - reference_price = Decimal(self.processed_data["reference_price"]) - # Get current position info for skew calculation - current_pct = self.processed_data["current_base_pct"] - min_pct = self.config.min_base_pct - max_pct = self.config.max_base_pct - # Calculate skew factors (0 to 1) - how much to scale orders - if max_pct > min_pct: # Prevent division by zero - # For buys: full size at min_pct, decreasing as we approach max_pct - buy_skew = (max_pct - current_pct) / (max_pct - min_pct) - # For sells: full size at max_pct, decreasing as we approach min_pct - sell_skew = (current_pct - min_pct) / (max_pct - min_pct) - # Ensure values stay between 0.2 and 1.0 (never go below 20% of original size) - buy_skew = max(min(buy_skew, Decimal("1.0")), self.config.max_skew) - sell_skew = max(min(sell_skew, Decimal("1.0")), self.config.max_skew) - else: - buy_skew = sell_skew = Decimal("1.0") - # Create executors for each level - for level_id in levels_to_execute: - trade_type = self.get_trade_type_from_level_id(level_id) - level = self.get_level_from_level_id(level_id) - if trade_type == TradeType.BUY: - spread_in_pct = Decimal(buy_spreads[level]) * Decimal(self.processed_data["spread_multiplier"]) - amount_quote = Decimal(buy_amounts_quote[level]) - skew = buy_skew - else: # TradeType.SELL - spread_in_pct = Decimal(sell_spreads[level]) * Decimal(self.processed_data["spread_multiplier"]) - amount_quote = Decimal(sell_amounts_quote[level]) - skew = sell_skew - # Calculate price - side_multiplier = Decimal("-1") if trade_type == TradeType.BUY else Decimal("1") - price = reference_price * (Decimal("1") + side_multiplier * spread_in_pct) - # Calculate amount with skew applied - amount = self.market_data_provider.quantize_order_amount(self.config.connector_name, - self.config.trading_pair, - (amount_quote / price) * skew) - if amount == Decimal("0"): - self.logger().warning(f"The amount of the level {level_id} is 0. 
Skipping.") - executor_config = self.get_executor_config(level_id, price, amount) - if executor_config is not None: - create_actions.append(CreateExecutorAction( - controller_id=self.config.id, - executor_config=executor_config - )) - return create_actions - - def get_levels_to_execute(self) -> List[str]: - working_levels = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: x.is_active or (x.close_type == CloseType.STOP_LOSS and self.market_data_provider.time() - x.close_timestamp < self.config.cooldown_time) - ) - working_levels_ids = [executor.custom_info["level_id"] for executor in working_levels] - return self.get_not_active_levels_ids(working_levels_ids) - - def stop_actions_proposal(self) -> List[ExecutorAction]: - """ - Create a list of actions to stop the executors based on order refresh and early stop conditions. - """ - stop_actions = [] - stop_actions.extend(self.executors_to_refresh()) - stop_actions.extend(self.executors_to_early_stop()) - return stop_actions - - def executors_to_refresh(self) -> List[ExecutorAction]: - executors_to_refresh = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: not x.is_trading and x.is_active and self.market_data_provider.time() - x.timestamp > self.config.executor_refresh_time) - return [StopExecutorAction( - controller_id=self.config.id, - keep_position=True, - executor_id=executor.id) for executor in executors_to_refresh] - - def executors_to_early_stop(self) -> List[ExecutorAction]: - """ - Get the executors to early stop based on the current state of market data. This method can be overridden to - implement custom behavior. - """ - executors_to_early_stop = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: x.is_active and x.is_trading and self.market_data_provider.time() - x.custom_info["open_order_last_update"] > self.config.cooldown_time) - return [StopExecutorAction( - controller_id=self.config.id, - keep_position=True, - executor_id=executor.id) for executor in executors_to_early_stop] - - async def update_processed_data(self): - """ - Update the processed data for the controller. This method should be reimplemented to modify the reference price - and spread multiplier based on the market data. By default, it will update the reference price as mid price and - the spread multiplier as 1. 
- """ - reference_price = self.market_data_provider.get_price_by_type(self.config.connector_name, - self.config.trading_pair, PriceType.MidPrice) - position_held = next((position for position in self.positions_held if - (position.trading_pair == self.config.trading_pair) & - (position.connector_name == self.config.connector_name)), None) - target_position = self.config.total_amount_quote * self.config.target_base_pct - if position_held is not None: - position_amount = position_held.amount - current_base_pct = position_held.amount_quote / self.config.total_amount_quote - deviation = (target_position - position_held.amount_quote) / target_position - unrealized_pnl_pct = position_held.unrealized_pnl_quote / position_held.amount_quote if position_held.amount_quote != 0 else Decimal("0") - else: - position_amount = 0 - current_base_pct = 0 - deviation = 1 - unrealized_pnl_pct = 0 - - self.processed_data = {"reference_price": Decimal(reference_price), "spread_multiplier": Decimal("1"), - "deviation": deviation, "current_base_pct": current_base_pct, - "unrealized_pnl_pct": unrealized_pnl_pct, "position_amount": position_amount} - - def get_executor_config(self, level_id: str, price: Decimal, amount: Decimal): - """ - Get the executor config for a given level id. - """ - trade_type = self.get_trade_type_from_level_id(level_id) - level_multiplier = self.get_level_from_level_id(level_id) + 1 - return PositionExecutorConfig( - timestamp=self.market_data_provider.time(), - level_id=level_id, - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - entry_price=price, - amount=amount, - triple_barrier_config=self.config.triple_barrier_config.new_instance_with_adjusted_volatility(level_multiplier), - leverage=self.config.leverage, - side=trade_type, - ) - - def get_level_id_from_side(self, trade_type: TradeType, level: int) -> str: - """ - Get the level id based on the trade type and the level. - """ - return f"{trade_type.name.lower()}_{level}" - - def get_trade_type_from_level_id(self, level_id: str) -> TradeType: - return TradeType.BUY if level_id.startswith("buy") else TradeType.SELL - - def get_level_from_level_id(self, level_id: str) -> int: - return int(level_id.split('_')[1]) - - def get_not_active_levels_ids(self, active_levels_ids: List[str]) -> List[str]: - """ - Get the levels to execute based on the current state of the controller. - """ - buy_ids_missing = [self.get_level_id_from_side(TradeType.BUY, level) for level in range(len(self.config.buy_spreads)) - if self.get_level_id_from_side(TradeType.BUY, level) not in active_levels_ids] - sell_ids_missing = [self.get_level_id_from_side(TradeType.SELL, level) for level in range(len(self.config.sell_spreads)) - if self.get_level_id_from_side(TradeType.SELL, level) not in active_levels_ids] - if self.processed_data["current_base_pct"] < self.config.min_base_pct: - return buy_ids_missing - elif self.processed_data["current_base_pct"] > self.config.max_base_pct: - return sell_ids_missing - return buy_ids_missing + sell_ids_missing - - def to_format_status(self) -> List[str]: - """ - Get the status of the controller in a formatted way with ASCII visualizations. 
- """ - from decimal import Decimal - from itertools import zip_longest - - status = [] - - # Get all required data - base_pct = self.processed_data['current_base_pct'] - min_pct = self.config.min_base_pct - max_pct = self.config.max_base_pct - target_pct = self.config.target_base_pct - skew = base_pct - target_pct - skew_pct = skew / target_pct if target_pct != 0 else Decimal('0') - max_skew = getattr(self.config, 'max_skew', Decimal('0.0')) - - # Fixed widths - adjusted based on screenshot analysis - outer_width = 92 # Total width including outer borders - inner_width = outer_width - 4 # Inner content width - half_width = (inner_width) // 2 - 1 # Width of each column in split sections - bar_width = inner_width - 15 # Width of visualization bars (accounting for label) - - # Header - omit ID since it's shown above in controller header - status.append("╒" + "═" * (inner_width) + "╕") - - header_line = ( - f"{self.config.connector_name}:{self.config.trading_pair} " - f"Price: {self.processed_data['reference_price']} " - f"Alloc: {self.config.portfolio_allocation:.1%} " - f"Spread Mult: {self.processed_data['spread_multiplier']} |" - ) - - status.append(f"│ {header_line:<{inner_width}} │") - - # Position and PnL sections with precise widths - status.append(f"├{'─' * half_width}┬{'─' * half_width}┤") - status.append(f"│ {'POSITION STATUS':<{half_width - 2}} │ {'PROFIT & LOSS':<{half_width - 2}} │") - status.append(f"├{'─' * half_width}┼{'─' * half_width}┤") - - # Position data for left column - position_info = [ - f"Current: {base_pct:.2%}", - f"Target: {target_pct:.2%}", - f"Min/Max: {min_pct:.2%}/{max_pct:.2%}", - f"Skew: {skew_pct:+.2%} (max {max_skew:.2%})" - ] - - # PnL data for right column - pnl_info = [] - if 'unrealized_pnl_pct' in self.processed_data: - pnl = self.processed_data['unrealized_pnl_pct'] - pnl_sign = "+" if pnl >= 0 else "" - pnl_info = [ - f"Unrealized: {pnl_sign}{pnl:.2%}", - f"Take Profit: {self.config.global_take_profit:.2%}", - f"Stop Loss: {-self.config.global_stop_loss:.2%}", - f"Leverage: {self.config.leverage}x" - ] - - # Display position and PnL info side by side with exact spacing - for pos_line, pnl_line in zip_longest(position_info, pnl_info, fillvalue=""): - status.append(f"│ {pos_line:<{half_width - 2}} │ {pnl_line:<{half_width - 2}} │") - - # Adjust visualization section - ensure consistent spacing - status.append(f"├{'─' * (inner_width)}┤") - status.append(f"│ {'VISUALIZATIONS':<{inner_width}} │") - status.append(f"├{'─' * (inner_width)}┤") - - # Position bar with exact spacing and characters - filled_width = int(base_pct * bar_width) - min_pos = int(min_pct * bar_width) - max_pos = int(max_pct * bar_width) - target_pos = int(target_pct * bar_width) - - # Build position bar character by character - position_bar = "" - for i in range(bar_width): - if i == filled_width: - position_bar += "◆" # Current position - elif i == min_pos: - position_bar += "┃" # Min threshold - elif i == max_pos: - position_bar += "┃" # Max threshold - elif i == target_pos: - position_bar += "┇" # Target threshold - elif i < filled_width: - position_bar += "█" # Filled area - else: - position_bar += "░" # Empty area - - # Ensure consistent label spacing as seen in screenshot - status.append(f"│ Position: [{position_bar}] │") - - # Skew visualization with exact spacing - skew_bar_width = bar_width - center = skew_bar_width // 2 - skew_pos = center + int(skew_pct * center * 2) - skew_pos = max(0, min(skew_bar_width - 1, skew_pos)) - - # Build skew bar character by character - 
skew_bar = "" - for i in range(skew_bar_width): - if i == center: - skew_bar += "┃" # Center line - elif i == skew_pos: - skew_bar += "⬤" # Current skew - else: - skew_bar += "─" # Empty line - - # Match spacing from screenshot with exact character counts - status.append(f"│ Skew: [{skew_bar}] │") - - # PnL visualization if available - if 'unrealized_pnl_pct' in self.processed_data: - pnl = self.processed_data['unrealized_pnl_pct'] - take_profit = self.config.global_take_profit - stop_loss = -self.config.global_stop_loss - - pnl_bar_width = bar_width - center = pnl_bar_width // 2 - - # Calculate positions with exact scaling - max_range = max(abs(take_profit), abs(stop_loss), abs(pnl)) * Decimal("1.2") - scale = (pnl_bar_width // 2) / max_range - - pnl_pos = center + int(pnl * scale) - take_profit_pos = center + int(take_profit * scale) - stop_loss_pos = center + int(stop_loss * scale) - - # Ensure positions are within bounds - pnl_pos = max(0, min(pnl_bar_width - 1, pnl_pos)) - take_profit_pos = max(0, min(pnl_bar_width - 1, take_profit_pos)) - stop_loss_pos = max(0, min(pnl_bar_width - 1, stop_loss_pos)) - - # Build PnL bar character by character - pnl_bar = "" - for i in range(pnl_bar_width): - if i == center: - pnl_bar += "│" # Center line - elif i == pnl_pos: - pnl_bar += "⬤" # Current PnL - elif i == take_profit_pos: - pnl_bar += "T" # Take profit line - elif i == stop_loss_pos: - pnl_bar += "S" # Stop loss line - elif (pnl >= 0 and center <= i < pnl_pos) or (pnl < 0 and pnl_pos < i <= center): - pnl_bar += "█" if pnl >= 0 else "▓" - else: - pnl_bar += "─" - - # Match spacing from screenshot - status.append(f"│ PnL: [{pnl_bar}] │") - - # Executors section with precise column widths - status.append(f"├{'─' * half_width}┬{'─' * half_width}┤") - status.append(f"│ {'EXECUTORS STATUS':<{half_width - 2}} │ {'EXECUTOR VISUALIZATION':<{half_width - 2}} │") - status.append(f"├{'─' * half_width}┼{'─' * half_width}┤") - - # Count active executors by type - active_buy = sum(1 for info in self.executors_info - if info.is_active and self.get_trade_type_from_level_id(info.custom_info["level_id"]) == TradeType.BUY) - active_sell = sum(1 for info in self.executors_info - if info.is_active and self.get_trade_type_from_level_id(info.custom_info["level_id"]) == TradeType.SELL) - total_active = sum(1 for info in self.executors_info if info.is_active) - - # Executor information with fixed formatting - executor_info = [ - f"Total Active: {total_active}", - f"Total Created: {len(self.executors_info)}", - f"Buy Executors: {active_buy}", - f"Sell Executors: {active_sell}" - ] - - if 'deviation' in self.processed_data: - executor_info.append(f"Target Deviation: {self.processed_data['deviation']:.4f}") - - # Visualization with consistent block characters for buy/sell representation - buy_bars = "▮" * active_buy if active_buy > 0 else "─" - sell_bars = "▮" * active_sell if active_sell > 0 else "─" - - executor_viz = [ - f"Buy: {buy_bars}", - f"Sell: {sell_bars}" - ] - - # Display with fixed width columns - for exec_line, viz_line in zip_longest(executor_info, executor_viz, fillvalue=""): - status.append(f"│ {exec_line:<{half_width - 2}} │ {viz_line:<{half_width - 2}} │") - - # Bottom border with exact width - status.append(f"╘{'═' * (inner_width)}╛") - - return status diff --git a/bots/controllers/generic/pmm_adjusted.py b/bots/controllers/generic/pmm_adjusted.py deleted file mode 100644 index e9bc26673c..0000000000 --- a/bots/controllers/generic/pmm_adjusted.py +++ /dev/null @@ -1,669 +0,0 @@ -from decimal 
import Decimal -from typing import List, Optional, Tuple, Union - -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.core.data_type.common import MarketDict, OrderType, PositionMode, PriceType, TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.controller_base import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.data_types import ConnectorPair -from hummingbot.strategy_v2.executors.order_executor.data_types import ExecutionStrategy, OrderExecutorConfig -from hummingbot.strategy_v2.executors.position_executor.data_types import PositionExecutorConfig, TripleBarrierConfig -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, ExecutorAction, StopExecutorAction -from hummingbot.strategy_v2.models.executors import CloseType - - -class PMMAdjustedConfig(ControllerConfigBase): - """ - This class represents the base configuration for a market making controller. - """ - controller_type: str = "generic" - controller_name: str = "pmm_adjusted" - candles_config: List[CandlesConfig] = [] - connector_name: str = Field( - default="binance", - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the name of the connector to use (e.g., binance):", - } - ) - trading_pair: str = Field( - default="BTC-FDUSD", - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the trading pair to trade on (e.g., BTC-FDUSD):", - } - ) - candles_connector_name: str = Field(default="binance") - candles_trading_pair: str = Field(default="BTC-USDT") - candles_interval: str = Field(default="1s") - - portfolio_allocation: Decimal = Field( - default=Decimal("0.05"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the maximum quote exposure percentage around mid price (e.g., 0.05 for 5% of total quote allocation):", - } - ) - target_base_pct: Decimal = Field( - default=Decimal("0.2"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the target base percentage (e.g., 0.2 for 20%):", - } - ) - min_base_pct: Decimal = Field( - default=Decimal("0.1"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the minimum base percentage (e.g., 0.1 for 10%):", - } - ) - max_base_pct: Decimal = Field( - default=Decimal("0.4"), - json_schema_extra={ - "prompt_on_new": True, - "prompt": "Enter the maximum base percentage (e.g., 0.4 for 40%):", - } - ) - buy_spreads: List[float] = Field( - default="0.01,0.02", - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of buy spreads (e.g., '0.01, 0.02'):", - } - ) - sell_spreads: List[float] = Field( - default="0.01,0.02", - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of sell spreads (e.g., '0.01, 0.02'):", - } - ) - buy_amounts_pct: Union[List[Decimal], None] = Field( - default=None, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of buy amounts as percentages (e.g., '50, 50'), or leave blank to distribute equally:", - } - ) - sell_amounts_pct: Union[List[Decimal], None] = Field( - default=None, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter a comma-separated list of sell amounts as percentages (e.g., '50, 50'), or leave blank to distribute equally:", - } - ) - executor_refresh_time: int = Field( - default=60 * 5, - 
json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the refresh time in seconds for executors (e.g., 300 for 5 minutes):", - } - ) - cooldown_time: int = Field( - default=15, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the cooldown time in seconds between after replacing an executor that traded (e.g., 15):", - } - ) - leverage: int = Field( - default=20, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the leverage to use for trading (e.g., 20 for 20x leverage). Set it to 1 for spot trading:", - } - ) - position_mode: PositionMode = Field(default="HEDGE") - take_profit: Optional[Decimal] = Field( - default=Decimal("0.02"), gt=0, - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the take profit as a decimal (e.g., 0.02 for 2%):", - } - ) - take_profit_order_type: Optional[OrderType] = Field( - default="LIMIT_MAKER", - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the order type for take profit (e.g., LIMIT_MAKER):", - } - ) - max_skew: Decimal = Field( - default=Decimal("1.0"), - json_schema_extra={ - "prompt_on_new": True, "is_updatable": True, - "prompt": "Enter the maximum skew factor (e.g., 1.0):", - } - ) - global_take_profit: Decimal = Decimal("0.02") - global_stop_loss: Decimal = Decimal("0.05") - - @field_validator("take_profit", mode="before") - @classmethod - def validate_target(cls, v): - if isinstance(v, str): - if v == "": - return None - return Decimal(v) - return v - - @field_validator('take_profit_order_type', mode="before") - @classmethod - def validate_order_type(cls, v) -> OrderType: - if isinstance(v, OrderType): - return v - elif v is None: - return OrderType.MARKET - elif isinstance(v, str): - if v.upper() in OrderType.__members__: - return OrderType[v.upper()] - elif isinstance(v, int): - try: - return OrderType(v) - except ValueError: - pass - raise ValueError(f"Invalid order type: {v}. Valid options are: {', '.join(OrderType.__members__)}") - - @field_validator('buy_spreads', 'sell_spreads', mode="before") - @classmethod - def parse_spreads(cls, v): - if v is None: - return [] - if isinstance(v, str): - if v == "": - return [] - return [float(x.strip()) for x in v.split(',')] - return v - - @field_validator('buy_amounts_pct', 'sell_amounts_pct', mode="before") - @classmethod - def parse_and_validate_amounts(cls, v, validation_info: ValidationInfo): - field_name = validation_info.field_name - if v is None or v == "": - spread_field = field_name.replace('amounts_pct', 'spreads') - return [1 for _ in validation_info.data[spread_field]] - if isinstance(v, str): - return [float(x.strip()) for x in v.split(',')] - elif isinstance(v, list) and len(v) != len(validation_info.data[field_name.replace('amounts_pct', 'spreads')]): - raise ValueError( - f"The number of {field_name} must match the number of {field_name.replace('amounts_pct', 'spreads')}.") - return v - - @field_validator('position_mode', mode="before") - @classmethod - def validate_position_mode(cls, v) -> PositionMode: - if isinstance(v, str): - if v.upper() in PositionMode.__members__: - return PositionMode[v.upper()] - raise ValueError(f"Invalid position mode: {v}. 
Valid options are: {', '.join(PositionMode.__members__)}") - return v - - @property - def triple_barrier_config(self) -> TripleBarrierConfig: - return TripleBarrierConfig( - take_profit=self.take_profit, - trailing_stop=None, - open_order_type=OrderType.LIMIT_MAKER, # Defaulting to LIMIT as is a Maker Controller - take_profit_order_type=self.take_profit_order_type, - stop_loss_order_type=OrderType.MARKET, # Defaulting to MARKET as per requirement - time_limit_order_type=OrderType.MARKET # Defaulting to MARKET as per requirement - ) - - def update_parameters(self, trade_type: TradeType, new_spreads: Union[List[float], str], new_amounts_pct: Optional[Union[List[int], str]] = None): - spreads_field = 'buy_spreads' if trade_type == TradeType.BUY else 'sell_spreads' - amounts_pct_field = 'buy_amounts_pct' if trade_type == TradeType.BUY else 'sell_amounts_pct' - - setattr(self, spreads_field, self.parse_spreads(new_spreads)) - if new_amounts_pct is not None: - setattr(self, amounts_pct_field, self.parse_and_validate_amounts(new_amounts_pct, self.__dict__, self.__fields__[amounts_pct_field])) - else: - setattr(self, amounts_pct_field, [1 for _ in getattr(self, spreads_field)]) - - def get_spreads_and_amounts_in_quote(self, trade_type: TradeType) -> Tuple[List[float], List[float]]: - buy_amounts_pct = getattr(self, 'buy_amounts_pct') - sell_amounts_pct = getattr(self, 'sell_amounts_pct') - - # Calculate total percentages across buys and sells - total_pct = sum(buy_amounts_pct) + sum(sell_amounts_pct) - - # Normalize amounts_pct based on total percentages - if trade_type == TradeType.BUY: - normalized_amounts_pct = [amt_pct / total_pct for amt_pct in buy_amounts_pct] - else: # TradeType.SELL - normalized_amounts_pct = [amt_pct / total_pct for amt_pct in sell_amounts_pct] - - spreads = getattr(self, f'{trade_type.name.lower()}_spreads') - return spreads, [amt_pct * self.total_amount_quote * self.portfolio_allocation for amt_pct in normalized_amounts_pct] - - def update_markets(self, markets: MarketDict) -> MarketDict: - return markets.add_or_update(self.connector_name, self.trading_pair) - - -class PMMAdjusted(ControllerBase): - """ - This class represents the base class for a market making controller. - """ - - def __init__(self, config: PMMAdjustedConfig, *args, **kwargs): - super().__init__(config, *args, **kwargs) - self.config = config - self.market_data_provider.initialize_rate_sources([ConnectorPair( - connector_name=config.connector_name, trading_pair=config.trading_pair)]) - self.config.candles_config = [ - CandlesConfig(connector=self.config.candles_connector_name, - trading_pair=self.config.candles_trading_pair, - interval=self.config.candles_interval) - ] - - def determine_executor_actions(self) -> List[ExecutorAction]: - """ - Determine actions based on the provided executor handler report. - """ - actions = [] - actions.extend(self.create_actions_proposal()) - actions.extend(self.stop_actions_proposal()) - return actions - - def create_actions_proposal(self) -> List[ExecutorAction]: - """ - Create actions proposal based on the current state of the controller. 
- """ - create_actions = [] - - # Check if a position reduction executor for TP/SL is already sent - reduction_executor_exists = any( - executor.is_active and - executor.custom_info.get("level_id") == "global_tp_sl" - for executor in self.executors_info - ) - - if (not reduction_executor_exists and - self.processed_data["current_base_pct"] > self.config.target_base_pct and - (self.processed_data["unrealized_pnl_pct"] > self.config.global_take_profit or - self.processed_data["unrealized_pnl_pct"] < -self.config.global_stop_loss)): - - # Create a global take profit or stop loss executor - create_actions.append(CreateExecutorAction( - controller_id=self.config.id, - executor_config=OrderExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - side=TradeType.SELL, - amount=self.processed_data["position_amount"], - execution_strategy=ExecutionStrategy.MARKET, - price=self.processed_data["reference_price"], - level_id="global_tp_sl" # Use a specific level_id to identify this as a TP/SL executor - ) - )) - return create_actions - levels_to_execute = self.get_levels_to_execute() - # Pre-calculate all spreads and amounts for buy and sell sides - buy_spreads, buy_amounts_quote = self.config.get_spreads_and_amounts_in_quote(TradeType.BUY) - sell_spreads, sell_amounts_quote = self.config.get_spreads_and_amounts_in_quote(TradeType.SELL) - reference_price = Decimal(self.processed_data["reference_price"]) - # Get current position info for skew calculation - current_pct = self.processed_data["current_base_pct"] - min_pct = self.config.min_base_pct - max_pct = self.config.max_base_pct - # Calculate skew factors (0 to 1) - how much to scale orders - if max_pct > min_pct: # Prevent division by zero - # For buys: full size at min_pct, decreasing as we approach max_pct - buy_skew = (max_pct - current_pct) / (max_pct - min_pct) - # For sells: full size at max_pct, decreasing as we approach min_pct - sell_skew = (current_pct - min_pct) / (max_pct - min_pct) - # Ensure values stay between 0.2 and 1.0 (never go below 20% of original size) - buy_skew = max(min(buy_skew, Decimal("1.0")), self.config.max_skew) - sell_skew = max(min(sell_skew, Decimal("1.0")), self.config.max_skew) - else: - buy_skew = sell_skew = Decimal("1.0") - # Create executors for each level - for level_id in levels_to_execute: - trade_type = self.get_trade_type_from_level_id(level_id) - level = self.get_level_from_level_id(level_id) - if trade_type == TradeType.BUY: - spread_in_pct = Decimal(buy_spreads[level]) * Decimal(self.processed_data["spread_multiplier"]) - amount_quote = Decimal(buy_amounts_quote[level]) - skew = buy_skew - else: # TradeType.SELL - spread_in_pct = Decimal(sell_spreads[level]) * Decimal(self.processed_data["spread_multiplier"]) - amount_quote = Decimal(sell_amounts_quote[level]) - skew = sell_skew - # Calculate price - side_multiplier = Decimal("-1") if trade_type == TradeType.BUY else Decimal("1") - price = reference_price * (Decimal("1") + side_multiplier * spread_in_pct) - # Calculate amount with skew applied - amount = self.market_data_provider.quantize_order_amount(self.config.connector_name, - self.config.trading_pair, - (amount_quote / price) * skew) - if amount == Decimal("0"): - self.logger().warning(f"The amount of the level {level_id} is 0. 
Skipping.") - executor_config = self.get_executor_config(level_id, price, amount) - if executor_config is not None: - create_actions.append(CreateExecutorAction( - controller_id=self.config.id, - executor_config=executor_config - )) - return create_actions - - def get_levels_to_execute(self) -> List[str]: - working_levels = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: x.is_active or (x.close_type == CloseType.STOP_LOSS and self.market_data_provider.time() - x.close_timestamp < self.config.cooldown_time) - ) - working_levels_ids = [executor.custom_info["level_id"] for executor in working_levels] - return self.get_not_active_levels_ids(working_levels_ids) - - def stop_actions_proposal(self) -> List[ExecutorAction]: - """ - Create a list of actions to stop the executors based on order refresh and early stop conditions. - """ - stop_actions = [] - stop_actions.extend(self.executors_to_refresh()) - stop_actions.extend(self.executors_to_early_stop()) - return stop_actions - - def executors_to_refresh(self) -> List[ExecutorAction]: - executors_to_refresh = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: not x.is_trading and x.is_active and self.market_data_provider.time() - x.timestamp > self.config.executor_refresh_time) - return [StopExecutorAction( - controller_id=self.config.id, - keep_position=True, - executor_id=executor.id) for executor in executors_to_refresh] - - def executors_to_early_stop(self) -> List[ExecutorAction]: - """ - Get the executors to early stop based on the current state of market data. This method can be overridden to - implement custom behavior. - """ - executors_to_early_stop = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: x.is_active and x.is_trading and self.market_data_provider.time() - x.custom_info["open_order_last_update"] > self.config.cooldown_time) - return [StopExecutorAction( - controller_id=self.config.id, - keep_position=True, - executor_id=executor.id) for executor in executors_to_early_stop] - - async def update_processed_data(self): - """ - Update the processed data for the controller. This method should be reimplemented to modify the reference price - and spread multiplier based on the market data. By default, it will update the reference price as mid price and - the spread multiplier as 1. - """ - reference_price = self.get_current_candles_price() - position_held = next((position for position in self.positions_held if - (position.trading_pair == self.config.trading_pair) & - (position.connector_name == self.config.connector_name)), None) - target_position = self.config.total_amount_quote * self.config.target_base_pct - if position_held is not None: - position_amount = position_held.amount - current_base_pct = position_held.amount_quote / self.config.total_amount_quote - deviation = (target_position - position_held.amount_quote) / target_position - unrealized_pnl_pct = position_held.unrealized_pnl_quote / position_held.amount_quote if position_held.amount_quote != 0 else Decimal("0") - else: - position_amount = 0 - current_base_pct = 0 - deviation = 1 - unrealized_pnl_pct = 0 - - self.processed_data = {"reference_price": Decimal(reference_price), "spread_multiplier": Decimal("1"), - "deviation": deviation, "current_base_pct": current_base_pct, - "unrealized_pnl_pct": unrealized_pnl_pct, "position_amount": position_amount} - - def get_current_candles_price(self) -> Decimal: - """ - Get the current price from the candles data provider. 
- """ - candles = self.market_data_provider.get_candles_df(self.config.candles_connector_name, - self.config.candles_trading_pair, - self.config.candles_interval) - if candles is not None and not candles.empty: - last_candle = candles.iloc[-1] - return Decimal(last_candle['close']) - else: - self.logger().warning(f"No candles data available for {self.config.candles_connector_name} - {self.config.candles_trading_pair} at {self.config.candles_interval}. Using last known price.") - return Decimal(self.market_data_provider.get_price_by_type(self.config.connector_name, self.config.trading_pair, PriceType.MidPrice)) - - def get_executor_config(self, level_id: str, price: Decimal, amount: Decimal): - """ - Get the executor config for a given level id. - """ - trade_type = self.get_trade_type_from_level_id(level_id) - level_multiplier = self.get_level_from_level_id(level_id) + 1 - return PositionExecutorConfig( - timestamp=self.market_data_provider.time(), - level_id=level_id, - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - entry_price=price, - amount=amount, - triple_barrier_config=self.config.triple_barrier_config.new_instance_with_adjusted_volatility(level_multiplier), - leverage=self.config.leverage, - side=trade_type, - ) - - def get_level_id_from_side(self, trade_type: TradeType, level: int) -> str: - """ - Get the level id based on the trade type and the level. - """ - return f"{trade_type.name.lower()}_{level}" - - def get_trade_type_from_level_id(self, level_id: str) -> TradeType: - return TradeType.BUY if level_id.startswith("buy") else TradeType.SELL - - def get_level_from_level_id(self, level_id: str) -> int: - return int(level_id.split('_')[1]) - - def get_not_active_levels_ids(self, active_levels_ids: List[str]) -> List[str]: - """ - Get the levels to execute based on the current state of the controller. - """ - buy_ids_missing = [self.get_level_id_from_side(TradeType.BUY, level) for level in range(len(self.config.buy_spreads)) - if self.get_level_id_from_side(TradeType.BUY, level) not in active_levels_ids] - sell_ids_missing = [self.get_level_id_from_side(TradeType.SELL, level) for level in range(len(self.config.sell_spreads)) - if self.get_level_id_from_side(TradeType.SELL, level) not in active_levels_ids] - if self.processed_data["current_base_pct"] < self.config.min_base_pct: - return buy_ids_missing - elif self.processed_data["current_base_pct"] > self.config.max_base_pct: - return sell_ids_missing - return buy_ids_missing + sell_ids_missing - - def to_format_status(self) -> List[str]: - """ - Get the status of the controller in a formatted way with ASCII visualizations. 
- """ - from decimal import Decimal - from itertools import zip_longest - - status = [] - - # Get all required data - base_pct = self.processed_data['current_base_pct'] - min_pct = self.config.min_base_pct - max_pct = self.config.max_base_pct - target_pct = self.config.target_base_pct - skew = base_pct - target_pct - skew_pct = skew / target_pct if target_pct != 0 else Decimal('0') - max_skew = getattr(self.config, 'max_skew', Decimal('0.0')) - - # Fixed widths - adjusted based on screenshot analysis - outer_width = 92 # Total width including outer borders - inner_width = outer_width - 4 # Inner content width - half_width = (inner_width) // 2 - 1 # Width of each column in split sections - bar_width = inner_width - 15 # Width of visualization bars (accounting for label) - - # Header - omit ID since it's shown above in controller header - status.append("╒" + "═" * (inner_width) + "╕") - - header_line = ( - f"{self.config.connector_name}:{self.config.trading_pair} " - f"Price: {self.processed_data['reference_price']} " - f"Alloc: {self.config.portfolio_allocation:.1%} " - f"Spread Mult: {self.processed_data['spread_multiplier']} |" - ) - - status.append(f"│ {header_line:<{inner_width}} │") - - # Position and PnL sections with precise widths - status.append(f"├{'─' * half_width}┬{'─' * half_width}┤") - status.append(f"│ {'POSITION STATUS':<{half_width - 2}} │ {'PROFIT & LOSS':<{half_width - 2}} │") - status.append(f"├{'─' * half_width}┼{'─' * half_width}┤") - - # Position data for left column - position_info = [ - f"Current: {base_pct:.2%}", - f"Target: {target_pct:.2%}", - f"Min/Max: {min_pct:.2%}/{max_pct:.2%}", - f"Skew: {skew_pct:+.2%} (max {max_skew:.2%})" - ] - - # PnL data for right column - pnl_info = [] - if 'unrealized_pnl_pct' in self.processed_data: - pnl = self.processed_data['unrealized_pnl_pct'] - pnl_sign = "+" if pnl >= 0 else "" - pnl_info = [ - f"Unrealized: {pnl_sign}{pnl:.2%}", - f"Take Profit: {self.config.global_take_profit:.2%}", - f"Stop Loss: {-self.config.global_stop_loss:.2%}", - f"Leverage: {self.config.leverage}x" - ] - - # Display position and PnL info side by side with exact spacing - for pos_line, pnl_line in zip_longest(position_info, pnl_info, fillvalue=""): - status.append(f"│ {pos_line:<{half_width - 2}} │ {pnl_line:<{half_width - 2}} │") - - # Adjust visualization section - ensure consistent spacing - status.append(f"├{'─' * (inner_width)}┤") - status.append(f"│ {'VISUALIZATIONS':<{inner_width}} │") - status.append(f"├{'─' * (inner_width)}┤") - - # Position bar with exact spacing and characters - filled_width = int(base_pct * bar_width) - min_pos = int(min_pct * bar_width) - max_pos = int(max_pct * bar_width) - target_pos = int(target_pct * bar_width) - - # Build position bar character by character - position_bar = "" - for i in range(bar_width): - if i == filled_width: - position_bar += "◆" # Current position - elif i == min_pos: - position_bar += "┃" # Min threshold - elif i == max_pos: - position_bar += "┃" # Max threshold - elif i == target_pos: - position_bar += "┇" # Target threshold - elif i < filled_width: - position_bar += "█" # Filled area - else: - position_bar += "░" # Empty area - - # Ensure consistent label spacing as seen in screenshot - status.append(f"│ Position: [{position_bar}] │") - - # Skew visualization with exact spacing - skew_bar_width = bar_width - center = skew_bar_width // 2 - skew_pos = center + int(skew_pct * center * 2) - skew_pos = max(0, min(skew_bar_width - 1, skew_pos)) - - # Build skew bar character by character - 
skew_bar = "" - for i in range(skew_bar_width): - if i == center: - skew_bar += "┃" # Center line - elif i == skew_pos: - skew_bar += "⬤" # Current skew - else: - skew_bar += "─" # Empty line - - # Match spacing from screenshot with exact character counts - status.append(f"│ Skew: [{skew_bar}] │") - - # PnL visualization if available - if 'unrealized_pnl_pct' in self.processed_data: - pnl = self.processed_data['unrealized_pnl_pct'] - take_profit = self.config.global_take_profit - stop_loss = -self.config.global_stop_loss - - pnl_bar_width = bar_width - center = pnl_bar_width // 2 - - # Calculate positions with exact scaling - max_range = max(abs(take_profit), abs(stop_loss), abs(pnl)) * Decimal("1.2") - scale = (pnl_bar_width // 2) / max_range - - pnl_pos = center + int(pnl * scale) - take_profit_pos = center + int(take_profit * scale) - stop_loss_pos = center + int(stop_loss * scale) - - # Ensure positions are within bounds - pnl_pos = max(0, min(pnl_bar_width - 1, pnl_pos)) - take_profit_pos = max(0, min(pnl_bar_width - 1, take_profit_pos)) - stop_loss_pos = max(0, min(pnl_bar_width - 1, stop_loss_pos)) - - # Build PnL bar character by character - pnl_bar = "" - for i in range(pnl_bar_width): - if i == center: - pnl_bar += "│" # Center line - elif i == pnl_pos: - pnl_bar += "⬤" # Current PnL - elif i == take_profit_pos: - pnl_bar += "T" # Take profit line - elif i == stop_loss_pos: - pnl_bar += "S" # Stop loss line - elif (pnl >= 0 and center <= i < pnl_pos) or (pnl < 0 and pnl_pos < i <= center): - pnl_bar += "█" if pnl >= 0 else "▓" - else: - pnl_bar += "─" - - # Match spacing from screenshot - status.append(f"│ PnL: [{pnl_bar}] │") - - # Executors section with precise column widths - status.append(f"├{'─' * half_width}┬{'─' * half_width}┤") - status.append(f"│ {'EXECUTORS STATUS':<{half_width - 2}} │ {'EXECUTOR VISUALIZATION':<{half_width - 2}} │") - status.append(f"├{'─' * half_width}┼{'─' * half_width}┤") - - # Count active executors by type - active_buy = sum(1 for info in self.executors_info - if info.is_active and self.get_trade_type_from_level_id(info.custom_info["level_id"]) == TradeType.BUY) - active_sell = sum(1 for info in self.executors_info - if info.is_active and self.get_trade_type_from_level_id(info.custom_info["level_id"]) == TradeType.SELL) - total_active = sum(1 for info in self.executors_info if info.is_active) - - # Executor information with fixed formatting - executor_info = [ - f"Total Active: {total_active}", - f"Total Created: {len(self.executors_info)}", - f"Buy Executors: {active_buy}", - f"Sell Executors: {active_sell}" - ] - - if 'deviation' in self.processed_data: - executor_info.append(f"Target Deviation: {self.processed_data['deviation']:.4f}") - - # Visualization with consistent block characters for buy/sell representation - buy_bars = "▮" * active_buy if active_buy > 0 else "─" - sell_bars = "▮" * active_sell if active_sell > 0 else "─" - - executor_viz = [ - f"Buy: {buy_bars}", - f"Sell: {sell_bars}" - ] - - # Display with fixed width columns - for exec_line, viz_line in zip_longest(executor_info, executor_viz, fillvalue=""): - status.append(f"│ {exec_line:<{half_width - 2}} │ {viz_line:<{half_width - 2}} │") - - # Bottom border with exact width - status.append(f"╘{'═' * (inner_width)}╛") - - return status diff --git a/bots/controllers/generic/quantum_grid_allocator.py b/bots/controllers/generic/quantum_grid_allocator.py deleted file mode 100644 index 19b7a47cde..0000000000 --- a/bots/controllers/generic/quantum_grid_allocator.py +++ /dev/null @@ 
-1,492 +0,0 @@ -from decimal import Decimal -from typing import Dict, List, Set, Union - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator - -from hummingbot.core.data_type.common import OrderType, PositionMode, PriceType, TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.data_types import ConnectorPair -from hummingbot.strategy_v2.executors.grid_executor.data_types import GridExecutorConfig -from hummingbot.strategy_v2.executors.position_executor.data_types import TripleBarrierConfig -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, StopExecutorAction -from hummingbot.strategy_v2.models.executors_info import ExecutorInfo - - -class QGAConfig(ControllerConfigBase): - controller_name: str = "quantum_grid_allocator" - candles_config: List[CandlesConfig] = [] - - # Portfolio allocation zones - long_only_threshold: Decimal = Field(default=Decimal("0.2"), json_schema_extra={"is_updatable": True}) - short_only_threshold: Decimal = Field(default=Decimal("0.2"), json_schema_extra={"is_updatable": True}) - hedge_ratio: Decimal = Field(default=Decimal("2"), json_schema_extra={"is_updatable": True}) - - # Grid allocation multipliers - base_grid_value_pct: Decimal = Field(default=Decimal("0.08"), json_schema_extra={"is_updatable": True}) - max_grid_value_pct: Decimal = Field(default=Decimal("0.15"), json_schema_extra={"is_updatable": True}) - - # Order frequency settings - safe_extra_spread: Decimal = Field(default=Decimal("0.0001"), json_schema_extra={"is_updatable": True}) - favorable_order_frequency: int = Field(default=2, json_schema_extra={"is_updatable": True}) - unfavorable_order_frequency: int = Field(default=5, json_schema_extra={"is_updatable": True}) - max_orders_per_batch: int = Field(default=1, json_schema_extra={"is_updatable": True}) - - # Portfolio allocation - portfolio_allocation: Dict[str, Decimal] = Field( - default={ - "SOL": Decimal("0.50"), # 50% - }, - json_schema_extra={"is_updatable": True}) - # Grid parameters - grid_range: Decimal = Field(default=Decimal("0.002"), json_schema_extra={"is_updatable": True}) - tp_sl_ratio: Decimal = Field(default=Decimal("0.8"), json_schema_extra={"is_updatable": True}) - min_order_amount: Decimal = Field(default=Decimal("5"), json_schema_extra={"is_updatable": True}) - # Risk parameters - max_deviation: Decimal = Field(default=Decimal("0.05"), json_schema_extra={"is_updatable": True}) - max_open_orders: int = Field(default=2, json_schema_extra={"is_updatable": True}) - # Exchange settings - connector_name: str = "binance" - leverage: int = 1 - position_mode: PositionMode = PositionMode.HEDGE - quote_asset: str = "FDUSD" - fee_asset: str = "BNB" - # Grid price multipliers - min_spread_between_orders: Decimal = Field( - default=Decimal("0.0001"), # 0.01% between orders - json_schema_extra={"is_updatable": True}) - grid_tp_multiplier: Decimal = Field( - default=Decimal("0.0001"), # 0.2% take profit - json_schema_extra={"is_updatable": True}) - # Grid safety parameters - limit_price_spread: Decimal = Field( - default=Decimal("0.001"), # 0.1% spread for limit price - json_schema_extra={"is_updatable": True}) - activation_bounds: Decimal = Field( - default=Decimal("0.0002"), # Activation bounds for orders - json_schema_extra={"is_updatable": True}) - bb_lenght: int = 100 - bb_std_dev: float = 2.0 - interval: str = "1s" - 
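# Note on the Bollinger-band settings above (a sketch of how they are consumed later in
# update_processed_data; the numeric example is assumed): pandas_ta's bbands() returns a
# bandwidth column named f"BBB_{bb_lenght}_{bb_std_dev}" expressed in percent, so
#
#     bb = ta.bbands(candles["close"], length=100, std=2.0)
#     bb_width = bb["BBB_100_2.0"].iloc[-1] / 100
#
# turns, e.g., a last bandwidth reading of 1.2 into a grid_range of 0.012 when
# dynamic_grid_range is enabled; otherwise the static grid_range field is used.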
dynamic_grid_range: bool = Field(default=False, json_schema_extra={"is_updatable": True}) - show_terminated_details: bool = False - - @property - def quote_asset_allocation(self) -> Decimal: - """Calculate the implicit quote asset (FDUSD) allocation""" - return Decimal("1") - sum(self.portfolio_allocation.values()) - - @field_validator("portfolio_allocation") - @classmethod - def validate_allocation(cls, v): - total = sum(v.values()) - if total >= Decimal("1"): - raise ValueError(f"Total allocation {total} exceeds or equals 100%. Must leave room for FDUSD allocation.") - if "FDUSD" in v: - raise ValueError("FDUSD should not be explicitly allocated as it is the quote asset") - return v - - def update_markets(self, markets: Dict[str, Set[str]]) -> Dict[str, Set[str]]: - if self.connector_name not in markets: - markets[self.connector_name] = set() - for asset in self.portfolio_allocation: - markets[self.connector_name].add(f"{asset}-{self.quote_asset}") - return markets - - -class QuantumGridAllocator(ControllerBase): - def __init__(self, config: QGAConfig, *args, **kwargs): - self.config = config - self.metrics = {} - # Track unfavorable grid IDs - self.unfavorable_grid_ids = set() - # Track held positions from unfavorable grids - self.unfavorable_positions = { - f"{asset}-{config.quote_asset}": { - 'long': {'size': Decimal('0'), 'value': Decimal('0'), 'weighted_price': Decimal('0')}, - 'short': {'size': Decimal('0'), 'value': Decimal('0'), 'weighted_price': Decimal('0')} - } - for asset in config.portfolio_allocation - } - self.config.candles_config = [CandlesConfig( - connector=config.connector_name, - trading_pair=trading_pair + "-" + config.quote_asset, - interval=config.interval, - max_records=config.bb_lenght + 100 - ) for trading_pair in config.portfolio_allocation.keys()] - super().__init__(config, *args, **kwargs) - self.initialize_rate_sources() - - def initialize_rate_sources(self): - fee_pair = ConnectorPair(connector_name=self.config.connector_name, trading_pair=f"{self.config.fee_asset}-{self.config.quote_asset}") - self.market_data_provider.initialize_rate_sources([fee_pair]) - - async def update_processed_data(self): - # Get the bb width to use it as the range for the grid - for asset in self.config.portfolio_allocation: - trading_pair = f"{asset}-{self.config.quote_asset}" - candles = self.market_data_provider.get_candles_df( - connector_name=self.config.connector_name, - trading_pair=trading_pair, - interval=self.config.interval, - max_records=self.config.bb_lenght + 100 - ) - if len(candles) == 0: - bb_width = self.config.grid_range - else: - bb = ta.bbands(candles["close"], length=self.config.bb_lenght, std=self.config.bb_std_dev) - bb_width = bb[f"BBB_{self.config.bb_lenght}_{self.config.bb_std_dev}"].iloc[-1] / 100 - self.processed_data[trading_pair] = { - "bb_width": bb_width - } - - def update_portfolio_metrics(self): - """ - Calculate theoretical vs actual portfolio allocations - """ - metrics = { - "theoretical": {}, - "actual": {}, - "difference": {}, - } - - # Get real balances and calculate total portfolio value - quote_balance = self.market_data_provider.get_balance(self.config.connector_name, self.config.quote_asset) - total_value_quote = quote_balance - - # Calculate actual allocations including positions - for asset in self.config.portfolio_allocation: - trading_pair = f"{asset}-{self.config.quote_asset}" - price = self.get_mid_price(trading_pair) - # Get balance and add any position from active grid - balance = 
self.market_data_provider.get_balance(self.config.connector_name, asset) - value = balance * price - total_value_quote += value - metrics["actual"][asset] = value - # Calculate theoretical allocations and differences - for asset in self.config.portfolio_allocation: - theoretical_value = total_value_quote * self.config.portfolio_allocation[asset] - metrics["theoretical"][asset] = theoretical_value - metrics["difference"][asset] = metrics["actual"][asset] - theoretical_value - # Add quote asset metrics - metrics["actual"][self.config.quote_asset] = quote_balance - metrics["theoretical"][self.config.quote_asset] = total_value_quote * self.config.quote_asset_allocation - metrics["difference"][self.config.quote_asset] = quote_balance - metrics["theoretical"][self.config.quote_asset] - metrics["total_portfolio_value"] = total_value_quote - self.metrics = metrics - - def get_active_grids_by_asset(self) -> Dict[str, List[ExecutorInfo]]: - """Group active grids by asset using filter_executors""" - active_grids = {} - for asset in self.config.portfolio_allocation: - if asset == self.config.quote_asset: - continue - trading_pair = f"{asset}-{self.config.quote_asset}" - active_executors = self.filter_executors( - executors=self.executors_info, - filter_func=lambda e: ( - e.is_active and - e.config.trading_pair == trading_pair - ) - ) - if active_executors: - active_grids[asset] = active_executors - return active_grids - - def to_format_status(self) -> List[str]: - """Generate a detailed status report with portfolio, grid, and position information""" - status_lines = [] - total_value = self.metrics.get("total_portfolio_value", Decimal("0")) - # Portfolio Status - status_lines.append(f"Total Portfolio Value: ${total_value:,.2f}") - status_lines.append("") - status_lines.append("Portfolio Status:") - status_lines.append("-" * 80) - status_lines.append( - f"{'Asset':<8} | " - f"{'Actual':>10} | " - f"{'Target':>10} | " - f"{'Diff':>10} | " - f"{'Dev %':>8}" - ) - status_lines.append("-" * 80) - # Show metrics for each asset - for asset in self.config.portfolio_allocation: - actual = self.metrics["actual"].get(asset, Decimal("0")) - theoretical = self.metrics["theoretical"].get(asset, Decimal("0")) - difference = self.metrics["difference"].get(asset, Decimal("0")) - deviation_pct = (difference / theoretical * 100) if theoretical != Decimal("0") else Decimal("0") - status_lines.append( - f"{asset:<8} | " - f"${actual:>9.2f} | " - f"${theoretical:>9.2f} | " - f"${difference:>+9.2f} | " - f"{deviation_pct:>+7.1f}%" - ) - # Add quote asset metrics - quote_asset = self.config.quote_asset - actual = self.metrics["actual"].get(quote_asset, Decimal("0")) - theoretical = self.metrics["theoretical"].get(quote_asset, Decimal("0")) - difference = self.metrics["difference"].get(quote_asset, Decimal("0")) - deviation_pct = (difference / theoretical * 100) if theoretical != Decimal("0") else Decimal("0") - status_lines.append("-" * 80) - status_lines.append( - f"{quote_asset:<8} | " - f"${actual:>9.2f} | " - f"${theoretical:>9.2f} | " - f"${difference:>+9.2f} | " - f"{deviation_pct:>+7.1f}%" - ) - # Active Grids Summary - active_grids = self.get_active_grids_by_asset() - if active_grids: - status_lines.append("") - status_lines.append("Active Grids:") - status_lines.append("-" * 140) - status_lines.append( - f"{'Asset':<8} {'Side':<6} | " - f"{'Total ($)':<10} {'Position':<10} {'Volume':<10} | " - f"{'PnL':<10} {'RPnL':<10} {'Fees':<10} | " - f"{'Start':<10} {'Current':<10} {'End':<10} {'Limit':<10}" - ) - 
status_lines.append("-" * 140) - for asset, executors in active_grids.items(): - for executor in executors: - config = executor.config - custom_info = executor.custom_info - trading_pair = config.trading_pair - current_price = self.get_mid_price(trading_pair) - # Get grid metrics - total_amount = Decimal(str(config.total_amount_quote)) - position_size = Decimal(str(custom_info.get('position_size_quote', '0'))) - volume = executor.filled_amount_quote - pnl = executor.net_pnl_quote - realized_pnl_quote = custom_info.get('realized_pnl_quote', Decimal('0')) - fees = executor.cum_fees_quote - status_lines.append( - f"{asset:<8} {config.side.name:<6} | " - f"${total_amount:<9.2f} ${position_size:<9.2f} ${volume:<9.2f} | " - f"${pnl:>+9.2f} ${realized_pnl_quote:>+9.2f} ${fees:>9.2f} | " - f"{config.start_price:<10.4f} {current_price:<10.4f} {config.end_price:<10.4f} {config.limit_price:<10.4f}" - ) - - status_lines.append("-" * 100 + "\n") - return status_lines - - def tp_multiplier(self): - return self.config.tp_sl_ratio - - def sl_multiplier(self): - return 1 - self.config.tp_sl_ratio - - def determine_executor_actions(self) -> List[Union[CreateExecutorAction, StopExecutorAction]]: - actions = [] - self.update_portfolio_metrics() - active_grids_by_asset = self.get_active_grids_by_asset() - for asset in self.config.portfolio_allocation: - if asset == self.config.quote_asset: - continue - trading_pair = f"{asset}-{self.config.quote_asset}" - # Check if there are any active grids for this asset - if asset in active_grids_by_asset: - self.logger().debug(f"Skipping {trading_pair} - Active grid exists") - continue - theoretical = self.metrics["theoretical"][asset] - difference = self.metrics["difference"][asset] - deviation = difference / theoretical if theoretical != Decimal("0") else Decimal("0") - mid_price = self.get_mid_price(trading_pair) - - # Calculate dynamic grid value percentage based on deviation - abs_deviation = abs(deviation) - grid_value_pct = self.config.max_grid_value_pct if abs_deviation > self.config.max_deviation else self.config.base_grid_value_pct - - self.logger().info( - f"{trading_pair} Grid Sizing - " - f"Deviation: {deviation:+.1%}, " - f"Grid Value %: {grid_value_pct:.1%}" - ) - if self.config.dynamic_grid_range: - grid_range = Decimal(self.processed_data[trading_pair]["bb_width"]) - else: - grid_range = self.config.grid_range - - # Determine which zone we're in by normalizing the deviation over the theoretical allocation - if deviation < -self.config.long_only_threshold: - # Long-only zone - only create buy grids - if difference < Decimal("0"): # Only if we need to buy - grid_value = min(abs(difference), theoretical * grid_value_pct) - start_price = mid_price * (1 - grid_range * self.sl_multiplier()) - end_price = mid_price * (1 + grid_range * self.tp_multiplier()) - grid_action = self.create_grid_executor( - trading_pair=trading_pair, - side=TradeType.BUY, - start_price=start_price, - end_price=end_price, - grid_value=grid_value, - is_unfavorable=False - ) - if grid_action is not None: - actions.append(grid_action) - elif deviation > self.config.short_only_threshold: - # Short-only zone - only create sell grids - if difference > Decimal("0"): # Only if we need to sell - grid_value = min(abs(difference), theoretical * grid_value_pct) - start_price = mid_price * (1 - grid_range * self.tp_multiplier()) - end_price = mid_price * (1 + grid_range * self.sl_multiplier()) - grid_action = self.create_grid_executor( - trading_pair=trading_pair, - side=TradeType.SELL, - 
start_price=start_price, - end_price=end_price, - grid_value=grid_value, - is_unfavorable=False - ) - if grid_action is not None: - actions.append(grid_action) - else: - # we create a buy and a sell grid with higher range pct and the base grid value pct - # to hedge the position - grid_value = theoretical * grid_value_pct - if difference < Decimal("0"): # create a bigger buy grid and sell grid - # Create buy grid - start_price = mid_price * (1 - 2 * grid_range * self.sl_multiplier()) - end_price = mid_price * (1 + grid_range * self.tp_multiplier()) - buy_grid_action = self.create_grid_executor( - trading_pair=trading_pair, - side=TradeType.BUY, - start_price=start_price, - end_price=end_price, - grid_value=grid_value, - is_unfavorable=False - ) - if buy_grid_action is not None: - actions.append(buy_grid_action) - # Create sell grid - start_price = mid_price * (1 - grid_range * self.tp_multiplier()) - end_price = mid_price * (1 + 2 * grid_range * self.sl_multiplier()) - sell_grid_action = self.create_grid_executor( - trading_pair=trading_pair, - side=TradeType.SELL, - start_price=start_price, - end_price=end_price, - grid_value=grid_value, - is_unfavorable=False - ) - if sell_grid_action is not None: - actions.append(sell_grid_action) - if difference > Decimal("0"): - # Create sell grid - start_price = mid_price * (1 - 2 * grid_range * self.tp_multiplier()) - end_price = mid_price * (1 + grid_range * self.sl_multiplier()) - sell_grid_action = self.create_grid_executor( - trading_pair=trading_pair, - side=TradeType.SELL, - start_price=start_price, - end_price=end_price, - grid_value=grid_value, - is_unfavorable=False - ) - if sell_grid_action is not None: - actions.append(sell_grid_action) - # Create buy grid - start_price = mid_price * (1 - grid_range * self.sl_multiplier()) - end_price = mid_price * (1 + 2 * grid_range * self.tp_multiplier()) - buy_grid_action = self.create_grid_executor( - trading_pair=trading_pair, - side=TradeType.BUY, - start_price=start_price, - end_price=end_price, - grid_value=grid_value, - is_unfavorable=False - ) - if buy_grid_action is not None: - actions.append(buy_grid_action) - return actions - - def create_grid_executor( - self, - trading_pair: str, - side: TradeType, - start_price: Decimal, - end_price: Decimal, - grid_value: Decimal, - is_unfavorable: bool = False - ) -> CreateExecutorAction: - """Creates a grid executor with dynamic sizing and range adjustments""" - # Get trading rules and minimum notional - trading_rules = self.market_data_provider.get_trading_rules(self.config.connector_name, trading_pair) - min_notional = max( - self.config.min_order_amount, - trading_rules.min_notional_size if trading_rules else Decimal("5.0") - ) - # Add safety margin and check if grid value is sufficient - min_grid_value = min_notional * Decimal("5") # Ensure room for at least 5 levels - if grid_value < min_grid_value: - self.logger().info( - f"Grid value {grid_value} is too small for {trading_pair}. 
" - f"Minimum required for viable grid: {min_grid_value}" - ) - return None # Skip grid creation if value is too small - - # Select order frequency based on grid favorability - order_frequency = ( - self.config.unfavorable_order_frequency if is_unfavorable - else self.config.favorable_order_frequency - ) - # Calculate limit price to be more aggressive than grid boundaries - if side == TradeType.BUY: - # For buys, limit price should be lower than start price - limit_price = start_price * (1 - self.config.limit_price_spread) - else: - # For sells, limit price should be higher than end price - limit_price = end_price * (1 + self.config.limit_price_spread) - # Create the executor action - action = CreateExecutorAction( - controller_id=self.config.id, - executor_config=GridExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_name, - trading_pair=trading_pair, - side=side, - start_price=start_price, - end_price=end_price, - limit_price=limit_price, - leverage=self.config.leverage, - total_amount_quote=grid_value, - safe_extra_spread=self.config.safe_extra_spread, - min_spread_between_orders=self.config.min_spread_between_orders, - min_order_amount_quote=self.config.min_order_amount, - max_open_orders=self.config.max_open_orders, - order_frequency=order_frequency, # Use dynamic order frequency - max_orders_per_batch=self.config.max_orders_per_batch, - activation_bounds=self.config.activation_bounds, - keep_position=True, # Always keep position for potential reversal - coerce_tp_to_step=True, - triple_barrier_config=TripleBarrierConfig( - take_profit=self.config.grid_tp_multiplier, - open_order_type=OrderType.LIMIT_MAKER, - take_profit_order_type=OrderType.LIMIT_MAKER, - stop_loss=None, - time_limit=None, - trailing_stop=None, - ))) - # Track unfavorable grid configs - if is_unfavorable: - self.unfavorable_grid_ids.add(action.executor_config.id) - self.logger().info( - f"Created unfavorable grid for {trading_pair} - " - f"Side: {side.name}, Value: ${grid_value:,.2f}, " - f"Order Frequency: {order_frequency}s" - ) - else: - self.logger().info( - f"Created favorable grid for {trading_pair} - " - f"Side: {side.name}, Value: ${grid_value:,.2f}, " - f"Order Frequency: {order_frequency}s" - ) - - return action - - def get_mid_price(self, trading_pair: str) -> Decimal: - return self.market_data_provider.get_price_by_type(self.config.connector_name, trading_pair, PriceType.MidPrice) diff --git a/bots/controllers/generic/stat_arb.py b/bots/controllers/generic/stat_arb.py deleted file mode 100644 index 527db07a34..0000000000 --- a/bots/controllers/generic/stat_arb.py +++ /dev/null @@ -1,475 +0,0 @@ -from decimal import Decimal -from typing import List - -import numpy as np -from sklearn.linear_model import LinearRegression - -from hummingbot.core.data_type.common import OrderType, PositionAction, PositionMode, PriceType, TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.data_types import ConnectorPair, PositionSummary -from hummingbot.strategy_v2.executors.order_executor.data_types import ExecutionStrategy, OrderExecutorConfig -from hummingbot.strategy_v2.executors.position_executor.data_types import PositionExecutorConfig, TripleBarrierConfig -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, ExecutorAction, StopExecutorAction - - -class StatArbConfig(ControllerConfigBase): - 
""" - Configuration for a statistical arbitrage controller that trades two cointegrated assets. - """ - controller_type: str = "generic" - controller_name: str = "stat_arb" - candles_config: List[CandlesConfig] = [] - connector_pair_dominant: ConnectorPair = ConnectorPair(connector_name="binance_perpetual", trading_pair="SOL-USDT") - connector_pair_hedge: ConnectorPair = ConnectorPair(connector_name="binance_perpetual", trading_pair="POPCAT-USDT") - interval: str = "1m" - lookback_period: int = 300 - entry_threshold: Decimal = Decimal("2.0") - take_profit: Decimal = Decimal("0.0008") - tp_global: Decimal = Decimal("0.01") - sl_global: Decimal = Decimal("0.05") - min_amount_quote: Decimal = Decimal("10") - quoter_spread: Decimal = Decimal("0.0001") - quoter_cooldown: int = 30 - quoter_refresh: int = 10 - max_orders_placed_per_side: int = 2 - max_orders_filled_per_side: int = 2 - max_position_deviation: Decimal = Decimal("0.1") - pos_hedge_ratio: Decimal = Decimal("1.0") - leverage: int = 20 - position_mode: PositionMode = PositionMode.HEDGE - - @property - def triple_barrier_config(self) -> TripleBarrierConfig: - return TripleBarrierConfig( - take_profit=self.take_profit, - open_order_type=OrderType.LIMIT_MAKER, - take_profit_order_type=OrderType.LIMIT_MAKER, - ) - - def update_markets(self, markets: dict) -> dict: - """Update markets dictionary with both trading pairs""" - # Add dominant pair - if self.connector_pair_dominant.connector_name not in markets: - markets[self.connector_pair_dominant.connector_name] = set() - markets[self.connector_pair_dominant.connector_name].add(self.connector_pair_dominant.trading_pair) - - # Add hedge pair - if self.connector_pair_hedge.connector_name not in markets: - markets[self.connector_pair_hedge.connector_name] = set() - markets[self.connector_pair_hedge.connector_name].add(self.connector_pair_hedge.trading_pair) - - return markets - - -class StatArb(ControllerBase): - """ - Statistical arbitrage controller that trades two cointegrated assets. 
- """ - - def __init__(self, config: StatArbConfig, *args, **kwargs): - super().__init__(config, *args, **kwargs) - self.config = config - self.theoretical_dominant_quote = self.config.total_amount_quote * (1 / (1 + self.config.pos_hedge_ratio)) - self.theoretical_hedge_quote = self.config.total_amount_quote * (self.config.pos_hedge_ratio / (1 + self.config.pos_hedge_ratio)) - - # Initialize processed data dictionary - self.processed_data = { - "dominant_price": None, - "hedge_price": None, - "spread": None, - "z_score": None, - "hedge_ratio": None, - "position_dominant": Decimal("0"), - "position_hedge": Decimal("0"), - "active_orders_dominant": [], - "active_orders_hedge": [], - "pair_pnl": Decimal("0"), - "signal": 0 # 0: no signal, 1: long dominant/short hedge, -1: short dominant/long hedge - } - - # Setup candles config if not already set - if len(self.config.candles_config) == 0: - max_records = self.config.lookback_period + 20 # extra records for safety - self.max_records = max_records - self.config.candles_config = [ - CandlesConfig( - connector=self.config.connector_pair_dominant.connector_name, - trading_pair=self.config.connector_pair_dominant.trading_pair, - interval=self.config.interval, - max_records=max_records - ), - CandlesConfig( - connector=self.config.connector_pair_hedge.connector_name, - trading_pair=self.config.connector_pair_hedge.trading_pair, - interval=self.config.interval, - max_records=max_records - ) - ] - if "_perpetual" in self.config.connector_pair_dominant.connector_name: - connector = self.market_data_provider.get_connector(self.config.connector_pair_dominant.connector_name) - connector.set_position_mode(self.config.position_mode) - connector.set_leverage(self.config.connector_pair_dominant.trading_pair, self.config.leverage) - if "_perpetual" in self.config.connector_pair_hedge.connector_name: - connector = self.market_data_provider.get_connector(self.config.connector_pair_hedge.connector_name) - connector.set_position_mode(self.config.position_mode) - connector.set_leverage(self.config.connector_pair_hedge.trading_pair, self.config.leverage) - - def determine_executor_actions(self) -> List[ExecutorAction]: - """ - The execution logic for the statistical arbitrage strategy. - Market Data Conditions: Signal is generated based on the z-score of the spread between the two assets. - If signal == 1 --> long dominant/short hedge - If signal == -1 --> short dominant/long hedge - Execution Conditions: If the signal is generated add position executors to quote from the dominant and hedge markets. - We compare the current position with the theoretical position for the dominant and hedge assets. - If the current position + the active placed amount is greater than the theoretical position, can't place more orders. - If the imbalance scaled pct is greater than the threshold, we avoid placing orders in the market passed on filtered_connector_pair. - If the pnl of total position is greater than the take profit or lower than the stop loss, we close the position. 
- """ - actions: List[ExecutorAction] = [] - # Check global take profit and stop loss - if self.processed_data["pair_pnl_pct"] > self.config.tp_global or self.processed_data["pair_pnl_pct"] < -self.config.sl_global: - # Close all positions - for position in self.positions_held: - actions.extend(self.get_executors_to_reduce_position(position)) - return actions - # Check the signal - elif self.processed_data["signal"] != 0: - actions.extend(self.get_executors_to_quote()) - actions.extend(self.get_executors_to_reduce_position_on_opposite_signal()) - - # Get the executors to keep position after a cooldown is reached - actions.extend(self.get_executors_to_keep_position()) - actions.extend(self.get_executors_to_refresh()) - - return actions - - def get_executors_to_reduce_position_on_opposite_signal(self) -> List[ExecutorAction]: - if self.processed_data["signal"] == 1: - dominant_side, hedge_side = TradeType.SELL, TradeType.BUY - elif self.processed_data["signal"] == -1: - dominant_side, hedge_side = TradeType.BUY, TradeType.SELL - else: - return [] - # Get executors to stop - dominant_active_executors_to_stop = self.filter_executors(self.executors_info, filter_func=lambda e: e.connector_name == self.config.connector_pair_dominant.connector_name and e.trading_pair == self.config.connector_pair_dominant.trading_pair and e.side == dominant_side) - hedge_active_executors_to_stop = self.filter_executors(self.executors_info, filter_func=lambda e: e.connector_name == self.config.connector_pair_hedge.connector_name and e.trading_pair == self.config.connector_pair_hedge.trading_pair and e.side == hedge_side) - stop_actions = [StopExecutorAction(controller_id=self.config.id, executor_id=executor.id, keep_position=False) for executor in dominant_active_executors_to_stop + hedge_active_executors_to_stop] - - # Get order executors to reduce positions - reduce_actions: List[ExecutorAction] = [] - for position in self.positions_held: - if position.connector_name == self.config.connector_pair_dominant.connector_name and position.trading_pair == self.config.connector_pair_dominant.trading_pair and position.side == dominant_side: - reduce_actions.extend(self.get_executors_to_reduce_position(position)) - elif position.connector_name == self.config.connector_pair_hedge.connector_name and position.trading_pair == self.config.connector_pair_hedge.trading_pair and position.side == hedge_side: - reduce_actions.extend(self.get_executors_to_reduce_position(position)) - return stop_actions + reduce_actions - - def get_executors_to_keep_position(self) -> List[ExecutorAction]: - stop_actions: List[ExecutorAction] = [] - for executor in self.processed_data["executors_dominant_filled"] + self.processed_data["executors_hedge_filled"]: - if self.market_data_provider.time() - executor.timestamp >= self.config.quoter_cooldown: - # Create a new executor to keep the position - stop_actions.append(StopExecutorAction(controller_id=self.config.id, executor_id=executor.id, keep_position=True)) - return stop_actions - - def get_executors_to_refresh(self) -> List[ExecutorAction]: - refresh_actions: List[ExecutorAction] = [] - for executor in self.processed_data["executors_dominant_placed"] + self.processed_data["executors_hedge_placed"]: - if self.market_data_provider.time() - executor.timestamp >= self.config.quoter_refresh: - # Create a new executor to refresh the position - refresh_actions.append(StopExecutorAction(controller_id=self.config.id, executor_id=executor.id, keep_position=False)) - return refresh_actions - - def 
get_executors_to_quote(self) -> List[ExecutorAction]: - """ - Get Order Executor to quote from the dominant and hedge markets. - """ - actions: List[ExecutorAction] = [] - trade_type_dominant = TradeType.BUY if self.processed_data["signal"] == 1 else TradeType.SELL - trade_type_hedge = TradeType.SELL if self.processed_data["signal"] == 1 else TradeType.BUY - - # Analyze dominant active orders, max deviation and imbalance to create a new executor - if self.processed_data["dominant_gap"] > Decimal("0") and \ - self.processed_data["filter_connector_pair"] != self.config.connector_pair_dominant and \ - len(self.processed_data["executors_dominant_placed"]) < self.config.max_orders_placed_per_side and \ - len(self.processed_data["executors_dominant_filled"]) < self.config.max_orders_filled_per_side: - # Create Position Executor for dominant asset - if trade_type_dominant == TradeType.BUY: - price = self.processed_data["min_price_dominant"] * (1 - self.config.quoter_spread) - else: - price = self.processed_data["max_price_dominant"] * (1 + self.config.quoter_spread) - dominant_executor_config = PositionExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_pair_dominant.connector_name, - trading_pair=self.config.connector_pair_dominant.trading_pair, - side=trade_type_dominant, - entry_price=price, - amount=self.config.min_amount_quote / self.processed_data["dominant_price"], - triple_barrier_config=self.config.triple_barrier_config, - leverage=self.config.leverage, - ) - actions.append(CreateExecutorAction(controller_id=self.config.id, executor_config=dominant_executor_config)) - - # Analyze hedge active orders, max deviation and imbalance to create a new executor - if self.processed_data["hedge_gap"] > Decimal("0") and \ - self.processed_data["filter_connector_pair"] != self.config.connector_pair_hedge and \ - len(self.processed_data["executors_hedge_placed"]) < self.config.max_orders_placed_per_side and \ - len(self.processed_data["executors_hedge_filled"]) < self.config.max_orders_filled_per_side: - # Create Position Executor for hedge asset - if trade_type_hedge == TradeType.BUY: - price = self.processed_data["min_price_hedge"] * (1 - self.config.quoter_spread) - else: - price = self.processed_data["max_price_hedge"] * (1 + self.config.quoter_spread) - hedge_executor_config = PositionExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_pair_hedge.connector_name, - trading_pair=self.config.connector_pair_hedge.trading_pair, - side=trade_type_hedge, - entry_price=price, - amount=self.config.min_amount_quote / self.processed_data["hedge_price"], - triple_barrier_config=self.config.triple_barrier_config, - leverage=self.config.leverage, - ) - actions.append(CreateExecutorAction(controller_id=self.config.id, executor_config=hedge_executor_config)) - return actions - - def get_executors_to_reduce_position(self, position: PositionSummary) -> List[ExecutorAction]: - """ - Get Order Executor to reduce position. 
- """ - if position.amount > Decimal("0"): - # Close position - config = OrderExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=position.connector_name, - trading_pair=position.trading_pair, - side=TradeType.BUY if position.side == TradeType.SELL else TradeType.SELL, - amount=position.amount, - position_action=PositionAction.CLOSE, - execution_strategy=ExecutionStrategy.MARKET, - leverage=self.config.leverage, - ) - return [CreateExecutorAction(controller_id=self.config.id, executor_config=config)] - return [] - - async def update_processed_data(self): - """ - Update processed data with the latest market information and statistical calculations - needed for the statistical arbitrage strategy. - """ - # Stat arb analysis - spread, z_score = self.get_spread_and_z_score() - - # Generate trading signal based on z-score - entry_threshold = float(self.config.entry_threshold) - if z_score > entry_threshold: - # Spread is too high, expect it to revert: long dominant, short hedge - signal = 1 - dominant_side, hedge_side = TradeType.BUY, TradeType.SELL - elif z_score < -entry_threshold: - # Spread is too low, expect it to revert: short dominant, long hedge - signal = -1 - dominant_side, hedge_side = TradeType.SELL, TradeType.BUY - else: - # No signal - signal = 0 - dominant_side, hedge_side = None, None - - # Current prices - dominant_price, hedge_price = self.get_pairs_prices() - - # Get current positions stats by signal - positions_dominant = next((position for position in self.positions_held if position.connector_name == self.config.connector_pair_dominant.connector_name and position.trading_pair == self.config.connector_pair_dominant.trading_pair and (position.side == dominant_side or dominant_side is None)), None) - positions_hedge = next((position for position in self.positions_held if position.connector_name == self.config.connector_pair_hedge.connector_name and position.trading_pair == self.config.connector_pair_hedge.trading_pair and (position.side == hedge_side or hedge_side is None)), None) - # Get position stats - position_dominant_quote = positions_dominant.amount_quote if positions_dominant else Decimal("0") - position_hedge_quote = positions_hedge.amount_quote if positions_hedge else Decimal("0") - position_dominant_pnl_quote = positions_dominant.global_pnl_quote if positions_dominant else Decimal("0") - position_hedge_pnl_quote = positions_hedge.global_pnl_quote if positions_hedge else Decimal("0") - pair_pnl_pct = (position_dominant_pnl_quote + position_hedge_pnl_quote) / (position_dominant_quote + position_hedge_quote) if (position_dominant_quote + position_hedge_quote) != 0 else Decimal("0") - # Get active executors - executors_dominant_placed, executors_dominant_filled = self.get_executors_dominant() - executors_hedge_placed, executors_hedge_filled = self.get_executors_hedge() - min_price_dominant = Decimal(str(min([executor.config.entry_price for executor in executors_dominant_placed]))) if executors_dominant_placed else None - max_price_dominant = Decimal(str(max([executor.config.entry_price for executor in executors_dominant_placed]))) if executors_dominant_placed else None - min_price_hedge = Decimal(str(min([executor.config.entry_price for executor in executors_hedge_placed]))) if executors_hedge_placed else None - max_price_hedge = Decimal(str(max([executor.config.entry_price for executor in executors_hedge_placed]))) if executors_hedge_placed else None - - active_amount_dominant = Decimal(str(sum([executor.filled_amount_quote for executor in 
executors_dominant_filled]))) - active_amount_hedge = Decimal(str(sum([executor.filled_amount_quote for executor in executors_hedge_filled]))) - - # Compute imbalance based on the hedge ratio - dominant_gap = self.theoretical_dominant_quote - position_dominant_quote - active_amount_dominant - hedge_gap = self.theoretical_hedge_quote - position_hedge_quote - active_amount_hedge - imbalance = position_dominant_quote - position_hedge_quote - imbalance_scaled = position_dominant_quote - position_hedge_quote * self.config.pos_hedge_ratio - imbalance_scaled_pct = imbalance_scaled / position_dominant_quote if position_dominant_quote != Decimal("0") else Decimal("0") - filter_connector_pair = None - if imbalance_scaled_pct > self.config.max_position_deviation: - # Avoid placing orders in the dominant market - filter_connector_pair = self.config.connector_pair_dominant - elif imbalance_scaled_pct < -self.config.max_position_deviation: - # Avoid placing orders in the hedge market - filter_connector_pair = self.config.connector_pair_hedge - - # Update processed data - self.processed_data.update({ - "dominant_price": Decimal(str(dominant_price)), - "hedge_price": Decimal(str(hedge_price)), - "spread": Decimal(str(spread)), - "z_score": Decimal(str(z_score)), - "dominant_gap": Decimal(str(dominant_gap)), - "hedge_gap": Decimal(str(hedge_gap)), - "position_dominant_quote": position_dominant_quote, - "position_hedge_quote": position_hedge_quote, - "active_amount_dominant": active_amount_dominant, - "active_amount_hedge": active_amount_hedge, - "signal": signal, - # Store full dataframes for reference - "imbalance": Decimal(str(imbalance)), - "imbalance_scaled_pct": Decimal(str(imbalance_scaled_pct)), - "filter_connector_pair": filter_connector_pair, - "min_price_dominant": min_price_dominant if min_price_dominant is not None else Decimal(str(dominant_price)), - "max_price_dominant": max_price_dominant if max_price_dominant is not None else Decimal(str(dominant_price)), - "min_price_hedge": min_price_hedge if min_price_hedge is not None else Decimal(str(hedge_price)), - "max_price_hedge": max_price_hedge if max_price_hedge is not None else Decimal(str(hedge_price)), - "executors_dominant_filled": executors_dominant_filled, - "executors_hedge_filled": executors_hedge_filled, - "executors_dominant_placed": executors_dominant_placed, - "executors_hedge_placed": executors_hedge_placed, - "pair_pnl_pct": pair_pnl_pct, - }) - - def get_spread_and_z_score(self): - # Fetch candle data for both assets - dominant_df = self.market_data_provider.get_candles_df( - connector_name=self.config.connector_pair_dominant.connector_name, - trading_pair=self.config.connector_pair_dominant.trading_pair, - interval=self.config.interval, - max_records=self.max_records - ) - - hedge_df = self.market_data_provider.get_candles_df( - connector_name=self.config.connector_pair_hedge.connector_name, - trading_pair=self.config.connector_pair_hedge.trading_pair, - interval=self.config.interval, - max_records=self.max_records - ) - - if dominant_df.empty or hedge_df.empty: - self.logger().warning("Not enough candle data available for statistical analysis") - return - - # Extract close prices - dominant_prices = dominant_df['close'].values - hedge_prices = hedge_df['close'].values - - # Ensure we have enough data and both series have the same length - min_length = min(len(dominant_prices), len(hedge_prices)) - if min_length < self.config.lookback_period: - self.logger().warning( - f"Not enough data points for analysis. 
Required: {self.config.lookback_period}, Available: {min_length}") - return - - # Use the most recent data points - dominant_prices = dominant_prices[-self.config.lookback_period:] - hedge_prices = hedge_prices[-self.config.lookback_period:] - - # Convert to numpy arrays - dominant_prices_np = np.array(dominant_prices, dtype=float) - hedge_prices_np = np.array(hedge_prices, dtype=float) - - # Calculate percentage returns - dominant_pct_change = np.diff(dominant_prices_np) / dominant_prices_np[:-1] - hedge_pct_change = np.diff(hedge_prices_np) / hedge_prices_np[:-1] - - # Convert to cumulative returns - dominant_cum_returns = np.cumprod(dominant_pct_change + 1) - hedge_cum_returns = np.cumprod(hedge_pct_change + 1) - - # Normalize to start at 1 - dominant_cum_returns = dominant_cum_returns / dominant_cum_returns[0] if len(dominant_cum_returns) > 0 else np.array([1.0]) - hedge_cum_returns = hedge_cum_returns / hedge_cum_returns[0] if len(hedge_cum_returns) > 0 else np.array([1.0]) - - # Perform linear regression - dominant_cum_returns_reshaped = dominant_cum_returns.reshape(-1, 1) - reg = LinearRegression().fit(dominant_cum_returns_reshaped, hedge_cum_returns) - alpha = reg.intercept_ - beta = reg.coef_[0] - self.processed_data.update({ - "alpha": alpha, - "beta": beta, - }) - - # Calculate spread as percentage difference from predicted value - y_pred = alpha + beta * dominant_cum_returns - spread_pct = (hedge_cum_returns - y_pred) / y_pred * 100 - - # Calculate z-score - mean_spread = np.mean(spread_pct) - std_spread = np.std(spread_pct) - if std_spread == 0: - self.logger().warning("Standard deviation of spread is zero, cannot calculate z-score") - return - - current_spread = spread_pct[-1] - current_z_score = (current_spread - mean_spread) / std_spread - - return current_spread, current_z_score - - def get_pairs_prices(self): - current_dominant_price = self.market_data_provider.get_price_by_type( - connector_name=self.config.connector_pair_dominant.connector_name, - trading_pair=self.config.connector_pair_dominant.trading_pair, price_type=PriceType.MidPrice) - - current_hedge_price = self.market_data_provider.get_price_by_type( - connector_name=self.config.connector_pair_hedge.connector_name, - trading_pair=self.config.connector_pair_hedge.trading_pair, price_type=PriceType.MidPrice) - return current_dominant_price, current_hedge_price - - def get_executors_dominant(self): - active_executors_dominant_placed = self.filter_executors( - self.executors_info, - filter_func=lambda e: e.connector_name == self.config.connector_pair_dominant.connector_name and e.trading_pair == self.config.connector_pair_dominant.trading_pair and e.is_active and not e.is_trading and e.type == "position_executor" - ) - active_executors_dominant_filled = self.filter_executors( - self.executors_info, - filter_func=lambda e: e.connector_name == self.config.connector_pair_dominant.connector_name and e.trading_pair == self.config.connector_pair_dominant.trading_pair and e.is_active and e.is_trading and e.type == "position_executor" - ) - return active_executors_dominant_placed, active_executors_dominant_filled - - def get_executors_hedge(self): - active_executors_hedge_placed = self.filter_executors( - self.executors_info, - filter_func=lambda e: e.connector_name == self.config.connector_pair_hedge.connector_name and e.trading_pair == self.config.connector_pair_hedge.trading_pair and e.is_active and not e.is_trading and e.type == "position_executor" - ) - active_executors_hedge_filled = self.filter_executors( - 
self.executors_info, - filter_func=lambda e: e.connector_name == self.config.connector_pair_hedge.connector_name and e.trading_pair == self.config.connector_pair_hedge.trading_pair and e.is_active and e.is_trading and e.type == "position_executor" - ) - return active_executors_hedge_placed, active_executors_hedge_filled - - def to_format_status(self) -> List[str]: - """ - Format the status of the controller for display. - """ - status_lines = [] - status_lines.append(f""" -Dominant Pair: {self.config.connector_pair_dominant} | Hedge Pair: {self.config.connector_pair_hedge} | -Timeframe: {self.config.interval} | Lookback Period: {self.config.lookback_period} | Entry Threshold: {self.config.entry_threshold} - -Positions targets: -Theoretical Dominant : {self.theoretical_dominant_quote} | Theoretical Hedge: {self.theoretical_hedge_quote} | Position Hedge Ratio: {self.config.pos_hedge_ratio} -Position Dominant : {self.processed_data['position_dominant_quote']:.2f} | Position Hedge: {self.processed_data['position_hedge_quote']:.2f} | Imbalance: {self.processed_data['imbalance']:.2f} | Imbalance Scaled: {self.processed_data['imbalance_scaled_pct']:.2f} % - -Current Executors: -Active Orders Dominant : {len(self.processed_data['executors_dominant_placed'])} | Active Orders Hedge : {len(self.processed_data['executors_hedge_placed'])} | -Active Orders Dominant Filled: {len(self.processed_data['executors_dominant_filled'])} | Active Orders Hedge Filled: {len(self.processed_data['executors_hedge_filled'])} - -Signal: {self.processed_data['signal']:.2f} | Z-Score: {self.processed_data['z_score']:.2f} | Spread: {self.processed_data['spread']:.2f} -Alpha : {self.processed_data['alpha']:.2f} | Beta: {self.processed_data['beta']:.2f} -Pair PnL PCT: {self.processed_data['pair_pnl_pct'] * 100:.2f} % -""") - return status_lines diff --git a/bots/controllers/generic/xemm_multiple_levels.py b/bots/controllers/generic/xemm_multiple_levels.py deleted file mode 100644 index 9983e118dd..0000000000 --- a/bots/controllers/generic/xemm_multiple_levels.py +++ /dev/null @@ -1,143 +0,0 @@ -import time -from decimal import Decimal -from typing import Dict, List, Set - -import pandas as pd -from pydantic import Field, field_validator - -from hummingbot.client.ui.interface_utils import format_df_for_printout -from hummingbot.core.data_type.common import PriceType, TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.controller_base import ControllerBase, ControllerConfigBase -from hummingbot.strategy_v2.executors.data_types import ConnectorPair -from hummingbot.strategy_v2.executors.xemm_executor.data_types import XEMMExecutorConfig -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, ExecutorAction - - -class XEMMMultipleLevelsConfig(ControllerConfigBase): - controller_name: str = "xemm_multiple_levels" - candles_config: List[CandlesConfig] = [] - maker_connector: str = Field( - default="mexc", - json_schema_extra={"prompt": "Enter the maker connector: ", "prompt_on_new": True}) - maker_trading_pair: str = Field( - default="PEPE-USDT", - json_schema_extra={"prompt": "Enter the maker trading pair: ", "prompt_on_new": True}) - taker_connector: str = Field( - default="binance", - json_schema_extra={"prompt": "Enter the taker connector: ", "prompt_on_new": True}) - taker_trading_pair: str = Field( - default="PEPE-USDT", - json_schema_extra={"prompt": "Enter the taker trading pair: ", "prompt_on_new": True}) - 
buy_levels_targets_amount: List[List[Decimal]] = Field( - default="0.003,10-0.006,20-0.009,30", - json_schema_extra={ - "prompt": "Enter the buy levels targets with the following structure: (target_profitability1,amount1-target_profitability2,amount2): ", - "prompt_on_new": True}) - sell_levels_targets_amount: List[List[Decimal]] = Field( - default="0.003,10-0.006,20-0.009,30", - json_schema_extra={ - "prompt": "Enter the sell levels targets with the following structure: (target_profitability1,amount1-target_profitability2,amount2): ", - "prompt_on_new": True}) - min_profitability: Decimal = Field( - default=0.003, - json_schema_extra={"prompt": "Enter the minimum profitability: ", "prompt_on_new": True}) - max_profitability: Decimal = Field( - default=0.01, - json_schema_extra={"prompt": "Enter the maximum profitability: ", "prompt_on_new": True}) - max_executors_imbalance: int = Field( - default=1, - json_schema_extra={"prompt": "Enter the maximum executors imbalance: ", "prompt_on_new": True}) - - @field_validator("buy_levels_targets_amount", "sell_levels_targets_amount", mode="before") - @classmethod - def validate_levels_targets_amount(cls, v): - if isinstance(v, str): - v = [list(map(Decimal, x.split(","))) for x in v.split("-")] - return v - - def update_markets(self, markets: Dict[str, Set[str]]) -> Dict[str, Set[str]]: - if self.maker_connector not in markets: - markets[self.maker_connector] = set() - markets[self.maker_connector].add(self.maker_trading_pair) - if self.taker_connector not in markets: - markets[self.taker_connector] = set() - markets[self.taker_connector].add(self.taker_trading_pair) - return markets - - -class XEMMMultipleLevels(ControllerBase): - - def __init__(self, config: XEMMMultipleLevelsConfig, *args, **kwargs): - self.config = config - self.buy_levels_targets_amount = config.buy_levels_targets_amount - self.sell_levels_targets_amount = config.sell_levels_targets_amount - super().__init__(config, *args, **kwargs) - - async def update_processed_data(self): - pass - - def determine_executor_actions(self) -> List[ExecutorAction]: - executor_actions = [] - mid_price = self.market_data_provider.get_price_by_type(self.config.maker_connector, self.config.maker_trading_pair, PriceType.MidPrice) - active_buy_executors = self.filter_executors( - executors=self.executors_info, - filter_func=lambda e: not e.is_done and e.config.maker_side == TradeType.BUY - ) - active_sell_executors = self.filter_executors( - executors=self.executors_info, - filter_func=lambda e: not e.is_done and e.config.maker_side == TradeType.SELL - ) - stopped_buy_executors = self.filter_executors( - executors=self.executors_info, - filter_func=lambda e: e.is_done and e.config.maker_side == TradeType.BUY and e.filled_amount_quote != 0 - ) - stopped_sell_executors = self.filter_executors( - executors=self.executors_info, - filter_func=lambda e: e.is_done and e.config.maker_side == TradeType.SELL and e.filled_amount_quote != 0 - ) - imbalance = len(stopped_buy_executors) - len(stopped_sell_executors) - for target_profitability, amount in self.buy_levels_targets_amount: - active_buy_executors_target = [e.config.target_profitability == target_profitability for e in active_buy_executors] - - if len(active_buy_executors_target) == 0 and imbalance < self.config.max_executors_imbalance: - min_profitability = target_profitability - self.config.min_profitability - max_profitability = target_profitability + self.config.max_profitability - config = XEMMExecutorConfig( - controller_id=self.config.id, - 
timestamp=self.market_data_provider.time(), - buying_market=ConnectorPair(connector_name=self.config.maker_connector, - trading_pair=self.config.maker_trading_pair), - selling_market=ConnectorPair(connector_name=self.config.taker_connector, - trading_pair=self.config.taker_trading_pair), - maker_side=TradeType.BUY, - order_amount=amount / mid_price, - min_profitability=min_profitability, - target_profitability=target_profitability, - max_profitability=max_profitability - ) - executor_actions.append(CreateExecutorAction(executor_config=config, controller_id=self.config.id)) - for target_profitability, amount in self.sell_levels_targets_amount: - active_sell_executors_target = [e.config.target_profitability == target_profitability for e in active_sell_executors] - if len(active_sell_executors_target) == 0 and imbalance > -self.config.max_executors_imbalance: - min_profitability = target_profitability - self.config.min_profitability - max_profitability = target_profitability + self.config.max_profitability - config = XEMMExecutorConfig( - controller_id=self.config.id, - timestamp=time.time(), - buying_market=ConnectorPair(connector_name=self.config.taker_connector, - trading_pair=self.config.taker_trading_pair), - selling_market=ConnectorPair(connector_name=self.config.maker_connector, - trading_pair=self.config.maker_trading_pair), - maker_side=TradeType.SELL, - order_amount=amount / mid_price, - min_profitability=min_profitability, - target_profitability=target_profitability, - max_profitability=max_profitability - ) - executor_actions.append(CreateExecutorAction(executor_config=config, controller_id=self.config.id)) - return executor_actions - - def to_format_status(self) -> List[str]: - all_executors_custom_info = pd.DataFrame(e.custom_info for e in self.executors_info) - return [format_df_for_printout(all_executors_custom_info, table_format="psql", )] diff --git a/bots/controllers/market_making/__init__.py b/bots/controllers/market_making/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/controllers/market_making/dman_maker_v2.py b/bots/controllers/market_making/dman_maker_v2.py deleted file mode 100644 index 2002fddd65..0000000000 --- a/bots/controllers/market_making/dman_maker_v2.py +++ /dev/null @@ -1,116 +0,0 @@ -from decimal import Decimal -from typing import List, Optional - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator - -from hummingbot.core.data_type.common import TradeType -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.market_making_controller_base import ( - MarketMakingControllerBase, - MarketMakingControllerConfigBase, -) -from hummingbot.strategy_v2.executors.dca_executor.data_types import DCAExecutorConfig, DCAMode -from hummingbot.strategy_v2.models.executor_actions import ExecutorAction, StopExecutorAction - - -class DManMakerV2Config(MarketMakingControllerConfigBase): - """ - Configuration required to run the D-Man Maker V2 strategy. 
- """ - controller_name: str = "dman_maker_v2" - candles_config: List[CandlesConfig] = [] - - # DCA configuration - dca_spreads: List[Decimal] = Field( - default="0.01,0.02,0.04,0.08", - json_schema_extra={"prompt": "Enter a comma-separated list of spreads for each DCA level: ", "prompt_on_new": True}) - dca_amounts: List[Decimal] = Field( - default="0.1,0.2,0.4,0.8", - json_schema_extra={"prompt": "Enter a comma-separated list of amounts for each DCA level: ", "prompt_on_new": True}) - top_executor_refresh_time: Optional[float] = Field(default=None, json_schema_extra={"is_updatable": True}) - executor_activation_bounds: Optional[List[Decimal]] = Field(default=None, json_schema_extra={"is_updatable": True}) - - @field_validator("executor_activation_bounds", mode="before") - @classmethod - def parse_activation_bounds(cls, v): - if isinstance(v, list): - return [Decimal(val) for val in v] - elif isinstance(v, str): - if v == "": - return None - return [Decimal(val) for val in v.split(",")] - return v - - @field_validator('dca_spreads', mode="before") - @classmethod - def parse_dca_spreads(cls, v): - if v is None: - return [] - if isinstance(v, str): - if v == "": - return [] - return [float(x.strip()) for x in v.split(',')] - return v - - @field_validator('dca_amounts', mode="before") - @classmethod - def parse_and_validate_dca_amounts(cls, v, validation_info): - if v is None or v == "": - return [1 for _ in validation_info.data['dca_spreads']] - if isinstance(v, str): - return [float(x.strip()) for x in v.split(',')] - elif isinstance(v, list) and len(v) != len(validation_info.data['dca_spreads']): - raise ValueError( - f"The number of dca amounts must match the number of {validation_info.data['dca_spreads']}.") - return v - - -class DManMakerV2(MarketMakingControllerBase): - def __init__(self, config: DManMakerV2Config, *args, **kwargs): - super().__init__(config, *args, **kwargs) - self.config = config - self.dca_amounts_pct = [Decimal(amount) / sum(self.config.dca_amounts) for amount in self.config.dca_amounts] - self.spreads = self.config.dca_spreads - - def first_level_refresh_condition(self, executor): - if self.config.top_executor_refresh_time is not None: - if self.get_level_from_level_id(executor.custom_info["level_id"]) == 0: - return self.market_data_provider.time() - executor.timestamp > self.config.top_executor_refresh_time - return False - - def order_level_refresh_condition(self, executor): - return self.market_data_provider.time() - executor.timestamp > self.config.executor_refresh_time - - def executors_to_refresh(self) -> List[ExecutorAction]: - executors_to_refresh = self.filter_executors( - executors=self.executors_info, - filter_func=lambda x: not x.is_trading and x.is_active and (self.order_level_refresh_condition(x) or self.first_level_refresh_condition(x))) - return [StopExecutorAction( - controller_id=self.config.id, - executor_id=executor.id) for executor in executors_to_refresh] - - def get_executor_config(self, level_id: str, price: Decimal, amount: Decimal): - trade_type = self.get_trade_type_from_level_id(level_id) - if trade_type == TradeType.BUY: - prices = [price * (1 - spread) for spread in self.spreads] - else: - prices = [price * (1 + spread) for spread in self.spreads] - amounts = [amount * pct for pct in self.dca_amounts_pct] - amounts_quote = [amount * price for amount, price in zip(amounts, prices)] - return DCAExecutorConfig( - timestamp=self.market_data_provider.time(), - connector_name=self.config.connector_name, - 
trading_pair=self.config.trading_pair, - mode=DCAMode.MAKER, - side=trade_type, - prices=prices, - amounts_quote=amounts_quote, - level_id=level_id, - time_limit=self.config.time_limit, - stop_loss=self.config.stop_loss, - take_profit=self.config.take_profit, - trailing_stop=self.config.trailing_stop, - activation_bounds=self.config.executor_activation_bounds, - leverage=self.config.leverage, - ) diff --git a/bots/controllers/market_making/pmm_dynamic.py b/bots/controllers/market_making/pmm_dynamic.py deleted file mode 100644 index 612f7c9b49..0000000000 --- a/bots/controllers/market_making/pmm_dynamic.py +++ /dev/null @@ -1,125 +0,0 @@ -from decimal import Decimal -from typing import List - -import pandas_ta as ta # noqa: F401 -from pydantic import Field, field_validator -from pydantic_core.core_schema import ValidationInfo - -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.market_making_controller_base import ( - MarketMakingControllerBase, - MarketMakingControllerConfigBase, -) -from hummingbot.strategy_v2.executors.position_executor.data_types import PositionExecutorConfig - - -class PMMDynamicControllerConfig(MarketMakingControllerConfigBase): - controller_name: str = "pmm_dynamic" - candles_config: List[CandlesConfig] = [] - buy_spreads: List[float] = Field( - default="1,2,4", - json_schema_extra={ - "prompt": "Enter a comma-separated list of buy spreads measured in units of volatility(e.g., '1, 2'): ", - "prompt_on_new": True, "is_updatable": True} - ) - sell_spreads: List[float] = Field( - default="1,2,4", - json_schema_extra={ - "prompt": "Enter a comma-separated list of sell spreads measured in units of volatility(e.g., '1, 2'): ", - "prompt_on_new": True, "is_updatable": True} - ) - candles_connector: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the connector for the candles data, leave empty to use the same exchange as the connector: ", - "prompt_on_new": True}) - candles_trading_pair: str = Field( - default=None, - json_schema_extra={ - "prompt": "Enter the trading pair for the candles data, leave empty to use the same trading pair as the connector: ", - "prompt_on_new": True}) - interval: str = Field( - default="3m", - json_schema_extra={ - "prompt": "Enter the candle interval (e.g., 1m, 5m, 1h, 1d): ", - "prompt_on_new": True}) - macd_fast: int = Field( - default=21, - json_schema_extra={"prompt": "Enter the MACD fast period: ", "prompt_on_new": True}) - macd_slow: int = Field( - default=42, - json_schema_extra={"prompt": "Enter the MACD slow period: ", "prompt_on_new": True}) - macd_signal: int = Field( - default=9, - json_schema_extra={"prompt": "Enter the MACD signal period: ", "prompt_on_new": True}) - natr_length: int = Field( - default=14, - json_schema_extra={"prompt": "Enter the NATR length: ", "prompt_on_new": True}) - - @field_validator("candles_connector", mode="before") - @classmethod - def set_candles_connector(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("connector_name") - return v - - @field_validator("candles_trading_pair", mode="before") - @classmethod - def set_candles_trading_pair(cls, v, validation_info: ValidationInfo): - if v is None or v == "": - return validation_info.data.get("trading_pair") - return v - - -class PMMDynamicController(MarketMakingControllerBase): - """ - This is a dynamic version of the PMM controller.It uses the MACD to shift the mid-price and the NATR - to make the spreads dynamic. 
It also uses the Triple Barrier Strategy to manage the risk. - """ - def __init__(self, config: PMMDynamicControllerConfig, *args, **kwargs): - self.config = config - self.max_records = max(config.macd_slow, config.macd_fast, config.macd_signal, config.natr_length) + 100 - if len(self.config.candles_config) == 0: - self.config.candles_config = [CandlesConfig( - connector=config.candles_connector, - trading_pair=config.candles_trading_pair, - interval=config.interval, - max_records=self.max_records - )] - super().__init__(config, *args, **kwargs) - - async def update_processed_data(self): - candles = self.market_data_provider.get_candles_df(connector_name=self.config.candles_connector, - trading_pair=self.config.candles_trading_pair, - interval=self.config.interval, - max_records=self.max_records) - natr = ta.natr(candles["high"], candles["low"], candles["close"], length=self.config.natr_length) / 100 - macd_output = ta.macd(candles["close"], fast=self.config.macd_fast, - slow=self.config.macd_slow, signal=self.config.macd_signal) - macd = macd_output[f"MACD_{self.config.macd_fast}_{self.config.macd_slow}_{self.config.macd_signal}"] - macd_signal = - (macd - macd.mean()) / macd.std() - macdh = macd_output[f"MACDh_{self.config.macd_fast}_{self.config.macd_slow}_{self.config.macd_signal}"] - macdh_signal = macdh.apply(lambda x: 1 if x > 0 else -1) - max_price_shift = natr / 2 - price_multiplier = ((0.5 * macd_signal + 0.5 * macdh_signal) * max_price_shift).iloc[-1] - candles["spread_multiplier"] = natr - candles["reference_price"] = candles["close"] * (1 + price_multiplier) - self.processed_data = { - "reference_price": Decimal(candles["reference_price"].iloc[-1]), - "spread_multiplier": Decimal(candles["spread_multiplier"].iloc[-1]), - "features": candles - } - - def get_executor_config(self, level_id: str, price: Decimal, amount: Decimal): - trade_type = self.get_trade_type_from_level_id(level_id) - return PositionExecutorConfig( - timestamp=self.market_data_provider.time(), - level_id=level_id, - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - entry_price=price, - amount=amount, - triple_barrier_config=self.config.triple_barrier_config, - leverage=self.config.leverage, - side=trade_type, - ) diff --git a/bots/controllers/market_making/pmm_simple.py b/bots/controllers/market_making/pmm_simple.py deleted file mode 100644 index 6b09f33799..0000000000 --- a/bots/controllers/market_making/pmm_simple.py +++ /dev/null @@ -1,37 +0,0 @@ -from decimal import Decimal -from typing import List - -from pydantic import Field - -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy_v2.controllers.market_making_controller_base import ( - MarketMakingControllerBase, - MarketMakingControllerConfigBase, -) -from hummingbot.strategy_v2.executors.position_executor.data_types import PositionExecutorConfig - - -class PMMSimpleConfig(MarketMakingControllerConfigBase): - controller_name: str = "pmm_simple" - # As this controller is a simple version of the PMM, we are not using the candles feed - candles_config: List[CandlesConfig] = Field(default=[]) - - -class PMMSimpleController(MarketMakingControllerBase): - def __init__(self, config: PMMSimpleConfig, *args, **kwargs): - super().__init__(config, *args, **kwargs) - self.config = config - - def get_executor_config(self, level_id: str, price: Decimal, amount: Decimal): - trade_type = self.get_trade_type_from_level_id(level_id) - return PositionExecutorConfig( - 
timestamp=self.market_data_provider.time(), - level_id=level_id, - connector_name=self.config.connector_name, - trading_pair=self.config.trading_pair, - entry_price=price, - amount=amount, - triple_barrier_config=self.config.triple_barrier_config, - leverage=self.config.leverage, - side=trade_type, - ) diff --git a/bots/controllers/params_docs/controller_config_template_base.md b/bots/controllers/params_docs/controller_config_template_base.md deleted file mode 100644 index e314a1c514..0000000000 --- a/bots/controllers/params_docs/controller_config_template_base.md +++ /dev/null @@ -1,138 +0,0 @@ -# Controller Configuration Documentation Template - -## General Description - -This section should provide a comprehensive overview of the controller's trading strategy and operational characteristics. Include: - -- **Strategy Type**: Clearly identify the trading approach (market making, directional trading, arbitrage, cross-exchange market making, etc.) -- **Core Logic**: Explain how the controller analyzes market data and makes trading decisions -- **Market Conditions**: - - **Optimal Conditions**: Describe when this strategy performs best (e.g., high volatility, stable trends, specific liquidity conditions) - - **Challenging Conditions**: Identify scenarios where the strategy may underperform (e.g., low liquidity, extreme volatility spikes, trending markets for mean-reversion strategies) -- **Risk Profile**: Outline the primary risks and how the controller manages them -- **Expected Outcomes**: Provide realistic expectations for performance under various market conditions - -## Parameters - -Each parameter should be documented with the following structure: - -### `parameter_name` -- **Type**: `data_type` (e.g., `Decimal`, `int`, `str`, `List[float]`, `OrderType`) -- **Default**: `default_value` -- **Range**: `[min_value, max_value]` or constraints -- **Description**: Clear explanation of what this parameter controls - -#### Value Impact Analysis: -- **Low Values** (`example_range`): Explain the behavior and implications -- **Medium Values** (`example_range`): Typical use case and expected behavior -- **High Values** (`example_range`): Effects and potential risks -- **Edge Cases**: What happens at extremes (0, negative, very large values) - -#### Interaction Effects: -- List other parameters this interacts with -- Describe how combinations affect overall behavior - -#### Example Configurations: -```yaml -# Conservative setting -parameter_name: value_1 - -# Moderate setting -parameter_name: value_2 - -# Aggressive setting -parameter_name: value_3 -``` - -## Common Configurations - -This section presents complete, ready-to-use configurations for typical trading scenarios. 
Each configuration should include: - -### Configuration Name -**Use Case**: Brief description of when to use this configuration - -**Key Characteristics**: -- Risk level -- Capital requirements -- Market conditions suited for -- Expected behavior - -**Template**: -```yaml -# Configuration description and notes -controller_name: controller_type -controller_type: category -connector_name: PLACEHOLDER_EXCHANGE -trading_pair: PLACEHOLDER_TRADING_PAIR -portfolio_allocation: 0.XX - -# Core parameters with explanations -parameter_1: value # Why this value -parameter_2: value # Impact on strategy -parameter_3: value # Risk consideration - -# Advanced parameters -parameter_4: value -parameter_5: value -``` - -**Placeholders**: -- `PLACEHOLDER_EXCHANGE`: Replace with your exchange (e.g., binance, coinbase) -- `PLACEHOLDER_TRADING_PAIR`: Replace with your trading pair (e.g., BTC-USDT, ETH-USD) -- Adjust numerical values based on your risk tolerance and capital - -### Quick Start Configurations - -#### 1. Conservative Configuration -Suitable for beginners or low-risk tolerance -```yaml -# Full configuration here -``` - -#### 2. Balanced Configuration -Standard setup for most market conditions -```yaml -# Full configuration here -``` - -#### 3. Aggressive Configuration -Higher risk/reward for experienced traders -```yaml -# Full configuration here -``` - -## Performance Tuning Guide - -### Key Parameters for Optimization -1. **Parameter Group 1** - Impact on execution speed -2. **Parameter Group 2** - Risk management controls -3. **Parameter Group 3** - Profit targets and stops - -### Common Adjustments by Market Condition -- **High Volatility**: Adjust parameters X, Y, Z -- **Low Liquidity**: Modify parameters A, B, C -- **Trending Markets**: Update parameters D, E, F - -## Troubleshooting - -### Common Issues and Solutions -- **Issue**: Orders not filling - - **Solution**: Adjust spread parameters or check minimum order sizes - -- **Issue**: Excessive losses - - **Solution**: Review stop loss settings and position sizing - -## Best Practices - -1. **Start Conservative**: Begin with smaller position sizes and wider spreads -2. **Monitor Performance**: Track key metrics before increasing exposure -3. **Regular Review**: Periodically assess and adjust parameters based on performance -4. **Risk Management**: Always set appropriate stop losses and position limits -5. **Testing**: Use paper trading or small amounts when trying new configurations - -## Additional Notes - -- Version compatibility information -- Exchange-specific considerations -- Regulatory compliance notes (if applicable) -- Links to related documentation or resources \ No newline at end of file diff --git a/bots/controllers/params_docs/generic_pmm.md b/bots/controllers/params_docs/generic_pmm.md deleted file mode 100644 index 33a577482d..0000000000 --- a/bots/controllers/params_docs/generic_pmm.md +++ /dev/null @@ -1,490 +0,0 @@ -# PMM (Pure Market Making) Controller Documentation - -## General Description - -The PMM (Pure Market Making) controller implements a sophisticated market making strategy that continuously places buy and sell limit orders around the current market price to profit from the bid-ask spread. This controller uses dynamic position management with configurable inventory targets and risk controls to maintain balanced exposure while capturing spread profits. 
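
For readers skimming this description, here is a minimal, self-contained sketch of the idea summarized above: quote a ladder of buy and sell limit orders around the mid price at the configured spreads, and scale the order sizes toward the inventory target. This is illustrative only, not the controller's actual implementation; the function name, parameters, and skew formula below are assumptions made for the example.

```python
from decimal import Decimal

# Illustrative sketch only (hypothetical helper, not the PMM controller's code):
# build buy/sell price ladders around the mid price and skew sizes so that
# inventory drifts back toward its target percentage.
def build_pmm_orders(mid_price: Decimal,
                     buy_spreads: list[Decimal],
                     sell_spreads: list[Decimal],
                     level_amount_quote: Decimal,
                     base_pct: Decimal,          # current base inventory as pct of allocation
                     target_base_pct: Decimal,   # desired base inventory pct
                     max_skew: Decimal):         # 1 = no skew, 0 = full skew (assumed convention)
    # Positive deviation -> holding too much base -> shrink buys, grow sells (and vice versa).
    deviation = base_pct - target_base_pct
    buy_scale = max(Decimal("0"), Decimal("1") - (Decimal("1") - max_skew) * deviation)
    sell_scale = max(Decimal("0"), Decimal("1") + (Decimal("1") - max_skew) * deviation)
    buys = [(mid_price * (Decimal("1") - s), level_amount_quote * buy_scale) for s in buy_spreads]
    sells = [(mid_price * (Decimal("1") + s), level_amount_quote * sell_scale) for s in sell_spreads]
    return buys, sells


if __name__ == "__main__":
    # Holding 30% base against a 20% target: buy levels shrink, sell levels grow.
    buys, sells = build_pmm_orders(
        mid_price=Decimal("100"),
        buy_spreads=[Decimal("0.001"), Decimal("0.002")],
        sell_spreads=[Decimal("0.001"), Decimal("0.002")],
        level_amount_quote=Decimal("50"),
        base_pct=Decimal("0.30"),
        target_base_pct=Decimal("0.20"),
        max_skew=Decimal("0.5"),
    )
    print(buys, sells)
```

In this toy run, each buy level is sized at 47.5 quote and each sell level at 52.5 quote, nudging inventory back toward the target while still quoting both sides of the book.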
- -**Core Strategy Mechanics**: The controller maintains multiple order levels on both sides of the order book, adjusting order sizes based on current inventory levels through a skew mechanism. When inventory deviates from the target position, the controller automatically adjusts order sizes to encourage rebalancing - increasing buy orders when below target and sell orders when above target. - -**Optimal Market Conditions**: -- Stable, range-bound markets with consistent volatility -- High trading volume with good liquidity -- Markets with natural mean reversion tendencies -- Periods of low directional momentum - -**Challenging Conditions**: -- Strong trending markets (risk of adverse selection) -- Extreme volatility spikes or flash crashes -- Low liquidity environments with wide spreads -- Markets experiencing structural breaks or regime changes - -**Risk Profile**: Medium to high risk depending on configuration. Primary risks include inventory risk from accumulating positions during trends, adverse selection from informed traders, and execution risk from rapid price movements. - -## Parameters - -### `connector_name` -- **Type**: `str` -- **Default**: `"binance"` -- **Description**: The exchange connector to use for trading - -#### Value Impact Analysis: -- Different exchanges have varying fee structures, liquidity profiles, and API latencies -- Perpetual connectors (e.g., `binance_perpetual`) enable leverage trading -- Spot connectors (e.g., `binance`) are for unleveraged trading - -### `trading_pair` -- **Type**: `str` -- **Default**: `"BTC-FDUSD"` -- **Description**: The trading pair to make markets on - -#### Value Impact Analysis: -- Major pairs (BTC-USDT, ETH-USDT) typically have tighter spreads and higher competition -- Altcoin pairs may offer wider spreads but higher volatility risk -- Stablecoin pairs (USDC-USDT) have minimal directional risk but tiny spreads - -### `portfolio_allocation` -- **Type**: `Decimal` -- **Default**: `0.05` (5%) -- **Range**: `[0.01, 1.0]` -- **Description**: Maximum percentage of total capital to allocate around mid-price - -#### Value Impact Analysis: -- **Low Values** (`0.01-0.05`): Conservative exposure, suitable for testing or volatile markets -- **Medium Values** (`0.05-0.20`): Standard allocation for balanced risk/reward -- **High Values** (`0.20-1.0`): Aggressive allocation, higher profit potential but increased risk -- **Edge Cases**: Values above 0.5 may lead to insufficient reserves for rebalancing - -#### Interaction Effects: -- Combines with `total_amount_quote` to determine actual order sizes -- Affects how quickly the bot can adjust to inventory imbalances - -### `target_base_pct` -- **Type**: `Decimal` -- **Default**: `0.2` (20%) -- **Range**: `[0.0, 1.0]` -- **Description**: Target inventory level as percentage of total allocation - -#### Value Impact Analysis: -- **Low Values** (`0.0-0.2`): Quote-heavy strategy, profits from upward price moves -- **Medium Values** (`0.3-0.7`): Balanced inventory, neutral market exposure -- **High Values** (`0.8-1.0`): Base-heavy strategy, profits from downward moves -- **Typical**: 0.5 for market-neutral approach - -#### Interaction Effects: -- Works with `min_base_pct` and `max_base_pct` to define rebalancing boundaries -- Influences skew calculations for order sizing - -### `min_base_pct` / `max_base_pct` -- **Type**: `Decimal` -- **Default**: `0.1` / `0.4` -- **Range**: `[0.0, 1.0]` -- **Description**: Inventory boundaries that trigger rebalancing behavior - -#### Value Impact Analysis: -- **Tight 
Range** (`0.4-0.6`): Aggressive rebalancing, more frequent position adjustments -- **Medium Range** (`0.3-0.7`): Balanced approach, moderate rebalancing -- **Wide Range** (`0.1-0.9`): Tolerates large inventory swings, less rebalancing -- **Edge Cases**: Range too tight may cause excessive trading; too wide increases directional risk - -#### Interaction Effects: -- When inventory hits boundaries, controller only places orders on one side -- Affects profitability in trending vs ranging markets - -### `buy_spreads` / `sell_spreads` -- **Type**: `List[float]` -- **Default**: `[0.01, 0.02]` -- **Range**: `[0.0001, 0.10]` per spread -- **Description**: Distance from mid-price for each order level (as decimal percentage) - -#### Value Impact Analysis: -- **Tight Spreads** (`0.0001-0.001`): - - More fills but smaller profit per trade - - Higher risk of adverse selection - - Suitable for liquid markets with low volatility -- **Medium Spreads** (`0.001-0.01`): - - Balanced fill rate and profitability - - Standard for most market conditions -- **Wide Spreads** (`0.01-0.10`): - - Fewer fills but larger profit per trade - - Better protection against adverse moves - - Suitable for volatile or illiquid markets - -#### Example Configurations: -```yaml -# Liquid market (BTC-USDT) -buy_spreads: [0.0001, 0.0002, 0.0005, 0.0007] -sell_spreads: [0.0002, 0.0004, 0.0006, 0.0008] - -# Volatile altcoin -buy_spreads: [0.005, 0.01, 0.015, 0.02] -sell_spreads: [0.005, 0.01, 0.015, 0.02] -``` - -### `buy_amounts_pct` / `sell_amounts_pct` -- **Type**: `List[Decimal]` or `None` -- **Default**: `None` (distributes equally) -- **Description**: Percentage allocation for each order level - -#### Value Impact Analysis: -- **Equal Distribution** (`[1, 1, 1, 1]`): Same size for all levels -- **Front-Weighted** (`[2, 1.5, 1, 0.5]`): Larger orders near mid-price -- **Back-Weighted** (`[0.5, 1, 1.5, 2]`): Larger orders further from mid-price -- **Custom Patterns**: Design based on market microstructure - -#### Example Configurations: -```yaml -# Aggressive near touch -buy_amounts_pct: [3, 2, 1, 1] - -# Defensive depth building -buy_amounts_pct: [1, 1, 2, 3] -``` - -### `executor_refresh_time` -- **Type**: `int` (seconds) -- **Default**: `300` (5 minutes) -- **Range**: `[10, 3600]` -- **Description**: Time before refreshing unfilled orders - -#### Value Impact Analysis: -- **Fast Refresh** (`10-60s`): - - Rapid adjustment to price changes - - Higher fees from cancellations - - Better for volatile markets -- **Medium Refresh** (`60-300s`): - - Balanced between responsiveness and fees - - Standard for most conditions -- **Slow Refresh** (`300-3600s`): - - Patient order placement - - Lower fees - - Risk of stale orders in fast markets - -### `cooldown_time` -- **Type**: `int` (seconds) -- **Default**: `15` -- **Range**: `[0, 300]` -- **Description**: Wait time after a fill before replacing the order - -#### Value Impact Analysis: -- **No Cooldown** (`0`): Immediate replacement, aggressive market making -- **Short Cooldown** (`5-15s`): Quick recovery, standard operation -- **Long Cooldown** (`30-300s`): Cautious approach, allows market to settle -- **Use Case**: Increase during news events or high volatility - -### `leverage` -- **Type**: `int` -- **Default**: `20` -- **Range**: `[1, 125]` -- **Description**: Leverage multiplier for perpetual contracts (1 for spot) - -#### Value Impact Analysis: -- **No Leverage** (`1`): Spot trading only, no liquidation risk -- **Low Leverage** (`2-5x`): Moderate capital efficiency -- **Medium 
Leverage** (`10-20x`): Standard for experienced traders -- **High Leverage** (`50-125x`): Extreme risk, small moves can liquidate -- **Risk Warning**: Higher leverage amplifies both profits and losses - -### `position_mode` -- **Type**: `PositionMode` -- **Default**: `"HEDGE"` -- **Options**: `["HEDGE", "ONEWAY"]` -- **Description**: Position mode for perpetual contracts - -#### Value Impact Analysis: -- **HEDGE Mode**: Can hold both long and short positions simultaneously -- **ONEWAY Mode**: Single direction position only -- **Use Case**: HEDGE mode useful for complex strategies; ONEWAY for simplicity - -### `take_profit` -- **Type**: `Decimal` or `None` -- **Default**: `0.02` (2%) -- **Range**: `[0.001, 0.10]` -- **Description**: Take profit target for individual positions - -#### Value Impact Analysis: -- **Tight TP** (`0.001-0.01`): Quick profits, high turnover -- **Medium TP** (`0.01-0.03`): Balanced approach -- **Wide TP** (`0.03-0.10`): Patient strategy, larger moves -- **None**: No position-level take profit - -### `take_profit_order_type` -- **Type**: `OrderType` -- **Default**: `LIMIT_MAKER` -- **Options**: `[MARKET, LIMIT, LIMIT_MAKER]` -- **Description**: Order type for take profit execution - -#### Value Impact Analysis: -- **MARKET**: Immediate execution, guarantees fill but may slip -- **LIMIT**: Precise price, may not fill -- **LIMIT_MAKER**: Post-only limit, earns maker fees - -### `max_skew` -- **Type**: `Decimal` -- **Default**: `1.0` -- **Range**: `[0.0, 1.0]` -- **Description**: Maximum order size adjustment based on inventory (0=full skew, 1=no skew) - -#### Value Impact Analysis: -- **No Skew** (`1.0`): Orders don't adjust with inventory -- **Moderate Skew** (`0.5-0.8`): Gradual size adjustments -- **Full Skew** (`0.0-0.3`): Aggressive inventory management -- **Effect**: Lower values mean stronger rebalancing pressure - -### `global_take_profit` / `global_stop_loss` -- **Type**: `Decimal` -- **Default**: `0.02` / `0.05` -- **Range**: `[0.01, 0.20]` -- **Description**: Portfolio-level profit/loss triggers - -#### Value Impact Analysis: -- **Tight Stops** (`0.01-0.03`): Quick exit, capital preservation -- **Medium Stops** (`0.03-0.10`): Standard risk management -- **Wide Stops** (`0.10-0.20`): Tolerates larger drawdowns -- **Action**: Triggers market sell of entire position when hit - -### `total_amount_quote` -- **Type**: `Decimal` -- **Default**: `2000` -- **Description**: Total quote currency amount for position sizing - -#### Value Impact Analysis: -- Determines absolute position sizes when combined with `portfolio_allocation` -- Should be set based on account balance and risk tolerance -- Actual deployed = `total_amount_quote * portfolio_allocation` - -## Common Configurations - -### Conservative Market Making -**Use Case**: Low risk tolerance, stable markets, learning the strategy - -```yaml -controller_name: pmm -controller_type: generic -connector_name: binance -trading_pair: BTC-USDT -portfolio_allocation: 0.025 # Only 2.5% allocation -total_amount_quote: 1000 - -# Wide spreads for safety -buy_spreads: [0.002, 0.004, 0.006] -sell_spreads: [0.002, 0.004, 0.006] -buy_amounts_pct: [1, 1, 1] -sell_amounts_pct: [1, 1, 1] - -# Conservative inventory management -target_base_pct: 0.5 -min_base_pct: 0.3 -max_base_pct: 0.7 -max_skew: 0.5 - -# Longer refresh for lower fees -executor_refresh_time: 600 -cooldown_time: 30 - -# Risk controls -leverage: 1 # Spot only -take_profit: 0.01 -global_take_profit: 0.02 -global_stop_loss: 0.03 -``` - -### Balanced Market Making -**Use 
Case**: Standard configuration for most market conditions - -```yaml -controller_name: pmm -controller_type: generic -connector_name: binance_perpetual -trading_pair: ETH-USDT -portfolio_allocation: 0.05 -total_amount_quote: 5000 - -# Moderate spreads -buy_spreads: [0.0005, 0.001, 0.002, 0.003] -sell_spreads: [0.0005, 0.001, 0.002, 0.003] -buy_amounts_pct: [1.5, 1.25, 1, 0.75] # Front-weighted -sell_amounts_pct: [1.5, 1.25, 1, 0.75] - -# Balanced inventory -target_base_pct: 0.5 -min_base_pct: 0.2 -max_base_pct: 0.8 -max_skew: 0.7 - -# Standard timing -executor_refresh_time: 300 -cooldown_time: 15 - -# Moderate leverage -leverage: 10 -position_mode: HEDGE -take_profit: 0.02 -take_profit_order_type: LIMIT_MAKER -global_take_profit: 0.05 -global_stop_loss: 0.08 -``` - -### Aggressive Scalping -**Use Case**: High volume, liquid markets, experienced traders - -```yaml -controller_name: pmm -controller_type: generic -connector_name: binance_perpetual -trading_pair: BTC-USDT -portfolio_allocation: 0.1 -total_amount_quote: 10000 - -# Tight spreads for maximum fills -buy_spreads: [0.0001, 0.0002, 0.0003, 0.0005, 0.0008] -sell_spreads: [0.0001, 0.0002, 0.0003, 0.0005, 0.0008] -buy_amounts_pct: [2, 1.5, 1, 0.5, 0.5] # Heavy near touch -sell_amounts_pct: [2, 1.5, 1, 0.5, 0.5] - -# Tight inventory control -target_base_pct: 0.5 -min_base_pct: 0.4 -max_base_pct: 0.6 -max_skew: 0.3 # Strong rebalancing - -# Fast refresh for responsiveness -executor_refresh_time: 60 -cooldown_time: 5 - -# Higher leverage for capital efficiency -leverage: 20 -position_mode: HEDGE -take_profit: 0.005 # Quick profits -take_profit_order_type: MARKET -global_take_profit: 0.03 -global_stop_loss: 0.05 -``` - -### Volatile Market Configuration -**Use Case**: High volatility periods, news events, low liquidity - -```yaml -controller_name: pmm -controller_type: generic -connector_name: binance -trading_pair: DOGE-USDT -portfolio_allocation: 0.03 -total_amount_quote: 2000 - -# Wide spreads for protection -buy_spreads: [0.01, 0.02, 0.03, 0.05] -sell_spreads: [0.01, 0.02, 0.03, 0.05] -buy_amounts_pct: [0.5, 1, 1.5, 2] # Back-weighted for safety -sell_amounts_pct: [0.5, 1, 1.5, 2] - -# Wide inventory tolerance -target_base_pct: 0.5 -min_base_pct: 0.1 -max_base_pct: 0.9 -max_skew: 0.8 - -# Slower refresh to let market settle -executor_refresh_time: 900 -cooldown_time: 60 - -# Conservative leverage -leverage: 2 -take_profit: 0.05 -global_take_profit: 0.1 -global_stop_loss: 0.15 -``` - -## Performance Tuning Guide - -### Key Parameters for Optimization - -1. **Spread Parameters** (`buy_spreads`, `sell_spreads`) - - Primary driver of profitability vs fill rate - - Adjust based on market volatility and competition - - Monitor fill rates and adjust if too high/low - -2. **Inventory Management** (`target_base_pct`, `min/max_base_pct`, `max_skew`) - - Controls directional exposure - - Tighter ranges for ranging markets - - Wider ranges for trending markets - -3. 
**Timing Parameters** (`executor_refresh_time`, `cooldown_time`) - - Balance between responsiveness and transaction costs - - Shorter times for volatile markets - - Longer times for stable conditions - -### Market Condition Adjustments - -**High Volatility**: -- Increase spreads by 2-5x -- Reduce portfolio allocation by 50% -- Increase cooldown time to 30-60s -- Tighten global stop loss - -**Low Liquidity**: -- Increase spreads to capture wider bid-ask -- Reduce order sizes (lower portfolio allocation) -- Increase executor refresh time -- Use back-weighted amount distributions - -**Trending Market**: -- Adjust target_base_pct in trend direction -- Widen min/max boundaries -- Reduce max_skew for stronger rebalancing -- Consider reducing leverage - -**Range-Bound Market**: -- Tighten spreads for more fills -- Increase portfolio allocation -- Use balanced or front-weighted distributions -- Standard refresh times - -## Troubleshooting - -### Orders Not Filling -- **Check minimum order size** for the trading pair -- **Reduce spreads** to get closer to market price -- **Verify** market is active and liquid -- **Increase** portfolio allocation if orders too small - -### Excessive Inventory Accumulation -- **Reduce** max_skew to increase rebalancing pressure -- **Tighten** min/max_base_pct range -- **Review** spreads - may be too aggressive on one side -- **Check** for trending market conditions - -### High Drawdown -- **Reduce** leverage immediately -- **Tighten** global_stop_loss parameter -- **Decrease** portfolio_allocation -- **Widen** spreads for better entry prices -- **Increase** cooldown_time after losses - -### Frequent Order Cancellations -- **Increase** executor_refresh_time -- **Check** for API rate limits -- **Verify** network connection stability -- **Consider** wider spreads - -## Best Practices - -1. **Start Small**: Begin with 1-2% portfolio allocation and low/no leverage -2. **Paper Trade First**: Test configurations without real capital -3. **Monitor Actively**: Watch performance for first 24-48 hours of new config -4. **Gradual Scaling**: Increase allocation/leverage gradually as confidence builds -5. **Risk Limits**: Always set global stop loss and take profit levels -6. **Market Research**: Understand the specific dynamics of your chosen trading pair -7. **Regular Reviews**: Analyze performance weekly and adjust parameters -8. **Diversification**: Consider running multiple instances on different pairs -9. **Fee Awareness**: Account for trading fees in spread calculations -10. 
**Backup Plans**: Have exit strategy if market conditions change dramatically - -## Additional Notes - -- PMM works best in liquid markets with consistent two-way flow -- Avoid during major news events unless specifically configured for volatility -- Consider time-of-day effects (Asian/European/US sessions) -- Some exchanges have special maker fee rebates that improve profitability -- Always ensure sufficient balance for potential position accumulation -- The controller automatically handles position sizing based on available balance -- Monitor the skew visualization in status to understand rebalancing behavior \ No newline at end of file diff --git a/bots/credentials/.gitignore b/bots/credentials/.gitignore deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/credentials/master_account/conf_client.yml b/bots/credentials/master_account/conf_client.yml deleted file mode 100644 index 9f68d0d962..0000000000 --- a/bots/credentials/master_account/conf_client.yml +++ /dev/null @@ -1,176 +0,0 @@ -#################################### -### client_config_map config ### -#################################### - -instance_id: da1a30622aec40d9f1a9f1333c2b845f12ec456d - -# Fetch trading pairs from all exchanges if True, otherwise fetch only from connected exchanges. -fetch_pairs_from_all_exchanges: false - -log_level: INFO - -debug_console: false - -strategy_report_interval: 900.0 - -logger_override_whitelist: -- hummingbot.strategy.arbitrage -- hummingbot.strategy.cross_exchange_market_making -- conf - - -kill_switch_mode: {} - -# What to auto-fill in the prompt after each import command (start/config) -autofill_import: disabled - -# MQTT Bridge configuration. -mqtt_bridge: - mqtt_host: localhost - mqtt_port: 1883 - mqtt_username: '' - mqtt_password: '' - mqtt_namespace: hbot - mqtt_ssl: false - mqtt_logger: true - mqtt_notifier: true - mqtt_commands: true - mqtt_events: true - mqtt_external_events: true - mqtt_autostart: true - -# Error log sharing -send_error_logs: true - -# Advanced database options, currently supports SQLAlchemy's included dialects -# Reference: https://docs.sqlalchemy.org/en/13/dialects/ -# To use an instance of SQLite DB the required configuration is -# db_engine: sqlite -# To use a DBMS the required configuration is -# db_host: 127.0.0.1 -# db_port: 3306 -# db_username: username -# db_password: password -# db_name: dbname -db_mode: - db_engine: sqlite - -# Balance Limit Configurations -# e.g. Setting USDT and BTC limits on Binance. 
-# balance_asset_limit: -# binance: -# BTC: 0.1 -# USDT: 1000 -balance_asset_limit: - kucoin: {} - ndax_testnet: {} - huobi: {} - bitmart: {} - polkadex: {} - hitbtc: {} - ndax: {} - foxbit: {} - bitmex_testnet: {} - coinbase_pro: {} - bybit: {} - binance_paper_trade: {} - bitfinex: {} - kucoin_paper_trade: {} - okx: {} - binance_us: {} - injective_v2: {} - ascend_ex: {} - binance: {} - bybit_testnet: {} - kraken: {} - mexc: {} - vertex: {} - gate_io_paper_trade: {} - gate_io: {} - woo_x: {} - woo_x_testnet: {} - btc_markets: {} - vertex_testnet: {} - mock_paper_exchange: {} - bitmex: {} - ascend_ex_paper_trade: {} - -# Fixed gas price (in Gwei) for Ethereum transactions -manual_gas_price: 50.0 - -# Gateway API Configurations -# default host to only use localhost -# Port need to match the final installation port for Gateway -gateway: - gateway_api_host: localhost - gateway_api_port: '15888' - gateway_use_ssl: false - -# Whether to enable aggregated order and trade data collection -anonymized_metrics_mode: - anonymized_metrics_interval_min: 15.0 - -# A source for rate oracle, currently ascend_ex, binance, coin_gecko, coin_cap, kucoin, gate_io -rate_oracle_source: - name: binance - -# A universal token which to display tokens values in, e.g. USD,EUR,BTC -global_token: - global_token_name: USDT - global_token_symbol: $ - -# Percentage of API rate limits (on any exchange and any end point) allocated to this bot instance. -# Enter 50 to indicate 50%. E.g. if the API rate limit is 100 calls per second, and you allocate -# 50% to this setting, the bot will have a maximum (limit) of 50 calls per second -rate_limits_share_pct: 100.0 - -commands_timeout: - create_command_timeout: 10.0 - other_commands_timeout: 30.0 - -# Tabulate table format style (https://github.com/astanin/python-tabulate#table-format) -tables_format: psql - -paper_trade: - paper_trade_exchanges: - - binance - - kucoin - - ascend_ex - - gate_io - paper_trade_account_balance: - BTC: 1.0 - USDT: 1000.0 - ONE: 1000.0 - USDQ: 1000.0 - TUSD: 1000.0 - ETH: 10.0 - WETH: 10.0 - USDC: 1000.0 - DAI: 1000.0 - -color: - top_pane: '#000000' - bottom_pane: '#000000' - output_pane: '#262626' - input_pane: '#1C1C1C' - logs_pane: '#121212' - terminal_primary: '#5FFFD7' - primary_label: '#5FFFD7' - secondary_label: '#FFFFFF' - success_label: '#5FFFD7' - warning_label: '#FFFF00' - info_label: '#5FD7FF' - error_label: '#FF0000' - gold_label: '#FFD700' - silver_label: '#C0C0C0' - bronze_label: '#CD7F32' - -# The tick size is the frequency with which the clock notifies the time iterators by calling the -# c_tick() method, that means for example that if the tick size is 1, the logic of the strategy -# will run every second. -tick_size: 1.0 - -market_data_collection: - market_data_collection_enabled: false - market_data_collection_interval: 60 - market_data_collection_depth: 20 diff --git a/bots/credentials/master_account/conf_fee_overrides.yml b/bots/credentials/master_account/conf_fee_overrides.yml deleted file mode 100644 index 1dbff8fd7e..0000000000 --- a/bots/credentials/master_account/conf_fee_overrides.yml +++ /dev/null @@ -1,298 +0,0 @@ -######################################## -### Fee overrides configurations ### -######################################## - -# For more detailed information: https://docs.hummingbot.io -template_version: 14 - -# Example of the fields that can be specified to override the `TradeFeeFactory` default settings. -# If the field is missing or the value is left blank, the default value will be used. 
-# The percentage values are specified as 0.1 for 0.1%. -# -# [exchange name]_percent_fee_token: -# [exchange name]_maker_percent_fee: -# [exchange name]_taker_percent_fee: -# [exchange name]_buy_percent_fee_deducted_from_returns: # if False, the buy fee is added to the order costs -# [exchange name]_maker_fixed_fees: # a list of lists of token-fee pairs (e.g. [["ETH", 1]]) -# [exchange name]_taker_fixed_fees: # a list of lists of token-fee pairs (e.g. [["ETH", 1]]) - -binance_percent_fee_token: # BNB -binance_maker_percent_fee: # 0.75 -binance_taker_percent_fee: # 0.75 -binance_buy_percent_fee_deducted_from_returns: # True - -# List of supported Exchanges for which the user's conf/conf_fee_override.yml -# will work. This file currently needs to be in sync with hummingbot list of -# supported exchanges -ascend_ex_buy_percent_fee_deducted_from_returns: -ascend_ex_maker_fixed_fees: -ascend_ex_maker_percent_fee: -ascend_ex_percent_fee_token: -ascend_ex_taker_fixed_fees: -ascend_ex_taker_percent_fee: -binance_maker_fixed_fees: -binance_perpetual_buy_percent_fee_deducted_from_returns: -binance_perpetual_maker_fixed_fees: -binance_perpetual_maker_percent_fee: -binance_perpetual_percent_fee_token: -binance_perpetual_taker_fixed_fees: -binance_perpetual_taker_percent_fee: -binance_perpetual_testnet_buy_percent_fee_deducted_from_returns: -binance_perpetual_testnet_maker_fixed_fees: -binance_perpetual_testnet_maker_percent_fee: -binance_perpetual_testnet_percent_fee_token: -binance_perpetual_testnet_taker_fixed_fees: -binance_perpetual_testnet_taker_percent_fee: -binance_taker_fixed_fees: -binance_us_buy_percent_fee_deducted_from_returns: -binance_us_maker_fixed_fees: -binance_us_maker_percent_fee: -binance_us_percent_fee_token: -binance_us_taker_fixed_fees: -binance_us_taker_percent_fee: -bitfinex_buy_percent_fee_deducted_from_returns: -bitfinex_maker_fixed_fees: -bitfinex_maker_percent_fee: -bitfinex_percent_fee_token: -bitfinex_taker_fixed_fees: -bitfinex_taker_percent_fee: -bitmart_buy_percent_fee_deducted_from_returns: -bitmart_maker_fixed_fees: -bitmart_maker_percent_fee: -bitmart_percent_fee_token: -bitmart_taker_fixed_fees: -bitmart_taker_percent_fee: -btc_markets_percent_fee_token: -btc_markets_maker_percent_fee: -btc_markets_taker_percent_fee: -btc_markets_buy_percent_fee_deducted_from_returns: -bybit_perpetual_buy_percent_fee_deducted_from_returns: -bybit_perpetual_maker_fixed_fees: -bybit_perpetual_maker_percent_fee: -bybit_perpetual_percent_fee_token: -bybit_perpetual_taker_fixed_fees: -bybit_perpetual_taker_percent_fee: -bybit_perpetual_testnet_buy_percent_fee_deducted_from_returns: -bybit_perpetual_testnet_maker_fixed_fees: -bybit_perpetual_testnet_maker_percent_fee: -bybit_perpetual_testnet_percent_fee_token: -bybit_perpetual_testnet_taker_fixed_fees: -bybit_perpetual_testnet_taker_percent_fee: -coinbase_pro_buy_percent_fee_deducted_from_returns: -coinbase_pro_maker_fixed_fees: -coinbase_pro_maker_percent_fee: -coinbase_pro_percent_fee_token: -coinbase_pro_taker_fixed_fees: -coinbase_pro_taker_percent_fee: -dydx_perpetual_buy_percent_fee_deducted_from_returns: -dydx_perpetual_maker_fixed_fees: -dydx_perpetual_maker_percent_fee: -dydx_perpetual_percent_fee_token: -dydx_perpetual_taker_fixed_fees: -dydx_perpetual_taker_percent_fee: -gate_io_buy_percent_fee_deducted_from_returns: -gate_io_maker_fixed_fees: -gate_io_maker_percent_fee: -gate_io_percent_fee_token: -gate_io_taker_fixed_fees: -gate_io_taker_percent_fee: -hitbtc_buy_percent_fee_deducted_from_returns: -hitbtc_maker_fixed_fees: 
-hitbtc_maker_percent_fee: -hitbtc_percent_fee_token: -hitbtc_taker_fixed_fees: -hitbtc_taker_percent_fee: -huobi_buy_percent_fee_deducted_from_returns: -huobi_maker_fixed_fees: -huobi_maker_percent_fee: -huobi_percent_fee_token: -huobi_taker_fixed_fees: -huobi_taker_percent_fee: -kraken_buy_percent_fee_deducted_from_returns: -kraken_maker_fixed_fees: -kraken_maker_percent_fee: -kraken_percent_fee_token: -kraken_taker_fixed_fees: -kraken_taker_percent_fee: -kucoin_buy_percent_fee_deducted_from_returns: -kucoin_maker_fixed_fees: -kucoin_maker_percent_fee: -kucoin_percent_fee_token: -kucoin_taker_fixed_fees: -kucoin_taker_percent_fee: -mexc_buy_percent_fee_deducted_from_returns: -mexc_maker_fixed_fees: -mexc_maker_percent_fee: -mexc_percent_fee_token: -mexc_taker_fixed_fees: -mexc_taker_percent_fee: -ndax_buy_percent_fee_deducted_from_returns: -ndax_maker_fixed_fees: -ndax_maker_percent_fee: -ndax_percent_fee_token: -ndax_taker_fixed_fees: -ndax_taker_percent_fee: -ndax_testnet_buy_percent_fee_deducted_from_returns: -ndax_testnet_maker_fixed_fees: -ndax_testnet_maker_percent_fee: -ndax_testnet_percent_fee_token: -ndax_testnet_taker_fixed_fees: -ndax_testnet_taker_percent_fee: -okx_buy_percent_fee_deducted_from_returns: -okx_maker_fixed_fees: -okx_maker_percent_fee: -okx_percent_fee_token: -okx_taker_fixed_fees: -okx_taker_percent_fee: -phemex_perpetual_percent_fee_token: -phemex_perpetual_maker_percent_fee: -phemex_perpetual_taker_percent_fee: -phemex_perpetual_buy_percent_fee_deducted_from_returns: -phemex_perpetual_maker_fixed_fees: -phemex_perpetual_taker_fixed_fees: -phemex_perpetual_testnet_percent_fee_token: -phemex_perpetual_testnet_maker_percent_fee: -phemex_perpetual_testnet_taker_percent_fee: -phemex_perpetual_testnet_buy_percent_fee_deducted_from_returns: -phemex_perpetual_testnet_maker_fixed_fees: -phemex_perpetual_testnet_taker_fixed_fees: -injective_v2_perpetual_percent_fee_token: -injective_v2_perpetual_maker_percent_fee: -injective_v2_perpetual_taker_percent_fee: -injective_v2_perpetual_buy_percent_fee_deducted_from_returns: -injective_v2_perpetual_maker_fixed_fees: -injective_v2_perpetual_taker_fixed_fees: -bitget_perpetual_percent_fee_token: -bitget_perpetual_maker_percent_fee: -bitget_perpetual_taker_percent_fee: -bitget_perpetual_buy_percent_fee_deducted_from_returns: -bitget_perpetual_maker_fixed_fees: -bitget_perpetual_taker_fixed_fees: -bit_com_perpetual_percent_fee_token: -bit_com_perpetual_maker_percent_fee: -bit_com_perpetual_taker_percent_fee: -bit_com_perpetual_buy_percent_fee_deducted_from_returns: -bit_com_perpetual_maker_fixed_fees: -bit_com_perpetual_taker_fixed_fees: -bit_com_perpetual_testnet_percent_fee_token: -bit_com_perpetual_testnet_maker_percent_fee: -bit_com_perpetual_testnet_taker_percent_fee: -bit_com_perpetual_testnet_buy_percent_fee_deducted_from_returns: -bit_com_perpetual_testnet_maker_fixed_fees: -bit_com_perpetual_testnet_taker_fixed_fees: -gate_io_perpetual_percent_fee_token: -gate_io_perpetual_maker_percent_fee: -gate_io_perpetual_taker_percent_fee: -gate_io_perpetual_buy_percent_fee_deducted_from_returns: -gate_io_perpetual_maker_fixed_fees: -gate_io_perpetual_taker_fixed_fees: -bitmex_perpetual_percent_fee_token: -bitmex_perpetual_maker_percent_fee: -bitmex_perpetual_taker_percent_fee: -bitmex_perpetual_buy_percent_fee_deducted_from_returns: -bitmex_perpetual_maker_fixed_fees: -bitmex_perpetual_taker_fixed_fees: -bitmex_perpetual_testnet_percent_fee_token: -bitmex_perpetual_testnet_maker_percent_fee: 
-bitmex_perpetual_testnet_taker_percent_fee: -bitmex_perpetual_testnet_buy_percent_fee_deducted_from_returns: -bitmex_perpetual_testnet_maker_fixed_fees: -bitmex_perpetual_testnet_taker_fixed_fees: -kucoin_perpetual_percent_fee_token: -kucoin_perpetual_maker_percent_fee: -kucoin_perpetual_taker_percent_fee: -kucoin_perpetual_buy_percent_fee_deducted_from_returns: -kucoin_perpetual_maker_fixed_fees: -kucoin_perpetual_taker_fixed_fees: -bitmex_percent_fee_token: -bitmex_maker_percent_fee: -bitmex_taker_percent_fee: -bitmex_buy_percent_fee_deducted_from_returns: -bitmex_maker_fixed_fees: -bitmex_taker_fixed_fees: -bitmex_testnet_percent_fee_token: -bitmex_testnet_maker_percent_fee: -bitmex_testnet_taker_percent_fee: -bitmex_testnet_buy_percent_fee_deducted_from_returns: -bitmex_testnet_maker_fixed_fees: -bitmex_testnet_taker_fixed_fees: -btc_markets_maker_fixed_fees: -btc_markets_taker_fixed_fees: -woo_x_percent_fee_token: -woo_x_maker_percent_fee: -woo_x_taker_percent_fee: -woo_x_buy_percent_fee_deducted_from_returns: -woo_x_maker_fixed_fees: -woo_x_taker_fixed_fees: -woo_x_testnet_percent_fee_token: -woo_x_testnet_maker_percent_fee: -woo_x_testnet_taker_percent_fee: -woo_x_testnet_buy_percent_fee_deducted_from_returns: -woo_x_testnet_maker_fixed_fees: -woo_x_testnet_taker_fixed_fees: -polkadex_percent_fee_token: -polkadex_maker_percent_fee: -polkadex_taker_percent_fee: -polkadex_buy_percent_fee_deducted_from_returns: -polkadex_maker_fixed_fees: -polkadex_taker_fixed_fees: -bybit_percent_fee_token: -bybit_maker_percent_fee: -bybit_taker_percent_fee: -bybit_buy_percent_fee_deducted_from_returns: -bybit_maker_fixed_fees: -bybit_taker_fixed_fees: -bybit_testnet_percent_fee_token: -bybit_testnet_maker_percent_fee: -bybit_testnet_taker_percent_fee: -bybit_testnet_buy_percent_fee_deducted_from_returns: -bybit_testnet_maker_fixed_fees: -bybit_testnet_taker_fixed_fees: -injective_v2_percent_fee_token: -injective_v2_maker_percent_fee: -injective_v2_taker_percent_fee: -injective_v2_buy_percent_fee_deducted_from_returns: -injective_v2_maker_fixed_fees: -injective_v2_taker_fixed_fees: -vertex_percent_fee_token: -vertex_maker_percent_fee: -vertex_taker_percent_fee: -vertex_buy_percent_fee_deducted_from_returns: -vertex_maker_fixed_fees: -vertex_taker_fixed_fees: -vertex_testnet_percent_fee_token: -vertex_testnet_maker_percent_fee: -vertex_testnet_taker_percent_fee: -vertex_testnet_buy_percent_fee_deducted_from_returns: -vertex_testnet_maker_fixed_fees: -vertex_testnet_taker_fixed_fees: -mock_paper_exchange_percent_fee_token: -mock_paper_exchange_maker_percent_fee: -mock_paper_exchange_taker_percent_fee: -mock_paper_exchange_buy_percent_fee_deducted_from_returns: -mock_paper_exchange_maker_fixed_fees: -mock_paper_exchange_taker_fixed_fees: -kucoin_perpetual_testnet_percent_fee_token: -kucoin_perpetual_testnet_maker_percent_fee: -kucoin_perpetual_testnet_taker_percent_fee: -kucoin_perpetual_testnet_buy_percent_fee_deducted_from_returns: -kucoin_perpetual_testnet_maker_fixed_fees: -kucoin_perpetual_testnet_taker_fixed_fees: -foxbit_percent_fee_token: -foxbit_maker_percent_fee: -foxbit_taker_percent_fee: -foxbit_buy_percent_fee_deducted_from_returns: -foxbit_maker_fixed_fees: -foxbit_taker_fixed_fees: -hyperliquid_perpetual_percent_fee_token: -hyperliquid_perpetual_maker_percent_fee: -hyperliquid_perpetual_taker_percent_fee: -hyperliquid_perpetual_buy_percent_fee_deducted_from_returns: -hyperliquid_perpetual_maker_fixed_fees: -hyperliquid_perpetual_taker_fixed_fees: 
-hyperliquid_perpetual_testnet_percent_fee_token: -hyperliquid_perpetual_testnet_maker_percent_fee: -hyperliquid_perpetual_testnet_taker_percent_fee: -hyperliquid_perpetual_testnet_buy_percent_fee_deducted_from_returns: -hyperliquid_perpetual_testnet_maker_fixed_fees: -hyperliquid_perpetual_testnet_taker_fixed_fees: diff --git a/bots/credentials/master_account/connectors/.gitignore b/bots/credentials/master_account/connectors/.gitignore deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/credentials/master_account/hummingbot_logs.yml b/bots/credentials/master_account/hummingbot_logs.yml deleted file mode 100755 index 8e65271c08..0000000000 --- a/bots/credentials/master_account/hummingbot_logs.yml +++ /dev/null @@ -1,83 +0,0 @@ ---- -version: 1 -template_version: 12 - -formatters: - simple: - format: "%(asctime)s - %(process)d - %(name)s - %(levelname)s - %(message)s" - -handlers: - console: - class: hummingbot.logger.cli_handler.CLIHandler - level: DEBUG - formatter: simple - stream: ext://sys.stdout - console_warning: - class: hummingbot.logger.cli_handler.CLIHandler - level: WARNING - formatter: simple - stream: ext://sys.stdout - console_info: - class: hummingbot.logger.cli_handler.CLIHandler - level: INFO - formatter: simple - stream: ext://sys.stdout - file_handler: - class: logging.handlers.TimedRotatingFileHandler - level: DEBUG - formatter: simple - filename: $PROJECT_DIR/logs/logs_$STRATEGY_FILE_PATH.log - encoding: utf8 - when: "D" - interval: 1 - backupCount: 7 - "null": - class: logging.NullHandler - level: DEBUG - -loggers: - hummingbot.core.utils.eth_gas_station_lookup: - level: NETWORK - propagate: false - handlers: [console, file_handler] - mqtt: true - hummingbot.logger.log_server_client: - level: WARNING - propagate: false - handlers: [console, file_handler] - mqtt: true - hummingbot.logger.reporting_proxy_handler: - level: WARNING - propagate: false - handlers: [console, file_handler] - mqtt: true - hummingbot.strategy: - level: NETWORK - propagate: false - handlers: [console, file_handler] - mqtt: true - hummingbot.connector: - level: NETWORK - propagate: false - handlers: [console, file_handler] - mqtt: true - hummingbot.client: - level: NETWORK - propagate: false - handlers: [console, file_handler] - mqtt: true - hummingbot.core.event.event_reporter: - level: EVENT_LOG - propagate: false - handlers: [file_handler] - mqtt: false - conf: - level: NETWORK - handlers: ["null"] - propagate: false - mqtt: false - -root: - level: INFO - handlers: [console, file_handler] - mqtt: true diff --git a/bots/instances/.gitignore b/bots/instances/.gitignore deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/scripts/__init__.py b/bots/scripts/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/bots/scripts/v2_with_controllers.py b/bots/scripts/v2_with_controllers.py deleted file mode 100644 index d345f0bf34..0000000000 --- a/bots/scripts/v2_with_controllers.py +++ /dev/null @@ -1,147 +0,0 @@ -import os -from decimal import Decimal -from typing import Dict, List, Optional, Set - -from hummingbot.client.hummingbot_application import HummingbotApplication -from hummingbot.connector.connector_base import ConnectorBase -from hummingbot.data_feed.candles_feed.data_types import CandlesConfig -from hummingbot.strategy.strategy_v2_base import StrategyV2Base, StrategyV2ConfigBase -from hummingbot.strategy_v2.models.base import RunnableStatus -from hummingbot.strategy_v2.models.executor_actions import CreateExecutorAction, 
StopExecutorAction - - -class V2WithControllersConfig(StrategyV2ConfigBase): - script_file_name: str = os.path.basename(__file__) - candles_config: List[CandlesConfig] = [] - markets: Dict[str, Set[str]] = {} - max_global_drawdown_quote: Optional[float] = None - max_controller_drawdown_quote: Optional[float] = None - - -class V2WithControllers(StrategyV2Base): - """ - This script runs a generic strategy with cash out feature. Will also check if the controllers configs have been - updated and apply the new settings. - The cash out of the script can be set by the time_to_cash_out parameter in the config file. If set, the script will - stop the controllers after the specified time has passed, and wait until the active executors finalize their - execution. - The controllers will also have a parameter to manually cash out. In that scenario, the main strategy will stop the - specific controller and wait until the active executors finalize their execution. The rest of the executors will - wait until the main strategy stops them. - """ - performance_report_interval: int = 1 - - def __init__(self, connectors: Dict[str, ConnectorBase], config: V2WithControllersConfig): - super().__init__(connectors, config) - self.config = config - self.max_pnl_by_controller = {} - self.max_global_pnl = Decimal("0") - self.drawdown_exited_controllers = [] - self.closed_executors_buffer: int = 30 - self._last_performance_report_timestamp = 0 - - def on_tick(self): - super().on_tick() - if not self._is_stop_triggered: - self.check_manual_kill_switch() - self.control_max_drawdown() - self.send_performance_report() - - def control_max_drawdown(self): - if self.config.max_controller_drawdown_quote: - self.check_max_controller_drawdown() - if self.config.max_global_drawdown_quote: - self.check_max_global_drawdown() - - def check_max_controller_drawdown(self): - for controller_id, controller in self.controllers.items(): - if controller.status != RunnableStatus.RUNNING: - continue - controller_pnl = self.get_performance_report(controller_id).global_pnl_quote - last_max_pnl = self.max_pnl_by_controller[controller_id] - if controller_pnl > last_max_pnl: - self.max_pnl_by_controller[controller_id] = controller_pnl - else: - current_drawdown = last_max_pnl - controller_pnl - if current_drawdown > self.config.max_controller_drawdown_quote: - self.logger().info(f"Controller {controller_id} reached max drawdown. Stopping the controller.") - controller.stop() - executors_order_placed = self.filter_executors( - executors=self.get_executors_by_controller(controller_id), - filter_func=lambda x: x.is_active and not x.is_trading, - ) - self.executor_orchestrator.execute_actions( - actions=[StopExecutorAction(controller_id=controller_id, executor_id=executor.id) for executor in executors_order_placed] - ) - self.drawdown_exited_controllers.append(controller_id) - - def check_max_global_drawdown(self): - current_global_pnl = sum([self.get_performance_report(controller_id).global_pnl_quote for controller_id in self.controllers.keys()]) - if current_global_pnl > self.max_global_pnl: - self.max_global_pnl = current_global_pnl - else: - current_global_drawdown = self.max_global_pnl - current_global_pnl - if current_global_drawdown > self.config.max_global_drawdown_quote: - self.drawdown_exited_controllers.extend(list(self.controllers.keys())) - self.logger().info("Global drawdown reached. 
Stopping the strategy.") - self._is_stop_triggered = True - HummingbotApplication.main_application().stop() - - def send_performance_report(self): - if self.current_timestamp - self._last_performance_report_timestamp >= self.performance_report_interval and self._pub: - performance_reports = {controller_id: self.get_performance_report(controller_id).dict() for controller_id in self.controllers.keys()} - self._pub(performance_reports) - self._last_performance_report_timestamp = self.current_timestamp - - def check_manual_kill_switch(self): - for controller_id, controller in self.controllers.items(): - if controller.config.manual_kill_switch and controller.status == RunnableStatus.RUNNING: - self.logger().info(f"Manual cash out for controller {controller_id}.") - controller.stop() - executors_to_stop = self.get_executors_by_controller(controller_id) - self.executor_orchestrator.execute_actions( - [StopExecutorAction(executor_id=executor.id, - controller_id=executor.controller_id) for executor in executors_to_stop]) - if not controller.config.manual_kill_switch and controller.status == RunnableStatus.TERMINATED: - if controller_id in self.drawdown_exited_controllers: - continue - self.logger().info(f"Restarting controller {controller_id}.") - controller.start() - - def check_executors_status(self): - active_executors = self.filter_executors( - executors=self.get_all_executors(), - filter_func=lambda executor: executor.status == RunnableStatus.RUNNING - ) - if not active_executors: - self.logger().info("All executors have finalized their execution. Stopping the strategy.") - HummingbotApplication.main_application().stop() - else: - non_trading_executors = self.filter_executors( - executors=active_executors, - filter_func=lambda executor: not executor.is_trading - ) - self.executor_orchestrator.execute_actions( - [StopExecutorAction(executor_id=executor.id, - controller_id=executor.controller_id) for executor in non_trading_executors]) - - def create_actions_proposal(self) -> List[CreateExecutorAction]: - return [] - - def stop_actions_proposal(self) -> List[StopExecutorAction]: - return [] - - def apply_initial_setting(self): - connectors_position_mode = {} - for controller_id, controller in self.controllers.items(): - self.max_pnl_by_controller[controller_id] = Decimal("0") - config_dict = controller.config.model_dump() - if "connector_name" in config_dict: - if self.is_perpetual(config_dict["connector_name"]): - if "position_mode" in config_dict: - connectors_position_mode[config_dict["connector_name"]] = config_dict["position_mode"] - if "leverage" in config_dict: - self.connectors[config_dict["connector_name"]].set_leverage(leverage=config_dict["leverage"], - trading_pair=config_dict["trading_pair"]) - for connector_name, position_mode in connectors_position_mode.items(): - self.connectors[connector_name].set_position_mode(position_mode) diff --git a/credentials.yml b/credentials.yml deleted file mode 100644 index 92ad99b588..0000000000 --- a/credentials.yml +++ /dev/null @@ -1,15 +0,0 @@ -# This only works if you change the env variable in the docker-compose.yml -credentials: - usernames: - admin: - email: admin@gmail.com - name: John Doe - logged_in: False - password: abc -cookie: - expiry_days: 0 - key: some_signature_key # Must be string - name: some_cookie_name -pre-authorized: - emails: - - admin@admin.com diff --git a/docker-compose-dydx.yml b/docker-compose-dydx.yml deleted file mode 100644 index 298bc5bd94..0000000000 --- a/docker-compose-dydx.yml +++ /dev/null @@ -1,100 +0,0 @@ 
-services: - dashboard: - container_name: dashboard - image: hummingbot/dashboard:dydx - ports: - - "8501:8501" - environment: - - AUTH_SYSTEM_ENABLED=False - - BACKEND_API_HOST=hummingbot-api - - BACKEND_API_PORT=8000 - - BACKEND_API_USERNAME=admin - - BACKEND_API_PASSWORD=admin - volumes: - - ./credentials.yml:/home/dashboard/credentials.yml - - ./pages:/home/dashboard/frontend/pages - networks: - - emqx-bridge - hummingbot-api: - container_name: hummingbot-api - image: hummingbot/hummingbot-api:dydx - ports: - - "8000:8000" - volumes: - - ./bots:/hummingbot-api/bots - - /var/run/docker.sock:/var/run/docker.sock - env_file: - - .env - environment: - # Override specific values for Docker networking - - BROKER_HOST=emqx - - DATABASE_URL=postgresql+asyncpg://hbot:hummingbot-api@postgres:5432/hummingbot_api - networks: - - emqx-bridge - depends_on: - - postgres - emqx: - container_name: hummingbot-broker - image: emqx:5 - restart: unless-stopped - environment: - - EMQX_NAME=emqx - - EMQX_HOST=node1.emqx.local - - EMQX_CLUSTER__DISCOVERY_STRATEGY=static - - EMQX_CLUSTER__STATIC__SEEDS=[emqx@node1.emqx.local] - - EMQX_LOADED_PLUGINS="emqx_recon,emqx_retainer,emqx_management,emqx_dashboard" - volumes: - - emqx-data:/opt/emqx/data - - emqx-log:/opt/emqx/log - - emqx-etc:/opt/emqx/etc - ports: - - "1883:1883" # mqtt:tcp - - "8883:8883" # mqtt:tcp:ssl - - "8083:8083" # mqtt:ws - - "8084:8084" # mqtt:ws:ssl - - "8081:8081" # http:management - - "18083:18083" # http:dashboard - - "61613:61613" # web-stomp gateway - networks: - emqx-bridge: - aliases: - - node1.emqx.local - healthcheck: - test: [ "CMD", "/opt/emqx/bin/emqx_ctl", "status" ] - interval: 5s - timeout: 25s - retries: 5 - -networks: - emqx-bridge: - driver: bridge - - postgres: - container_name: hummingbot-postgres - image: postgres:15 - restart: unless-stopped - environment: - - POSTGRES_DB=hummingbot_api - - POSTGRES_USER=hbot - - POSTGRES_PASSWORD=hummingbot-api - volumes: - - postgres-data:/var/lib/postgresql/data - ports: - - "5432:5432" - networks: - - emqx-bridge - healthcheck: - test: ["CMD-SHELL", "pg_isready -U hbot -d hummingbot_api"] - interval: 10s - timeout: 5s - retries: 5 - -networks: - emqx-bridge: - driver: bridge - -volumes: - emqx-data: { } - emqx-log: { } - emqx-etc: { } - postgres-data: { } diff --git a/docker-compose.yml b/docker-compose.yml deleted file mode 100644 index d9a19ea3e5..0000000000 --- a/docker-compose.yml +++ /dev/null @@ -1,96 +0,0 @@ -services: - dashboard: - container_name: dashboard - image: hummingbot/dashboard:latest - ports: - - "8501:8501" - environment: - - AUTH_SYSTEM_ENABLED=False - - BACKEND_API_HOST=hummingbot-api - - BACKEND_API_PORT=8000 - - BACKEND_API_USERNAME=${USERNAME} - - BACKEND_API_PASSWORD=${PASSWORD} - volumes: - - ./credentials.yml:/home/dashboard/credentials.yml - - ./pages:/home/dashboard/frontend/pages - networks: - - emqx-bridge - hummingbot-api: - container_name: hummingbot-api - image: hummingbot/hummingbot-api:latest - ports: - - "8000:8000" - volumes: - - ./bots:/hummingbot-api/bots - - /var/run/docker.sock:/var/run/docker.sock - env_file: - - .env - environment: - # Override specific values for Docker networking - - BROKER_HOST=emqx - - DATABASE_URL=postgresql+asyncpg://hbot:hummingbot-api@postgres:5432/hummingbot_api - networks: - - emqx-bridge - depends_on: - - postgres - emqx: - container_name: hummingbot-broker - image: emqx:5 - restart: unless-stopped - environment: - - EMQX_NAME=emqx - - EMQX_HOST=node1.emqx.local - - EMQX_CLUSTER__DISCOVERY_STRATEGY=static - - 
EMQX_CLUSTER__STATIC__SEEDS=[emqx@node1.emqx.local] - - EMQX_LOADED_PLUGINS="emqx_recon,emqx_retainer,emqx_management,emqx_dashboard" - volumes: - - emqx-data:/opt/emqx/data - - emqx-log:/opt/emqx/log - - emqx-etc:/opt/emqx/etc - ports: - - "1883:1883" # mqtt:tcp - - "8883:8883" # mqtt:tcp:ssl - - "8083:8083" # mqtt:ws - - "8084:8084" # mqtt:ws:ssl - - "8081:8081" # http:management - - "18083:18083" # http:dashboard - - "61613:61613" # web-stomp gateway - networks: - emqx-bridge: - aliases: - - node1.emqx.local - healthcheck: - test: [ "CMD", "/opt/emqx/bin/emqx_ctl", "status" ] - interval: 5s - timeout: 25s - retries: 5 - - postgres: - container_name: hummingbot-postgres - image: postgres:15 - restart: unless-stopped - environment: - - POSTGRES_DB=hummingbot_api - - POSTGRES_USER=hbot - - POSTGRES_PASSWORD=hummingbot-api - volumes: - - postgres-data:/var/lib/postgresql/data - ports: - - "5432:5432" - networks: - - emqx-bridge - healthcheck: - test: ["CMD-SHELL", "pg_isready -U hbot -d hummingbot_api"] - interval: 10s - timeout: 5s - retries: 5 - -networks: - emqx-bridge: - driver: bridge - -volumes: - emqx-data: { } - emqx-log: { } - emqx-etc: { } - postgres-data: { } diff --git a/pages/__init__.py b/pages/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/__init__.py b/pages/config/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/bollinger_v1/README.md b/pages/config/bollinger_v1/README.md deleted file mode 100644 index 0f0918a71a..0000000000 --- a/pages/config/bollinger_v1/README.md +++ /dev/null @@ -1,67 +0,0 @@ -# Bollinger V1 Configuration Tool - -Welcome to the Bollinger V1 Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the Bollinger V1 directional trading strategy. Here’s how you can make the most out of it. - -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the Bollinger V1 strategy. This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Connector Name**: Select the trading platform or exchange. -- **Trading Pair**: Choose the cryptocurrency trading pair. -- **Leverage**: Set the leverage ratio. (Note: if you are using spot trading, set the leverage to 1) -- **Total Amount (Quote Currency)**: Define the total amount you want to allocate for trading. -- **Max Executors per Side**: Specify the maximum number of executors per side. -- **Cooldown Time**: Set the cooldown period between trades. -- **Position Mode**: Choose between different position modes. -- **Candles Connector**: Select the data source for candlestick data. -- **Candles Trading Pair**: Choose the trading pair for candlestick data. -- **Interval**: Set the interval for candlestick data. -- **Bollinger Bands Length**: Define the length of the Bollinger Bands. 
-- **Standard Deviation Multiplier**: Set the standard deviation multiplier for the Bollinger Bands. -- **Long Threshold**: Configure the threshold for long positions. -- **Short Threshold**: Configure the threshold for short positions. -- **Risk Management**: Set parameters for stop loss, take profit, time limit, and trailing stop settings. - -### 3. Visualize Bollinger Bands - -Visualize the Bollinger Bands on the OHLC (Open, High, Low, Close) chart to see the impact of your configuration. Here are some hints to help you fine-tune the Bollinger Bands: - -- **Bollinger Bands Length**: A larger length will make the Bollinger Bands wider and smoother, while a smaller length will make them narrower and more volatile. -- **Long Threshold**: This is a reference to the Bollinger Band. A value of 0 means the lower band, and a value of 1 means the upper band. For example, if the long threshold is 0, long positions will only be taken if the price is below the lower band. -- **Short Threshold**: Similarly, a value of 1.1 means the price must be above the upper band by 0.1 of the band’s range to take a short position. -- **Thresholds**: The closer you set the thresholds to 0.5, the more trades will be executed. The farther away they are, the fewer trades will be executed. - -### 4. Executor Distribution - -The total amount in the quote currency will be distributed among the maximum number of executors per side. For example, if the total amount quote is 1000 and the max executors per side is 5, each executor will have 200 to trade. If the signal is on, the first executor will place an order and wait for the cooldown time before the next one executes, continuing this pattern for the subsequent orders. - -### 5. Backtesting - -Run backtests to evaluate the performance of your configured strategy. The backtesting section allows you to: - -- **Process Data**: Analyze historical trading data. -- **Visualize Results**: See performance metrics and charts. -- **Evaluate Accuracy**: Assess the accuracy of your strategy’s predictions and trades. -- **Understand Close Types**: Review different types of trade closures and their frequencies. - -### 6. Save Configuration - -Once you are satisfied with your configuration and backtest results, save the configuration for future use in the Deploy tab. This allows you to deploy the same strategy later without having to reconfigure it from scratch. - ---- - -Feel free to experiment with different configurations to find the optimal setup for your trading strategy. Happy trading! 
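For reference, here is a minimal Python sketch of the threshold logic described in the Bollinger V1 README above: %B is taken as 0 at the lower band and 1 at the upper band, longs fire at or below the long threshold, shorts at or above the short threshold. The function name, defaults, and %B convention are illustrative assumptions for this sketch and are not taken from the removed `bollinger_v1` controller code.

```python
import pandas as pd


def bollinger_signals(close: pd.Series,
                      length: int = 100,
                      std_mult: float = 2.0,
                      long_threshold: float = 0.0,
                      short_threshold: float = 1.0) -> pd.Series:
    # Rolling Bollinger Bands computed from close prices
    mid = close.rolling(length).mean()
    std = close.rolling(length).std()
    upper = mid + std_mult * std
    lower = mid - std_mult * std

    # %B: 0 when price sits on the lower band, 1 when it sits on the upper band
    percent_b = (close - lower) / (upper - lower)

    signal = pd.Series(0, index=close.index)
    signal[percent_b <= long_threshold] = 1    # long entry at/below the long threshold
    signal[percent_b >= short_threshold] = -1  # short entry at/above the short threshold
    return signal
```

With the defaults shown (long threshold 0.0, short threshold 1.0), longs trigger only when price closes at or below the lower band and shorts only at or above the upper band; moving both thresholds toward 0.5 produces more frequent signals, consistent with the threshold hints in the README.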
\ No newline at end of file diff --git a/pages/config/bollinger_v1/__init__.py b/pages/config/bollinger_v1/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/bollinger_v1/app.py b/pages/config/bollinger_v1/app.py deleted file mode 100644 index 977c26e3ec..0000000000 --- a/pages/config/bollinger_v1/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import pandas_ta as ta # noqa: F401 -import streamlit as st -from plotly.subplots import make_subplots - -from frontend.components.backtesting import backtesting_section -from frontend.components.config_loader import get_default_config_loader -from frontend.components.save_config import render_save_config -from frontend.pages.config.bollinger_v1.user_inputs import user_inputs -from frontend.pages.config.utils import get_candles -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization import theme -from frontend.visualization.backtesting import create_backtesting_figure -from frontend.visualization.backtesting_metrics import render_accuracy_metrics, render_backtesting_metrics, render_close_types -from frontend.visualization.candles import get_candlestick_trace -from frontend.visualization.indicators import get_bbands_traces, get_volume_trace -from frontend.visualization.signals import get_bollinger_v1_signal_traces -from frontend.visualization.utils import add_traces_to_fig - -# Initialize the Streamlit page -initialize_st_page(title="Bollinger V1", icon="📈", initial_sidebar_state="expanded") -backend_api_client = get_backend_api_client() - -st.text("This tool will let you create a config for Bollinger V1 and visualize the strategy.") -get_default_config_loader("bollinger_v1") - -inputs = user_inputs() -st.session_state["default_config"].update(inputs) - -st.write("### Visualizing Bollinger Bands and Trading Signals") -days_to_visualize = st.number_input("Days to Visualize", min_value=1, max_value=365, value=7) -# Load candle data -candles = get_candles(connector_name=inputs["candles_connector"], trading_pair=inputs["candles_trading_pair"], - interval=inputs["interval"], days=days_to_visualize) - -# Create a subplot with 2 rows -fig = make_subplots(rows=2, cols=1, shared_xaxes=True, - vertical_spacing=0.02, subplot_titles=('Candlestick with Bollinger Bands', 'Volume'), - row_heights=[0.8, 0.2]) - -add_traces_to_fig(fig, [get_candlestick_trace(candles)], row=1, col=1) -add_traces_to_fig(fig, get_bbands_traces(candles, inputs["bb_length"], inputs["bb_std"]), row=1, col=1) -add_traces_to_fig(fig, get_bollinger_v1_signal_traces(candles, inputs["bb_length"], inputs["bb_std"], - inputs["bb_long_threshold"], inputs["bb_short_threshold"]), row=1, - col=1) -add_traces_to_fig(fig, [get_volume_trace(candles)], row=2, col=1) - -fig.update_layout(**theme.get_default_layout()) -# Use Streamlit's functionality to display the plot -st.plotly_chart(fig, use_container_width=True) -bt_results = backtesting_section(inputs, backend_api_client) -if bt_results: - fig = create_backtesting_figure( - df=bt_results["processed_data"], - executors=bt_results["executors"], - config=inputs) - c1, c2 = st.columns([0.9, 0.1]) - with c1: - render_backtesting_metrics(bt_results["results"]) - st.plotly_chart(fig, use_container_width=True) - with c2: - render_accuracy_metrics(bt_results["results"]) - st.write("---") - render_close_types(bt_results["results"]) -st.write("---") -render_save_config(st.session_state["default_config"]["id"], st.session_state["default_config"]) diff --git 
a/pages/config/bollinger_v1/user_inputs.py b/pages/config/bollinger_v1/user_inputs.py deleted file mode 100644 index d5a7e0a8ee..0000000000 --- a/pages/config/bollinger_v1/user_inputs.py +++ /dev/null @@ -1,51 +0,0 @@ -import streamlit as st - -from frontend.components.directional_trading_general_inputs import get_directional_trading_general_inputs -from frontend.components.risk_management import get_risk_management_inputs - - -def user_inputs(): - default_config = st.session_state.get("default_config", {}) - bb_length = default_config.get("bb_length", 100) - bb_std = default_config.get("bb_std", 2.0) - bb_long_threshold = default_config.get("bb_long_threshold", 0.0) - bb_short_threshold = default_config.get("bb_short_threshold", 1.0) - connector_name, trading_pair, leverage, total_amount_quote, max_executors_per_side, cooldown_time, position_mode, \ - candles_connector_name, candles_trading_pair, interval = get_directional_trading_general_inputs() - sl, tp, time_limit, ts_ap, ts_delta, take_profit_order_type = get_risk_management_inputs() - with st.expander("Bollinger Bands Configuration", expanded=True): - c1, c2, c3, c4 = st.columns(4) - with c1: - bb_length = st.number_input("Bollinger Bands Length", min_value=5, max_value=1000, value=bb_length) - with c2: - bb_std = st.number_input("Standard Deviation Multiplier", min_value=1.0, max_value=2.0, value=bb_std) - with c3: - bb_long_threshold = st.number_input("Long Threshold", value=bb_long_threshold) - with c4: - bb_short_threshold = st.number_input("Short Threshold", value=bb_short_threshold) - return { - "controller_name": "bollinger_v1", - "controller_type": "directional_trading", - "connector_name": connector_name, - "trading_pair": trading_pair, - "leverage": leverage, - "total_amount_quote": total_amount_quote, - "max_executors_per_side": max_executors_per_side, - "cooldown_time": cooldown_time, - "position_mode": position_mode, - "candles_connector": candles_connector_name, - "candles_trading_pair": candles_trading_pair, - "interval": interval, - "bb_length": bb_length, - "bb_std": bb_std, - "bb_long_threshold": bb_long_threshold, - "bb_short_threshold": bb_short_threshold, - "stop_loss": sl, - "take_profit": tp, - "time_limit": time_limit, - "trailing_stop": { - "activation_price": ts_ap, - "trailing_delta": ts_delta - }, - "take_profit_order_type": take_profit_order_type.value - } diff --git a/pages/config/dman_maker_v2/README.md b/pages/config/dman_maker_v2/README.md deleted file mode 100644 index da98f61322..0000000000 --- a/pages/config/dman_maker_v2/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# D-Man Maker V2 Configuration Tool - -Welcome to the D-Man Maker V2 Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the D-Man Maker V2 trading strategy. Here’s how you can make the most out of it. - -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the D-Man Maker V2 strategy. 
This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Connector Name**: Select the trading platform or exchange. -- **Trading Pair**: Choose the cryptocurrency trading pair. -- **Leverage**: Set the leverage ratio. (Note: if you are using spot trading, set the leverage to 1) -- **Total Amount (Quote Currency)**: Define the total amount you want to allocate for trading. -- **Position Mode**: Choose between different position modes. -- **Cooldown Time**: Set the cooldown period between trades. -- **Executor Refresh Time**: Define how often the executors refresh. -- **Buy/Sell Spread Distributions**: Configure the distribution of buy and sell spreads. -- **Order Amounts**: Specify the percentages for buy and sell order amounts. -- **Custom D-Man Maker V2 Settings**: Set specific parameters like top executor refresh time and activation bounds. - -### 3. Executor Distribution Visualization - -Visualize the distribution of your trading executors. This helps you understand how your buy and sell orders are spread across different price levels and amounts. - -### 4. DCA Distribution - -After setting the executor distribution, you will need to configure the internal distribution of the DCA (Dollar Cost Averaging). This involves multiple open orders and one close order per executor level. Visualize the DCA distribution to see how the entry prices are spread and ensure the initial DCA order amounts are above the minimum order size of the exchange. - -### 5. Risk Management - -Configure risk management settings, including take profit, stop loss, time limit, and trailing stop settings for each DCA. This step is crucial for managing your trades and minimizing risk. - -### 6. Backtesting - -Run backtests to evaluate the performance of your configured strategy. The backtesting section allows you to: - -- **Process Data**: Analyze historical trading data. -- **Visualize Results**: See performance metrics and charts. -- **Evaluate Accuracy**: Assess the accuracy of your strategy’s predictions and trades. -- **Understand Close Types**: Review different types of trade closures and their frequencies. - -### 7. Save Configuration - -Once you are satisfied with your configuration and backtest results, save the configuration for future use in the Deploy tab. This allows you to deploy the same strategy later without having to reconfigure it from scratch. - ---- - -Feel free to experiment with different configurations to find the optimal setup for your trading strategy. Happy trading! 
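As a point of reference for the executor distribution described above, the sketch below shows one way a total quote amount could be split across order levels using the `buy_amounts_pct` / `sell_amounts_pct` weighting convention documented earlier (weights normalized to sum to 1, equal split when no weights are given). `distribute_amounts` is an assumed illustrative helper, not a function from the removed controller or dashboard code.

```python
from typing import List, Optional


def distribute_amounts(total_amount_quote: float,
                       spreads: List[float],
                       amounts_pct: Optional[List[float]] = None) -> List[float]:
    # Default behaviour: split the total equally across levels when no weights are given
    if amounts_pct is None:
        amounts_pct = [1.0] * len(spreads)
    total_weight = sum(amounts_pct)
    # Normalize the weights and allocate the quote amount per level
    return [total_amount_quote * w / total_weight for w in amounts_pct]


# Example: 1000 quote over 4 front-weighted levels -> [400.0, 300.0, 200.0, 100.0]
print(distribute_amounts(1000, [0.001, 0.002, 0.003, 0.004], [2, 1.5, 1, 0.5]))
```

Each per-level amount produced this way would then be further subdivided by the DCA order distribution, so the first DCA order at every level still has to clear the exchange's minimum order size, as the README notes.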
\ No newline at end of file diff --git a/pages/config/dman_maker_v2/__init__.py b/pages/config/dman_maker_v2/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/dman_maker_v2/app.py b/pages/config/dman_maker_v2/app.py deleted file mode 100644 index 0649f9c13e..0000000000 --- a/pages/config/dman_maker_v2/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import streamlit as st - -from frontend.components.backtesting import backtesting_section -from frontend.components.config_loader import get_default_config_loader -from frontend.components.dca_distribution import get_dca_distribution_inputs -from frontend.components.save_config import render_save_config -from frontend.pages.config.dman_maker_v2.user_inputs import user_inputs -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization.backtesting import create_backtesting_figure -from frontend.visualization.backtesting_metrics import render_accuracy_metrics, render_backtesting_metrics, render_close_types -from frontend.visualization.dca_builder import create_dca_graph -from frontend.visualization.executors_distribution import create_executors_distribution_traces - -# Initialize the Streamlit page -initialize_st_page(title="D-Man Maker V2", icon="🧙‍♂️") -backend_api_client = get_backend_api_client() - -# Page content -st.text("This tool will let you create a config for D-Man Maker V2 and upload it to the BackendAPI.") -get_default_config_loader("dman_maker_v2") - -inputs = user_inputs() -with st.expander("Executor Distribution:", expanded=True): - fig = create_executors_distribution_traces(inputs["buy_spreads"], inputs["sell_spreads"], inputs["buy_amounts_pct"], - inputs["sell_amounts_pct"], inputs["total_amount_quote"]) - st.plotly_chart(fig, use_container_width=True) - -dca_inputs = get_dca_distribution_inputs() - -st.write("### Visualizing DCA Distribution for specific Executor Level") -st.write("---") -buy_order_levels = len(inputs["buy_spreads"]) -sell_order_levels = len(inputs["sell_spreads"]) - -buy_executor_levels = [f"BUY_{i}" for i in range(buy_order_levels)] -sell_executor_levels = [f"SELL_{i}" for i in range(sell_order_levels)] -c1, c2 = st.columns(2) -with c1: - executor_level = st.selectbox("Executor Level", buy_executor_levels + sell_executor_levels) - side, level = executor_level.split("_") - if side == "BUY": - dca_amount = inputs["buy_amounts_pct"][int(level)] * inputs["total_amount_quote"] - else: - dca_amount = inputs["sell_amounts_pct"][int(level)] * inputs["total_amount_quote"] -with c2: - st.metric(label="DCA Amount", value=f"{dca_amount:.2f}") -fig = create_dca_graph(dca_inputs, dca_amount) -st.plotly_chart(fig, use_container_width=True) - -# Combine inputs and dca_inputs into final config -config = {**inputs, **dca_inputs} -st.session_state["default_config"].update(config) -bt_results = backtesting_section(config, backend_api_client) -if bt_results: - fig = create_backtesting_figure( - df=bt_results["processed_data"], - executors=bt_results["executors"], - config=inputs) - c1, c2 = st.columns([0.9, 0.1]) - with c1: - render_backtesting_metrics(bt_results["results"]) - st.plotly_chart(fig, use_container_width=True) - with c2: - render_accuracy_metrics(bt_results["results"]) - st.write("---") - render_close_types(bt_results["results"]) -st.write("---") -render_save_config(st.session_state["default_config"]["id"], st.session_state["default_config"]) diff --git a/pages/config/dman_maker_v2/user_inputs.py b/pages/config/dman_maker_v2/user_inputs.py 
deleted file mode 100644 index a9f12098e3..0000000000 --- a/pages/config/dman_maker_v2/user_inputs.py +++ /dev/null @@ -1,39 +0,0 @@ -import streamlit as st - -from frontend.components.executors_distribution import get_executors_distribution_inputs -from frontend.components.market_making_general_inputs import get_market_making_general_inputs - - -def user_inputs(): - connector_name, trading_pair, leverage, total_amount_quote, position_mode, cooldown_time,\ - executor_refresh_time, _, _, _ = get_market_making_general_inputs() - buy_spread_distributions, sell_spread_distributions, buy_order_amounts_pct, \ - sell_order_amounts_pct = get_executors_distribution_inputs() - with st.expander("Custom D-Man Maker V2 Settings"): - c1, c2 = st.columns(2) - with c1: - top_executor_refresh_time = st.number_input("Top Refresh Time (minutes)", value=60) * 60 - with c2: - executor_activation_bounds = st.number_input("Activation Bounds (%)", value=0.1) / 100 - # Create the config - config = { - "controller_name": "dman_maker_v2", - "controller_type": "market_making", - "manual_kill_switch": False, - "candles_config": [], - "connector_name": connector_name, - "trading_pair": trading_pair, - "total_amount_quote": total_amount_quote, - "buy_spreads": buy_spread_distributions, - "sell_spreads": sell_spread_distributions, - "buy_amounts_pct": buy_order_amounts_pct, - "sell_amounts_pct": sell_order_amounts_pct, - "executor_refresh_time": executor_refresh_time, - "cooldown_time": cooldown_time, - "leverage": leverage, - "position_mode": position_mode, - "top_executor_refresh_time": top_executor_refresh_time, - "executor_activation_bounds": [executor_activation_bounds] - } - - return config diff --git a/pages/config/grid_strike/README.md b/pages/config/grid_strike/README.md deleted file mode 100644 index f4d19e69c4..0000000000 --- a/pages/config/grid_strike/README.md +++ /dev/null @@ -1,137 +0,0 @@ -# Grid Strike Grid Component Configuration Tool - -Welcome to the Grid Strike Grid Component Configuration Tool! This tool allows you to create, modify, visualize, and save configurations for the Grid Strike Grid Component trading strategy, which is a simplified version of the Grid Strike strategy focused on a single grid. - -## Features - -- **Simple Grid Configuration**: Configure a single grid with start, end, and limit prices. -- **Dynamic Price Range Defaults**: Automatically sets price ranges based on current market conditions. -- **Visual Grid Configuration**: See your grid settings directly on the price chart. -- **Triple Barrier Risk Management**: Configure take profit, stop loss, and time limit parameters. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Basic Configuration - -Start by configuring the basic parameters: -- **ID Prefix**: Prefix for the strategy ID (default: "grid_"). -- **Trading Pair**: Choose the cryptocurrency trading pair (e.g., "BTC-FDUSD"). -- **Connector Name**: Select the trading platform or exchange (e.g., "binance"). -- **Leverage**: Set the leverage ratio for margin/futures trading. - -### 2. Chart Configuration - -Configure how you want to visualize the market data: -- **Candles Connector**: Select the data source for candlestick data. -- **Interval**: Choose the timeframe for the candlesticks (1m to 1d). -- **Days to Display**: Select how many days of historical data to show. - -### 3. Grid Configuration - -Configure your grid parameters: -- **Side**: Choose BUY or SELL for the grid. 
-- **Start Price**: The price where the grid begins. -- **End Price**: The price where the grid ends. -- **Limit Price**: A price limit that will stop the strategy. -- **Min Spread Between Orders**: Minimum price difference between orders. -- **Min Order Amount (Quote)**: Minimum size for individual orders. -- **Maximum Open Orders**: Maximum number of active orders in the grid. - -### 4. Order Configuration - -Fine-tune your order placement: -- **Max Orders Per Batch**: Maximum number of orders to place at once. -- **Order Frequency**: Time between order placements in seconds. -- **Activation Bounds**: Price deviation to trigger updates. - -### 5. Triple Barrier Configuration - -Set up risk management parameters: -- **Open Order Type**: The type of order to open positions (e.g., MARKET, LIMIT). -- **Take Profit**: Price movement percentage for take profit. -- **Stop Loss**: Price movement percentage for stop loss. -- **Time Limit**: Time limit for orders in hours. -- **Order Type Settings**: Configure order types for each barrier. - -### 6. Advanced Configuration - -Additional settings: -- **Position Mode**: Choose between HEDGE or ONE-WAY. -- **Strategy Time Limit**: Maximum duration for the entire strategy in hours. -- **Manual Kill Switch**: Option to enable manual kill switch. - -## Understanding Grid Strike Grid Component - -The Grid Strike Grid Component strategy creates a single grid of orders within a specified price range. Here's how it works: - -### Grid Mechanics -- The strategy places orders uniformly between the start and end prices -- BUY grids place buy orders from start (higher) to end (lower) prices -- SELL grids place sell orders from start (lower) to end (higher) prices -- The limit price serves as an additional safety boundary - -### Order Placement -- Orders are placed within the grid based on the min spread between orders -- The amount per order is calculated based on the total amount specified -- Orders are automatically adjusted when price moves beyond activation bounds - -### Visual Indicators -- Green lines represent the start and end prices -- Red line represents the limit price -- Candlestick chart shows the market price action - -## Example Configuration - -Here's a sample configuration for a BTC-FDUSD grid: - -```yaml -id: grid_btcfdusd -controller_name: grid_strike -controller_type: generic -total_amount_quote: 200 -manual_kill_switch: null -candles_config: [] -leverage: 75 -position_mode: HEDGE -connector_name: binance -trading_pair: BTC-FDUSD -side: 1 -start_price: 84000 -end_price: 84300 -limit_price: 83700 -min_spread_between_orders: 0.0001 -min_order_amount_quote: 5 -max_open_orders: 40 -max_orders_per_batch: 1 -order_frequency: 2 -activation_bounds: 0.01 -triple_barrier_config: - open_order_type: 3 - stop_loss: null - stop_loss_order_type: 1 - take_profit: 0.0001 - take_profit_order_type: 3 - time_limit: 21600 - time_limit_order_type: 1 -time_limit: 172800 -``` - -## Best Practices - -1. **Grid Placement** - - For BUY grids, set start price above end price - - For SELL grids, set end price above start price - - Set limit price as a safety boundary where you want to stop the strategy - -2. **Amount Management** - - Set total amount based on your risk tolerance - - Configure min order amount to ensure meaningful trade sizes - -3. **Grid Density** - - Adjust min spread between orders based on the asset's volatility - - Set max open orders to control grid density - -4. 
**Risk Management** - - Use triple barrier parameters to manage risk for individual positions - - Set appropriate time limits for both positions and the overall strategy \ No newline at end of file diff --git a/pages/config/grid_strike/__init__.py b/pages/config/grid_strike/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/grid_strike/app.py b/pages/config/grid_strike/app.py deleted file mode 100644 index 6040dc5f56..0000000000 --- a/pages/config/grid_strike/app.py +++ /dev/null @@ -1,161 +0,0 @@ -import plotly.graph_objects as go -import streamlit as st -from hummingbot.core.data_type.common import TradeType -from plotly.subplots import make_subplots - -from frontend.components.config_loader import get_default_config_loader -from frontend.components.save_config import render_save_config -from frontend.pages.config.grid_strike.user_inputs import user_inputs -from frontend.pages.config.utils import get_candles -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization import theme -from frontend.visualization.candles import get_candlestick_trace -from frontend.visualization.utils import add_traces_to_fig - - -def get_grid_trace(start_price, end_price, limit_price): - """Generate horizontal line traces for the grid with different colors.""" - traces = [] - - # Start price line - traces.append(go.Scatter( - x=[], # Will be set to full range when plotting - y=[float(start_price), float(start_price)], - mode='lines', - line=dict(color='rgba(0, 255, 0, 1)', width=1.5, dash='solid'), - name=f'Start Price: {float(start_price):,.2f}', - hoverinfo='name' - )) - - # End price line - traces.append(go.Scatter( - x=[], # Will be set to full range when plotting - y=[float(end_price), float(end_price)], - mode='lines', - line=dict(color='rgba(0, 255, 0, 1)', width=1.5, dash='dot'), - name=f'End Price: {float(end_price):,.2f}', - hoverinfo='name' - )) - - # Limit price line (if provided) - if limit_price: - traces.append(go.Scatter( - x=[], # Will be set to full range when plotting - y=[float(limit_price), float(limit_price)], - mode='lines', - line=dict(color='rgba(255, 0, 0, 1)', width=1.5, dash='dashdot'), - name=f'Limit Price: {float(limit_price):,.2f}', - hoverinfo='name' - )) - - return traces - - -# Initialize the Streamlit page -initialize_st_page(title="Grid Strike Grid Component", icon="📊", initial_sidebar_state="expanded") -backend_api_client = get_backend_api_client() - -get_default_config_loader("grid_strike") -# User inputs -inputs = user_inputs() -st.session_state["default_config"].update(inputs) - -# Load candle data -candles = get_candles( - connector_name=inputs["connector_name"], - trading_pair=inputs["trading_pair"], - interval=inputs["interval"], - days=inputs["days_to_visualize"] -) - -# Create a subplot with just 1 row for price action -fig = make_subplots( - rows=1, cols=1, - subplot_titles=(f'Grid Strike Grid Component - {inputs["trading_pair"]} ({inputs["interval"]})',), -) - -# Add basic candlestick chart -candlestick_trace = get_candlestick_trace(candles) -add_traces_to_fig(fig, [candlestick_trace], row=1, col=1) - -# Add grid visualization -grid_traces = get_grid_trace( - inputs["start_price"], - inputs["end_price"], - inputs["limit_price"] -) - -for trace in grid_traces: - # Set the x-axis range for all grid traces - trace.x = [candles.index[0], candles.index[-1]] - fig.add_trace(trace, row=1, col=1) - -# Update y-axis to make sure all grid points and candles are visible -all_prices = [] -# 
Add candle prices -all_prices.extend(candles['high'].tolist()) -all_prices.extend(candles['low'].tolist()) -# Add grid prices -all_prices.extend([float(inputs["start_price"]), float(inputs["end_price"])]) -if inputs["limit_price"]: - all_prices.append(float(inputs["limit_price"])) - -y_min, y_max = min(all_prices), max(all_prices) -padding = (y_max - y_min) * 0.1 # Add 10% padding -fig.update_yaxes(range=[y_min - padding, y_max + padding]) - -# Update layout for better visualization -layout_updates = { - "legend": dict( - yanchor="top", - y=0.99, - xanchor="left", - x=0.01, - bgcolor="rgba(0,0,0,0.5)" - ), - "hovermode": 'x unified', - "showlegend": True, - "height": 600, # Make the chart taller - "yaxis": dict( - fixedrange=False, # Allow y-axis zooming - autorange=True, # Enable auto-ranging - ) -} - -# Merge the default theme with our updates -fig.update_layout( - **(theme.get_default_layout() | layout_updates) -) - -# Use Streamlit's functionality to display the plot -st.plotly_chart(fig, use_container_width=True) - - -# Add after user inputs and before saving -def prepare_config_for_save(config): - """Prepare config for JSON serialization.""" - prepared_config = config.copy() - - # Convert position mode enum to value - prepared_config["position_mode"] = prepared_config["position_mode"].value - - # Convert side to value - prepared_config["side"] = prepared_config["side"].value - - # Convert triple barrier order types to values - if "triple_barrier_config" in prepared_config and prepared_config["triple_barrier_config"]: - for key in ["open_order_type", "stop_loss_order_type", "take_profit_order_type", "time_limit_order_type"]: - if key in prepared_config["triple_barrier_config"] and prepared_config["triple_barrier_config"][key] is not None: - prepared_config["triple_barrier_config"][key] = prepared_config["triple_barrier_config"][key].value - - # Remove chart-specific fields - del prepared_config["candles_connector"] - del prepared_config["interval"] - del prepared_config["days_to_visualize"] - - return prepared_config - - -# Render save config component -render_save_config(st.session_state["default_config"]["id"], - prepare_config_for_save(st.session_state["default_config"])) \ No newline at end of file diff --git a/pages/config/grid_strike/user_inputs.py b/pages/config/grid_strike/user_inputs.py deleted file mode 100644 index eae099ba4d..0000000000 --- a/pages/config/grid_strike/user_inputs.py +++ /dev/null @@ -1,256 +0,0 @@ -import streamlit as st -from hummingbot.core.data_type.common import OrderType, PositionMode, TradeType - -from frontend.pages.config.utils import get_candles - - -def get_price_range_defaults(connector_name: str, trading_pair: str, interval: str, days: int = 7): - """Fetch candles and compute default price range based on recent min/max prices.""" - try: - candles = get_candles( - connector_name=connector_name, - trading_pair=trading_pair, - interval=interval, - days=days - ) - current_price = float(candles['close'].iloc[-1]) - min_price = float(candles['low'].quantile(0.05)) - max_price = float(candles['high'].quantile(0.95)) - return round(min_price, 2), round(current_price, 2), round(max_price, 2) - except Exception as e: - st.warning(f"Could not fetch price data: {str(e)}. 
Using default values.") - return 40000.0, 42000.0, 44000.0 # Fallback defaults - - -def user_inputs(): - # Split the page into two columns for the expanders - left_col, right_col = st.columns(2) - with left_col: - # Combined Basic, Amount, and Grid Configuration - with st.expander("Grid Configuration", expanded=True): - # Basic parameters - c1, c2 = st.columns(2) - with c1: - connector_name = st.text_input("Connector Name", value="binance_perpetual") - # Side selection - side = st.selectbox( - "Side", - options=["BUY", "SELL"], - index=0, - help="Trading direction for the grid" - ) - leverage = st.number_input("Leverage", min_value=1, value=20) - with c2: - trading_pair = st.text_input("Trading Pair", value="WLD-USDT") - # Amount parameter - total_amount_quote = st.number_input( - "Total Amount (Quote)", - min_value=0.0, - value=200.0, - help="Total amount in quote currency to use for trading" - ) - position_mode = st.selectbox( - "Position Mode", - options=["HEDGE", "ONEWAY"], - index=0 - ) - # Grid price parameters - with c1: - # Get default price ranges based on current market data - min_price, current_price, max_price = get_price_range_defaults( - connector_name, - trading_pair, - "1h", # Default interval for price range calculation - 30 # Default days for price range calculation - ) - if side == "BUY": - start_price = min(min_price, current_price) - end_price = max(current_price, max_price) - limit_price = start_price * 0.95 - else: - start_price = max(max_price, current_price) - end_price = min(current_price, min_price) - limit_price = start_price * 1.05 - # Price configuration with meaningful defaults - start_price = st.number_input( - "Start Price", - value=start_price, - format="%.2f", - help="Grid start price" - ) - - end_price = st.number_input( - "End Price", - value=end_price, - format="%.2f", - help="Grid end price" - ) - - limit_price = st.number_input( - "Limit Price", - value=limit_price, - format="%.2f", - help="Price limit to stop the strategy" - ) - - with c2: - # Grid spacing configuration - min_spread = st.number_input( - "Min Spread Between Orders", - min_value=0.0000, - value=0.0001, - format="%.4f", - help="Minimum price difference between orders", - step=0.0001 - ) - - min_order_amount = st.number_input( - "Min Order Amount (Quote)", - min_value=1.0, - value=6.0, - help="Minimum amount for each order in quote currency" - ) - - max_open_orders = st.number_input( - "Maximum Open Orders", - min_value=1, - value=3, - help="Maximum number of active orders in the grid" - ) - - with right_col: - # Order configuration - with st.expander("Order Configuration", expanded=True): - c1, c2, c3 = st.columns(3) - with c1: - max_orders_per_batch = st.number_input( - "Max Orders Per Batch", - min_value=1, - value=1, - help="Maximum number of orders to place at once" - ) - with c2: - order_frequency = st.number_input( - "Order Frequency (s)", - min_value=1, - value=2, - help="Time between order placements in seconds" - ) - with c3: - activation_bounds = st.number_input( - "Activation Bounds", - min_value=0.0, - value=0.01, - format="%.4f", - help="Price deviation to trigger updates" - ) - - # Triple barrier configuration - with st.expander("Triple Barrier Configuration", expanded=True): - c1, c2 = st.columns(2) - with c1: - # Order types - open_order_type_options = ["LIMIT", "LIMIT_MAKER", "MARKET"] - open_order_type = st.selectbox( - "Open Order Type", - options=open_order_type_options, - index=1, # Default to MARKET - key="open_order_type" - ) - - take_profit_order_type_options = 
["LIMIT", "LIMIT_MAKER", "MARKET"] - take_profit_order_type = st.selectbox( - "Take Profit Order Type", - options=take_profit_order_type_options, - index=1, # Default to MARKET - key="tp_order_type" - ) - - with c2: - # Barrier values - take_profit = st.number_input( - "Take Profit", - min_value=0.0, - value=0.0001, - format="%.4f", - help="Price movement percentage for take profit" - ) - - stop_loss = st.number_input( - "Stop Loss", - min_value=0.0, - value=0.1, - format="%.4f", - help="Price movement percentage for stop loss (0 for none)" - ) - - # Keep position parameter - keep_position = st.checkbox( - "Keep Position", - value=False, - help="Keep the position open after grid execution" - ) - # Chart configuration - with st.expander("Chart Configuration", expanded=True): - c1, c2, c3 = st.columns(3) - with c1: - candles_connector = st.text_input( - "Candles Connector", - value=connector_name, # Use same connector as trading by default - help="Connector to fetch price data from" - ) - with c2: - interval = st.selectbox( - "Interval", - options=["1m", "3m", "5m", "15m", "30m", "1h", "2h", "4h", "6h", "12h", "1d"], - index=4, # Default to 1h - help="Candlestick interval" - ) - with c3: - days_to_visualize = st.number_input( - "Days to Display", - min_value=1, - max_value=365, - value=30, - help="Number of days of historical data to display" - ) - - - - # Convert stop_loss to None if it's zero - stop_loss_value = stop_loss if stop_loss > 0 else None - - # Prepare triple barrier config - triple_barrier_config = { - "open_order_type": OrderType[open_order_type], - "stop_loss": stop_loss_value, - "take_profit": take_profit, - "take_profit_order_type": OrderType[take_profit_order_type], - "time_limit": None, - } - - return { - "controller_name": "grid_strike", - "controller_type": "generic", - "connector_name": connector_name, - "candles_connector": candles_connector, - "trading_pair": trading_pair, - "interval": interval, - "days_to_visualize": days_to_visualize, - "leverage": leverage, - "side": TradeType[side], - "start_price": start_price, - "end_price": end_price, - "limit_price": limit_price, - "position_mode": PositionMode[position_mode], - "total_amount_quote": total_amount_quote, - "min_spread_between_orders": min_spread, - "min_order_amount_quote": min_order_amount, - "max_open_orders": max_open_orders, - "max_orders_per_batch": max_orders_per_batch, - "order_frequency": order_frequency, - "activation_bounds": activation_bounds, - "triple_barrier_config": triple_barrier_config, - "keep_position": keep_position, - "candles_config": [] - } diff --git a/pages/config/kalman_filter_v1/README.md b/pages/config/kalman_filter_v1/README.md deleted file mode 100644 index 2fa8d53fad..0000000000 --- a/pages/config/kalman_filter_v1/README.md +++ /dev/null @@ -1,19 +0,0 @@ -# D-Man Maker V2 - -## Features -- **Interactive Configuration**: Configure market making parameters such as spreads, amounts, and order levels through an intuitive web interface. -- **Visual Feedback**: Visualize order spread and amount distributions using dynamic Plotly charts. -- **Backend Integration**: Save and deploy configurations directly to a backend system for active management and execution. - -### Using the Tool -1. **Configure Parameters**: Use the Streamlit interface to input parameters such as connector type, trading pair, and leverage. -2. 
**Set Distributions**: Define distributions for buy and sell orders, including spread and amount, either manually or through predefined distribution types like Geometric or Fibonacci. -3. **Visualize Orders**: View the configured order distributions on a Plotly graph, which illustrates the relationship between spread and amount. -4. **Export Configuration**: Once the configuration is set, export it as a YAML file or directly upload it to the Backend API. -5. **Upload**: Use the "Upload Config to BackendAPI" button to send your configuration to the backend system. Then can be used to deploy a new bot. - -## Troubleshooting -- **UI Not Loading**: Ensure all Python dependencies are installed and that the Streamlit server is running correctly. -- **API Errors**: Check the console for any error messages that may indicate issues with the backend connection. - -For more detailed documentation on the backend API and additional configurations, please refer to the project's documentation or contact the development team. \ No newline at end of file diff --git a/pages/config/kalman_filter_v1/__init__.py b/pages/config/kalman_filter_v1/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/kalman_filter_v1/app.py b/pages/config/kalman_filter_v1/app.py deleted file mode 100644 index 96d69fb0f2..0000000000 --- a/pages/config/kalman_filter_v1/app.py +++ /dev/null @@ -1,236 +0,0 @@ -import pandas as pd -import plotly.graph_objects as go -import streamlit as st -import yaml -from hummingbot.connector.connector_base import OrderType -from plotly.subplots import make_subplots -from pykalman import KalmanFilter - -from backend.services.backend_api_client import BackendAPIClient -from CONFIG import BACKEND_API_HOST, BACKEND_API_PORT -from frontend.st_utils import get_backend_api_client, initialize_st_page - -# Initialize the Streamlit page -initialize_st_page(title="Kalman Filter V1", icon="📈", initial_sidebar_state="expanded") - - -@st.cache_data -def get_candles(connector_name="binance", trading_pair="BTC-USDT", interval="1m", max_records=5000): - backend_client = BackendAPIClient(BACKEND_API_HOST, BACKEND_API_PORT) - return backend_client.get_real_time_candles(connector_name, trading_pair, interval, max_records) - - -@st.cache_data -def add_indicators(df, observation_covariance=1, transition_covariance=0.01, initial_state_covariance=0.001): - # Add Bollinger Bands - # Construct a Kalman filter - kf = KalmanFilter(transition_matrices=[1], - observation_matrices=[1], - initial_state_mean=df["close"].values[0], - initial_state_covariance=initial_state_covariance, - observation_covariance=observation_covariance, - transition_covariance=transition_covariance) - mean, cov = kf.filter(df["close"].values) - df["kf"] = pd.Series(mean.flatten(), index=df["close"].index) - df["kf_upper"] = pd.Series(mean.flatten() + 1.96 * cov.flatten(), index=df["close"].index) - df["kf_lower"] = pd.Series(mean.flatten() - 1.96 * cov.flatten(), index=df["close"].index) - - # Generate signal - long_condition = df["close"] < df["kf_lower"] - short_condition = df["close"] > df["kf_upper"] - - # Generate signal - df["signal"] = 0 - df.loc[long_condition, "signal"] = 1 - df.loc[short_condition, "signal"] = -1 - return df - - -st.text("This tool will let you create a config for Kalman Filter V1 and visualize the strategy.") -st.write("---") - -# Inputs for Kalman Filter configuration -st.write("## Candles Configuration") -c1, c2, c3, c4 = st.columns(4) -with c1: - connector_name = 
st.text_input("Connector Name", value="binance_perpetual") - candles_connector = st.text_input("Candles Connector", value="binance_perpetual") -with c2: - trading_pair = st.text_input("Trading Pair", value="WLD-USDT") - candles_trading_pair = st.text_input("Candles Trading Pair", value="WLD-USDT") -with c3: - interval = st.selectbox("Candle Interval", options=["1m", "3m", "5m", "15m", "30m"], index=1) -with c4: - max_records = st.number_input("Max Records", min_value=100, max_value=10000, value=1000) - -st.write("## Positions Configuration") -c1, c2, c3, c4 = st.columns(4) -with c1: - sl = st.number_input("Stop Loss (%)", min_value=0.0, max_value=100.0, value=2.0, step=0.1) - tp = st.number_input("Take Profit (%)", min_value=0.0, max_value=100.0, value=3.0, step=0.1) - take_profit_order_type = st.selectbox("Take Profit Order Type", (OrderType.LIMIT, OrderType.MARKET)) -with c2: - ts_ap = st.number_input("Trailing Stop Activation Price (%)", min_value=0.0, max_value=100.0, value=1.0, step=0.1) - ts_delta = st.number_input("Trailing Stop Delta (%)", min_value=0.0, max_value=100.0, value=0.3, step=0.1) - time_limit = st.number_input("Time Limit (minutes)", min_value=0, value=60 * 6) -with c3: - executor_amount_quote = st.number_input("Executor Amount Quote", min_value=10.0, value=100.0, step=1.0) - max_executors_per_side = st.number_input("Max Executors Per Side", min_value=1, value=2) - cooldown_time = st.number_input("Cooldown Time (seconds)", min_value=0, value=300) -with c4: - leverage = st.number_input("Leverage", min_value=1, value=20) - position_mode = st.selectbox("Position Mode", ("HEDGE", "ONEWAY")) - -st.write("## Kalman Filter Configuration") -c1, c2 = st.columns(2) -with c1: - observation_covariance = st.number_input("Observation Covariance", value=1.0) -with c2: - transition_covariance = st.number_input("Transition Covariance", value=0.001, step=0.0001, format="%.4f") - -# Load candle data -candle_data = get_candles(connector_name=candles_connector, trading_pair=candles_trading_pair, interval=interval, - max_records=max_records) -df = pd.DataFrame(candle_data) -df.index = pd.to_datetime(df['timestamp'], unit='s') -candles_processed = add_indicators(df, observation_covariance, transition_covariance) - -# Prepare data for signals -signals = candles_processed[candles_processed['signal'] != 0] -buy_signals = signals[signals['signal'] == 1] -sell_signals = signals[signals['signal'] == -1] - - -# Define your color palette -tech_colors = { - 'upper_band': '#4682B4', # Steel Blue for the Upper Bollinger Band - 'middle_band': '#FFD700', # Gold for the Middle Bollinger Band - 'lower_band': '#32CD32', # Green for the Lower Bollinger Band - 'buy_signal': '#1E90FF', # Dodger Blue for Buy Signals - 'sell_signal': '#FF0000', # Red for Sell Signals -} - -# Create a subplot with 2 rows -fig = make_subplots(rows=2, cols=1, shared_xaxes=True, - vertical_spacing=0.02, subplot_titles=('Candlestick with Kalman Filter', 'Trading Signals'), - row_heights=[0.7, 0.3]) - -# Candlestick plot -fig.add_trace(go.Candlestick(x=candles_processed.index, - open=candles_processed['open'], - high=candles_processed['high'], - low=candles_processed['low'], - close=candles_processed['close'], - name="Candlesticks", increasing_line_color='#2ECC71', decreasing_line_color='#E74C3C'), - row=1, col=1) - -# Bollinger Bands -fig.add_trace( - go.Scatter(x=candles_processed.index, y=candles_processed['kf_upper'], line=dict(color=tech_colors['upper_band']), - name='Upper Band'), row=1, col=1) -fig.add_trace( - 
go.Scatter(x=candles_processed.index, y=candles_processed['kf'], line=dict(color=tech_colors['middle_band']), - name='Middle Band'), row=1, col=1) -fig.add_trace( - go.Scatter(x=candles_processed.index, y=candles_processed['kf_lower'], line=dict(color=tech_colors['lower_band']), - name='Lower Band'), row=1, col=1) - -# Signals plot -fig.add_trace(go.Scatter(x=buy_signals.index, y=buy_signals['close'], mode='markers', - marker=dict(color=tech_colors['buy_signal'], size=10, symbol='triangle-up'), - name='Buy Signal'), row=1, col=1) -fig.add_trace(go.Scatter(x=sell_signals.index, y=sell_signals['close'], mode='markers', - marker=dict(color=tech_colors['sell_signal'], size=10, symbol='triangle-down'), - name='Sell Signal'), row=1, col=1) - -fig.add_trace(go.Scatter(x=signals.index, y=signals['signal'], mode='markers', - marker=dict(color=signals['signal'].map( - {1: tech_colors['buy_signal'], -1: tech_colors['sell_signal']}), size=10), - showlegend=False), row=2, col=1) - -# Update layout -fig.update_layout( - height=1000, # Increased height for better visibility - title="Kalman Filter and Trading Signals", - xaxis_title="Time", - yaxis_title="Price", - template="plotly_dark", - showlegend=False -) - -# Update xaxis properties -fig.update_xaxes( - rangeslider_visible=False, # Disable range slider for all - row=1, col=1 -) -fig.update_xaxes( - row=2, col=1 -) - -# Update yaxis properties -fig.update_yaxes( - title_text="Price", row=1, col=1 -) -fig.update_yaxes( - title_text="Signal", row=2, col=1 -) - -# Use Streamlit's functionality to display the plot -st.plotly_chart(fig, use_container_width=True) - -c1, c2, c3 = st.columns([2, 2, 1]) - -with c1: - config_base = st.text_input("Config Base", value=f"bollinger_v1-{connector_name}-{trading_pair.split('-')[0]}") -with c2: - config_tag = st.text_input("Config Tag", value="1.1") - -id = f"{config_base}-{config_tag}" -config = { - "id": id, - "controller_name": "bollinger_v1", - "controller_type": "directional_trading", - "manual_kill_switch": False, - "candles_config": [], - "connector_name": connector_name, - "trading_pair": trading_pair, - "executor_amount_quote": executor_amount_quote, - "max_executors_per_side": max_executors_per_side, - "cooldown_time": cooldown_time, - "leverage": leverage, - "position_mode": position_mode, - "stop_loss": sl / 100, - "take_profit": tp / 100, - "time_limit": time_limit, - "take_profit_order_type": take_profit_order_type.value, - "trailing_stop": { - "activation_price": ts_ap / 100, - "trailing_delta": ts_delta / 100 - }, - "candles_connector": candles_connector, - "candles_trading_pair": candles_trading_pair, - "interval": interval, -} - -yaml_config = yaml.dump(config, default_flow_style=False) - -with c3: - download_config = st.download_button( - label="Download YAML", - data=yaml_config, - file_name=f'{id.lower()}.yml', - mime='text/yaml' - ) - upload_config_to_backend = st.button("Upload Config to BackendAPI") - -if upload_config_to_backend: - backend_api_client = get_backend_api_client() - try: - config_name = config.get("id", id) - backend_api_client.controllers.create_or_update_controller_config( - config_name=config_name, - config=config - ) - st.success("Config uploaded successfully!") - except Exception as e: - st.error(f"Failed to upload config: {e}") diff --git a/pages/config/macd_bb_v1/README.md b/pages/config/macd_bb_v1/README.md deleted file mode 100644 index b7e7adb1b7..0000000000 --- a/pages/config/macd_bb_v1/README.md +++ /dev/null @@ -1,80 +0,0 @@ -# MACD BB V1 Configuration Tool - 
-Welcome to the MACD BB V1 Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the MACD BB V1 directional trading strategy. Here’s how you can make the most out of it. - -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the MACD BB V1 strategy. This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Connector Name**: Select the trading platform or exchange. -- **Trading Pair**: Choose the cryptocurrency trading pair. -- **Leverage**: Set the leverage ratio. (Note: if you are using spot trading, set the leverage to 1) -- **Total Amount (Quote Currency)**: Define the total amount you want to allocate for trading. -- **Max Executors per Side**: Specify the maximum number of executors per side. -- **Cooldown Time**: Set the cooldown period between trades. -- **Position Mode**: Choose between different position modes. -- **Candles Connector**: Select the data source for candlestick data. -- **Candles Trading Pair**: Choose the trading pair for candlestick data. -- **Interval**: Set the interval for candlestick data. -- **Bollinger Bands Length**: Define the length of the Bollinger Bands. -- **Standard Deviation Multiplier**: Set the standard deviation multiplier for the Bollinger Bands. -- **Long Threshold**: Configure the threshold for long positions. -- **Short Threshold**: Configure the threshold for short positions. -- **MACD Fast**: Set the fast period for the MACD indicator. -- **MACD Slow**: Set the slow period for the MACD indicator. -- **MACD Signal**: Set the signal period for the MACD indicator. -- **Risk Management**: Set parameters for stop loss, take profit, time limit, and trailing stop settings. - -### 3. Visualize Indicators - -Visualize the Bollinger Bands and MACD on the OHLC (Open, High, Low, Close) chart to see the impact of your configuration. Here are some hints to help you fine-tune the indicators: - -- **Bollinger Bands Length**: A larger length will make the Bollinger Bands wider and smoother, while a smaller length will make them narrower and more volatile. -- **Long Threshold**: This is a reference to the Bollinger Band. A value of 0 means the lower band, and a value of 1 means the upper band. For example, if the long threshold is 0, long positions will only be taken if the price is below the lower band. -- **Short Threshold**: Similarly, a value of 1.1 means the price must be above the upper band by 0.1 of the band’s range to take a short position. -- **Thresholds**: The closer you set the thresholds to 0.5, the more trades will be executed. The farther away they are, the fewer trades will be executed. -- **MACD**: The MACD is used to determine trend changes. If the MACD value is negative and the histogram becomes positive, it signals a market trend up, suggesting a long position. 
Conversely, if the MACD value is positive and the histogram becomes negative, it signals a market trend down, suggesting a short position. - -### Combining MACD and Bollinger Bands for Trade Signals - -The MACD BB V1 strategy uses the MACD to identify potential trend changes and the Bollinger Bands to filter these signals: - -- **Long Signal**: The MACD value must be negative, and the histogram must become positive, indicating a potential uptrend. The price must also be below the long threshold of the Bollinger Bands (e.g., below the lower band if the threshold is 0). -- **Short Signal**: The MACD value must be positive, and the histogram must become negative, indicating a potential downtrend. The price must also be above the short threshold of the Bollinger Bands (e.g., above the upper band if the threshold is 1.1). - -This combination ensures that you only take trend-following trades when the market is already deviated from the mean, enhancing the effectiveness of your trading strategy. - -### 4. Executor Distribution - -The total amount in the quote currency will be distributed among the maximum number of executors per side. For example, if the total amount quote is 1000 and the max executors per side is 5, each executor will have 200 to trade. If the signal is on, the first executor will place an order and wait for the cooldown time before the next one executes, continuing this pattern for the subsequent orders. - -### 5. Backtesting - -Run backtests to evaluate the performance of your configured strategy. The backtesting section allows you to: - -- **Process Data**: Analyze historical trading data. -- **Visualize Results**: See performance metrics and charts. -- **Evaluate Accuracy**: Assess the accuracy of your strategy’s predictions and trades. -- **Understand Close Types**: Review different types of trade closures and their frequencies. - -### 6. Save Configuration - -Once you are satisfied with your configuration and backtest results, save the configuration for future use in the Deploy tab. This allows you to deploy the same strategy later without having to reconfigure it from scratch. - ---- - -Feel free to experiment with different configurations to find the optimal setup for your trading strategy. Happy trading! 
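
As a rough illustration of the combined rule described above (MACD for trend change, Bollinger %B for deviation from the mean), here is a self-contained pandas sketch. It uses plain rolling/EWM calculations rather than the controller's own implementation, and it checks the histogram's sign instead of the exact moment it flips:

```python
import pandas as pd


def macd_bb_signal(close: pd.Series,
                   bb_length: int = 100, bb_std: float = 2.0,
                   bb_long_threshold: float = 0.0, bb_short_threshold: float = 1.0,
                   macd_fast: int = 21, macd_slow: int = 42, macd_signal: int = 9) -> pd.Series:
    """Return 1 for a long signal, -1 for a short signal, 0 otherwise (illustrative only)."""
    # Bollinger %B: 0 means price at the lower band, 1 means price at the upper band
    mid = close.rolling(bb_length).mean()
    std = close.rolling(bb_length).std()
    upper, lower = mid + bb_std * std, mid - bb_std * std
    bbp = (close - lower) / (upper - lower)

    # MACD line and histogram
    macd_line = close.ewm(span=macd_fast).mean() - close.ewm(span=macd_slow).mean()
    hist = macd_line - macd_line.ewm(span=macd_signal).mean()

    signal = pd.Series(0, index=close.index)
    # Trend turning up while price is below the long threshold of the bands
    signal[(macd_line < 0) & (hist > 0) & (bbp < bb_long_threshold)] = 1
    # Trend turning down while price is above the short threshold of the bands
    signal[(macd_line > 0) & (hist < 0) & (bbp > bb_short_threshold)] = -1
    return signal
```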
\ No newline at end of file diff --git a/pages/config/macd_bb_v1/__init__.py b/pages/config/macd_bb_v1/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/macd_bb_v1/app.py b/pages/config/macd_bb_v1/app.py deleted file mode 100644 index 02372924b5..0000000000 --- a/pages/config/macd_bb_v1/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import streamlit as st -from plotly.subplots import make_subplots - -from frontend.components.backtesting import backtesting_section -from frontend.components.config_loader import get_default_config_loader -from frontend.components.save_config import render_save_config -from frontend.pages.config.macd_bb_v1.user_inputs import user_inputs -from frontend.pages.config.utils import get_candles -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization import theme -from frontend.visualization.backtesting import create_backtesting_figure -from frontend.visualization.backtesting_metrics import render_accuracy_metrics, render_backtesting_metrics, render_close_types -from frontend.visualization.candles import get_candlestick_trace -from frontend.visualization.indicators import get_bbands_traces, get_macd_traces -from frontend.visualization.signals import get_macdbb_v1_signal_traces -from frontend.visualization.utils import add_traces_to_fig - -# Initialize the Streamlit page -initialize_st_page(title="MACD_BB V1", icon="📊", initial_sidebar_state="expanded") -backend_api_client = get_backend_api_client() - -get_default_config_loader("macd_bb_v1") -# User inputs -inputs = user_inputs() -st.session_state["default_config"].update(inputs) - -st.write("### Visualizing MACD Bollinger Trading Signals") -days_to_visualize = st.number_input("Days to Visualize", min_value=1, max_value=365, value=7) -# Load candle data -candles = get_candles(connector_name=inputs["candles_connector"], trading_pair=inputs["candles_trading_pair"], - interval=inputs["interval"], days=days_to_visualize) - -# Create a subplot with 2 rows -fig = make_subplots(rows=2, cols=1, shared_xaxes=True, - vertical_spacing=0.02, subplot_titles=('Candlestick with Bollinger Bands', 'Volume', "MACD"), - row_heights=[0.8, 0.2]) -add_traces_to_fig(fig, [get_candlestick_trace(candles)], row=1, col=1) -add_traces_to_fig(fig, get_bbands_traces(candles, inputs["bb_length"], inputs["bb_std"]), row=1, col=1) -add_traces_to_fig(fig, get_macdbb_v1_signal_traces(df=candles, bb_length=inputs["bb_length"], bb_std=inputs["bb_std"], - bb_long_threshold=inputs["bb_long_threshold"], - bb_short_threshold=inputs["bb_short_threshold"], - macd_fast=inputs["macd_fast"], macd_slow=inputs["macd_slow"], - macd_signal=inputs["macd_signal"]), row=1, col=1) -add_traces_to_fig(fig, get_macd_traces(df=candles, macd_fast=inputs["macd_fast"], macd_slow=inputs["macd_slow"], - macd_signal=inputs["macd_signal"]), row=2, col=1) - -fig.update_layout(**theme.get_default_layout()) -# Use Streamlit's functionality to display the plot -st.plotly_chart(fig, use_container_width=True) -bt_results = backtesting_section(inputs, backend_api_client) -if bt_results: - fig = create_backtesting_figure( - df=bt_results["processed_data"], - executors=bt_results["executors"], - config=inputs) - c1, c2 = st.columns([0.9, 0.1]) - with c1: - render_backtesting_metrics(bt_results["results"]) - st.plotly_chart(fig, use_container_width=True) - with c2: - render_accuracy_metrics(bt_results["results"]) - st.write("---") - render_close_types(bt_results["results"]) -st.write("---") 
-render_save_config(st.session_state["default_config"]["id"], st.session_state["default_config"]) diff --git a/pages/config/macd_bb_v1/user_inputs.py b/pages/config/macd_bb_v1/user_inputs.py deleted file mode 100644 index b928a827ce..0000000000 --- a/pages/config/macd_bb_v1/user_inputs.py +++ /dev/null @@ -1,64 +0,0 @@ -import streamlit as st - -from frontend.components.directional_trading_general_inputs import get_directional_trading_general_inputs -from frontend.components.risk_management import get_risk_management_inputs - - -def user_inputs(): - default_config = st.session_state.get("default_config", {}) - bb_length = default_config.get("bb_length", 100) - bb_std = default_config.get("bb_std", 2.0) - bb_long_threshold = default_config.get("bb_long_threshold", 0.0) - bb_short_threshold = default_config.get("bb_short_threshold", 1.0) - macd_fast = default_config.get("macd_fast", 21) - macd_slow = default_config.get("macd_slow", 42) - macd_signal = default_config.get("macd_signal", 9) - connector_name, trading_pair, leverage, total_amount_quote, max_executors_per_side, cooldown_time, position_mode,\ - candles_connector_name, candles_trading_pair, interval = get_directional_trading_general_inputs() - sl, tp, time_limit, ts_ap, ts_delta, take_profit_order_type = get_risk_management_inputs() - with st.expander("MACD Bollinger Configuration", expanded=True): - c1, c2, c3, c4, c5, c6, c7 = st.columns(7) - with c1: - bb_length = st.number_input("Bollinger Bands Length", min_value=5, max_value=1000, value=bb_length) - with c2: - bb_std = st.number_input("Standard Deviation Multiplier", min_value=1.0, max_value=2.0, value=bb_std) - with c3: - bb_long_threshold = st.number_input("Long Threshold", value=bb_long_threshold) - with c4: - bb_short_threshold = st.number_input("Short Threshold", value=bb_short_threshold) - with c5: - macd_fast = st.number_input("MACD Fast", min_value=1, value=macd_fast) - with c6: - macd_slow = st.number_input("MACD Slow", min_value=1, value=macd_slow) - with c7: - macd_signal = st.number_input("MACD Signal", min_value=1, value=macd_signal) - - return { - "controller_name": "macd_bb_v1", - "controller_type": "directional_trading", - "connector_name": connector_name, - "trading_pair": trading_pair, - "leverage": leverage, - "total_amount_quote": total_amount_quote, - "max_executors_per_side": max_executors_per_side, - "cooldown_time": cooldown_time, - "position_mode": position_mode, - "candles_connector": candles_connector_name, - "candles_trading_pair": candles_trading_pair, - "interval": interval, - "bb_length": bb_length, - "bb_std": bb_std, - "bb_long_threshold": bb_long_threshold, - "bb_short_threshold": bb_short_threshold, - "macd_fast": macd_fast, - "macd_slow": macd_slow, - "macd_signal": macd_signal, - "stop_loss": sl, - "take_profit": tp, - "time_limit": time_limit, - "trailing_stop": { - "activation_price": ts_ap, - "trailing_delta": ts_delta - }, - "take_profit_order_type": take_profit_order_type.value - } diff --git a/pages/config/pmm_dynamic/README.md b/pages/config/pmm_dynamic/README.md deleted file mode 100644 index c16ce627c6..0000000000 --- a/pages/config/pmm_dynamic/README.md +++ /dev/null @@ -1,62 +0,0 @@ -# PMM Dynamic Configuration Tool - -Welcome to the PMM Dynamic Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the PMM Dynamic trading strategy. Here’s how you can make the most out of it. 
- -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts, including indicators like MACD and NATR. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the PMM Dynamic strategy. This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Connector Name**: Select the trading platform or exchange. -- **Trading Pair**: Choose the cryptocurrency trading pair. -- **Leverage**: Set the leverage ratio. (Note: if you are using spot trading, set the leverage to 1) -- **Total Amount (Quote Currency)**: Define the total amount you want to allocate for trading. -- **Position Mode**: Choose between different position modes. -- **Cooldown Time**: Set the cooldown period between trades. -- **Executor Refresh Time**: Define how often the executors refresh. -- **Candles Connector**: Select the data source for candlestick data. -- **Candles Trading Pair**: Choose the trading pair for candlestick data. -- **Interval**: Set the interval for candlestick data. -- **MACD Fast Period**: Set the fast period for the MACD indicator. -- **MACD Slow Period**: Set the slow period for the MACD indicator. -- **MACD Signal Period**: Set the signal period for the MACD indicator. -- **NATR Length**: Define the length for the NATR indicator. -- **Risk Management**: Set parameters for stop loss, take profit, time limit, and trailing stop settings. - -### 3. Indicator Visualization - -Visualize the candlestick data along with the MACD and NATR indicators. This helps you understand how the MACD will shift the mid-price and how the NATR will be used as a base multiplier for spreads. - -### 4. Executor Distribution - -The distribution of orders is now a multiplier of the base spread, which is determined by the NATR indicator. This allows the algorithm to adapt to changing market conditions by adjusting the spread based on the average size of the candles. - -### 5. Backtesting - -Run backtests to evaluate the performance of your configured strategy. The backtesting section allows you to: - -- **Process Data**: Analyze historical trading data. -- **Visualize Results**: See performance metrics and charts. -- **Evaluate Accuracy**: Assess the accuracy of your strategy’s predictions and trades. -- **Understand Close Types**: Review different types of trade closures and their frequencies. - -### 6. Save Configuration - -Once you are satisfied with your configuration and backtest results, save the configuration for future use in the Deploy tab. This allows you to deploy the same strategy later without having to reconfigure it from scratch. - ---- - -Feel free to experiment with different configurations to find the optimal setup for your trading strategy. Happy trading! 
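
To make the spread and mid-price logic above concrete: the configured spreads act as multipliers of the NATR, and the MACD-driven shift of the reference price is bounded by half the NATR (see `spread_and_price_multipliers.py` further below). A small numeric sketch with illustrative values only, not repository code:

```python
# Illustrative numbers only.
natr = 0.005                      # normalized ATR of recent candles (0.5%)
spread_multipliers = [1, 2, 4]    # spreads configured in "NATR units"

# Price spreads actually used for the orders: 0.5%, 1% and 2% of the reference price
price_spreads = [m * natr for m in spread_multipliers]

# MACD-driven shift of the reference price, bounded by half the NATR;
# 0.3 stands in for the combined (normalized) MACD term.
mid_price = 100.0
price_shift = 0.3 * (natr / 2)
reference_price = mid_price * (1 + price_shift)

buy_prices = [reference_price * (1 - s) for s in price_spreads]
sell_prices = [reference_price * (1 + s) for s in price_spreads]
print(price_spreads)   # [0.005, 0.01, 0.02]
print(buy_prices)      # orders 0.5%, 1% and 2% below the shifted reference price
```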
\ No newline at end of file diff --git a/pages/config/pmm_dynamic/__init__.py b/pages/config/pmm_dynamic/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/pmm_dynamic/app.py b/pages/config/pmm_dynamic/app.py deleted file mode 100644 index 3c7eb52143..0000000000 --- a/pages/config/pmm_dynamic/app.py +++ /dev/null @@ -1,94 +0,0 @@ -import plotly.graph_objects as go -import streamlit as st -from plotly.subplots import make_subplots - -# Import submodules -from frontend.components.backtesting import backtesting_section -from frontend.components.config_loader import get_default_config_loader -from frontend.components.executors_distribution import get_executors_distribution_inputs -from frontend.components.save_config import render_save_config -from frontend.pages.config.pmm_dynamic.spread_and_price_multipliers import get_pmm_dynamic_multipliers -from frontend.pages.config.pmm_dynamic.user_inputs import user_inputs -from frontend.pages.config.utils import get_candles -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization import theme -from frontend.visualization.backtesting import create_backtesting_figure -from frontend.visualization.backtesting_metrics import render_accuracy_metrics, render_backtesting_metrics, render_close_types -from frontend.visualization.candles import get_candlestick_trace -from frontend.visualization.executors_distribution import create_executors_distribution_traces -from frontend.visualization.indicators import get_macd_traces -from frontend.visualization.utils import add_traces_to_fig - -# Initialize the Streamlit page -initialize_st_page(title="PMM Dynamic", icon="👩‍🏫") -backend_api_client = get_backend_api_client() - -# Page content -st.text("This tool will let you create a config for PMM Dynamic, backtest and upload it to the Backend API.") -get_default_config_loader("pmm_dynamic") -# Get user inputs -inputs = user_inputs() -st.write("### Visualizing MACD and NATR indicators for PMM Dynamic") -st.text("The MACD is used to shift the mid price and the NATR to make the spreads dynamic. 
" - "In the order distributions graph, we are going to see the values of the orders affected by the average NATR") -days_to_visualize = st.number_input("Days to Visualize", min_value=1, max_value=365, value=7) -# Load candle data -candles = get_candles(connector_name=inputs["candles_connector"], trading_pair=inputs["candles_trading_pair"], - interval=inputs["interval"], days=days_to_visualize) -with st.expander("Visualizing PMM Dynamic Indicators", expanded=True): - fig = make_subplots(rows=4, cols=1, shared_xaxes=True, - vertical_spacing=0.02, subplot_titles=("Candlestick with Bollinger Bands", "MACD", - "Price Multiplier", "Spreads Multiplier"), - row_heights=[0.8, 0.2, 0.2, 0.2]) - add_traces_to_fig(fig, [get_candlestick_trace(candles)], row=1, col=1) - add_traces_to_fig(fig, get_macd_traces(df=candles, macd_fast=inputs["macd_fast"], macd_slow=inputs["macd_slow"], - macd_signal=inputs["macd_signal"]), row=2, col=1) - price_multiplier, spreads_multiplier = get_pmm_dynamic_multipliers(candles, inputs["macd_fast"], - inputs["macd_slow"], inputs["macd_signal"], - inputs["natr_length"]) - add_traces_to_fig(fig, [ - go.Scatter(x=candles.index, y=price_multiplier, name="Price Multiplier", line=dict(color="blue"))], row=3, - col=1) - add_traces_to_fig(fig, - [go.Scatter(x=candles.index, y=spreads_multiplier, name="Base Spread", line=dict(color="red"))], - row=4, col=1) - fig.update_layout(**theme.get_default_layout(height=1000)) - fig.update_yaxes(tickformat=".2%", row=3, col=1) - fig.update_yaxes(tickformat=".2%", row=4, col=1) - st.plotly_chart(fig, use_container_width=True) - -st.write("### Executors Distribution") -st.write("The order distributions are affected by the average NATR. This means that if the first order has a spread of " - "1 and the NATR is 0.005, the first order will have a spread of 0.5% of the mid price.") -buy_spread_distributions, sell_spread_distributions, buy_order_amounts_pct, \ - sell_order_amounts_pct = get_executors_distribution_inputs(use_custom_spread_units=True) -inputs["buy_spreads"] = [spread * 100 for spread in buy_spread_distributions] -inputs["sell_spreads"] = [spread * 100 for spread in sell_spread_distributions] -inputs["buy_amounts_pct"] = buy_order_amounts_pct -inputs["sell_amounts_pct"] = sell_order_amounts_pct -st.session_state["default_config"].update(inputs) -with st.expander("Executor Distribution:", expanded=True): - natr_avarage = spreads_multiplier.mean() - buy_spreads = [spread * natr_avarage for spread in inputs["buy_spreads"]] - sell_spreads = [spread * natr_avarage for spread in inputs["sell_spreads"]] - st.write(f"Average NATR: {natr_avarage:.2%}") - fig = create_executors_distribution_traces(buy_spreads, sell_spreads, inputs["buy_amounts_pct"], - inputs["sell_amounts_pct"], inputs["total_amount_quote"]) - st.plotly_chart(fig, use_container_width=True) - -bt_results = backtesting_section(inputs, backend_api_client) -if bt_results: - fig = create_backtesting_figure( - df=bt_results["processed_data"], - executors=bt_results["executors"], - config=inputs) - c1, c2 = st.columns([0.9, 0.1]) - with c1: - render_backtesting_metrics(bt_results["results"]) - st.plotly_chart(fig, use_container_width=True) - with c2: - render_accuracy_metrics(bt_results["results"]) - st.write("---") - render_close_types(bt_results["results"]) -st.write("---") -render_save_config(st.session_state["default_config"]["id"], st.session_state["default_config"]) diff --git a/pages/config/pmm_dynamic/spread_and_price_multipliers.py 
b/pages/config/pmm_dynamic/spread_and_price_multipliers.py deleted file mode 100644 index efd99c56c0..0000000000 --- a/pages/config/pmm_dynamic/spread_and_price_multipliers.py +++ /dev/null @@ -1,17 +0,0 @@ -import pandas_ta as ta # noqa: F401 - - -def get_pmm_dynamic_multipliers(df, macd_fast, macd_slow, macd_signal, natr_length): - """ - Get the spread and price multipliers for PMM Dynamic - """ - natr = ta.natr(df["high"], df["low"], df["close"], length=natr_length) / 100 - macd_output = ta.macd(df["close"], fast=macd_fast, - slow=macd_slow, signal=macd_signal) - macd = macd_output[f"MACD_{macd_fast}_{macd_slow}_{macd_signal}"] - macdh = macd_output[f"MACDh_{macd_fast}_{macd_slow}_{macd_signal}"] - macd_signal = - (macd - macd.mean()) / macd.std() - macdh_signal = macdh.apply(lambda x: 1 if x > 0 else -1) - max_price_shift = natr / 2 - price_multiplier = ((0.5 * macd_signal + 0.5 * macdh_signal) * max_price_shift) - return price_multiplier, natr diff --git a/pages/config/pmm_dynamic/user_inputs.py b/pages/config/pmm_dynamic/user_inputs.py deleted file mode 100644 index 5f2e9e64af..0000000000 --- a/pages/config/pmm_dynamic/user_inputs.py +++ /dev/null @@ -1,80 +0,0 @@ -import streamlit as st - -from frontend.components.market_making_general_inputs import get_market_making_general_inputs -from frontend.components.risk_management import get_risk_management_inputs - - -def user_inputs(): - default_config = st.session_state.get("default_config", {}) - macd_fast = default_config.get("macd_fast", 21) - macd_slow = default_config.get("macd_slow", 42) - macd_signal = default_config.get("macd_signal", 9) - natr_length = default_config.get("natr_length", 14) - position_rebalance_threshold_pct = default_config.get("position_rebalance_threshold_pct", 0.05) - skip_rebalance = default_config.get("skip_rebalance", False) - - connector_name, trading_pair, leverage, total_amount_quote, position_mode, cooldown_time, executor_refresh_time, \ - candles_connector, candles_trading_pair, interval = get_market_making_general_inputs(custom_candles=True) - sl, tp, time_limit, ts_ap, ts_delta, take_profit_order_type = get_risk_management_inputs() - with st.expander("PMM Dynamic Configuration", expanded=True): - c1, c2, c3, c4 = st.columns(4) - with c1: - macd_fast = st.number_input("MACD Fast Period", min_value=1, max_value=200, value=macd_fast) - with c2: - macd_slow = st.number_input("MACD Slow Period", min_value=1, max_value=200, value=macd_slow) - with c3: - macd_signal = st.number_input("MACD Signal Period", min_value=1, max_value=200, value=macd_signal) - with c4: - natr_length = st.number_input("NATR Length", min_value=1, max_value=200, value=natr_length) - - with st.expander("Position Rebalancing", expanded=True): - c1, c2 = st.columns(2) - with c1: - position_rebalance_threshold_pct = st.number_input( - "Position Rebalance Threshold (%)", - min_value=0.0, - max_value=100.0, - value=position_rebalance_threshold_pct * 100, - step=0.1, - help="Threshold percentage for position rebalancing" - ) / 100 - with c2: - skip_rebalance = st.checkbox( - "Skip Rebalance", - value=skip_rebalance, - help="Skip position rebalancing" - ) - - # Create the config - config = { - "controller_name": "pmm_dynamic", - "controller_type": "market_making", - "manual_kill_switch": False, - "candles_config": [], - "connector_name": connector_name, - "trading_pair": trading_pair, - "total_amount_quote": total_amount_quote, - "executor_refresh_time": executor_refresh_time, - "cooldown_time": cooldown_time, - "leverage": leverage, - 
"position_mode": position_mode, - "candles_connector": candles_connector, - "candles_trading_pair": candles_trading_pair, - "interval": interval, - "macd_fast": macd_fast, - "macd_slow": macd_slow, - "macd_signal": macd_signal, - "natr_length": natr_length, - "stop_loss": sl, - "take_profit": tp, - "time_limit": time_limit, - "take_profit_order_type": take_profit_order_type.value, - "trailing_stop": { - "activation_price": ts_ap, - "trailing_delta": ts_delta - }, - "position_rebalance_threshold_pct": position_rebalance_threshold_pct, - "skip_rebalance": skip_rebalance - } - - return config diff --git a/pages/config/pmm_simple/README.md b/pages/config/pmm_simple/README.md deleted file mode 100644 index 4b3640de5e..0000000000 --- a/pages/config/pmm_simple/README.md +++ /dev/null @@ -1,49 +0,0 @@ -# PMM Simple Configuration Tool - -Welcome to the PMM Simple Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the PMM Simple trading strategy. Here’s how you can make the most out of it. - -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the PMM Simple strategy. This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Connector Name**: Select the trading platform or exchange. -- **Trading Pair**: Choose the cryptocurrency trading pair. -- **Leverage**: Set the leverage ratio. (Note: if you are using spot trading, set the leverage to 1) -- **Total Amount (Quote Currency)**: Define the total amount you want to allocate for trading. -- **Position Mode**: Choose between different position modes. -- **Cooldown Time**: Set the cooldown period between trades. -- **Executor Refresh Time**: Define how often the executors refresh. -- **Buy/Sell Spread Distributions**: Configure the distribution of buy and sell spreads. -- **Order Amounts**: Specify the percentages for buy and sell order amounts. -- **Risk Management**: Set parameters for stop loss, take profit, time limit, and trailing stop settings. - -### 3. Executor Distribution Visualization - -Visualize the distribution of your trading executors. This helps you understand how your buy and sell orders are spread across different price levels and amounts. - -### 4. Backtesting - -Run backtests to evaluate the performance of your configured strategy. The backtesting section allows you to: - -- **Process Data**: Analyze historical trading data. -- **Visualize Results**: See performance metrics and charts. -- **Evaluate Accuracy**: Assess the accuracy of your strategy’s predictions and trades. -- **Understand Close Types**: Review different types of trade closures and their frequencies. - -### 5. Save Configuration - -Once you are satisfied with your configuration and backtest results, save the configuration for future use in the Deploy tab. 
This allows you to deploy the same strategy later without having to reconfigure it from scratch. \ No newline at end of file diff --git a/pages/config/pmm_simple/__init__.py b/pages/config/pmm_simple/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/pmm_simple/app.py b/pages/config/pmm_simple/app.py deleted file mode 100644 index e6f259896f..0000000000 --- a/pages/config/pmm_simple/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import streamlit as st - -from frontend.components.backtesting import backtesting_section -from frontend.components.config_loader import get_default_config_loader -from frontend.components.save_config import render_save_config - -# Import submodules -from frontend.pages.config.pmm_simple.user_inputs import user_inputs -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization.backtesting import create_backtesting_figure -from frontend.visualization.backtesting_metrics import render_accuracy_metrics, render_backtesting_metrics, render_close_types -from frontend.visualization.executors_distribution import create_executors_distribution_traces - -# Initialize the Streamlit page -initialize_st_page(title="PMM Simple", icon="👨‍🏫") -backend_api_client = get_backend_api_client() - -# Page content -st.text("This tool will let you create a config for PMM Simple, backtest and upload it to the Backend API.") -get_default_config_loader("pmm_simple") - -inputs = user_inputs() - -st.session_state["default_config"].update(inputs) -with st.expander("Executor Distribution:", expanded=True): - fig = create_executors_distribution_traces(inputs["buy_spreads"], inputs["sell_spreads"], inputs["buy_amounts_pct"], - inputs["sell_amounts_pct"], inputs["total_amount_quote"]) - st.plotly_chart(fig, use_container_width=True) - -bt_results = backtesting_section(inputs, backend_api_client) -if bt_results: - fig = create_backtesting_figure( - df=bt_results["processed_data"], - executors=bt_results["executors"], - config=inputs) - c1, c2 = st.columns([0.9, 0.1]) - with c1: - render_backtesting_metrics(bt_results["results"]) - st.plotly_chart(fig, use_container_width=True) - with c2: - render_accuracy_metrics(bt_results["results"]) - st.write("---") - render_close_types(bt_results["results"]) -st.write("---") -render_save_config(st.session_state["default_config"]["id"], st.session_state["default_config"]) diff --git a/pages/config/pmm_simple/user_inputs.py b/pages/config/pmm_simple/user_inputs.py deleted file mode 100644 index 53031a80c6..0000000000 --- a/pages/config/pmm_simple/user_inputs.py +++ /dev/null @@ -1,64 +0,0 @@ -import streamlit as st - -from frontend.components.executors_distribution import get_executors_distribution_inputs -from frontend.components.market_making_general_inputs import get_market_making_general_inputs -from frontend.components.risk_management import get_risk_management_inputs - - -def user_inputs(): - default_config = st.session_state.get("default_config", {}) - position_rebalance_threshold_pct = default_config.get("position_rebalance_threshold_pct", 0.05) - skip_rebalance = default_config.get("skip_rebalance", False) - - connector_name, trading_pair, leverage, total_amount_quote, position_mode, cooldown_time, \ - executor_refresh_time, _, _, _ = get_market_making_general_inputs() - buy_spread_distributions, sell_spread_distributions, buy_order_amounts_pct, \ - sell_order_amounts_pct = get_executors_distribution_inputs() - sl, tp, time_limit, ts_ap, ts_delta, take_profit_order_type = 
get_risk_management_inputs() - - with st.expander("Position Rebalancing", expanded=True): - c1, c2 = st.columns(2) - with c1: - position_rebalance_threshold_pct = st.number_input( - "Position Rebalance Threshold (%)", - min_value=0.0, - max_value=100.0, - value=position_rebalance_threshold_pct * 100, - step=0.1, - help="Threshold percentage for position rebalancing" - ) / 100 - with c2: - skip_rebalance = st.checkbox( - "Skip Rebalance", - value=skip_rebalance, - help="Skip position rebalancing" - ) - # Create the config - config = { - "controller_name": "pmm_simple", - "controller_type": "market_making", - "manual_kill_switch": False, - "candles_config": [], - "connector_name": connector_name, - "trading_pair": trading_pair, - "total_amount_quote": total_amount_quote, - "buy_spreads": buy_spread_distributions, - "sell_spreads": sell_spread_distributions, - "buy_amounts_pct": buy_order_amounts_pct, - "sell_amounts_pct": sell_order_amounts_pct, - "executor_refresh_time": executor_refresh_time, - "cooldown_time": cooldown_time, - "leverage": leverage, - "position_mode": position_mode, - "stop_loss": sl, - "take_profit": tp, - "time_limit": time_limit, - "take_profit_order_type": take_profit_order_type.value, - "trailing_stop": { - "activation_price": ts_ap, - "trailing_delta": ts_delta - }, - "position_rebalance_threshold_pct": position_rebalance_threshold_pct, - "skip_rebalance": skip_rebalance - } - return config diff --git a/pages/config/supertrend_v1/README.md b/pages/config/supertrend_v1/README.md deleted file mode 100644 index f93bf3c873..0000000000 --- a/pages/config/supertrend_v1/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# Super Trend Configuration Tool - -Welcome to the Super Trend Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the Super Trend directional trading strategy. Here’s how you can make the most out of it. - -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the Super Trend strategy. This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Connector Name**: Select the trading platform or exchange. -- **Trading Pair**: Choose the cryptocurrency trading pair. -- **Leverage**: Set the leverage ratio. (Note: if you are using spot trading, set the leverage to 1) -- **Total Amount (Quote Currency)**: Define the total amount you want to allocate for trading. -- **Max Executors per Side**: Specify the maximum number of executors per side. -- **Cooldown Time**: Set the cooldown period between trades. -- **Position Mode**: Choose between different position modes. -- **Candles Connector**: Select the data source for candlestick data. -- **Candles Trading Pair**: Choose the trading pair for candlestick data. -- **Interval**: Set the interval for candlestick data. -- **Super Trend Length**: Define the length of the Super Trend indicator. 
-- **Super Trend Multiplier**: Set the multiplier for the Super Trend indicator. -- **Percentage Threshold**: Set the percentage threshold for signal generation. -- **Risk Management**: Set parameters for stop loss, take profit, time limit, and trailing stop settings. - -### 3. Visualize Indicators - -Visualize the Super Trend indicator on the OHLC (Open, High, Low, Close) chart to see the impact of your configuration. Here are some hints to help you fine-tune the indicators: - -- **Super Trend Length**: A larger length will make the Super Trend indicator smoother and less sensitive to short-term price fluctuations, while a smaller length will make it more responsive to recent price changes. -- **Super Trend Multiplier**: Adjusting the multiplier affects the sensitivity of the Super Trend indicator. A higher multiplier makes the trend detection more conservative, while a lower multiplier makes it more aggressive. -- **Percentage Threshold**: This defines how close the price needs to be to the Super Trend band to generate a signal. For example, a 0.5% threshold means the price needs to be within 0.5% of the Super Trend band to consider a trade. - -### Combining Super Trend and Percentage Threshold for Trade Signals - -The Super Trend V1 strategy uses the Super Trend indicator combined with a percentage threshold to generate trade signals: - -- **Long Signal**: The Super Trend indicator must signal a long trend, and the price must be within the percentage threshold of the Super Trend long band. For example, if the threshold is 0.5%, the price must be within 0.5% of the Super Trend long band to trigger a long trade. -- **Short Signal**: The Super Trend indicator must signal a short trend, and the price must be within the percentage threshold of the Super Trend short band. Similarly, if the threshold is 0.5%, the price must be within 0.5% of the Super Trend short band to trigger a short trade. - -### 4. Executor Distribution - -The total amount in the quote currency will be distributed among the maximum number of executors per side. For example, if the total amount quote is 1000 and the max executors per side is 5, each executor will have 200 to trade. If the signal is on, the first executor will place an order and wait for the cooldown time before the next one executes, continuing this pattern for the subsequent orders. - -### 5. Backtesting - -Run backtests to evaluate the performance of your configured strategy. The backtesting section allows you to: - -- **Process Data**: Analyze historical trading data. -- **Visualize Results**: See performance metrics and charts. -- **Evaluate Accuracy**: Assess the accuracy of your strategy’s predictions and trades. -- **Understand Close Types**: Review different types of trade closures and their frequencies. - -### 6. Save Configuration - -Once you are satisfied with your configuration and backtest results, save the configuration for future use in the Deploy tab. This allows you to deploy the same strategy later without having to reconfigure it from scratch. - ---- - -Feel free to experiment with different configurations to find the optimal setup for your trading strategy. Happy trading! 
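For reference, the long/short rule and per-executor sizing described in the SuperTrend README above can be sketched in a few lines. This is an illustrative sketch only — it assumes pandas_ta's SuperTrend column naming (`SUPERT_*` for the band, `SUPERTd_*` for the direction) and is not the deleted controller's actual implementation; verify the column names against your pandas_ta version before relying on it.

```python
import pandas as pd
import pandas_ta as ta


def supertrend_signal(candles: pd.DataFrame, length: int = 20, multiplier: float = 3.0,
                      percentage_threshold: float = 0.005) -> pd.Series:
    """Illustrative signal: SuperTrend direction plus a proximity check of the close
    to the SuperTrend band (percentage_threshold as a fraction, e.g. 0.005 = 0.5%)."""
    st_df = ta.supertrend(candles["high"], candles["low"], candles["close"],
                          length=length, multiplier=multiplier)
    band = st_df[f"SUPERT_{length}_{multiplier}"]
    direction = st_df[f"SUPERTd_{length}_{multiplier}"]  # 1 = long trend, -1 = short trend
    distance_pct = (candles["close"] - band).abs() / candles["close"]
    close_to_band = distance_pct <= percentage_threshold

    signal = pd.Series(0, index=candles.index)
    signal[(direction == 1) & close_to_band] = 1    # long signal
    signal[(direction == -1) & close_to_band] = -1  # short signal
    return signal


# Executor sizing as described above: the quote amount is split evenly per side,
# e.g. total_amount_quote=1000 with max_executors_per_side=5 gives 200 per executor.
amount_per_executor = 1000 / 5
```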
\ No newline at end of file diff --git a/pages/config/supertrend_v1/__init__.py b/pages/config/supertrend_v1/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/supertrend_v1/app.py b/pages/config/supertrend_v1/app.py deleted file mode 100644 index 97e68bf70c..0000000000 --- a/pages/config/supertrend_v1/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import streamlit as st -from plotly.subplots import make_subplots - -from frontend.components.backtesting import backtesting_section -from frontend.components.config_loader import get_default_config_loader -from frontend.components.save_config import render_save_config -from frontend.pages.config.supertrend_v1.user_inputs import user_inputs -from frontend.pages.config.utils import get_candles -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization import theme -from frontend.visualization.backtesting import create_backtesting_figure -from frontend.visualization.backtesting_metrics import render_accuracy_metrics, render_backtesting_metrics, render_close_types -from frontend.visualization.candles import get_candlestick_trace -from frontend.visualization.indicators import get_supertrend_traces, get_volume_trace -from frontend.visualization.signals import get_supertrend_v1_signal_traces -from frontend.visualization.utils import add_traces_to_fig - -# Initialize the Streamlit page -initialize_st_page(title="SuperTrend V1", icon="📊", initial_sidebar_state="expanded") -backend_api_client = get_backend_api_client() - -get_default_config_loader("supertrend_v1") -# User inputs -inputs = user_inputs() -st.session_state["default_config"].update(inputs) - -st.write("### Visualizing Supertrend Trading Signals") -days_to_visualize = st.number_input("Days to Visualize", min_value=1, max_value=365, value=7) -# Load candle data -candles = get_candles(connector_name=inputs["candles_connector"], trading_pair=inputs["candles_trading_pair"], - interval=inputs["interval"], days=days_to_visualize) - -# Create a subplot with 2 rows -fig = make_subplots(rows=2, cols=1, shared_xaxes=True, - vertical_spacing=0.02, subplot_titles=('Candlestick with Bollinger Bands', 'Volume', "MACD"), - row_heights=[0.8, 0.2]) -add_traces_to_fig(fig, [get_candlestick_trace(candles)], row=1, col=1) -add_traces_to_fig(fig, get_supertrend_traces(candles, inputs["length"], inputs["multiplier"]), row=1, col=1) -add_traces_to_fig(fig, get_supertrend_v1_signal_traces(candles, inputs["length"], inputs["multiplier"], - inputs["percentage_threshold"]), row=1, col=1) -add_traces_to_fig(fig, [get_volume_trace(candles)], row=2, col=1) - -layout_settings = theme.get_default_layout() -layout_settings["showlegend"] = False -fig.update_layout(**layout_settings) -# Use Streamlit's functionality to display the plot -st.plotly_chart(fig, use_container_width=True) -bt_results = backtesting_section(inputs, backend_api_client) -if bt_results: - fig = create_backtesting_figure( - df=bt_results["processed_data"], - executors=bt_results["executors"], - config=inputs) - c1, c2 = st.columns([0.9, 0.1]) - with c1: - render_backtesting_metrics(bt_results["results"]) - st.plotly_chart(fig, use_container_width=True) - with c2: - render_accuracy_metrics(bt_results["results"]) - st.write("---") - render_close_types(bt_results["results"]) -st.write("---") -render_save_config(st.session_state["default_config"]["id"], st.session_state["default_config"]) diff --git a/pages/config/supertrend_v1/user_inputs.py 
b/pages/config/supertrend_v1/user_inputs.py deleted file mode 100644 index d4a9436b5d..0000000000 --- a/pages/config/supertrend_v1/user_inputs.py +++ /dev/null @@ -1,48 +0,0 @@ -import streamlit as st - -from frontend.components.directional_trading_general_inputs import get_directional_trading_general_inputs -from frontend.components.risk_management import get_risk_management_inputs - - -def user_inputs(): - default_config = st.session_state.get("default_config", {}) - length = default_config.get("length", 20) - multiplier = default_config.get("multiplier", 3.0) - percentage_threshold = default_config.get("percentage_threshold", 0.5) - connector_name, trading_pair, leverage, total_amount_quote, max_executors_per_side, cooldown_time, position_mode, \ - candles_connector_name, candles_trading_pair, interval = get_directional_trading_general_inputs() - sl, tp, time_limit, ts_ap, ts_delta, take_profit_order_type = get_risk_management_inputs() - - with st.expander("SuperTrend Configuration", expanded=True): - c1, c2, c3 = st.columns(3) - with c1: - length = st.number_input("Supertrend Length", min_value=1, max_value=200, value=length) - with c2: - multiplier = st.number_input("Supertrend Multiplier", min_value=1.0, max_value=5.0, value=multiplier) - with c3: - percentage_threshold = st.number_input("Percentage Threshold (%)", value=percentage_threshold) / 100 - return { - "controller_name": "supertrend_v1", - "controller_type": "directional_trading", - "connector_name": connector_name, - "trading_pair": trading_pair, - "leverage": leverage, - "total_amount_quote": total_amount_quote, - "max_executors_per_side": max_executors_per_side, - "cooldown_time": cooldown_time, - "position_mode": position_mode, - "candles_connector": candles_connector_name, - "candles_trading_pair": candles_trading_pair, - "interval": interval, - "length": length, - "multiplier": multiplier, - "percentage_threshold": percentage_threshold, - "stop_loss": sl, - "take_profit": tp, - "time_limit": time_limit, - "trailing_stop": { - "activation_price": ts_ap, - "trailing_delta": ts_delta - }, - "take_profit_order_type": take_profit_order_type.value - } diff --git a/pages/config/utils.py b/pages/config/utils.py deleted file mode 100644 index 71e3dfb221..0000000000 --- a/pages/config/utils.py +++ /dev/null @@ -1,32 +0,0 @@ -import datetime - -import pandas as pd -import streamlit as st - -from frontend.st_utils import get_backend_api_client - - -def get_max_records(days_to_download: int, interval: str) -> int: - conversion = {"s": 1 / 60, "m": 1, "h": 60, "d": 1440} - unit = interval[-1] - quantity = int(interval[:-1]) - return int(days_to_download * 24 * 60 / (quantity * conversion[unit])) - - -@st.cache_data -def get_candles(connector_name="binance", trading_pair="BTC-USDT", interval="1m", days=7): - backend_client = get_backend_api_client() - - # Use the market_data.get_candles_last_days method - candles = backend_client.market_data.get_candles_last_days( - connector_name=connector_name, - trading_pair=trading_pair, - days=days, - interval=interval - ) - - # Convert the response to DataFrame (response is a list of candles) - df = pd.DataFrame(candles) - if not df.empty and 'timestamp' in df.columns: - df.index = pd.to_datetime(df.timestamp, unit='s') - return df diff --git a/pages/config/xemm_controller/README.md b/pages/config/xemm_controller/README.md deleted file mode 100644 index a5ae1108d6..0000000000 --- a/pages/config/xemm_controller/README.md +++ /dev/null @@ -1,60 +0,0 @@ -# XEMM Configuration Tool - -Welcome to the 
XEMM Configuration Tool! This tool allows you to create, modify, visualize, backtest, and save configurations for the XEMM (Cross-Exchange Market Making) strategy. Here’s how you can make the most out of it. - -## Features - -- **Start from Default Configurations**: Begin with a default configuration or use the values from an existing configuration. -- **Modify Configuration Values**: Change various parameters of the configuration to suit your trading strategy. -- **Visualize Results**: See the impact of your changes through visual charts. -- **Backtest Your Strategy**: Run backtests to evaluate the performance of your strategy. -- **Save and Deploy**: Once satisfied, save the configuration to deploy it later. - -## How to Use - -### 1. Load Default Configuration - -Start by loading the default configuration for the XEMM strategy. This provides a baseline setup that you can customize to fit your needs. - -### 2. User Inputs - -Input various parameters for the strategy configuration. These parameters include: - -- **Maker Connector**: Select the maker trading platform or exchange where limit orders will be placed. -- **Maker Trading Pair**: Choose the trading pair on the maker exchange. -- **Taker Connector**: Select the taker trading platform or exchange where market orders will be executed to hedge the imbalance. -- **Taker Trading Pair**: Choose the trading pair on the taker exchange. -- **Min Profitability**: Set the minimum profitability percentage at which orders will be refreshed to avoid risking liquidity. -- **Max Profitability**: Set the maximum profitability percentage at which orders will be refreshed to avoid being too far from the mid-price. -- **Buy Maker Levels**: Specify the number of buy maker levels. -- **Buy Targets and Amounts**: Define the target profitability and amounts for each buy maker level. -- **Sell Maker Levels**: Specify the number of sell maker levels. -- **Sell Targets and Amounts**: Define the target profitability and amounts for each sell maker level. - -### 3. Visualize Order Distribution - -Visualize the order distribution with profitability targets using Plotly charts. This helps you understand how your buy and sell orders are distributed across different profitability levels. - -### Min and Max Profitability - -The XEMM strategy uses min and max profitability bounds to manage the placement of limit orders: - -- **Min Profitability**: If the expected profitability of a limit order drops below this value, the order will be refreshed to avoid risking liquidity. -- **Max Profitability**: If the expected profitability of a limit order exceeds this value, the order will be refreshed to avoid being too far from the mid-price. - -### Combining Profitability Targets and Order Amounts - -- **Buy Orders**: Configure the target profitability and amounts for each buy maker level. The orders will be refreshed if they fall outside the min and max profitability bounds. -- **Sell Orders**: Similarly, configure the target profitability and amounts for each sell maker level, with orders being refreshed based on the profitability bounds. - -### 4. Save and Download Configuration - -Once you have configured your strategy, you can save and download the configuration as a YAML file. This allows you to deploy the strategy later without having to reconfigure it from scratch. - -### 5. Upload Configuration to Backend API - -You can also upload the configuration directly to the Backend API for immediate deployment. 
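The min/max profitability behavior described above reduces to a bounds check on each maker order. A minimal, hypothetical sketch of that refresh rule (not the controller's actual code; the function name and signature are illustrative):

```python
def should_refresh_maker_order(expected_profitability: float,
                               min_profitability: float,
                               max_profitability: float) -> bool:
    """Refresh a maker order when its expected profitability drifts outside the
    configured [min, max] band, as described in the README above."""
    return (expected_profitability < min_profitability
            or expected_profitability > max_profitability)


# Example with the page defaults (0.2% min, 1.0% max): an order whose expected
# profitability has risen to 1.2% would be refreshed back toward the mid-price.
assert should_refresh_maker_order(0.012, 0.002, 0.010)
```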
This ensures that your strategy is ready to be executed in real-time. - -## Conclusion - -By following these steps, you can efficiently configure your XEMM strategy, visualize its potential performance, and deploy it for trading. Feel free to experiment with different configurations to find the optimal setup for your trading needs. Happy trading! \ No newline at end of file diff --git a/pages/config/xemm_controller/__init__.py b/pages/config/xemm_controller/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/config/xemm_controller/app.py b/pages/config/xemm_controller/app.py deleted file mode 100644 index dc9bfe1e2c..0000000000 --- a/pages/config/xemm_controller/app.py +++ /dev/null @@ -1,141 +0,0 @@ -import plotly.graph_objects as go -import streamlit as st -import yaml - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -# Initialize the Streamlit page -initialize_st_page(title="XEMM Multiple Levels", icon="⚡️") - -# Page content -st.text("This tool will let you create a config for XEMM Controller and upload it to the BackendAPI.") -st.write("---") -c1, c2, c3, c4, c5 = st.columns([1, 1, 1, 1, 1]) - -with c1: - maker_connector = st.text_input("Maker Connector", value="kucoin") - maker_trading_pair = st.text_input("Maker Trading Pair", value="LBR-USDT") -with c2: - taker_connector = st.text_input("Taker Connector", value="okx") - taker_trading_pair = st.text_input("Taker Trading Pair", value="LBR-USDT") -with c3: - min_profitability = st.number_input("Min Profitability (%)", value=0.2, step=0.01) / 100 - max_profitability = st.number_input("Max Profitability (%)", value=1.0, step=0.01) / 100 -with c4: - buy_maker_levels = st.number_input("Buy Maker Levels", value=1, step=1) - buy_targets_amounts = [] - c41, c42 = st.columns([1, 1]) - for i in range(buy_maker_levels): - with c41: - target_profitability = st.number_input(f"Target Profitability {i + 1} B% ", value=0.3, step=0.01) - with c42: - amount = st.number_input(f"Amount {i + 1}B Quote", value=10, step=1) - buy_targets_amounts.append([target_profitability / 100, amount]) -with c5: - sell_maker_levels = st.number_input("Sell Maker Levels", value=1, step=1) - sell_targets_amounts = [] - c51, c52 = st.columns([1, 1]) - for i in range(sell_maker_levels): - with c51: - target_profitability = st.number_input(f"Target Profitability {i + 1}S %", value=0.3, step=0.001) - with c52: - amount = st.number_input(f"Amount {i + 1} S Quote", value=10, step=1) - sell_targets_amounts.append([target_profitability / 100, amount]) - - -def create_order_graph(order_type, targets, min_profit, max_profit): - # Create a figure - fig = go.Figure() - - # Convert profit targets to percentage for x-axis and prepare data for bar chart - x_values = [t[0] * 100 for t in targets] # Convert to percentage - y_values = [t[1] for t in targets] - x_labels = [f"{x:.2f}%" for x in x_values] # Format x labels as strings with percentage sign - - # Add bar plot for visualization of targets - fig.add_trace(go.Bar( - x=x_labels, - y=y_values, - width=0.01, - name=f'{order_type.capitalize()} Targets', - marker=dict(color='gold') - )) - - # Convert min and max profitability to percentages for reference lines - min_profit_percent = min_profit * 100 - max_profit_percent = max_profit * 100 - - # Add vertical lines for min and max profitability - fig.add_shape(type="line", - x0=min_profit_percent, y0=0, x1=min_profit_percent, y1=max(y_values, default=10), - line=dict(color="red", width=2), - name='Min Profitability') - 
fig.add_shape(type="line", - x0=max_profit_percent, y0=0, x1=max_profit_percent, y1=max(y_values, default=10), - line=dict(color="red", width=2), - name='Max Profitability') - - # Update layouts with x-axis starting at 0 - fig.update_layout( - title=f"{order_type.capitalize()} Order Distribution with Profitability Targets", - xaxis=dict( - title="Profitability (%)", - range=[0, max(max(x_values + [min_profit_percent, max_profit_percent]) + 0.1, 1)] - # Adjust range to include a buffer - ), - yaxis=dict( - title="Order Amount" - ), - height=400, - width=600 - ) - - return fig - - -# Use the function for both buy and sell orders -buy_order_fig = create_order_graph('buy', buy_targets_amounts, min_profitability, max_profitability) -sell_order_fig = create_order_graph('sell', sell_targets_amounts, min_profitability, max_profitability) - -# Display the Plotly graphs in Streamlit -st.plotly_chart(buy_order_fig, use_container_width=True) -st.plotly_chart(sell_order_fig, use_container_width=True) - -# Display in Streamlit -c1, c2, c3 = st.columns([2, 2, 1]) -with c1: - config_base = st.text_input("Config Base", - value=f"xemm-{maker_connector}-{taker_connector}-{maker_trading_pair.split('-')[0]}") -with c2: - config_tag = st.text_input("Config Tag", value="1.1") - -id = f"{config_base}_{config_tag}" -config = { - "id": id.lower(), - "controller_name": "xemm_multiple_levels", - "controller_type": "generic", - "maker_connector": maker_connector, - "maker_trading_pair": maker_trading_pair, - "taker_connector": taker_connector, - "taker_trading_pair": taker_trading_pair, - "min_profitability": min_profitability, - "max_profitability": max_profitability, - "buy_levels_targets_amount": buy_targets_amounts, - "sell_levels_targets_amount": sell_targets_amounts -} -yaml_config = yaml.dump(config, default_flow_style=False) - -with c3: - upload_config_to_backend = st.button("Upload Config to Hummingbot-API") - -if upload_config_to_backend: - backend_api_client = get_backend_api_client() - try: - config_name = config.get("id", id.lower()) - backend_api_client.controllers.create_or_update_controller_config( - config_name=config_name, - config=config - ) - st.success("Config uploaded successfully!") - except Exception as e: - st.error(f"Failed to upload config: {e}") diff --git a/pages/data/__init__.py b/pages/data/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/data/download_candles/README.md b/pages/data/download_candles/README.md deleted file mode 100644 index 1d08d65ad5..0000000000 --- a/pages/data/download_candles/README.md +++ /dev/null @@ -1 +0,0 @@ -Download historical exchange data as OHLVC candles. Supports multiple trading pairs and custom time ranges/intervals. 
\ No newline at end of file diff --git a/pages/data/download_candles/__init__.py b/pages/data/download_candles/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/data/download_candles/app.py b/pages/data/download_candles/app.py deleted file mode 100644 index c4c7da4c87..0000000000 --- a/pages/data/download_candles/app.py +++ /dev/null @@ -1,73 +0,0 @@ -from datetime import datetime, time - -import pandas as pd -import plotly.graph_objects as go -import streamlit as st - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -# Initialize Streamlit page -initialize_st_page(title="Download Candles", icon="💾") -backend_api_client = get_backend_api_client() - -c1, c2, c3, c4 = st.columns([2, 2, 2, 0.5]) -with c1: - connector = st.selectbox("Exchange", - ["binance_perpetual", "binance", "gate_io", "gate_io_perpetual", "kucoin", "ascend_ex"], - index=0) - trading_pair = st.text_input("Trading Pair", value="BTC-USDT") -with c2: - interval = st.selectbox("Interval", options=["1m", "3m", "5m", "15m", "1h", "4h", "1d", "1s"]) -with c3: - start_date = st.date_input("Start Date", value=datetime(2023, 1, 1)) - end_date = st.date_input("End Date", value=datetime(2023, 1, 2)) -with c4: - get_data_button = st.button("Get Candles!") - -if get_data_button: - start_datetime = datetime.combine(start_date, time.min) - end_datetime = datetime.combine(end_date, time.max) - if end_datetime < start_datetime: - st.error("End Date should be greater than Start Date.") - st.stop() - - candles = backend_api_client.market_data.get_historical_candles( - connector_name=connector, - trading_pair=trading_pair, - interval=interval, - start_time=int(start_datetime.timestamp()), - end_time=int(end_datetime.timestamp()) - ) - - candles_df = pd.DataFrame(candles) - candles_df.index = pd.to_datetime(candles_df["timestamp"], unit='s') - - # Plotting the candlestick chart - fig = go.Figure(data=[go.Candlestick( - x=candles_df.index, - open=candles_df['open'], - high=candles_df['high'], - low=candles_df['low'], - close=candles_df['close'] - )]) - fig.update_layout( - height=1000, - title="Candlesticks", - xaxis_title="Time", - yaxis_title="Price", - template="plotly_dark", - showlegend=False - ) - fig.update_xaxes(rangeslider_visible=False) - fig.update_yaxes(title_text="Price") - st.plotly_chart(fig, use_container_width=True) - - # Generating CSV and download button - csv = candles_df.to_csv(index=False) - filename = f"{connector}_{trading_pair}_{start_date.strftime('%Y%m%d')}_{end_date.strftime('%Y%m%d')}.csv" - st.download_button( - label="Download Candles as CSV", - data=csv, - file_name=filename, - mime='text/csv', - ) diff --git a/pages/data/tvl_vs_mcap/README.md b/pages/data/tvl_vs_mcap/README.md deleted file mode 100644 index 251fc88055..0000000000 --- a/pages/data/tvl_vs_mcap/README.md +++ /dev/null @@ -1,3 +0,0 @@ -Easily compare various DeFi protocols based on their market capitalization and total value locked, using DeFiLlama data. 
- -Data Source: [DefiLlama](https://defillama.com/) diff --git a/pages/data/tvl_vs_mcap/__init__.py b/pages/data/tvl_vs_mcap/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/data/tvl_vs_mcap/app.py b/pages/data/tvl_vs_mcap/app.py deleted file mode 100644 index d6e7de422c..0000000000 --- a/pages/data/tvl_vs_mcap/app.py +++ /dev/null @@ -1,76 +0,0 @@ -import numpy as np -import pandas as pd -import plotly.express as px -import streamlit as st -from defillama import DefiLlama - -from frontend.st_utils import initialize_st_page - -initialize_st_page(title="TVL vs Market Cap", icon="🦉") - -# Start content here -MIN_TVL = 1000000. -MIN_MCAP = 1000000. - - -@st.cache_data -def get_tvl_mcap_data(): - llama = DefiLlama() - df = pd.DataFrame(llama.get_all_protocols()) - tvl_mcap_df = df.loc[ - (df["tvl"] > 0) & (df["mcap"] > 0), ["name", "tvl", "mcap", "chain", "category", "slug"]].sort_values( - by=["mcap"], ascending=False) - return tvl_mcap_df[(tvl_mcap_df["tvl"] > MIN_TVL) & (tvl_mcap_df["mcap"] > MIN_MCAP)] - - -def get_protocols_by_chain_category(protocols: pd.DataFrame, group_by: list, nth: list): - return protocols.sort_values('tvl', ascending=False).groupby(group_by).nth(nth).reset_index() - - -with st.spinner(text='In progress'): - tvl_mcap_df = get_tvl_mcap_data() - -default_chains = ["Ethereum", "Solana", "Binance", "Polygon", "Multi-Chain", "Avalanche"] - -st.write("### Chains 🔗") -chains = st.multiselect( - "Select the chains to analyze:", - options=tvl_mcap_df["chain"].unique(), - default=default_chains) - -scatter = px.scatter( - data_frame=tvl_mcap_df[tvl_mcap_df["chain"].isin(chains)], - x="tvl", - y="mcap", - color="chain", - trendline="ols", - log_x=True, - log_y=True, - height=800, - hover_data=["name"], - template="plotly_dark", - title="TVL vs MCAP", - labels={ - "tvl": 'TVL (USD)', - 'mcap': 'Market Cap (USD)' - }) - -st.plotly_chart(scatter, use_container_width=True) - -st.write("---") -st.write("### SunBurst 🌞") -groupby = st.selectbox('Group by:', [['chain', 'category'], ['category', 'chain']]) -nth = st.slider('Top protocols by Category', min_value=1, max_value=5) - -proto_agg = get_protocols_by_chain_category(tvl_mcap_df[tvl_mcap_df["chain"].isin(chains)], - groupby, np.arange(0, nth, 1).tolist()) -groupby.append("slug") -sunburst = px.sunburst( - proto_agg, - path=groupby, - values='tvl', - height=800, - title="SunBurst", - template="plotly_dark", ) - -st.plotly_chart(sunburst, use_container_width=True) diff --git a/pages/landing.py b/pages/landing.py deleted file mode 100644 index e7987cde2a..0000000000 --- a/pages/landing.py +++ /dev/null @@ -1,328 +0,0 @@ -import random -from datetime import datetime, timedelta - -import pandas as pd -import plotly.graph_objects as go -import streamlit as st - -from frontend.st_utils import initialize_st_page - -initialize_st_page( - layout="wide", - show_readme=False -) - -# Custom CSS for enhanced styling -st.markdown(""" - -""", unsafe_allow_html=True) - -# Hero Section -st.markdown(""" -
-    🤖 Hummingbot Dashboard
-    Your Command Center for Algorithmic Trading Excellence
-""", unsafe_allow_html=True) - -# Generate sample data for demonstration -def generate_sample_data(): - """Generate sample trading data for visualization""" - dates = pd.date_range(start=datetime.now() - timedelta(days=30), end=datetime.now(), freq='D') - - # Sample portfolio performance - portfolio_values = [] - base_value = 10000 - for i in range(len(dates)): - change = random.uniform(-0.02, 0.03) # -2% to +3% daily change - base_value *= (1 + change) - portfolio_values.append(base_value) - - return pd.DataFrame({ - 'date': dates, - 'portfolio_value': portfolio_values, - 'daily_return': [random.uniform(-0.05, 0.08) for _ in range(len(dates))] - }) - -# Quick Stats Dashboard -st.markdown("## 📊 Live Dashboard Overview") - -# Mock data warning -st.warning(""" -⚠️ **Demo Data Notice**: The metrics, charts, and statistics shown below are simulated/mocked data for demonstration purposes. -This showcases how real trading data would be presented in the dashboard once connected to live trading bots. -""") - -col1, col2, col3, col4 = st.columns(4) - -with col1: - st.markdown(""" -
-    🔄 Active Bots
-    3
-    Currently Trading
- """, unsafe_allow_html=True) - -with col2: - st.markdown(""" -
-    💰 Total Portfolio
-    $12,847
-    +2.3% Today
- """, unsafe_allow_html=True) - -with col3: - st.markdown(""" -
-    📈 Win Rate
-    74.2%
-    Last 30 Days
- """, unsafe_allow_html=True) - -with col4: - st.markdown(""" -
-    ⚡ Total Trades
-    1,247
-    This Month
- """, unsafe_allow_html=True) - -st.divider() - -# Performance Chart -col1, col2 = st.columns([2, 1]) - -with col1: - st.markdown("### 📈 Portfolio Performance (30 Days)") - - # Generate and display sample performance chart - df = generate_sample_data() - - fig = go.Figure() - fig.add_trace(go.Scatter( - x=df['date'], - y=df['portfolio_value'], - mode='lines+markers', - line=dict(color='#4CAF50', width=3), - fill='tonexty', - fillcolor='rgba(76, 175, 80, 0.1)', - name='Portfolio Value' - )) - - fig.update_layout( - template='plotly_dark', - height=400, - showlegend=False, - margin=dict(l=0, r=0, t=0, b=0), - xaxis=dict(showgrid=False), - yaxis=dict(showgrid=True, gridcolor='rgba(255,255,255,0.1)') - ) - - st.plotly_chart(fig, use_container_width=True) - -with col2: - st.markdown("### 🎯 Strategy Status") - - strategies = [ - {"name": "Market Making", "status": "active", "pnl": "+$342"}, - {"name": "Arbitrage", "status": "active", "pnl": "+$156"}, - {"name": "Grid Trading", "status": "active", "pnl": "+$89"}, - {"name": "DCA Bot", "status": "inactive", "pnl": "+$234"}, - ] - - for strategy in strategies: - status_class = "status-active" if strategy["status"] == "active" else "status-inactive" - status_icon = "🟢" if strategy["status"] == "active" else "🔴" - - st.markdown(f""" -
-    {strategy['name']}
-    {status_icon} {strategy['status'].title()}
-    {strategy['pnl']}
- """, unsafe_allow_html=True) - -st.divider() - -# Feature Showcase -st.markdown("## 🚀 Platform Features") - -col1, col2, col3 = st.columns(3) - -with col1: - st.markdown(""" -
-    🎯
-    Strategy Development
- """, unsafe_allow_html=True) - -with col2: - st.markdown(""" -
-    📊
-    Analytics & Insights
- """, unsafe_allow_html=True) - -with col3: - st.markdown(""" -
-    Live Trading
- """, unsafe_allow_html=True) - -st.divider() - -# Quick Actions -st.markdown("## ⚡ Quick Actions") - -# Alert for mocked navigation -st.info("ℹ️ **Note**: This is a mocked landing page. The Quick Actions buttons below are for demonstration purposes and the page navigation is not functional.") - -col1, col2, col3, col4 = st.columns(4) - -with col1: - if st.button("🚀 Deploy Strategy", use_container_width=True, type="primary"): - st.error("🚫 Navigation unavailable - This is a mocked landing page for demonstration purposes.") - -with col2: - if st.button("📊 View Performance", use_container_width=True): - st.error("🚫 Navigation unavailable - This is a mocked landing page for demonstration purposes.") - -with col3: - if st.button("🔍 Backtesting", use_container_width=True): - st.error("🚫 Navigation unavailable - This is a mocked landing page for demonstration purposes.") - -with col4: - if st.button("🗃️ Archived Bots", use_container_width=True): - st.error("🚫 Navigation unavailable - This is a mocked landing page for demonstration purposes.") - -st.divider() - -# Community & Resources -col1, col2 = st.columns([2, 1]) - -with col1: - st.markdown("### 🎬 Learn & Explore") - - st.video("https://youtu.be/7eHiMPRBQLQ?si=PAvCq0D5QDZz1h1D") - -with col2: - st.markdown("### 💬 Join Our Community") - - st.markdown(""" -
-    🌟 Connect with Traders
-    Join thousands of algorithmic traders sharing strategies and insights!
-    💬 Join Discord
-    🐛 Report Issues
- """, unsafe_allow_html=True) - -# Footer stats -st.markdown("---") -col1, col2, col3, col4 = st.columns(4) - -with col1: - st.metric("🌍 Global Users", "10,000+") - -with col2: - st.metric("💱 Exchanges", "20+") - -with col3: - st.metric("🔄 Daily Volume", "$2.5M+") - -with col4: - st.metric("⭐ GitHub Stars", "7,800+") diff --git a/pages/orchestration/__init__.py b/pages/orchestration/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/orchestration/archived_bots/README.md b/pages/orchestration/archived_bots/README.md deleted file mode 100644 index debf28fe86..0000000000 --- a/pages/orchestration/archived_bots/README.md +++ /dev/null @@ -1,120 +0,0 @@ -# Archived Bots - -## Overview -The Archived Bots page provides comprehensive access to historical bot database files, enabling users to analyze past trading performance, review strategies, and extract insights from archived bot data. - -## Key Features - -### Database Management -- **Database Discovery**: Automatically lists all available database files in the system -- **Database Status**: Shows connection status and basic information for each database -- **Database Summary**: Provides overview statistics and metadata for each database - -### Historical Data Analysis -- **Performance Metrics**: Detailed trade-based performance analysis including PnL, win/loss ratios, and key statistics -- **Trade History**: Complete record of all trades with filtering and pagination -- **Order History**: Comprehensive order book data with status filtering -- **Position Tracking**: Historical position data with timeline analysis - -### Strategy Insights -- **Executor Analysis**: Review strategy executor performance and configuration -- **Controller Data**: Access to controller configurations and their historical performance -- **Strategy Comparison**: Compare different strategy implementations and their results - -### Data Export & Visualization -- **Export Functionality**: Download historical data in various formats (CSV, JSON) -- **Performance Charts**: Interactive visualizations of trading performance over time -- **Comparative Analysis**: Side-by-side comparison of different archived strategies - -## Usage Instructions - -### 1. Database Selection -- View the list of available archived databases -- Select a database to explore its contents -- Check database status and connection health - -### 2. Performance Analysis -- Navigate to the Performance tab to view trading metrics -- Review key performance indicators (KPIs) -- Analyze profit/loss trends and trading patterns - -### 3. Historical Data Review -- Browse trade history with pagination controls -- Filter orders by status, date range, or trading pair -- Review position data and timeline - -### 4. Strategy Analysis -- Examine executor configurations and performance -- Review controller settings and their impact -- Compare different strategy implementations - -### 5. 
Data Export -- Select desired data range and format -- Export historical data for external analysis -- Download performance reports - -## Technical Implementation - -### Architecture -- **Async API Integration**: Uses nest_asyncio for async database operations -- **Database Connections**: Manages multiple database connections efficiently -- **Pagination**: Implements efficient pagination for large datasets -- **Error Handling**: Comprehensive error handling for database operations - -### Components -- **Database Browser**: Interactive database selection and status display -- **Performance Dashboard**: Real-time performance metrics and charts -- **Data Grid**: Efficient display of large datasets with filtering -- **Export Manager**: Handles data export in multiple formats - -### State Management -- **Database Selection**: Tracks currently selected database -- **Filter States**: Maintains filter settings across page navigation -- **Pagination State**: Manages pagination across different data views -- **Export Settings**: Remembers export preferences - -### API Integration -- **ArchivedBotsRouter**: Async router for database operations -- **Batch Operations**: Efficient bulk data retrieval -- **Connection Pooling**: Manages database connections efficiently -- **Error Recovery**: Automatic retry mechanisms for failed operations - -## Best Practices - -### Performance Optimization -- Use pagination for large datasets -- Implement efficient filtering on the backend -- Cache frequently accessed data -- Use async operations for database queries - -### User Experience -- Provide clear status indicators -- Show loading states for long operations -- Implement progressive data loading -- Offer keyboard shortcuts for navigation - -### Data Integrity -- Validate database connections before operations -- Handle missing or corrupted data gracefully -- Provide clear error messages -- Implement data consistency checks - -## File Structure -``` -archived_bots/ -├── __init__.py -├── README.md -├── app.py # Main application file -├── utils.py # Utility functions -└── components/ # Page-specific components - ├── database_browser.py - ├── performance_dashboard.py - ├── data_grid.py - └── export_manager.py -``` - -## Dependencies -- **Backend**: ArchivedBotsRouter from hummingbot-api-client -- **Frontend**: Streamlit components, plotly for visualization -- **Utils**: nest_asyncio for async operations, pandas for data manipulation -- **Components**: Custom styling components for consistent UI \ No newline at end of file diff --git a/pages/orchestration/archived_bots/__init__.py b/pages/orchestration/archived_bots/__init__.py deleted file mode 100644 index 40e295ae67..0000000000 --- a/pages/orchestration/archived_bots/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Archived Bots Page Module \ No newline at end of file diff --git a/pages/orchestration/archived_bots/app.py b/pages/orchestration/archived_bots/app.py deleted file mode 100644 index 377ead27c0..0000000000 --- a/pages/orchestration/archived_bots/app.py +++ /dev/null @@ -1,1093 +0,0 @@ -import asyncio -import json -from datetime import datetime, timedelta -from typing import Any, Dict, List, Optional - -import nest_asyncio -import numpy as np -import pandas as pd -import plotly.express as px -import plotly.graph_objects as go -import streamlit as st -from plotly.subplots import make_subplots - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -# Enable nested async -nest_asyncio.apply() - -# Initialize page -initialize_st_page( - 
layout="wide", - show_readme=False -) - -# Session state initialization -if "selected_database" not in st.session_state: - st.session_state.selected_database = None -if "databases_list" not in st.session_state: - st.session_state.databases_list = [] -if "databases_status" not in st.session_state: - st.session_state.databases_status = {} -if "show_database_status" not in st.session_state: - st.session_state.show_database_status = False -if "db_summary" not in st.session_state: - st.session_state.db_summary = {} -if "db_performance" not in st.session_state: - st.session_state.db_performance = {} -if "trades_data" not in st.session_state: - st.session_state.trades_data = [] -if "orders_data" not in st.session_state: - st.session_state.orders_data = [] -if "positions_data" not in st.session_state: - st.session_state.positions_data = [] -if "executors_data" not in st.session_state: - st.session_state.executors_data = [] -if "controllers_data" not in st.session_state: - st.session_state.controllers_data = [] -if "page_offset" not in st.session_state: - st.session_state.page_offset = 0 -if "page_limit" not in st.session_state: - st.session_state.page_limit = 100 -if "trade_analysis" not in st.session_state: - st.session_state.trade_analysis = {} -if "historical_candles" not in st.session_state: - st.session_state.historical_candles = [] -if "bot_runs" not in st.session_state: - st.session_state.bot_runs = [] - -# Get backend client -backend_client = get_backend_api_client() - - -# Helper functions - -def detect_timestamp_unit(timestamps): - """Detect if timestamps are in seconds or milliseconds""" - if hasattr(timestamps, 'empty') and timestamps.empty: - return 'ms' # default to milliseconds - if not hasattr(timestamps, '__iter__') or len(timestamps) == 0: - return 'ms' # default to milliseconds - - # Take a sample timestamp - sample_ts = timestamps[0] if hasattr(timestamps, '__iter__') else timestamps - - # If timestamp is greater than 1e10, it's likely milliseconds - # (1e10 corresponds to year 2001 in seconds, year 1970 in milliseconds) - if sample_ts > 1e10: - return 'ms' - else: - return 's' - -def safe_to_datetime(timestamps, default_unit='ms'): - """Safely convert timestamps to datetime, auto-detecting unit""" - if pd.isna(timestamps).all(): - return timestamps - - # Check if already datetime - if hasattr(timestamps, 'dtype') and pd.api.types.is_datetime64_any_dtype(timestamps): - return timestamps - - # Check if Series contains datetime objects - if hasattr(timestamps, '__iter__') and not isinstance(timestamps, str): - first_valid = timestamps.dropna().iloc[0] if hasattr(timestamps, 'dropna') else next((t for t in timestamps if pd.notna(t)), None) - if first_valid is not None and isinstance(first_valid, (pd.Timestamp, datetime)): - return timestamps - - # Handle series/array - if hasattr(timestamps, '__iter__') and not isinstance(timestamps, str): - non_null_ts = timestamps.dropna() if hasattr(timestamps, 'dropna') else [t for t in timestamps if pd.notna(t)] - if len(non_null_ts) == 0: - return pd.to_datetime(timestamps) - unit = detect_timestamp_unit(non_null_ts) - else: - unit = detect_timestamp_unit([timestamps]) - - return pd.to_datetime(timestamps, unit=unit) - -def load_databases(): - """Load available databases""" - try: - databases = backend_client.archived_bots.list_databases() - st.session_state.databases_list = databases - return databases - except Exception as e: - st.error(f"Failed to load databases: {str(e)}") - return [] - -def load_database_status(db_path: str): - """Load 
status for a specific database""" - try: - status = backend_client.archived_bots.get_database_status(db_path) - return status - except Exception as e: - return {"status": "error", "error": str(e)} - -def load_all_databases_status(): - """Load status for all databases""" - if not st.session_state.databases_list: - return - - status_dict = {} - for db_path in st.session_state.databases_list: - status_dict[db_path] = load_database_status(db_path) - - st.session_state.databases_status = status_dict - return status_dict - -def get_healthy_databases(): - """Get list of databases that are not corrupted""" - healthy_dbs = [] - for db_path, status in st.session_state.databases_status.items(): - # Check if 'healthy' field exists at the top level - if status.get("healthy") == True: - healthy_dbs.append(db_path) - # Handle different status formats as fallback - elif status.get("status") == "healthy" or status.get("status") == "ok": - healthy_dbs.append(db_path) - elif "status" in status and isinstance(status["status"], dict): - # Check if general_status is true in nested status - if status["status"].get("general_status") == True: - healthy_dbs.append(db_path) - return healthy_dbs - -def load_database_summary(db_path: str): - """Load database summary""" - try: - summary = backend_client.archived_bots.get_database_summary(db_path) - st.session_state.db_summary = summary - return summary - except Exception as e: - st.error(f"Failed to load database summary: {str(e)}") - return {} - -def load_database_performance(db_path: str): - """Load database performance data""" - try: - performance = backend_client.archived_bots.get_database_performance(db_path) - st.session_state.db_performance = performance - return performance - except Exception as e: - st.error(f"Failed to load performance data: {str(e)}") - return {} - -def load_trades_data(db_path: str, limit: int = 100, offset: int = 0): - """Load trades data with pagination""" - try: - trades = backend_client.archived_bots.get_database_trades(db_path, limit, offset) - st.session_state.trades_data = trades - return trades - except Exception as e: - st.error(f"Failed to load trades data: {str(e)}") - return {"trades": [], "total": 0} - -def load_orders_data(db_path: str, limit: int = 100, offset: int = 0, status: str = None): - """Load orders data with pagination""" - try: - orders = backend_client.archived_bots.get_database_orders(db_path, limit, offset, status) - st.session_state.orders_data = orders - return orders - except Exception as e: - st.error(f"Failed to load orders data: {str(e)}") - return {"orders": [], "total": 0} - -def load_positions_data(db_path: str, limit: int = 100, offset: int = 0): - """Load positions data""" - try: - positions = backend_client.archived_bots.get_database_positions(db_path, limit, offset) - st.session_state.positions_data = positions - return positions - except Exception as e: - st.error(f"Failed to load positions data: {str(e)}") - return {"positions": [], "total": 0} - -def load_executors_data(db_path: str): - """Load executors data""" - try: - executors = backend_client.archived_bots.get_database_executors(db_path) - st.session_state.executors_data = executors - return executors - except Exception as e: - st.error(f"Failed to load executors data: {str(e)}") - return {"executors": []} - -def load_controllers_data(db_path: str): - """Load controllers data""" - try: - controllers = backend_client.archived_bots.get_database_controllers(db_path) - st.session_state.controllers_data = controllers - return controllers - except 
Exception as e: - st.error(f"Failed to load controllers data: {str(e)}") - return {"controllers": []} - -def get_trade_analysis(db_path: str): - """Get trade analysis including exchanges and trading pairs""" - try: - trades = backend_client.archived_bots.get_database_trades(db_path, limit=10000, offset=0) - if not trades or "trades" not in trades: - return {"exchanges": [], "trading_pairs": [], "start_time": None, "end_time": None} - - trades_data = trades["trades"] - if not trades_data: - return {"exchanges": [], "trading_pairs": [], "start_time": None, "end_time": None} - - df = pd.DataFrame(trades_data) - - # Extract exchanges and trading pairs - exchanges = df["connector_name"].unique().tolist() if "connector_name" in df.columns else [] - trading_pairs = df["trading_pair"].unique().tolist() if "trading_pair" in df.columns else [] - - # Get time range - if "timestamp" in df.columns: - # Convert timestamps to datetime with auto-detection - df["timestamp"] = safe_to_datetime(df["timestamp"]) - start_time = df["timestamp"].min() - end_time = df["timestamp"].max() - else: - start_time = None - end_time = None - - return { - "exchanges": exchanges, - "trading_pairs": trading_pairs, - "start_time": start_time, - "end_time": end_time, - "trades_df": df - } - except Exception as e: - st.error(f"Failed to analyze trades: {str(e)}") - return {"exchanges": [], "trading_pairs": [], "start_time": None, "end_time": None} - -def load_bot_runs(): - """Load bot runs data""" - try: - bot_runs = backend_client.bot_orchestration.get_bot_runs(limit=1000, offset=0) - if bot_runs and "data" in bot_runs: - st.session_state.bot_runs = bot_runs["data"] - return bot_runs["data"] - else: - st.session_state.bot_runs = [] - return [] - except Exception as e: - st.warning(f"Could not load bot runs: {str(e)}") - return [] - -def find_matching_bot_run(db_path: str, bot_runs: List[Dict]): - """Find the bot run that matches the database file""" - if not bot_runs: - return None - - # Extract bot name from database path - # Format: "bots/archived/askarabuuut-20250710-0013/data/askarabuuut-20250710-0013-20250710-001318.sqlite" - try: - filename = db_path.split("/")[-1] # Get filename - bot_name = filename.split("-20")[0] # Extract bot name before timestamp - - # Find matching bot run - for run in bot_runs: - if run.get("bot_name", "").startswith(bot_name): - return run - - return None - except Exception: - return None - -def create_bot_runs_scatterplot(bot_runs: List[Dict], healthy_databases: List[str]): - """Create a scatterplot visualization of bot runs with performance data""" - if not bot_runs: - return None - - # Prepare data for plotting - plot_data = [] - - for run in bot_runs: - try: - # Parse final status to get performance data - final_status = json.loads(run.get("final_status", "{}")) - performance = final_status.get("performance", {}) - - # Extract performance metrics - global_pnl = 0 - volume_traded = 0 - realized_pnl = 0 - unrealized_pnl = 0 - - for controller_name, controller_perf in performance.items(): - if isinstance(controller_perf, dict) and "performance" in controller_perf: - perf_data = controller_perf["performance"] - global_pnl += perf_data.get("global_pnl_quote", 0) - volume_traded += perf_data.get("volume_traded", 0) - realized_pnl += perf_data.get("realized_pnl_quote", 0) - unrealized_pnl += perf_data.get("unrealized_pnl_quote", 0) - - # Calculate duration - deployed_at = pd.to_datetime(run.get("deployed_at", "")) - stopped_at = pd.to_datetime(run.get("stopped_at", "")) - duration_hours = 
(stopped_at - deployed_at).total_seconds() / 3600 if deployed_at and stopped_at else 0 - - # Check if database is available - has_database = any(run.get("bot_name", "") in db for db in healthy_databases) - - plot_data.append({ - "bot_name": run.get("bot_name", "Unknown"), - "strategy": run.get("strategy_name", "Unknown"), - "global_pnl": global_pnl, - "volume_traded": volume_traded, - "realized_pnl": realized_pnl, - "unrealized_pnl": unrealized_pnl, - "duration_hours": duration_hours, - "deployed_at": deployed_at, - "stopped_at": stopped_at, - "run_status": run.get("run_status", "Unknown"), - "deployment_status": run.get("deployment_status", "Unknown"), - "account": run.get("account_name", "Unknown"), - "has_database": has_database, - "bot_id": run.get("id", 0) - }) - - except Exception as e: - continue - - if not plot_data: - return None - - df = pd.DataFrame(plot_data) - - # Create scatter plot - fig = go.Figure() - - # Add traces for runs with and without databases - for has_db, label, color, symbol in [(True, "With Database", "#4CAF50", "circle"), - (False, "No Database", "#9E9E9E", "circle-open")]: - subset = df[df["has_database"] == has_db] - if not subset.empty: - fig.add_trace(go.Scatter( - x=subset["volume_traded"], - y=subset["global_pnl"], - mode="markers", - name=label, - marker=dict( - size=subset["duration_hours"].apply(lambda x: max(8, min(x * 2, 50))), - color=color, - symbol=symbol, - line=dict(width=2, color="white"), - opacity=0.8 - ), - hovertemplate=( - "%{customdata[0]}
" + - "Strategy: %{customdata[1]}
" + - "Global PnL: $%{y:.4f}
" + - "Volume: $%{x:,.0f}
" + - "Realized PnL: $%{customdata[2]:.4f}
" + - "Unrealized PnL: $%{customdata[3]:.4f}
" + - "Duration: %{customdata[4]:.1f}h
" + - "Deployed: %{customdata[5]}
" + - "Stopped: %{customdata[6]}
" + - "Status: %{customdata[7]} / %{customdata[8]}
" + - "Account: %{customdata[9]}
" + - "" - ), - customdata=subset[["bot_name", "strategy", "realized_pnl", "unrealized_pnl", - "duration_hours", "deployed_at", "stopped_at", "run_status", - "deployment_status", "account"]].values - )) - - # Update layout - fig.update_layout( - title="Bot Runs Performance Overview", - xaxis_title="Volume Traded ($)", - yaxis_title="Global PnL ($)", - template="plotly_dark", - plot_bgcolor='rgba(0, 0, 0, 0)', - paper_bgcolor='rgba(0, 0, 0, 0.1)', - font=dict(color='white', size=12), - height=600, - hovermode="closest", - showlegend=True, - legend=dict( - orientation="h", - yanchor="bottom", - y=1.02, - xanchor="right", - x=1 - ) - ) - - # Add quadrant lines - fig.add_hline(y=0, line=dict(color="gray", width=1, dash="dash"), opacity=0.5) - fig.add_vline(x=0, line=dict(color="gray", width=1, dash="dash"), opacity=0.5) - - return fig, df - -def get_historical_candles(connector_name: str, trading_pair: str, start_time: datetime, end_time: datetime, interval: str = "5m"): - """Get historical candle data for the specified period""" - try: - # Add buffer time for candles - buffer_time = timedelta(hours=1) # 2 hours buffer for 20 candles at 5min = 100 minutes - extended_start = start_time - buffer_time - extended_end = end_time + buffer_time - - # Call backend API to get historical candles using market_data service - candles = backend_client.market_data.get_historical_candles( - connector_name=connector_name, - trading_pair=trading_pair, - interval=interval, - start_time=int(extended_start.timestamp()), - end_time=int(extended_end.timestamp()) - ) - - return candles - except Exception as e: - st.error(f"Failed to get historical candles: {str(e)}") - return [] - -def create_performance_chart(performance_data: Dict[str, Any]): - """Create performance visualization chart""" - if not performance_data or "performance_data" not in performance_data: - return None - - perf_data = performance_data["performance_data"] - if not perf_data: - return None - - df = pd.DataFrame(perf_data) - - # Convert timestamp to datetime with auto-detection - df["timestamp"] = safe_to_datetime(df["timestamp"]) - - fig = go.Figure() - - # Add net PnL line - fig.add_trace(go.Scatter( - x=df["timestamp"], - y=df["net_pnl_quote"], - mode="lines+markers", - name="Net PnL", - line=dict(width=2, color='#4CAF50'), - marker=dict(size=4) - )) - - # Add realized PnL line - if "realized_trade_pnl_quote" in df.columns: - # Calculate cumulative realized PnL - df["cumulative_realized_pnl"] = df["realized_trade_pnl_quote"].cumsum() - fig.add_trace(go.Scatter( - x=df["timestamp"], - y=df["cumulative_realized_pnl"], - mode="lines", - name="Cumulative Realized PnL", - line=dict(width=2, color='#2196F3') - )) - - # Add unrealized PnL line - if "unrealized_trade_pnl_quote" in df.columns: - fig.add_trace(go.Scatter( - x=df["timestamp"], - y=df["unrealized_trade_pnl_quote"], - mode="lines", - name="Unrealized PnL", - line=dict(width=1, color='#FF9800') - )) - - fig.update_layout( - title="Trading Performance Over Time", - height=400, - template='plotly_dark', - xaxis_title="Time", - yaxis_title="PnL (Quote)", - showlegend=True - ) - return fig - -def create_trades_chart(trades_data: List[Dict[str, Any]]): - """Create trades visualization""" - if not trades_data: - return None - - df = pd.DataFrame(trades_data) - - fig = go.Figure() - - # Group by date and sum volume - df["date"] = safe_to_datetime(df["timestamp"]).dt.date - daily_volume = df.groupby("date")["amount"].sum().reset_index() - - fig.add_trace(go.Bar( - x=daily_volume["date"], - 
y=daily_volume["amount"], - name="Daily Volume" - )) - - fig.update_layout(title="Trade Volume Over Time", height=400) - return fig - -def get_default_layout(title=None, height=800, width=1200): - """Get default layout inspired by backtesting result""" - layout = { - "template": "plotly_dark", - "plot_bgcolor": 'rgba(0, 0, 0, 0)', - "paper_bgcolor": 'rgba(0, 0, 0, 0.1)', - "font": {"color": 'white', "size": 12}, - "height": height, - "width": width, - "margin": {"l": 20, "r": 20, "t": 50, "b": 20}, - "xaxis_rangeslider_visible": False, - "hovermode": "x unified", - "showlegend": False, - } - if title: - layout["title"] = title - return layout - -def add_trades_to_chart(fig, trades_data: List[Dict[str, Any]]): - """Add trade lines to chart inspired by backtesting result""" - if not trades_data: - return fig - - trades_df = pd.DataFrame(trades_data) - trades_df["timestamp"] = safe_to_datetime(trades_df["timestamp"]) - - # Calculate cumulative PnL for each trade - if "pnl" in trades_df.columns: - trades_df["cumulative_pnl"] = trades_df["pnl"].cumsum() - else: - trades_df["cumulative_pnl"] = 0 - - # Group trades by time intervals to show trade lines - for idx, trade in trades_df.iterrows(): - trade_time = trade["timestamp"] - trade_price = trade["price"] - - # Determine trade type - is_buy = False - if "trade_type" in trade: - is_buy = trade["trade_type"].upper() == "BUY" - elif "side" in trade: - is_buy = trade["side"].upper() == "BUY" - elif "order_type" in trade: - is_buy = trade["order_type"].upper() == "BUY" - - # Use trade type for color - Buy=green, Sell=red - color = "green" if is_buy else "red" - - # Add trade marker - fig.add_trace(go.Scatter( - x=[trade_time], - y=[trade_price], - mode="markers", - marker=dict( - symbol="triangle-up" if is_buy else "triangle-down", - size=10, - color=color, - line=dict(width=1, color=color) - ), - name=f"{'Buy' if is_buy else 'Sell'} Trade", - showlegend=False, - hovertemplate=f"{'Buy' if is_buy else 'Sell'} Trade
" + - f"Time: %{{x}}
" + - f"Price: ${trade_price:.4f}
" + - f"Amount: {trade.get('amount', 0):.4f}
" + - "" - )) - - return fig - -def get_pnl_trace(trades_data: List[Dict[str, Any]]): - """Get PnL trace for trades""" - if not trades_data: - return None - - trades_df = pd.DataFrame(trades_data) - trades_df["timestamp"] = safe_to_datetime(trades_df["timestamp"]) - - # Calculate cumulative PnL - if "pnl" in trades_df.columns: - pnl_values = trades_df["pnl"].values - else: - pnl_values = [0] * len(trades_df) - - cum_pnl = np.cumsum(pnl_values) - - return go.Scatter( - x=trades_df["timestamp"], - y=cum_pnl, - mode='lines', - line=dict(color='gold', width=2), - name='Cumulative PnL' - ) - -def create_comprehensive_dashboard(candles_data: List[Dict[str, Any]], trades_data: List[Dict[str, Any]], performance_data: Dict[str, Any], trading_pair: str = ""): - """Create comprehensive trading dashboard with multiple panels""" - if not candles_data or not performance_data: - return None - - # Create subplots with shared x-axis - fig = make_subplots( - rows=2, cols=1, - shared_xaxes=True, - vertical_spacing=0.05, - subplot_titles=( - f'{trading_pair} Price & Trades', - 'PnL & Fees vs Position' - ), - row_heights=[0.75, 0.25], - specs=[[{"secondary_y": False}], - [{"secondary_y": True}]] - ) - - # Prepare data - candles_df = pd.DataFrame(candles_data) - candles_df["timestamp"] = safe_to_datetime(candles_df["timestamp"]) - - perf_data = performance_data.get("performance_data", []) - perf_df = None - if perf_data: - perf_df = pd.DataFrame(perf_data) - perf_df["timestamp"] = safe_to_datetime(perf_df["timestamp"]) - - # Row 1: Candlestick chart with trades - fig.add_trace(go.Candlestick( - x=candles_df["timestamp"], - open=candles_df["open"], - high=candles_df["high"], - low=candles_df["low"], - close=candles_df["close"], - name="Price", - showlegend=False - ), row=1, col=1) - - # Add trades to price chart and average price lines from performance data - if trades_data: - trades_df = pd.DataFrame(trades_data) - trades_df["timestamp"] = safe_to_datetime(trades_df["timestamp"]) - - # Add individual trade markers - for idx, trade in trades_df.iterrows(): - is_buy = False - if "trade_type" in trade: - is_buy = trade["trade_type"].upper() == "BUY" - elif "side" in trade: - is_buy = trade["side"].upper() == "BUY" - - color = "green" if is_buy else "red" - fig.add_trace(go.Scatter( - x=[trade["timestamp"]], - y=[trade["price"]], - mode="markers", - marker=dict( - symbol="triangle-up" if is_buy else "triangle-down", - size=8, - color=color, - line=dict(width=1, color=color) - ), - name=f"{'Buy' if is_buy else 'Sell'} Trade", - showlegend=False, - hovertemplate=f"{'Buy' if is_buy else 'Sell'}
<br>Price: ${trade['price']:.4f}<br>
Amount: {trade.get('amount', 0):.4f}" - ), row=1, col=1) - - # Add dynamic average price lines from performance data - if perf_data and perf_df is not None: - # Filter performance data to only include rows with valid average prices - buy_avg_data = perf_df[perf_df["buy_avg_price"] > 0].copy() - sell_avg_data = perf_df[perf_df["sell_avg_price"] > 0].copy() - - # Add buy average price line (evolving over time) - if not buy_avg_data.empty: - fig.add_trace(go.Scatter( - x=buy_avg_data["timestamp"], - y=buy_avg_data["buy_avg_price"], - mode="lines", - name="Buy Avg Price", - line=dict(color="green", width=2, dash="dash"), - showlegend=True, - hovertemplate="Buy Avg Price
<br>Time: %{x}<br>
Price: $%{y:.4f}" - ), row=1, col=1) - - # Add final buy average price as horizontal line - final_buy_avg = buy_avg_data["buy_avg_price"].iloc[-1] - fig.add_hline( - y=final_buy_avg, - line=dict(color="green", width=1, dash="dot"), - annotation_text=f"Final Buy Avg: ${final_buy_avg:.4f}", - annotation_position="bottom right", - annotation_font_color="green", - annotation_font_size=10, - row=1, col=1 - ) - - # Add sell average price line (evolving over time) - if not sell_avg_data.empty: - fig.add_trace(go.Scatter( - x=sell_avg_data["timestamp"], - y=sell_avg_data["sell_avg_price"], - mode="lines", - name="Sell Avg Price", - line=dict(color="red", width=2, dash="dash"), - showlegend=True, - hovertemplate="Sell Avg Price
<br>Time: %{x}<br>
Price: $%{y:.4f}" - ), row=1, col=1) - - # Add final sell average price as horizontal line - final_sell_avg = sell_avg_data["sell_avg_price"].iloc[-1] - fig.add_hline( - y=final_sell_avg, - line=dict(color="red", width=1, dash="dot"), - annotation_text=f"Final Sell Avg: ${final_sell_avg:.4f}", - annotation_position="top right", - annotation_font_color="red", - annotation_font_size=10, - row=1, col=1 - ) - - if perf_data and perf_df is not None: - # Row 2: Net PnL, Unrealized PnL, and Fees (left y-axis) + Position (right y-axis) - fig.add_trace(go.Scatter( - x=perf_df["timestamp"], - y=perf_df["net_pnl_quote"], - mode="lines", - name="Net PnL", - line=dict(color='#4CAF50', width=2), - showlegend=True - ), row=2, col=1, secondary_y=False) - - fig.add_trace(go.Scatter( - x=perf_df["timestamp"], - y=perf_df["unrealized_trade_pnl_quote"], - mode="lines", - name="Unrealized PnL", - line=dict(color='#FF9800', width=2), - showlegend=True - ), row=2, col=1, secondary_y=False) - - fig.add_trace(go.Scatter( - x=perf_df["timestamp"], - y=perf_df["fees_quote"].cumsum(), - mode="lines", - name="Cumulative Fees", - line=dict(color='#F44336', width=2), - showlegend=True - ), row=2, col=1, secondary_y=False) - - fig.add_trace(go.Scatter( - x=perf_df["timestamp"], - y=perf_df["net_position"], - mode="lines", - name="Net Position", - line=dict(color='#2196F3', width=2), - showlegend=True - ), row=2, col=1, secondary_y=True) - - # Update layout - fig.update_layout( - height=700, - template='plotly_dark', - plot_bgcolor='rgba(0, 0, 0, 0)', - paper_bgcolor='rgba(0, 0, 0, 0.1)', - font=dict(color='white', size=12), - margin=dict(l=20, r=20, t=50, b=20), - hovermode="x unified", - legend=dict( - orientation="h", - yanchor="bottom", - y=1.02, - xanchor="right", - x=1 - ) - ) - - # Update axis properties - fig.update_xaxes(rangeslider_visible=False) - fig.update_yaxes(title_text="Price ($)", row=1, col=1) - fig.update_yaxes(title_text="PnL & Fees ($)", row=2, col=1, secondary_y=False) - fig.update_yaxes(title_text="Position", row=2, col=1, secondary_y=True) - - return fig - -# Page header -st.title("🗃️ Archived Bots") - -# Load databases and bot runs on first run -if not st.session_state.databases_list: - with st.spinner("Loading databases..."): - load_databases() - load_all_databases_status() - load_bot_runs() - -# Bot Runs Overview Section -st.subheader("📊 Bot Runs Overview") - -# Get healthy databases for scatterplot -if st.session_state.databases_list: - # Load status if not already loaded (needed for filtering) - if not st.session_state.databases_status: - with st.spinner("Loading database status..."): - load_all_databases_status() - - healthy_databases = get_healthy_databases() -else: - healthy_databases = [] - -# Create and display scatterplot -if st.session_state.bot_runs: - scatterplot_result = create_bot_runs_scatterplot(st.session_state.bot_runs, healthy_databases) - if scatterplot_result: - fig, runs_df = scatterplot_result - st.plotly_chart(fig, use_container_width=True) - - # Summary statistics - col1, col2, col3, col4 = st.columns(4) - with col1: - total_runs = len(runs_df) - st.metric("Total Runs", total_runs) - - with col2: - profitable_runs = len(runs_df[runs_df["global_pnl"] > 0]) - profit_rate = (profitable_runs / total_runs * 100) if total_runs > 0 else 0 - st.metric("Profitable Runs", f"{profitable_runs} ({profit_rate:.1f}%)") - - with col3: - total_pnl = runs_df["global_pnl"].sum() - st.metric("Total PnL", f"${total_pnl:,.2f}") - - with col4: - total_volume = runs_df["volume_traded"].sum() - 
st.metric("Total Volume", f"${total_volume:,.0f}") - else: - st.warning("No bot runs data available for visualization.") -else: - st.info("Loading bot runs data...") - -st.divider() - -# Database Analysis Section -st.subheader("🔍 Database Analysis") - -# Database selection and controls in one row -col1, col2, col3 = st.columns([2, 1, 1]) - -with col1: - if healthy_databases: - selected_db = st.selectbox( - "Select Database", - options=healthy_databases, - key="db_selector", - help="Choose a database to analyze in detail" - ) - - if selected_db and selected_db != st.session_state.selected_database: - st.session_state.selected_database = selected_db - # Reset data when database changes - st.session_state.db_summary = {} - st.session_state.db_performance = {} - st.session_state.trades_data = [] - st.session_state.orders_data = [] - st.session_state.positions_data = [] - st.session_state.executors_data = [] - st.session_state.controllers_data = [] - st.session_state.page_offset = 0 - st.session_state.trade_analysis = {} - st.session_state.historical_candles = [] - st.rerun() - else: - st.warning("No healthy databases found.") - -with col2: - if st.button("🔄 Refresh", use_container_width=True): - with st.spinner("Refreshing..."): - load_databases() - load_all_databases_status() - load_bot_runs() - st.rerun() - -with col3: - load_dashboard_btn = st.button("📊 Load Dashboard", use_container_width=True, type="primary", disabled=not st.session_state.selected_database) - -# Main content - only show if database is selected -if st.session_state.selected_database: - db_path = st.session_state.selected_database - - # Load database summary if not already loaded - if not st.session_state.db_summary: - with st.spinner("Loading database summary..."): - load_database_summary(db_path) - - # Find matching bot run - matching_bot_run = find_matching_bot_run(db_path, st.session_state.bot_runs) - - # Compact summary in one row - if st.session_state.db_summary and matching_bot_run: - summary = st.session_state.db_summary - bot_name = matching_bot_run.get('bot_name', 'N/A') - strategy = matching_bot_run.get('strategy_name', 'N/A') - deployed_date = pd.to_datetime(matching_bot_run.get('deployed_at', '')).strftime('%m/%d %H:%M') if matching_bot_run.get('deployed_at') else 'N/A' - - st.info(f"🤖 **{bot_name}** | {strategy} | Deployed: {deployed_date} | {summary.get('total_trades', 0)} trades on {summary.get('exchanges', ['N/A'])[0]} {summary.get('trading_pairs', ['N/A'])[0]}") - - # Handle Load Dashboard button click - if load_dashboard_btn: - with st.spinner("Loading comprehensive dashboard..."): - try: - # Load all necessary data - load_database_performance(db_path) - load_trades_data(db_path, 10000, 0) # Load more trades for analysis - st.session_state.trade_analysis = get_trade_analysis(db_path) - - st.success("✅ Dashboard loaded!") - st.rerun() - - except Exception as e: - st.error(f"❌ Failed to load dashboard: {str(e)}") - - st.divider() - - # Main Dashboard - if st.session_state.db_performance and st.session_state.trades_data and st.session_state.trade_analysis: - performance = st.session_state.db_performance - summary = performance.get("summary", {}) - analysis = st.session_state.trade_analysis - - # Performance metrics - col1, col2, col3, col4 = st.columns(4) - - with col1: - net_pnl = summary.get('final_net_pnl_quote', 0) - st.metric( - "Net PnL (Quote)", - value=f"${net_pnl:,.6f}", - delta=f"{net_pnl:+.6f}" if net_pnl != 0 else None - ) - - with col2: - fees = summary.get('total_fees_quote', 0) - st.metric( - 
"Total Fees (Quote)", - value=f"${fees:,.4f}" - ) - - with col3: - realized_pnl = summary.get('final_realized_pnl_quote', 0) - st.metric( - "Realized PnL", - value=f"${realized_pnl:,.6f}", - delta=f"{realized_pnl:+.6f}" if realized_pnl != 0 else None - ) - - with col4: - volume = summary.get('total_volume_quote', 0) - st.metric( - "Total Volume", - value=f"${volume:,.2f}" - ) - - st.divider() - - if analysis.get("exchanges") and analysis.get("trading_pairs"): - # Simple controls - col1, col2 = st.columns([3, 1]) - with col1: - selected_exchange = analysis["exchanges"][0] if len(analysis["exchanges"]) == 1 else st.selectbox( - "Exchange", options=analysis["exchanges"] - ) - - selected_pair = analysis["trading_pairs"][0] if len(analysis["trading_pairs"]) == 1 else st.selectbox( - "Trading Pair", options=analysis["trading_pairs"] - ) - - with col2: - candle_interval = st.selectbox( - "Candle Interval", - options=["1m", "5m", "15m", "1h"], - index=1, # Default to 5m - ) - - # Auto-load candles when interval changes - candle_key = f"{selected_exchange}_{selected_pair}_{candle_interval}" - if st.session_state.get("candle_key") != candle_key and analysis.get("start_time") and analysis.get("end_time"): - with st.spinner("Loading historical candles..."): - candles = get_historical_candles( - selected_exchange, - selected_pair, - analysis["start_time"], - analysis["end_time"], - candle_interval - ) - st.session_state.historical_candles = candles - st.session_state.candle_key = candle_key - - # Display comprehensive dashboard - if st.session_state.historical_candles: - trades_data = analysis.get("trades_df", pd.DataFrame()) - trades_list = trades_data.to_dict("records") if not trades_data.empty else [] - - dashboard = create_comprehensive_dashboard( - st.session_state.historical_candles, - trades_list, - performance, - selected_pair - ) - - if dashboard: - st.plotly_chart(dashboard, use_container_width=True) - else: - st.warning("Unable to create dashboard. 
Check data format.") - else: - st.info("Loading candles data...") - else: - st.warning("⚠️ No trading data found for analysis.") - else: - st.info("Click 'Load Dashboard' to view comprehensive trading analysis.") - -else: - st.info("Please select a database to begin analysis.") - -# Auto-refresh fragment for database list -@st.fragment(run_every=30) -def auto_refresh_databases(): - """Auto-refresh database list every 30 seconds""" - try: - if st.session_state.get("auto_refresh_enabled", False): - load_databases() - except Exception: - # Gracefully handle fragment lifecycle issues - pass - -# Auto-refresh toggle -st.sidebar.markdown("### ⚙️ Settings") -auto_refresh = st.sidebar.checkbox( - "Auto-refresh database list", - value=st.session_state.get("auto_refresh_enabled", False), - help="Automatically refresh the database list every 30 seconds" -) -st.session_state.auto_refresh_enabled = auto_refresh - -if auto_refresh: - auto_refresh_databases() - -# Export functionality -if st.session_state.selected_database: - st.sidebar.markdown("### 📤 Export Data") - - export_format = st.sidebar.selectbox( - "Export Format", - options=["CSV", "JSON", "Excel"], - help="Choose the format for data export" - ) - - if st.sidebar.button("📥 Export Current Data", use_container_width=True): - try: - # Implementation would depend on the specific data to export - st.sidebar.success("Export functionality would be implemented here") - except Exception as e: - st.sidebar.error(f"Export failed: {str(e)}") - -# Help section -st.sidebar.markdown("### ❓ Help") -st.sidebar.info( - "💡 **Tips:**\n" - "- Select a database to start analyzing\n" - "- Use tabs to navigate different data views\n" - "- Enable auto-refresh for real-time updates\n" - "- Use pagination for large datasets\n" - "- Export data for external analysis" -) \ No newline at end of file diff --git a/pages/orchestration/credentials/README.md b/pages/orchestration/credentials/README.md deleted file mode 100644 index 18f4d94fd9..0000000000 --- a/pages/orchestration/credentials/README.md +++ /dev/null @@ -1,19 +0,0 @@ -### Description - -This page helps you deploy and manage Hummingbot instances: - -- Starting and stopping Hummingbot Broker -- Creating, starting and stopping bot instances -- Managing strategy and script files that instances run -- Fetching status of running instances - -### Maintainers - -This page is maintained by Hummingbot Foundation as a template other pages: - -* [cardosfede](https://github.com/cardosfede) -* [fengtality](https://github.com/fengtality) - -### Wiki - -See the [wiki](https://github.com/hummingbot/dashboard/wiki/%F0%9F%90%99-Bot-Orchestration) for more information. 
\ No newline at end of file diff --git a/pages/orchestration/credentials/__init__.py b/pages/orchestration/credentials/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/orchestration/credentials/app.py b/pages/orchestration/credentials/app.py deleted file mode 100644 index 25c9382cc0..0000000000 --- a/pages/orchestration/credentials/app.py +++ /dev/null @@ -1,194 +0,0 @@ -import nest_asyncio -import streamlit as st - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -nest_asyncio.apply() - -initialize_st_page(title="Credentials", icon="🔑") - -# Page content -client = get_backend_api_client() -NUM_COLUMNS = 4 - - -def get_all_connectors_config_map(): - # Get fresh client instance inside cached function - connectors = client.connectors.list_connectors() - config_map_dict = {} - for connector_name in connectors: - try: - config_map = client.connectors.get_config_map(connector_name=connector_name) - config_map_dict[connector_name] = config_map - except Exception as e: - st.warning(f"Could not get config map for {connector_name}: {e}") - config_map_dict[connector_name] = [] - return config_map_dict - - -all_connector_config_map = get_all_connectors_config_map() - - -@st.fragment -def accounts_section(): - # Get fresh accounts list - accounts = client.accounts.list_accounts() - - if accounts: - n_accounts = len(accounts) - # Ensure master_account is first, but handle if it doesn't exist - if "master_account" in accounts: - accounts.remove("master_account") - accounts.insert(0, "master_account") - for i in range(0, n_accounts, NUM_COLUMNS): - cols = st.columns(NUM_COLUMNS) - for j, account in enumerate(accounts[i:i + NUM_COLUMNS]): - with cols[j]: - st.subheader(f"🏦 {account}") - credentials = client.accounts.list_account_credentials(account) - st.json(credentials) - else: - st.write("No accounts available.") - - st.markdown("---") - - # Account management actions - c1, c2, c3 = st.columns([1, 1, 1]) - with c1: - # Section to create a new account - st.header("Create a New Account") - new_account_name = st.text_input("New Account Name") - if st.button("Create Account"): - new_account_name = new_account_name.replace(" ", "_") - if new_account_name: - if new_account_name in accounts: - st.warning(f"Account {new_account_name} already exists.") - st.stop() - elif new_account_name == "" or all(char == "_" for char in new_account_name): - st.warning("Please enter a valid account name.") - st.stop() - response = client.accounts.add_account(new_account_name) - st.write(response) - try: - st.rerun(scope="fragment") - except Exception: - st.rerun() - else: - st.write("Please enter an account name.") - - with c2: - # Section to delete an existing account - st.header("Delete an Account") - delete_account_name = st.selectbox("Select Account to Delete", - options=accounts if accounts else ["No accounts available"], ) - if st.button("Delete Account"): - if delete_account_name and delete_account_name != "No accounts available": - response = client.accounts.delete_account(delete_account_name) - st.warning(response) - try: - st.rerun(scope="fragment") - except Exception: - st.rerun() - else: - st.write("Please select a valid account.") - - with c3: - # Section to delete a credential from an existing account - st.header("Delete Credential") - delete_account_cred_name = st.selectbox("Select the credentials account", - options=accounts if accounts else ["No accounts available"], ) - credentials_data = 
client.accounts.list_account_credentials(delete_account_cred_name) - # Handle different possible return formats - if isinstance(credentials_data, list): - # If it's a list of strings in format "connector.key" - if credentials_data and isinstance(credentials_data[0], str): - creds_for_account = [credential.split(".")[0] for credential in credentials_data] - # If it's a list of dicts, extract connector names - elif credentials_data and isinstance(credentials_data[0], dict): - creds_for_account = list( - set([cred.get('connector', cred.get('connector_name', '')) for cred in credentials_data if - cred.get('connector') or cred.get('connector_name')])) - else: - creds_for_account = [] - elif isinstance(credentials_data, dict): - # If it's a dict with connectors as keys - creds_for_account = list(credentials_data.keys()) - else: - creds_for_account = [] - delete_cred_name = st.selectbox("Select a Credential to Delete", - options=creds_for_account if creds_for_account else [ - "No credentials available"]) - if st.button("Delete Credential"): - if (delete_account_cred_name and delete_account_cred_name != "No accounts available") and \ - (delete_cred_name and delete_cred_name != "No credentials available"): - response = client.accounts.delete_credential(delete_account_cred_name, delete_cred_name) - st.warning(response) - try: - st.rerun(scope="fragment") - except Exception: - st.rerun() - else: - st.write("Please select a valid account.") - - return accounts - - -accounts = accounts_section() - -st.markdown("---") - - -# Section to add credentials -@st.fragment -def add_credentials_section(): - st.header("Add Credentials") - c1, c2 = st.columns([1, 1]) - with c1: - account_name = st.selectbox("Select Account", options=accounts if accounts else ["No accounts available"]) - with c2: - all_connectors = list(all_connector_config_map.keys()) - binance_perpetual_index = all_connectors.index( - "binance_perpetual") if "binance_perpetual" in all_connectors else None - connector_name = st.selectbox("Select Connector", options=all_connectors, index=binance_perpetual_index) - config_map = all_connector_config_map.get(connector_name, []) - - st.write(f"Configuration Map for {connector_name}:") - config_inputs = {} - - # Custom logic for XRPL connector - if connector_name == "xrpl": - # Define custom XRPL fields with default values - xrpl_fields = { - "xrpl_secret_key": "", - "wss_node_urls": "wss://xrplcluster.com,wss://s1.ripple.com,wss://s2.ripple.com", - } - - # Display XRPL-specific fields - for field, default_value in xrpl_fields.items(): - if field == "xrpl_secret_key": - config_inputs[field] = st.text_input(field, type="password", key=f"{connector_name}_{field}") - else: - config_inputs[field] = st.text_input(field, value=default_value, key=f"{connector_name}_{field}") - - if st.button("Submit Credentials"): - response = client.accounts.add_credential(account_name, connector_name, config_inputs) - if response: - st.success(f"✅ Successfully added {connector_name} connector to {account_name}!") - try: - st.rerun(scope="fragment") - except Exception: - st.rerun() - else: - # Default behavior for other connectors - cols = st.columns(NUM_COLUMNS) - for i, config in enumerate(config_map): - with cols[i % (NUM_COLUMNS - 1)]: - config_inputs[config] = st.text_input(config, type="password", key=f"{connector_name}_{config}") - - with cols[-1]: - if st.button("Submit Credentials"): - response = client.accounts.add_credential(account_name, connector_name, config_inputs) - st.write(response) - - 
-add_credentials_section() diff --git a/pages/orchestration/file_manager/README.md b/pages/orchestration/file_manager/README.md deleted file mode 100644 index d82ed9e209..0000000000 --- a/pages/orchestration/file_manager/README.md +++ /dev/null @@ -1,13 +0,0 @@ -### Description - -This page helps you manage and edit script files to run with Hummingbot instances: - -- Selecting files -- Editing and saving files - -### Maintainers - -This page is maintained by Hummingbot Foundation: - -* [cardosfede](https://github.com/cardosfede) -* [fengtality](https://github.com/fengtality) diff --git a/pages/orchestration/file_manager/__init__.py b/pages/orchestration/file_manager/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/orchestration/file_manager/app.py b/pages/orchestration/file_manager/app.py deleted file mode 100644 index f771d544e5..0000000000 --- a/pages/orchestration/file_manager/app.py +++ /dev/null @@ -1,40 +0,0 @@ -from types import SimpleNamespace - -import streamlit as st -from streamlit_elements import elements, mui - -from frontend.components.bots_file_explorer import BotsFileExplorer -from frontend.components.dashboard import Dashboard -from frontend.components.editor import Editor -from frontend.st_utils import initialize_st_page - -initialize_st_page(title="Strategy Configs", icon="🗂️") - - -if "fe_board" not in st.session_state: - board = Dashboard() - fe_board = SimpleNamespace( - dashboard=board, - file_explorer=BotsFileExplorer(board, 0, 0, 3, 7), - editor=Editor(board, 4, 0, 9, 7), - ) - st.session_state.fe_board = fe_board - -else: - fe_board = st.session_state.fe_board - -# Add new tabs -for tab_name, content in fe_board.file_explorer.tabs.items(): - if tab_name not in fe_board.editor.tabs: - fe_board.editor.add_tab(tab_name, content["content"], content["language"]) - -# Remove deleted tabs -for tab_name in list(fe_board.editor.tabs.keys()): - if tab_name not in fe_board.file_explorer.tabs: - fe_board.editor.remove_tab(tab_name) - -with elements("file_manager"): - with mui.Paper(elevation=3, style={"padding": "2rem"}, spacing=[2, 2], container=True): - with fe_board.dashboard(): - fe_board.file_explorer() - fe_board.editor() diff --git a/pages/orchestration/instances/README.md b/pages/orchestration/instances/README.md deleted file mode 100644 index 6d5e25dfeb..0000000000 --- a/pages/orchestration/instances/README.md +++ /dev/null @@ -1,137 +0,0 @@ -# Bot Instances Management - -The Bot Instances page provides centralized control for deploying, managing, and monitoring Hummingbot trading bot instances across your infrastructure. 
- -## Features - -### 🤖 Instance Management -- **Create Bot Instances**: Deploy new Hummingbot instances with custom configurations -- **Start/Stop Control**: Manage instance lifecycle with one-click controls -- **Status Monitoring**: Real-time health checks and status updates -- **Multi-Instance Support**: Manage multiple bots running different strategies simultaneously - -### 📁 Configuration Management -- **Strategy File Upload**: Deploy strategy Python files to instances -- **Script Management**: Upload and manage custom scripts -- **Configuration Templates**: Save and reuse bot configurations -- **Hot Reload**: Update strategies without restarting instances - -### 🔧 Broker Management -- **Hummingbot Broker**: Start and stop the broker service -- **Connection Status**: Monitor broker health and connectivity -- **Resource Usage**: Track CPU and memory consumption -- **Log Access**: View broker logs for debugging - -### 📊 Instance Monitoring -- **Performance Metrics**: Real-time P&L, trade count, and volume -- **Active Orders**: View open orders across all instances -- **Error Tracking**: Centralized error logs and alerts -- **Resource Monitoring**: CPU, memory, and network usage per instance - -## Usage Instructions - -### 1. Start Hummingbot Broker -- Click "Start Broker" to initialize the Hummingbot broker service -- Wait for the broker to reach "Running" status -- Verify connection by checking the status indicator - -### 2. Create Bot Instance -- Click "Create New Instance" button -- Configure instance settings: - - **Instance Name**: Unique identifier for the bot - - **Image**: Select Hummingbot version/image - - **Strategy**: Choose strategy file to run - - **Credentials**: Select API keys to use -- Click "Create" to deploy the instance - -### 3. Manage Strategies -- **Upload Strategy**: Use the file uploader to add new strategy files -- **Select Active Strategy**: Choose which strategy the instance should run -- **Edit Strategy**: Modify strategy parameters through the editor -- **Version Control**: Track strategy changes and rollback if needed - -### 4. Control Instances -- **Start**: Launch a stopped instance -- **Stop**: Gracefully shutdown a running instance -- **Restart**: Stop and start an instance -- **Delete**: Remove an instance and its configuration - -### 5. Monitor Performance -- View real-time status in the instances table -- Click on an instance for detailed metrics -- Access logs for troubleshooting -- Export performance data for analysis - -## Technical Notes - -### Architecture -- **Docker-based**: Each instance runs in an isolated Docker container -- **RESTful API**: Communication via Backend API Client -- **WebSocket Updates**: Real-time status updates -- **Persistent Storage**: Configurations and logs stored on disk - -### Instance Lifecycle -1. **Created**: Instance configured but not running -2. **Starting**: Docker container launching -3. **Running**: Bot actively trading -4. **Stopping**: Graceful shutdown in progress -5. **Stopped**: Instance halted but configuration preserved -6. 
**Error**: Instance encountered fatal error - -### Resource Management -- **CPU Limits**: Configurable CPU allocation per instance -- **Memory Limits**: Set maximum memory usage -- **Network Isolation**: Instances communicate only through broker -- **Storage Quotas**: Limit log and data storage per instance - -## Component Structure - -``` -instances/ -├── app.py # Main instances management page -├── components/ -│ ├── instance_table.py # Instance list and status display -│ ├── instance_controls.py # Start/stop/delete controls -│ ├── broker_panel.py # Broker management interface -│ └── strategy_uploader.py # Strategy file management -└── utils/ - ├── docker_manager.py # Docker container operations - ├── instance_monitor.py # Status polling and updates - └── resource_tracker.py # Resource usage monitoring -``` - -## Best Practices - -### Instance Naming -- Use descriptive names (e.g., "btc_market_maker_01") -- Include strategy type in the name -- Add exchange identifier if running multiple exchanges -- Use consistent naming conventions - -### Strategy Management -- Test strategies in paper trading first -- Keep backups of working configurations -- Document strategy parameters -- Use version control for strategy files - -### Performance Optimization -- Limit instances per broker (recommended: 5-10) -- Monitor resource usage regularly -- Restart instances weekly for stability -- Clear old logs to save disk space - -## Error Handling - -The instances page handles various error scenarios: -- **Broker Connection Lost**: Automatic reconnection attempts -- **Instance Crashes**: Auto-restart with configurable retry limits -- **Resource Exhaustion**: Graceful degradation and alerts -- **Strategy Errors**: Detailed error logs and stack traces -- **Network Issues**: Offline mode with cached status - -## Security Considerations - -- **API Key Isolation**: Each instance has access only to assigned credentials -- **Network Segmentation**: Instances cannot communicate directly -- **Resource Limits**: Prevent runaway processes from affecting system -- **Audit Logging**: All actions are logged for compliance \ No newline at end of file diff --git a/pages/orchestration/instances/__init__.py b/pages/orchestration/instances/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/orchestration/instances/app.py b/pages/orchestration/instances/app.py deleted file mode 100644 index d5d3eedf65..0000000000 --- a/pages/orchestration/instances/app.py +++ /dev/null @@ -1,384 +0,0 @@ -import time - -import pandas as pd -import streamlit as st - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -initialize_st_page(icon="🦅", show_readme=False) - -# Initialize backend client -backend_api_client = get_backend_api_client() - -# Initialize session state for auto-refresh -if "auto_refresh_enabled" not in st.session_state: - st.session_state.auto_refresh_enabled = True - -# Set refresh interval -REFRESH_INTERVAL = 10 # seconds - - -def stop_bot(bot_name): - """Stop a running bot.""" - try: - backend_api_client.bot_orchestration.stop_and_archive_bot(bot_name) - st.success(f"Bot {bot_name} stopped and archived successfully") - time.sleep(2) # Give time for the backend to process - except Exception as e: - st.error(f"Failed to stop bot {bot_name}: {e}") - - -def archive_bot(bot_name): - """Archive a stopped bot.""" - try: - backend_api_client.docker.stop_container(bot_name) - backend_api_client.docker.remove_container(bot_name) - st.success(f"Bot {bot_name} archived 
successfully") - time.sleep(1) - except Exception as e: - st.error(f"Failed to archive bot {bot_name}: {e}") - - -def stop_controllers(bot_name, controllers): - """Stop selected controllers.""" - success_count = 0 - for controller in controllers: - try: - backend_api_client.controllers.update_bot_controller_config( - bot_name, - controller, - {"manual_kill_switch": True} - ) - success_count += 1 - except Exception as e: - st.error(f"Failed to stop controller {controller}: {e}") - - if success_count > 0: - st.success(f"Successfully stopped {success_count} controller(s)") - # Temporarily disable auto-refresh to prevent immediate state reset - st.session_state.auto_refresh_enabled = False - - return success_count > 0 - - -def start_controllers(bot_name, controllers): - """Start selected controllers.""" - success_count = 0 - for controller in controllers: - try: - backend_api_client.controllers.update_bot_controller_config( - bot_name, - controller, - {"manual_kill_switch": False} - ) - success_count += 1 - except Exception as e: - st.error(f"Failed to start controller {controller}: {e}") - - if success_count > 0: - st.success(f"Successfully started {success_count} controller(s)") - # Temporarily disable auto-refresh to prevent immediate state reset - st.session_state.auto_refresh_enabled = False - - return success_count > 0 - - -def render_bot_card(bot_name): - """Render a bot performance card using native Streamlit components.""" - try: - # Get bot status first - bot_status = backend_api_client.bot_orchestration.get_bot_status(bot_name) - - # Only try to get controller configs if bot exists and is running - controller_configs = [] - if bot_status.get("status") == "success": - bot_data = bot_status.get("data", {}) - is_running = bot_data.get("status") == "running" - if is_running: - try: - controller_configs = backend_api_client.controllers.get_bot_controller_configs(bot_name) - controller_configs = controller_configs if controller_configs else [] - except Exception as e: - # If controller configs fail, continue without them - st.warning(f"Could not fetch controller configs for {bot_name}: {e}") - controller_configs = [] - - with st.container(border=True): - - if bot_status.get("status") == "error": - # Error state - col1, col2 = st.columns([3, 1]) - with col1: - st.error(f"🤖 **{bot_name}** - Not Available") - st.error(f"An error occurred while fetching bot status of {bot_name}. 
Please check the bot client.") - else: - bot_data = bot_status.get("data", {}) - is_running = bot_data.get("status") == "running" - performance = bot_data.get("performance", {}) - error_logs = bot_data.get("error_logs", []) - general_logs = bot_data.get("general_logs", []) - - # Bot header - col1, col2, col3 = st.columns([2, 1, 1]) - with col1: - if is_running: - st.success(f"🤖 **{bot_name}** - Running") - else: - st.warning(f"🤖 **{bot_name}** - Stopped") - - with col3: - if is_running: - if st.button("⏹️ Stop", key=f"stop_{bot_name}", use_container_width=True): - stop_bot(bot_name) - else: - if st.button("📦 Archive", key=f"archive_{bot_name}", use_container_width=True): - archive_bot(bot_name) - - if is_running: - # Calculate totals - active_controllers = [] - stopped_controllers = [] - error_controllers = [] - total_global_pnl_quote = 0 - total_volume_traded = 0 - total_unrealized_pnl_quote = 0 - - for controller, inner_dict in performance.items(): - controller_status = inner_dict.get("status") - if controller_status == "error": - error_controllers.append({ - "Controller": controller, - "Error": inner_dict.get("error", "Unknown error") - }) - continue - - controller_performance = inner_dict.get("performance", {}) - controller_config = next( - (config for config in controller_configs if config.get("id") == controller), {} - ) - - controller_name = controller_config.get("controller_name", controller) - - connector_name = controller_config.get("connector_name", "N/A") - trading_pair = controller_config.get("trading_pair", "N/A") - kill_switch_status = controller_config.get("manual_kill_switch", False) - - realized_pnl_quote = controller_performance.get("realized_pnl_quote", 0) - unrealized_pnl_quote = controller_performance.get("unrealized_pnl_quote", 0) - global_pnl_quote = controller_performance.get("global_pnl_quote", 0) - volume_traded = controller_performance.get("volume_traded", 0) - - close_types = controller_performance.get("close_type_counts", {}) - tp = close_types.get("CloseType.TAKE_PROFIT", 0) - sl = close_types.get("CloseType.STOP_LOSS", 0) - time_limit = close_types.get("CloseType.TIME_LIMIT", 0) - ts = close_types.get("CloseType.TRAILING_STOP", 0) - refreshed = close_types.get("CloseType.EARLY_STOP", 0) - failed = close_types.get("CloseType.FAILED", 0) - close_types_str = f"TP: {tp} | SL: {sl} | TS: {ts} | TL: {time_limit} | ES: {refreshed} | F: {failed}" - - controller_info = { - "Select": False, - "ID": controller_config.get("id"), - "Controller": controller_name, - "Connector": connector_name, - "Trading Pair": trading_pair, - "Realized PNL ($)": round(realized_pnl_quote, 2), - "Unrealized PNL ($)": round(unrealized_pnl_quote, 2), - "NET PNL ($)": round(global_pnl_quote, 2), - "Volume ($)": round(volume_traded, 2), - "Close Types": close_types_str, - "_controller_id": controller - } - - if kill_switch_status: - stopped_controllers.append(controller_info) - else: - active_controllers.append(controller_info) - - total_global_pnl_quote += global_pnl_quote - total_volume_traded += volume_traded - total_unrealized_pnl_quote += unrealized_pnl_quote - - total_global_pnl_pct = total_global_pnl_quote / total_volume_traded if total_volume_traded > 0 else 0 - - # Display metrics - col1, col2, col3, col4 = st.columns(4) - - with col1: - st.metric("🏦 NET PNL", f"${total_global_pnl_quote:.2f}") - with col2: - st.metric("💹 Unrealized PNL", f"${total_unrealized_pnl_quote:.2f}") - with col3: - st.metric("📊 NET PNL (%)", f"{total_global_pnl_pct:.2%}") - with col4: - st.metric("💸 Volume 
Traded", f"${total_volume_traded:.2f}") - - # Active Controllers - if active_controllers: - st.success("🚀 **Active Controllers:** Controllers currently running and trading") - active_df = pd.DataFrame(active_controllers) - - edited_active_df = st.data_editor( - active_df, - column_config={ - "Select": st.column_config.CheckboxColumn( - "Select", - help="Select controllers to stop", - default=False, - ), - "_controller_id": None, # Hide this column - }, - disabled=[col for col in active_df.columns if col != "Select"], - hide_index=True, - use_container_width=True, - key=f"active_table_{bot_name}" - ) - - selected_active = [ - row["_controller_id"] - for _, row in edited_active_df.iterrows() - if row["Select"] - ] - - if selected_active: - if st.button(f"⏹️ Stop Selected ({len(selected_active)})", - key=f"stop_active_{bot_name}", - type="secondary"): - with st.spinner(f"Stopping {len(selected_active)} controller(s)..."): - stop_controllers(bot_name, selected_active) - time.sleep(1) - - # Stopped Controllers - if stopped_controllers: - st.warning("💤 **Stopped Controllers:** Controllers that are paused or stopped") - stopped_df = pd.DataFrame(stopped_controllers) - - edited_stopped_df = st.data_editor( - stopped_df, - column_config={ - "Select": st.column_config.CheckboxColumn( - "Select", - help="Select controllers to start", - default=False, - ), - "_controller_id": None, # Hide this column - }, - disabled=[col for col in stopped_df.columns if col != "Select"], - hide_index=True, - use_container_width=True, - key=f"stopped_table_{bot_name}" - ) - - selected_stopped = [ - row["_controller_id"] - for _, row in edited_stopped_df.iterrows() - if row["Select"] - ] - - if selected_stopped: - if st.button(f"▶️ Start Selected ({len(selected_stopped)})", - key=f"start_stopped_{bot_name}", - type="primary"): - with st.spinner(f"Starting {len(selected_stopped)} controller(s)..."): - start_controllers(bot_name, selected_stopped) - time.sleep(1) - - # Error Controllers - if error_controllers: - st.error("💀 **Controllers with Errors:** Controllers that encountered errors") - error_df = pd.DataFrame(error_controllers) - st.dataframe(error_df, use_container_width=True, hide_index=True) - - # Logs sections (available for both running and stopped bots) - with st.expander("📋 Error Logs"): - if error_logs: - for log in error_logs[:50]: - timestamp = log.get("timestamp", "") - message = log.get("msg", "") - logger_name = log.get("logger_name", "") - st.text(f"{timestamp} - {logger_name}: {message}") - else: - st.info("No error logs available.") - - with st.expander("📝 General Logs"): - if general_logs: - for log in general_logs[:50]: - timestamp = pd.to_datetime(int(log.get("timestamp", 0)), unit="s") - message = log.get("msg", "") - logger_name = log.get("logger_name", "") - st.text(f"{timestamp} - {logger_name}: {message}") - else: - st.info("No general logs available.") - - except Exception as e: - with st.container(border=True): - st.error(f"🤖 **{bot_name}** - Error") - st.error(f"An error occurred while fetching bot status: {str(e)}") - - -# Page Header -st.title("🦅 Hummingbot Instances") - -# Auto-refresh controls -col1, col2, col3 = st.columns([3, 1, 1]) - -# Create placeholder for status message -status_placeholder = col1.empty() - -with col2: - if st.button("▶️ Start Auto-refresh" if not st.session_state.auto_refresh_enabled else "⏸️ Stop Auto-refresh", - use_container_width=True): - st.session_state.auto_refresh_enabled = not st.session_state.auto_refresh_enabled - -with col3: - if st.button("🔄 Refresh 
Now", use_container_width=True): - # Re-enable auto-refresh if it was temporarily disabled - if not st.session_state.auto_refresh_enabled: - st.session_state.auto_refresh_enabled = True - pass - - -@st.fragment(run_every=REFRESH_INTERVAL if st.session_state.auto_refresh_enabled else None) -def show_bot_instances(): - """Fragment to display bot instances with auto-refresh.""" - try: - active_bots_response = backend_api_client.bot_orchestration.get_active_bots_status() - - if active_bots_response.get("status") == "success": - active_bots = active_bots_response.get("data", {}) - - # Filter out any bots that might be in transitional state - truly_active_bots = {} - for bot_name, bot_info in active_bots.items(): - try: - bot_status = backend_api_client.bot_orchestration.get_bot_status(bot_name) - if bot_status.get("status") == "success": - bot_data = bot_status.get("data", {}) - if bot_data.get("status") in ["running", "stopped"]: - truly_active_bots[bot_name] = bot_info - except Exception: - continue - - if truly_active_bots: - # Show refresh status - if st.session_state.auto_refresh_enabled: - status_placeholder.info(f"🔄 Auto-refreshing every {REFRESH_INTERVAL} seconds") - else: - status_placeholder.warning("⏸️ Auto-refresh paused. Click 'Refresh Now' to resume.") - - # Render each bot - for bot_name in truly_active_bots.keys(): - render_bot_card(bot_name) - else: - status_placeholder.info("No active bot instances found. Deploy a bot to see it here.") - else: - st.error("Failed to fetch active bots status.") - - except Exception as e: - st.error(f"Failed to connect to backend: {e}") - st.info("Please make sure the backend is running and accessible.") - - -# Call the fragment -show_bot_instances() diff --git a/pages/orchestration/launch_bot_v2/README.md b/pages/orchestration/launch_bot_v2/README.md deleted file mode 100644 index 18f4d94fd9..0000000000 --- a/pages/orchestration/launch_bot_v2/README.md +++ /dev/null @@ -1,19 +0,0 @@ -### Description - -This page helps you deploy and manage Hummingbot instances: - -- Starting and stopping Hummingbot Broker -- Creating, starting and stopping bot instances -- Managing strategy and script files that instances run -- Fetching status of running instances - -### Maintainers - -This page is maintained by Hummingbot Foundation as a template other pages: - -* [cardosfede](https://github.com/cardosfede) -* [fengtality](https://github.com/fengtality) - -### Wiki - -See the [wiki](https://github.com/hummingbot/dashboard/wiki/%F0%9F%90%99-Bot-Orchestration) for more information. 
\ No newline at end of file diff --git a/pages/orchestration/launch_bot_v2/__init__.py b/pages/orchestration/launch_bot_v2/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/orchestration/launch_bot_v2/app.py b/pages/orchestration/launch_bot_v2/app.py deleted file mode 100644 index 286fce3778..0000000000 --- a/pages/orchestration/launch_bot_v2/app.py +++ /dev/null @@ -1,301 +0,0 @@ -import re -import time - -import pandas as pd -import streamlit as st - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -initialize_st_page(icon="🙌", show_readme=False) - -# Initialize backend client -backend_api_client = get_backend_api_client() - - -def get_controller_configs(): - """Get all controller configurations using the new API.""" - try: - return backend_api_client.controllers.list_controller_configs() - except Exception as e: - st.error(f"Failed to fetch controller configs: {e}") - return [] - - -def filter_hummingbot_images(images): - """Filter images to only show Hummingbot-related ones.""" - hummingbot_images = [] - pattern = r'.+/hummingbot:' - - for image in images: - try: - if re.match(pattern, image): - hummingbot_images.append(image) - except Exception: - continue - - return hummingbot_images - - -def launch_new_bot(bot_name, image_name, credentials, selected_controllers, max_global_drawdown, - max_controller_drawdown): - """Launch a new bot with the selected configuration.""" - if not bot_name: - st.warning("You need to define the bot name.") - return False - if not image_name: - st.warning("You need to select the hummingbot image.") - return False - if not selected_controllers: - st.warning("You need to select the controllers configs. Please select at least one controller " - "config by clicking on the checkbox.") - return False - - start_time_str = time.strftime("%Y%m%d-%H%M") - full_bot_name = f"{bot_name}-{start_time_str}" - - try: - # Use the new deploy_v2_controllers method - deploy_config = { - "instance_name": full_bot_name, - "credentials_profile": credentials, - "controllers_config": selected_controllers, - "image": image_name, - } - - # Add optional drawdown parameters if set - if max_global_drawdown is not None and max_global_drawdown > 0: - deploy_config["max_global_drawdown_quote"] = max_global_drawdown - if max_controller_drawdown is not None and max_controller_drawdown > 0: - deploy_config["max_controller_drawdown_quote"] = max_controller_drawdown - - backend_api_client.bot_orchestration.deploy_v2_controllers(**deploy_config) - st.success(f"Successfully deployed bot: {full_bot_name}") - time.sleep(3) - return True - - except Exception as e: - st.error(f"Failed to deploy bot: {e}") - return False - - -def delete_selected_configs(selected_controllers): - """Delete selected controller configurations.""" - if selected_controllers: - try: - for config in selected_controllers: - # Remove .yml extension if present - config_name = config.replace(".yml", "") - response = backend_api_client.controllers.delete_controller_config(config_name) - st.success(f"Deleted {config_name}") - return True - - except Exception as e: - st.error(f"Failed to delete configs: {e}") - return False - else: - st.warning("You need to select the controllers configs that you want to delete.") - return False - - -# Page Header -st.title("🚀 Deploy Trading Bot") -st.subheader("Configure and deploy your automated trading strategy") - -# Bot Configuration Section -with st.container(border=True): - st.info("🤖 **Bot Configuration:** Set up your bot instance 
with basic configuration") - - # Create three columns for the configuration inputs - col1, col2, col3 = st.columns(3) - - with col1: - bot_name = st.text_input( - "Instance Name", - placeholder="Enter a unique name for your bot instance", - key="bot_name_input" - ) - - with col2: - try: - available_credentials = backend_api_client.accounts.list_accounts() - credentials = st.selectbox( - "Credentials Profile", - options=available_credentials, - index=0, - key="credentials_select" - ) - except Exception as e: - st.error(f"Failed to fetch credentials: {e}") - credentials = st.text_input( - "Credentials Profile", - value="master_account", - key="credentials_input" - ) - - with col3: - try: - all_images = backend_api_client.docker.get_available_images("hummingbot") - available_images = filter_hummingbot_images(all_images) - - if not available_images: - # Fallback to default if no hummingbot images found - available_images = ["hummingbot/hummingbot:latest"] - - # Ensure default image is in the list - default_image = "hummingbot/hummingbot:latest" - if default_image not in available_images: - available_images.insert(0, default_image) - - image_name = st.selectbox( - "Hummingbot Image", - options=available_images, - index=0, - key="image_select" - ) - except Exception as e: - st.error(f"Failed to fetch available images: {e}") - image_name = st.text_input( - "Hummingbot Image", - value="hummingbot/hummingbot:latest", - key="image_input" - ) - -# Risk Management Section -with st.container(border=True): - st.warning("⚠️ **Risk Management:** Set maximum drawdown limits in USDT to protect your capital") - - col1, col2 = st.columns(2) - - with col1: - max_global_drawdown = st.number_input( - "Max Global Drawdown (USDT)", - min_value=0.0, - value=0.0, - step=100.0, - format="%.2f", - help="Maximum allowed drawdown across all controllers", - key="global_drawdown_input" - ) - - with col2: - max_controller_drawdown = st.number_input( - "Max Controller Drawdown (USDT)", - min_value=0.0, - value=0.0, - step=100.0, - format="%.2f", - help="Maximum allowed drawdown per controller", - key="controller_drawdown_input" - ) - -# Controllers Section -with st.container(border=True): - st.success("🎛️ **Controller Selection:** Select the trading controllers you want to deploy with this bot instance") - - # Get controller configs - all_controllers_config = get_controller_configs() - - # Prepare data for the table - data = [] - for config in all_controllers_config: - # Handle case where config might be a string instead of dict - if isinstance(config, str): - st.warning(f"Unexpected config format: {config}. 
Expected a dictionary.") - continue - - # Handle both old and new config format - config_name = config.get("id") - if not config_name: - # Skip configs without an ID - st.warning(f"Config missing 'id' field: {config}") - continue - - config_data = config.get("config", config) # New format has config nested - - connector_name = config_data.get("connector_name", "Unknown") - trading_pair = config_data.get("trading_pair", "Unknown") - total_amount_quote = float(config_data.get("total_amount_quote", 0)) - - # Extract controller info - controller_name = config_data.get("controller_name", config_name) - controller_type = config_data.get("controller_type", "generic") - - # Fix config base and version splitting - config_parts = config_name.split("_") - if len(config_parts) > 1: - version = config_parts[-1] - config_base = "_".join(config_parts[:-1]) - else: - config_base = config_name - version = "NaN" - - data.append({ - "Select": False, # Checkbox column - "Config Base": config_base, - "Version": version, - "Controller Name": controller_name, - "Controller Type": controller_type, - "Connector": connector_name, - "Trading Pair": trading_pair, - "Amount (USDT)": f"${total_amount_quote:,.2f}", - "_config_name": config_name # Hidden column for reference - }) - - # Display info and action buttons - if data: - # Create DataFrame - df = pd.DataFrame(data) - - # Use data_editor with checkbox column for selection - edited_df = st.data_editor( - df, - column_config={ - "Select": st.column_config.CheckboxColumn( - "Select", - help="Select controllers to deploy or delete", - default=False, - ), - "_config_name": None, # Hide this column - }, - disabled=[col for col in df.columns if col != "Select"], # Only allow editing the Select column - hide_index=True, - use_container_width=True, - key="controller_table" - ) - - # Get selected controllers from the edited dataframe - selected_controllers = [ - row["_config_name"] - for _, row in edited_df.iterrows() - if row["Select"] - ] - - # Display selected count - if selected_controllers: - st.success(f"✅ {len(selected_controllers)} controller(s) selected for deployment") - - # Display action buttons - st.divider() - col1, col2 = st.columns(2) - - with col1: - if st.button("🗑️ Delete Selected", type="secondary", use_container_width=True): - if selected_controllers: - if delete_selected_configs(selected_controllers): - st.rerun() - else: - st.warning("Please select at least one controller to delete") - - with col2: - deploy_button_style = "primary" if selected_controllers else "secondary" - if st.button("🚀 Deploy Bot", type=deploy_button_style, use_container_width=True): - if selected_controllers: - with st.spinner('🚀 Starting Bot... This process may take a few seconds'): - if launch_new_bot(bot_name, image_name, credentials, selected_controllers, - max_global_drawdown, max_controller_drawdown): - st.rerun() - else: - st.warning("Please select at least one controller to deploy") - - else: - st.warning("⚠️ No controller configurations available. Please create some configurations first.") diff --git a/pages/orchestration/portfolio/README.md b/pages/orchestration/portfolio/README.md deleted file mode 100644 index 89b5fcc3ab..0000000000 --- a/pages/orchestration/portfolio/README.md +++ /dev/null @@ -1,149 +0,0 @@ -# Portfolio Management - -The Portfolio Management page provides comprehensive oversight of your trading portfolio across multiple exchanges, accounts, and strategies. 
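As a rough illustration of how this page consumes the backend API, the sketch below flattens the portfolio state into one row per account/exchange/token and sums the USD value. It is a minimal example mirroring the `portfolio_state_to_df` helper in `pages/orchestration/portfolio/app.py`, and it assumes the same `get_backend_api_client()` helper and nested `account → exchange → token` response shape used there.

```
import pandas as pd

from frontend.st_utils import get_backend_api_client

client = get_backend_api_client()

# Current portfolio state, keyed by account, then exchange, then a list of
# token dicts: {"token", "price", "units", "value", "available_units"}
accounts = client.accounts.list_accounts()
state = client.portfolio.get_state(account_names=accounts)

# Flatten the nested structure into one row per (account, exchange, token)
rows = [
    {"account": account, "exchange": exchange, **info}
    for account, exchanges in state.items()
    for exchange, token_infos in exchanges.items()
    for info in token_infos
]
df = pd.DataFrame(rows)

# Total portfolio value is the sum of the per-token USD values
print(f"Total balance: ${df['value'].sum():,.2f}")
```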
- -## Features - -### 💰 Multi-Exchange Portfolio -- **Unified Balance View**: Aggregate holdings across all connected exchanges -- **Real-time Valuation**: Live portfolio value updates in USD and BTC -- **Asset Distribution**: Visual breakdown of holdings by asset and exchange -- **Historical Performance**: Track portfolio value over time - -### 📊 Position Tracking -- **Open Positions**: Monitor all active positions across exchanges -- **P&L Analysis**: Real-time and realized profit/loss calculations -- **Risk Metrics**: Position sizing, leverage, and exposure analysis -- **Position History**: Complete record of closed positions - -### 🔄 Performance Analytics -- **ROI Calculation**: Return on investment by strategy and timeframe -- **Sharpe Ratio**: Risk-adjusted performance metrics -- **Win Rate Analysis**: Success rate of trades by strategy -- **Drawdown Tracking**: Maximum and current drawdown monitoring - -### 🎯 Risk Management -- **Exposure Limits**: Set and monitor position size limits -- **Correlation Analysis**: Identify correlated positions -- **VaR Calculation**: Value at Risk across the portfolio -- **Alert System**: Notifications for risk threshold breaches - -## Usage Instructions - -### 1. Connect Exchanges -- Navigate to the Credentials page to add exchange API keys -- Ensure API keys have read permissions for balances and positions -- Verify successful connection in the portfolio overview - -### 2. Portfolio Overview -- **Total Value**: View aggregate portfolio value in preferred currency -- **Asset Allocation**: Pie chart showing distribution across assets -- **Exchange Distribution**: Breakdown of holdings by exchange -- **24h Performance**: Daily change in portfolio value - -### 3. Position Management -- **Active Positions Tab**: Current open positions with live P&L -- **Position Details**: Click any position for detailed metrics -- **Quick Actions**: Close positions or adjust sizes -- **Export Data**: Download position data for external analysis - -### 4. Performance Analysis -- **Time Range Selection**: Choose analysis period (1D, 1W, 1M, 3M, 1Y) -- **Strategy Breakdown**: Performance attribution by strategy -- **Benchmark Comparison**: Compare against BTC or market indices -- **Custom Reports**: Generate detailed performance reports - -### 5. 
Risk Monitoring -- **Risk Dashboard**: Overview of key risk metrics -- **Position Sizing**: Ensure positions align with risk limits -- **Correlation Matrix**: Visualize position correlations -- **Stress Testing**: Simulate portfolio under various scenarios - -## Technical Notes - -### Data Architecture -- **Real-time Updates**: WebSocket connections for live data -- **Data Aggregation**: Efficient cross-exchange data consolidation -- **Historical Storage**: Time-series database for performance tracking -- **Cache Layer**: Redis caching for improved performance - -### Calculation Methods -- **Portfolio Value**: Sum of all holdings at current market prices -- **Unrealized P&L**: (Current Price - Entry Price) × Position Size -- **Realized P&L**: Actual profits from closed positions -- **ROI**: (Current Value - Initial Value) / Initial Value × 100 - -### Performance Optimization -- **Incremental Updates**: Only fetch changed data -- **Batch Processing**: Aggregate API calls across exchanges -- **Smart Caching**: Cache static data with TTL -- **Lazy Loading**: Load detailed data on demand - -## Component Structure - -``` -portfolio/ -├── app.py # Main portfolio page -├── components/ -│ ├── portfolio_overview.py # Summary cards and charts -│ ├── position_table.py # Active positions display -│ ├── performance_charts.py # Performance visualization -│ └── risk_dashboard.py # Risk metrics and alerts -├── services/ -│ ├── balance_aggregator.py # Multi-exchange balance fetching -│ ├── position_tracker.py # Position monitoring service -│ └── performance_calc.py # Performance calculations -└── utils/ - ├── currency_converter.py # FX rate conversions - ├── risk_metrics.py # Risk calculation functions - └── data_export.py # Export functionality -``` - -## Key Metrics Explained - -### Portfolio Metrics -- **Total Value**: Sum of all assets converted to base currency -- **Daily Change**: 24-hour change in portfolio value -- **All-Time P&L**: Total profit/loss since inception -- **Asset Count**: Number of unique assets held - -### Position Metrics -- **Entry Price**: Average price of position entry -- **Mark Price**: Current market price -- **Unrealized P&L**: Paper profit/loss on open position -- **ROI %**: Return on investment percentage - -### Risk Metrics -- **Sharpe Ratio**: Risk-adjusted return metric -- **Maximum Drawdown**: Largest peak-to-trough decline -- **Value at Risk (VaR)**: Potential loss at confidence level -- **Exposure**: Total position size relative to portfolio - -## Best Practices - -### Portfolio Management -- Diversify across multiple assets and strategies -- Set position size limits based on risk tolerance -- Regular rebalancing to maintain target allocations -- Monitor correlation between positions - -### Performance Tracking -- Record all trades for accurate P&L calculation -- Include fees in performance calculations -- Compare performance against relevant benchmarks -- Regular performance attribution analysis - -### Risk Control -- Set stop-loss levels for all positions -- Monitor leverage usage across accounts -- Regular stress testing of portfolio -- Maintain cash reserves for opportunities - -## Error Handling - -The portfolio page includes robust error handling: -- **API Failures**: Graceful degradation with cached data -- **Rate Limiting**: Intelligent request throttling -- **Data Inconsistencies**: Reconciliation mechanisms -- **Connection Issues**: Automatic reconnection with exponential backoff -- **Calculation Errors**: Fallback values with warning indicators \ No newline 
at end of file diff --git a/pages/orchestration/portfolio/__init__.py b/pages/orchestration/portfolio/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/orchestration/portfolio/app.py b/pages/orchestration/portfolio/app.py deleted file mode 100644 index b8429f95c7..0000000000 --- a/pages/orchestration/portfolio/app.py +++ /dev/null @@ -1,361 +0,0 @@ -import pandas as pd -import plotly.express as px -import streamlit as st - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -initialize_st_page(title="Portfolio", icon="💰") - -# Page content -client = get_backend_api_client() -NUM_COLUMNS = 4 - - -# Convert portfolio state to DataFrame for easier manipulation -def portfolio_state_to_df(portfolio_state): - data = [] - for account, exchanges in portfolio_state.items(): - for exchange, tokens_info in exchanges.items(): - for info in tokens_info: - data.append({ - "account": account, - "exchange": exchange, - "token": info["token"], - "price": info["price"], - "units": info["units"], - "value": info["value"], - "available_units": info["available_units"], - }) - return pd.DataFrame(data) - - -# Convert historical portfolio states to DataFrame -def portfolio_history_to_df(history): - data = [] - for record in history: - timestamp = record["timestamp"] - for account, exchanges in record["state"].items(): - for exchange, tokens_info in exchanges.items(): - for info in tokens_info: - data.append({ - "timestamp": timestamp, - "account": account, - "exchange": exchange, - "token": info["token"], - "price": info["price"], - "units": info["units"], - "value": info["value"], - "available_units": info["available_units"], - }) - return pd.DataFrame(data) - - -# Aggregate portfolio history by grouping nearby timestamps -def aggregate_portfolio_history(history_df, time_window_seconds=10): - """ - Aggregate portfolio history by grouping timestamps within a time window. - This solves the issue where different exchanges are logged at slightly different times. 
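For example, with a 10-second window, snapshots recorded at 12:00:03 and 12:00:07 are floored to the same 12:00:00 bucket; within each bucket, values and units are summed per (account, exchange, token) and the price is averaged.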
- """ - if len(history_df) == 0: - return history_df - - # Convert timestamp to pandas datetime if not already - history_df['timestamp'] = pd.to_datetime(history_df['timestamp']) - - # Sort by timestamp - history_df = history_df.sort_values('timestamp') - - # Create time groups by rounding timestamps to the nearest time window - history_df['time_group'] = history_df['timestamp'].dt.floor(f'{time_window_seconds}s') - - # For each time group, aggregate the data - aggregated_data = [] - - for time_group in history_df['time_group'].unique(): - group_data = history_df[history_df['time_group'] == time_group] - - # Aggregate by account, exchange, token within this time group - agg_group = group_data.groupby(['account', 'exchange', 'token']).agg({ - 'value': 'sum', - 'units': 'sum', - 'available_units': 'sum', - 'price': 'mean' # Use mean price for the time group - }).reset_index() - - # Add the time group as timestamp - agg_group['timestamp'] = time_group - - aggregated_data.append(agg_group) - - if aggregated_data: - return pd.concat(aggregated_data, ignore_index=True) - else: - return pd.DataFrame() - - -# Global filters (outside fragments to avoid duplication) -def get_portfolio_filters(): - """Get portfolio filters that are shared between fragments""" - # Get available accounts - try: - accounts_list = client.accounts.list_accounts() - except Exception as e: - st.error(f"Failed to fetch accounts: {e}") - return None, None, None - - if len(accounts_list) == 0: - st.warning("No accounts found.") - return None, None, None - - # Account selection - selected_accounts = st.multiselect("Select Accounts", accounts_list, accounts_list, key="main_accounts") - if len(selected_accounts) == 0: - st.warning("Please select at least one account.") - return None, None, None - - # Get portfolio state for available exchanges and tokens - try: - portfolio_state = client.portfolio.get_state(account_names=selected_accounts) - except Exception as e: - st.error(f"Failed to fetch portfolio state: {e}") - return None, None, None - - # Extract available exchanges - exchanges_available = [] - for account in selected_accounts: - if account in portfolio_state: - exchanges_available.extend(portfolio_state[account].keys()) - - exchanges_available = list(set(exchanges_available)) - - if len(exchanges_available) == 0: - st.warning("No exchanges found for selected accounts.") - return None, None, None - - selected_exchanges = st.multiselect("Select Exchanges", exchanges_available, exchanges_available, key="main_exchanges") - - # Extract available tokens - tokens_available = [] - for account in selected_accounts: - if account in portfolio_state: - for exchange in selected_exchanges: - if exchange in portfolio_state[account]: - tokens_available.extend([info["token"] for info in portfolio_state[account][exchange]]) - - tokens_available = list(set(tokens_available)) - selected_tokens = st.multiselect("Select Tokens", tokens_available, tokens_available, key="main_tokens") - - return selected_accounts, selected_exchanges, selected_tokens - - -# Get filters once at the top level -st.header("Portfolio Filters") -selected_accounts, selected_exchanges, selected_tokens = get_portfolio_filters() - -if not selected_accounts: - st.stop() - - -@st.fragment -def portfolio_overview(): - """Fragment for portfolio overview and metrics""" - st.markdown("---") - - # Get portfolio state and summary - try: - portfolio_state = client.portfolio.get_state(account_names=selected_accounts) - portfolio_summary = client.portfolio.get_portfolio_summary() - 
except Exception as e: - st.error(f"Failed to fetch portfolio data: {e}") - return - - # Filter portfolio state - filtered_portfolio_state = {} - for account in selected_accounts: - if account in portfolio_state: - filtered_portfolio_state[account] = {} - for exchange in selected_exchanges: - if exchange in portfolio_state[account]: - filtered_portfolio_state[account][exchange] = [ - token_info for token_info in portfolio_state[account][exchange] - if token_info["token"] in selected_tokens - ] - - if len(filtered_portfolio_state) == 0: - st.warning("No data available for selected filters.") - return - - # Convert to DataFrame - portfolio_df = portfolio_state_to_df(filtered_portfolio_state) - total_balance_usd = round(portfolio_df["value"].sum(), 2) - - # Display metrics - col1, col2, col3, col4 = st.columns(4) - - with col1: - st.metric("Total Balance (USD)", f"${total_balance_usd:,.2f}") - - with col2: - st.metric("Accounts", len(selected_accounts)) - - with col3: - st.metric("Exchanges", len(selected_exchanges)) - - with col4: - st.metric("Tokens", len(selected_tokens)) - - # Create visualizations - c1, c2 = st.columns([1, 1]) - - with c1: - # Portfolio allocation pie chart - portfolio_df['% Allocation'] = (portfolio_df['value'] / total_balance_usd) * 100 - portfolio_df['label'] = portfolio_df['token'] + ' ($' + portfolio_df['value'].apply( - lambda x: f'{x:,.2f}') + ')' - - fig = px.sunburst(portfolio_df, - path=['account', 'exchange', 'label'], - values='value', - hover_data={'% Allocation': ':.2f'}, - title='Portfolio Allocation', - color='account', - color_discrete_sequence=px.colors.qualitative.Vivid) - - fig.update_traces(textinfo='label+percent entry') - fig.update_layout(margin=dict(t=50, l=0, r=0, b=0), height=600) - st.plotly_chart(fig, use_container_width=True) - - with c2: - # Token distribution - token_distribution = portfolio_df.groupby('token')['value'].sum().reset_index() - token_distribution = token_distribution.sort_values('value', ascending=False) - - fig = px.bar(token_distribution, x='token', y='value', - title='Token Distribution', - color='value', - color_continuous_scale='Blues') - fig.update_layout(xaxis_title='Token', yaxis_title='Value (USD)', height=600) - st.plotly_chart(fig, use_container_width=True) - - # Portfolio details table - st.subheader("Portfolio Details") - st.dataframe( - portfolio_df[['account', 'exchange', 'token', 'units', 'price', 'value', 'available_units']], - use_container_width=True - ) - - -@st.fragment -def portfolio_history(): - """Fragment for portfolio history and charts""" - st.markdown("---") - st.subheader("Portfolio History") - - # Date range selection - col1, col2, col3 = st.columns(3) - with col1: - days_back = st.selectbox("Time Period", [7, 30, 90, 180, 365], index=1, key="history_days") - with col2: - limit = st.number_input("Max Records", min_value=10, max_value=1000, value=100, key="history_limit") - with col3: - time_window = st.selectbox("Time Aggregation Window", [5, 10, 30, 60, 300], index=1, key="time_window", - help="Seconds to group nearby timestamps (fixes exchange timing differences)") - - # Get portfolio history - try: - from datetime import datetime, timezone, timedelta - - # Calculate start time for filtering - start_time = datetime.now(timezone.utc) - timedelta(days=days_back) - - response = client.portfolio.get_history( - selected_accounts, # account_names - None, # connector_names - limit, # limit - None, # cursor - int(start_time.timestamp()), # start_time - None # end_time - ) - - # Extract data from 
response - history_data = response.get("data", []) - - except Exception as e: - st.error(f"Failed to fetch portfolio history: {e}") - return - - if not history_data or len(history_data) == 0: - st.warning("No historical data available.") - return - - # Convert to DataFrame - history_df = portfolio_history_to_df(history_data) - history_df['timestamp'] = pd.to_datetime(history_df['timestamp'], format='ISO8601') - - # Filter by selected exchanges and tokens - history_df = history_df[ - (history_df['exchange'].isin(selected_exchanges)) & - (history_df['token'].isin(selected_tokens)) - ] - - # Aggregate timestamps to solve the "electrocardiogram" issue - history_df = aggregate_portfolio_history(history_df, time_window_seconds=time_window) - - if len(history_df) == 0: - st.warning("No historical data available for selected filters.") - return - - # Portfolio evolution by account (area chart) - st.subheader("Portfolio Evolution by Account") - account_evolution_df = history_df.groupby(['timestamp', 'account'])['value'].sum().reset_index() - account_evolution_df = account_evolution_df.sort_values('timestamp') - - fig = px.area(account_evolution_df, x='timestamp', y='value', color='account', - title='Portfolio Value Evolution by Account', - color_discrete_sequence=px.colors.qualitative.Set3) - fig.update_layout(xaxis_title='Time', yaxis_title='Value (USD)', height=400) - st.plotly_chart(fig, use_container_width=True) - - # Portfolio evolution by token (area chart) - st.subheader("Portfolio Evolution by Token") - token_evolution_df = history_df.groupby(['timestamp', 'token'])['value'].sum().reset_index() - token_evolution_df = token_evolution_df.sort_values('timestamp') - - # Show only top 10 tokens by average value to avoid clutter - top_tokens = token_evolution_df.groupby('token')['value'].mean().sort_values(ascending=False).head(10).index - token_evolution_filtered = token_evolution_df[token_evolution_df['token'].isin(top_tokens)] - - fig = px.area(token_evolution_filtered, x='timestamp', y='value', color='token', - title='Portfolio Value Evolution by Token (Top 10)', - color_discrete_sequence=px.colors.qualitative.Vivid) - fig.update_layout(xaxis_title='Time', yaxis_title='Value (USD)', height=400) - st.plotly_chart(fig, use_container_width=True) - - # Portfolio evolution by exchange (area chart) - st.subheader("Portfolio Evolution by Exchange") - exchange_evolution_df = history_df.groupby(['timestamp', 'exchange'])['value'].sum().reset_index() - exchange_evolution_df = exchange_evolution_df.sort_values('timestamp') - - fig = px.area(exchange_evolution_df, x='timestamp', y='value', color='exchange', - title='Portfolio Value Evolution by Exchange', - color_discrete_sequence=px.colors.qualitative.Pastel) - fig.update_layout(xaxis_title='Time', yaxis_title='Value (USD)', height=400) - st.plotly_chart(fig, use_container_width=True) - - # Portfolio evolution table - total values - st.subheader("Portfolio Total Value Over Time") - total_evolution_df = history_df.groupby('timestamp')['value'].sum().reset_index() - total_evolution_df = total_evolution_df.sort_values('timestamp') - evolution_table = total_evolution_df.copy() - evolution_table['timestamp'] = evolution_table['timestamp'].dt.strftime('%Y-%m-%d %H:%M:%S') - evolution_table['value'] = evolution_table['value'].round(2) - evolution_table = evolution_table.rename(columns={'timestamp': 'Time', 'value': 'Total Value (USD)'}) - st.dataframe(evolution_table, use_container_width=True) - - -# Main portfolio page -st.header("Portfolio Overview") 
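# Both sections below are @st.fragment functions, so each one can refresh
# on its own without rerunning the entire page.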
-portfolio_overview() - -st.header("Portfolio History") -portfolio_history() diff --git a/pages/orchestration/trading/README.md b/pages/orchestration/trading/README.md deleted file mode 100644 index 512afed4a2..0000000000 --- a/pages/orchestration/trading/README.md +++ /dev/null @@ -1,97 +0,0 @@ -# Trading Hub - -The Trading Hub provides a comprehensive interface for executing trades, monitoring positions, and analyzing markets in real-time. - -## Features - -### 🎯 Real-time Market Data -- **OHLC Candlestick Chart**: 5-minute interval price action with volume overlay -- **Live Order Book**: Real-time bid/ask levels with configurable depth (10-100 levels) -- **Current Price Display**: Live price updates with auto-refresh capability -- **Volume Analysis**: Trading volume visualization - -### ⚡ Quick Trading -- **Market Orders**: Instant buy/sell execution at current market price -- **Limit Orders**: Set specific price levels for order execution -- **Position Management**: Open/close positions for perpetual contracts -- **Multi-Exchange Support**: Trade across Binance, KuCoin, OKX, and more - -### 📊 Portfolio Monitoring -- **Open Positions**: Real-time P&L tracking with entry/mark prices -- **Active Orders**: Monitor pending orders with one-click cancellation -- **Account Overview**: Multi-account position and order management - -### 🔄 Real-time Performance -- **Memory-Cached Candles**: Ultra-fast updates from backend memory cache (typically <100ms) -- **Configurable Intervals**: 2-second auto-refresh for real-time trading experience -- **Performance Monitoring**: Live display of data fetch times -- **Optimized Updates**: Efficient data streaming for minimal latency - -## How to Use - -### Market Selection -1. **Choose Exchange**: Select from available connectors (binance_perpetual, binance, kucoin, okx_perpetual) -2. **Select Trading Pair**: Enter the trading pair (e.g., BTC-USDT, ETH-USDT) -3. **Set Order Book Depth**: Choose how many price levels to display (10-100) - -### Placing Orders -1. **Account Setup**: Specify the account name (default: master_account) -2. 
**Order Configuration**: - - **Side**: Choose BUY or SELL - - **Order Type**: Select MARKET, LIMIT, or LIMIT_MAKER - - **Amount**: Enter the quantity to trade - - **Price**: Set price for limit orders (auto-filled for market orders) - - **Position Action**: Choose OPEN or CLOSE for perpetual contracts - -### Managing Positions -- **View Open Positions**: Monitor unrealized P&L, entry prices, and position sizes -- **Track Performance**: Real-time updates of mark prices and P&L calculations -- **Multi-Account Support**: View positions across different trading accounts - -### Order Management -- **Active Orders**: View all pending orders with real-time status -- **Bulk Cancellation**: Select multiple orders for batch cancellation -- **Order History**: Track order execution and fill status - -## Technical Features - -### Market Data Integration -- **Memory-Cached Candles**: Real-time OHLC data from backend memory (1m, 3m, 5m, 15m, 1h intervals) -- **Ultra-Fast Updates**: Sub-100ms data fetching from cached candle streams -- **Order Book Depth**: Configurable bid/ask level display (10-100 levels) -- **Live Price Feeds**: Real-time price updates across multiple exchanges -- **Performance Metrics**: Live monitoring of data fetch speeds - -### Chart Visualization -- **Candlestick Chart**: Interactive price action with zoom and pan -- **Order Book Overlay**: Visualized bid/ask levels on the chart -- **Volume Bars**: Trading volume display below price chart -- **Dark Theme**: Futuristic styling optimized for trading environments - -### Auto-Refresh System -- **Streamlit Fragments**: Efficient real-time updates without full page refresh -- **Configurable Intervals**: Adjustable refresh rates (default: 5 seconds) -- **Manual Control**: Start/stop auto-refresh as needed -- **Error Handling**: Graceful handling of connection issues - -## Supported Exchanges - -- **Binance Spot**: Standard spot trading -- **Binance Perpetual**: Futures and perpetual contracts -- **KuCoin**: Spot and margin trading -- **OKX Perpetual**: Futures and perpetual contracts - -## Error Handling - -The trading interface includes comprehensive error handling: -- **Connection Errors**: Graceful handling of backend connectivity issues -- **Order Errors**: Clear error messages for failed order placement -- **Data Errors**: Fallback displays when market data is unavailable -- **Validation**: Input validation for trading parameters - -## Security Considerations - -- **Account Isolation**: Each account's positions and orders are tracked separately -- **Order Validation**: Server-side validation of all trading parameters -- **Error Recovery**: Automatic retry mechanisms for transient failures -- **Safe Defaults**: Conservative default values for trading parameters \ No newline at end of file diff --git a/pages/orchestration/trading/__init__.py b/pages/orchestration/trading/__init__.py deleted file mode 100644 index ad6806197e..0000000000 --- a/pages/orchestration/trading/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Trading page module \ No newline at end of file diff --git a/pages/orchestration/trading/app.py b/pages/orchestration/trading/app.py deleted file mode 100644 index c8eb0232b0..0000000000 --- a/pages/orchestration/trading/app.py +++ /dev/null @@ -1,1500 +0,0 @@ -import datetime -import time - -import nest_asyncio -import pandas as pd -import plotly.graph_objects as go -import streamlit as st -from plotly.subplots import make_subplots - -from frontend.st_utils import get_backend_api_client, initialize_st_page - -# Enable nested 
async -nest_asyncio.apply() - -initialize_st_page( - layout="wide", - show_readme=False -) - -# Initialize backend client -backend_api_client = get_backend_api_client() - -# Initialize session state -if "selected_account" not in st.session_state: - st.session_state.selected_account = None -if "selected_connector" not in st.session_state: - st.session_state.selected_connector = None -if "selected_market" not in st.session_state: - st.session_state.selected_market = {"connector": "binance_perpetual", "trading_pair": "BTC-USDT"} -if "candles_connector" not in st.session_state: - st.session_state.candles_connector = None -if "auto_refresh_enabled" not in st.session_state: - st.session_state.auto_refresh_enabled = False # Start with manual refresh -if "chart_interval" not in st.session_state: - st.session_state.chart_interval = "1m" -if "max_candles" not in st.session_state: - st.session_state.max_candles = 100 # Reduced for better performance -if "last_api_request" not in st.session_state: - st.session_state.last_api_request = 0 # Track last API request time -if "last_refresh_time" not in st.session_state: - st.session_state.last_refresh_time = 0 # Track last refresh time - -# Trading form session state -if "trade_custom_price" not in st.session_state: - st.session_state.trade_custom_price = None # User's custom price input -if "trade_price_set_by_user" not in st.session_state: - st.session_state.trade_price_set_by_user = False # Track if user set custom price -if "last_order_type" not in st.session_state: - st.session_state.last_order_type = "market" # Track order type changes - -# Set refresh interval for real-time updates -REFRESH_INTERVAL = 30 # seconds - - -def get_accounts_and_credentials(): - """Get available accounts and their credentials.""" - try: - accounts_list = backend_api_client.accounts.list_accounts() - credentials_list = {} - for account in accounts_list: - credentials = backend_api_client.accounts.list_account_credentials(account_name=account) - credentials_list[account] = credentials - return accounts_list, credentials_list - except Exception as e: - st.error(f"Failed to fetch accounts: {e}") - return [], {} - - -def get_candles_connectors(): - """Get available candles feed connectors.""" - try: - # For now, return a hardcoded list of known exchanges that provide candles - return ["binance", "binance_perpetual", "kucoin", "okx", "okx_perpetual", "gate_io"] - except Exception as e: - st.warning(f"Could not fetch candles feed connectors: {e}") - return [] - - -def get_positions(): - """Get current positions.""" - try: - response = backend_api_client.trading.get_positions(limit=100) - # Handle both response formats - if isinstance(response, list): - return response - elif isinstance(response, dict) and response.get("status") == "success": - return response.get("data", []) - elif isinstance(response, dict) and "data" in response: - # Handle the actual API response format - return response.get("data", []) - return [] - except Exception as e: - st.error(f"Failed to fetch positions: {e}") - return [] - - -def get_active_orders(): - """Get active orders.""" - try: - response = backend_api_client.trading.get_active_orders(limit=100) - # Handle both response formats - if isinstance(response, list): - return response - elif isinstance(response, dict): - # Check for different response formats - if response.get("status") == "success": - return response.get("data", []) - elif "data" in response: - # Handle response format like {"data": [...], "pagination": {...}} - return 
response.get("data", []) - return [] - except Exception as e: - st.error(f"Failed to fetch active orders: {e}") - return [] - - -def get_order_history(): - """Get recent order history.""" - try: - # Try to get orders instead of order_history since that method doesn't exist - response = backend_api_client.trading.search_orders(limit=50) - # Handle both response formats - if isinstance(response, list): - return response - elif isinstance(response, dict): - # Check for different response formats - if response.get("status") == "success": - return response.get("data", []) - elif "data" in response: - # Handle response format like {"data": [...], "pagination": {...}} - return response.get("data", []) - return [] - except Exception: - # If get_orders doesn't exist either, just return empty list without warning - return [] - - -def get_order_book(connector, trading_pair, depth=10): - """Get order book data for the selected trading pair.""" - try: - response = backend_api_client.market_data.get_order_book( - connector_name=connector, - trading_pair=trading_pair, - depth=depth - ) - - # Handle both response formats - if isinstance(response, dict): - if "status" in response and response.get("status") == "success": - return response.get("data", {}) - elif "bids" in response and "asks" in response: - return response - return {} - except Exception as e: - st.warning(f"Could not fetch order book: {e}") - return {} - - -def get_funding_rate(connector, trading_pair): - """Get funding rate for perpetual contracts.""" - try: - # Only try to get funding rate for perpetual connectors - if "perpetual" in connector.lower(): - response = backend_api_client.market_data.get_funding_info( - connector_name=connector, - trading_pair=trading_pair - ) - # Handle both response formats - if isinstance(response, dict): - if "status" in response and response.get("status") == "success": - return response.get("data", {}) - elif "funding_rate" in response: - return response - return {} - return {} - except Exception: - return {} - - -def get_trade_history(account_name, connector_name, trading_pair): - """Get trade history for the selected account and trading pair.""" - try: - # Try to get trades for this specific account/connector/pair - response = backend_api_client.trading.get_trades( - account_name=account_name, - connector_name=connector_name, - trading_pair=trading_pair, - limit=100 - ) - # Handle both response formats - if isinstance(response, list): - return response - elif isinstance(response, dict) and response.get("status") == "success": - return response.get("data", []) - return [] - except Exception: - # If method doesn't exist, try alternative approach - try: - # Get all orders and filter for filled ones - orders = get_order_history() - trades = [] - for order in orders: - if (order.get("status") == "FILLED" and - order.get("trading_pair") == trading_pair and - order.get("connector_name") == connector_name): - trades.append(order) - return trades - except Exception: - return [] - - -def get_market_data(connector, trading_pair, interval="1m", max_records=100, candles_connector=None): - """Get market data with proper error handling.""" - start_time = time.time() - try: - # Get candles - candles = [] - try: - # Use candles_connector if provided, otherwise use main connector - candles_conn = candles_connector if candles_connector else connector - candles_response = backend_api_client.market_data.get_candles( - connector_name=candles_conn, - trading_pair=trading_pair, - interval=interval, - max_records=max_records - ) 
- # Handle both response formats - if isinstance(candles_response, list): - # Direct list response - candles = candles_response - elif isinstance(candles_response, dict) and candles_response.get("status") == "success": - # Response object with status and data - candles = candles_response.get("data", []) - except Exception as e: - st.warning(f"Could not fetch candles: {e}") - - # Get current price - prices = {} - try: - price_response = backend_api_client.market_data.get_prices( - connector_name=connector, - trading_pairs=[trading_pair] - ) - # Handle both response formats - if isinstance(price_response, dict): - if "status" in price_response and price_response.get("status") == "success": - prices = price_response.get("data", {}) - elif "prices" in price_response: - # Response has a "prices" field containing the actual price data - prices = price_response.get("prices", {}) - else: - # Direct dict response with prices - prices = price_response - elif isinstance(price_response, list): - # If it's a list, try to convert to dict - prices = {item.get("trading_pair", "unknown"): item.get("price", 0) for item in price_response if - isinstance(item, dict)} - except Exception as e: - st.warning(f"Could not fetch prices: {e}") - - # Calculate fetch time for performance monitoring - fetch_time = (time.time() - start_time) * 1000 - st.session_state["last_fetch_time"] = fetch_time - st.session_state["last_fetch_timestamp"] = time.time() - - return candles, prices - except Exception as e: - st.error(f"Failed to fetch market data: {e}") - return [], {} - - -def place_order(order_data): - """Place a trading order.""" - try: - response = backend_api_client.trading.place_order(**order_data) - if response.get("status") == "submitted": - st.success(f"Order placed successfully! 
Order ID: {response.get('order_id')}") - return True - else: - st.error(f"Failed to place order: {response.get('message', 'Unknown error')}") - return False - except Exception as e: - st.error(f"Failed to place order: {e}") - return False - - -def cancel_order(account_name, connector_name, order_id): - """Cancel an order.""" - try: - response = backend_api_client.trading.cancel_order( - account_name=account_name, - connector_name=connector_name, - client_order_id=order_id - ) - if response.get("status") == "success": - st.success(f"Order {order_id} cancelled successfully!") - return True - else: - st.error(f"Failed to cancel order: {response.get('message', 'Unknown error')}") - return False - except Exception as e: - st.error(f"Failed to cancel order: {e}") - return False - - -def get_default_layout(title=None, height=800, width=1100): - layout = { - "template": "plotly_dark", - "plot_bgcolor": 'rgba(0, 0, 0, 0)', # Transparent background - "paper_bgcolor": 'rgba(0, 0, 0, 0.1)', # Lighter shade for the paper - "font": {"color": 'white', "size": 12}, # Consistent font color and size - "height": height, - "width": width, - "margin": {"l": 20, "r": 20, "t": 50, "b": 20}, - "xaxis_rangeslider_visible": False, - "hovermode": "x unified", - "showlegend": False, - } - if title: - layout["title"] = title - return layout - - -def create_candlestick_chart(candles_data, connector_name="", trading_pair="", interval="", trades_data=None): - """Create a candlestick chart with custom theme, trade markers, and volume bars.""" - if not candles_data: - fig = go.Figure() - fig.add_annotation( - text="No candle data available", - xref="paper", yref="paper", - x=0.5, y=0.5, - showarrow=False - ) - fig.update_layout(**get_default_layout(height=800)) - return fig - - try: - # Convert candles data to DataFrame - df = pd.DataFrame(candles_data) - if df.empty: - return go.Figure() - - # Convert timestamp to datetime for better display - if 'timestamp' in df.columns: - df['datetime'] = pd.to_datetime(df['timestamp'], unit='s') - - # Calculate quote volume (volume * close price) - if 'volume' in df.columns and 'close' in df.columns: - df['quote_volume'] = df['volume'] * df['close'] - else: - df['quote_volume'] = 0 - - # Create subplots with shared x-axis: candlestick chart on top, volume bars on bottom - fig = make_subplots( - rows=2, cols=1, - shared_xaxes=True, - vertical_spacing=0.01, - row_heights=[0.8, 0.2], - subplot_titles=(None, None) # No subplot titles - ) - - # Add candlestick trace to first subplot - fig.add_trace( - go.Candlestick( - x=df['datetime'] if 'datetime' in df.columns else df.index, - open=df['open'], - high=df['high'], - low=df['low'], - close=df['close'], - name="Candlesticks", - ), - row=1, col=1 - ) - - # Add volume bars to second subplot if volume data exists - if 'quote_volume' in df.columns and df['quote_volume'].sum() > 0: - # Color volume bars based on price movement (green for up, red for down) - colors = ['rgba(0, 255, 0, 0.5)' if close >= open_price else 'rgba(255, 0, 0, 0.5)' - for close, open_price in zip(df['close'], df['open'])] - - fig.add_trace( - go.Bar( - x=df['datetime'] if 'datetime' in df.columns else df.index, - y=df['quote_volume'], - name='Volume', - marker=dict(color=colors), - yaxis='y2', - hovertemplate='Volume: $%{y:,.0f}
%{x}' - ), - row=2, col=1 - ) - - # Add trade markers if trade data is provided (add to first subplot) - if trades_data: - try: - trades_df = pd.DataFrame(trades_data) - if not trades_df.empty: - # Convert trade timestamps to datetime - if 'timestamp' in trades_df.columns: - trades_df['datetime'] = pd.to_datetime(trades_df['timestamp'], unit='s') - elif 'created_at' in trades_df.columns: - trades_df['datetime'] = pd.to_datetime(trades_df['created_at']) - elif 'execution_time' in trades_df.columns: - trades_df['datetime'] = pd.to_datetime(trades_df['execution_time']) - - # Filter trades to chart time range if datetime column exists - if 'datetime' in trades_df.columns and 'datetime' in df.columns: - chart_start = df['datetime'].min() - chart_end = df['datetime'].max() - - trades_in_range = trades_df[ - (trades_df['datetime'] >= chart_start) & - (trades_df['datetime'] <= chart_end) - ] - - if not trades_in_range.empty: - # Separate buy and sell trades - buy_trades = trades_in_range[ - trades_in_range.get('trade_type', trades_in_range.get('side', '')) == 'buy'] - sell_trades = trades_in_range[ - trades_in_range.get('trade_type', trades_in_range.get('side', '')) == 'sell'] - - # Add buy markers (green triangles pointing up) to first subplot - if not buy_trades.empty: - fig.add_trace( - go.Scatter( - x=buy_trades['datetime'], - y=buy_trades.get('price', buy_trades.get('avg_price', 0)), - mode='markers', - marker=dict( - symbol='triangle-up', - size=10, - line=dict(width=1, color='white') - ), - name='Buy Trades', - hovertemplate='BUY
Price: $%{y:.4f}
Time: %{x}' - ), - row=1, col=1 - ) - - # Add sell markers (red triangles pointing down) to first subplot - if not sell_trades.empty: - fig.add_trace( - go.Scatter( - x=sell_trades['datetime'], - y=sell_trades.get('price', sell_trades.get('avg_price', 0)), - mode='markers', - marker=dict( - symbol='triangle-down', - size=10, - line=dict(width=1, color='white') - ), - name='Sell Trades', - hovertemplate='SELL
Price: $%{y:.4f}
Time: %{x}' - ), - row=1, col=1 - ) - except Exception: - # If trade markers fail, continue without them - pass - - # Create title - title = f"{connector_name}: {trading_pair} ({interval})" if connector_name else "Price Chart" - - # Get base layout and customize for subplots - layout = get_default_layout(title=title, height=700) # Increased height for two subplots - - # Update specific layout options for subplots - layout.update({ - "xaxis": { - "rangeslider": {"visible": False}, - "showgrid": True, - "gridcolor": "rgba(255,255,255,0.1)", - "color": "white" - }, - "yaxis": { - "title": "Price ($)", - "showgrid": True, - "gridcolor": "rgba(255,255,255,0.1)", - "color": "white" - }, - "xaxis2": { - "showgrid": True, - "gridcolor": "rgba(255,255,255,0.1)", - "color": "white" - }, - "yaxis2": { - "title": "Volume (Quote)", - "showgrid": True, - "gridcolor": "rgba(255,255,255,0.1)", - "color": "white" - } - }) - - fig.update_layout(**layout) - - return fig - except Exception as e: - # Fallback chart with error message - fig = go.Figure() - fig.add_annotation( - text=f"Error creating chart: {str(e)}", - xref="paper", yref="paper", - x=0.5, y=0.5, - showarrow=False - ) - fig.update_layout(**get_default_layout(height=600)) - return fig - - -def create_order_book_chart(order_book_data, current_price=None, depth_percentage=1.0, trading_pair=""): - """Create an order book histogram with price on Y-axis and volume on X-axis.""" - if not order_book_data or not order_book_data.get("bids") or not order_book_data.get("asks"): - fig = go.Figure() - fig.add_annotation( - text="No order book data available", - xref="paper", yref="paper", - x=0.5, y=0.5, - showarrow=False - ) - fig.update_layout(**get_default_layout(title="Order Book", height=600, width=300)) - return fig, None, None - - try: - bids = order_book_data.get("bids", []) - asks = order_book_data.get("asks", []) - - if not bids or not asks: - fig = go.Figure() - fig.add_annotation( - text="Insufficient order book data", - xref="paper", yref="paper", - x=0.5, y=0.5, - showarrow=False - ) - fig.update_layout(**get_default_layout(title="Order Book", height=600, width=300)) - return fig, None, None - - # Process bids and asks - they're already objects with price/amount keys - bids_df = pd.DataFrame(bids) - asks_df = pd.DataFrame(asks) - - # Convert to float - bids_df['price'] = bids_df['price'].astype(float) - bids_df['amount'] = bids_df['amount'].astype(float) - asks_df['price'] = asks_df['price'].astype(float) - asks_df['amount'] = asks_df['amount'].astype(float) - - # Convert amounts to quote asset (USDT) for better normalization - bids_df['quote_volume'] = bids_df['price'] * bids_df['amount'] - asks_df['quote_volume'] = asks_df['price'] * asks_df['amount'] - - # Sort bids descending (highest price first) and asks ascending (lowest price first) - bids_df = bids_df.sort_values('price', ascending=False) - asks_df = asks_df.sort_values('price', ascending=True) - - # Calculate cumulative volumes for better visualization - bids_df['cumulative_volume'] = bids_df['quote_volume'].cumsum() - asks_df['cumulative_volume'] = asks_df['quote_volume'].cumsum() - - # Filter by depth percentage if current price is available - if current_price: - price_range = current_price * (depth_percentage / 100) - min_price = current_price - price_range - max_price = current_price + price_range - - bids_df = bids_df[bids_df['price'] >= min_price] - asks_df = asks_df[asks_df['price'] <= max_price] - - # Create order book chart - fig = go.Figure() - - # Add bid bars (green, all 
positive values) - using cumulative volume - if not bids_df.empty: - fig.add_trace( - go.Bar( - x=bids_df['cumulative_volume'], # Using cumulative volume - y=bids_df['price'], - orientation='h', - name='Bids', - marker=dict(opacity=0.8), - hovertemplate='BID
Price: $%{y:.4f}
Cumulative Volume: $%{x:,.0f}
Level Volume: $%{customdata:,.0f}', - customdata=bids_df['quote_volume'], # Show individual level volume in hover - offsetgroup='bids' - ) - ) - - # Add ask bars (red, all positive values) - using cumulative volume - if not asks_df.empty: - fig.add_trace( - go.Bar( - x=asks_df['cumulative_volume'], # Using cumulative volume - y=asks_df['price'], - orientation='h', - name='Asks', - marker=dict(opacity=0.8), - hovertemplate='ASK
Price: $%{y:.4f}
Cumulative Volume: $%{x:,.0f}
Level Volume: $%{customdata:,.0f}', - customdata=asks_df['quote_volume'], # Show individual level volume in hover - offsetgroup='asks' - ) - ) - - # Update layout for histogram style - layout = get_default_layout(title="Order Book Depth", height=600, width=300) - layout.update({ - "xaxis": { - "title": "Cumulative Volume (USDT)", - "color": "white", - "showgrid": True, - "gridcolor": "rgba(255,255,255,0.1)", - "zeroline": True, - "zerolinecolor": "rgba(255,255,255,0.3)", - "zerolinewidth": 1 - }, - "yaxis": { - "title": "Price ($)", - "color": "white", - "showgrid": True, - "gridcolor": "rgba(255,255,255,0.1)", - "type": "linear" - }, - "bargap": 0.02, - "bargroupgap": 0.02, - "showlegend": False, - "hovermode": "closest" - }) - - fig.update_layout(**layout) - - # Return price range for syncing with candles chart - price_min = None - price_max = None - - if not bids_df.empty and not asks_df.empty: - price_min = min(bids_df['price'].min(), asks_df['price'].min()) - price_max = max(bids_df['price'].max(), asks_df['price'].max()) - elif not bids_df.empty: - price_min = price_max = bids_df['price'].min() - elif not asks_df.empty: - price_min = price_max = asks_df['price'].max() - - return fig, price_min, price_max - except Exception as e: - # Fallback chart with error message - fig = go.Figure() - fig.add_annotation( - text=f"Error creating order book: {str(e)}", - xref="paper", yref="paper", - x=0.5, y=0.5, - showarrow=False - ) - fig.update_layout(**get_default_layout(title="Order Book", height=600, width=300)) - return fig, None, None - - -def render_positions_table(positions_data): - """Render positions table with enhanced metrics and hedging information.""" - if not positions_data: - st.info("No open positions found.") - return - - # Convert to DataFrame for better display - df = pd.DataFrame(positions_data) - if df.empty: - st.info("No open positions found.") - return - - # Calculate original value (amount * entry_price) - if 'amount' in df.columns and 'entry_price' in df.columns: - df['original_value'] = df['amount'] * df['entry_price'] - - st.subheader("🎯 Open Positions") - - # Calculate and display summary metrics - col1, col2, col3, col4 = st.columns(4) - - with col1: - total_unrealized_pnl = df['unrealized_pnl'].sum() if 'unrealized_pnl' in df.columns else 0 - st.metric( - "Total Unrealized PnL", - f"${total_unrealized_pnl:,.2f}", - delta=None, - delta_color="normal" if total_unrealized_pnl >= 0 else "inverse" - ) - - with col2: - total_original_value = abs(df['original_value']).sum() if 'original_value' in df.columns else 0 - st.metric( - "Total Position Amount", - f"${abs(total_original_value):,.2f}" - ) - - # Separate long and short positions for hedging analysis - long_positions = df[df['amount'] > 0] if 'amount' in df.columns else pd.DataFrame() - short_positions = df[df['amount'] < 0] if 'amount' in df.columns else pd.DataFrame() - - with col3: - long_value = long_positions['original_value'].sum() if not long_positions.empty and 'original_value' in long_positions.columns else 0 - st.metric( - "Long Exposure", - f"${abs(long_value):,.2f}", - help="Total value of long positions" - ) - - with col4: - short_value = short_positions['original_value'].sum() if not short_positions.empty and 'original_value' in short_positions.columns else 0 - st.metric( - "Short Exposure", - f"${abs(short_value):,.2f}", - help="Total value of short positions" - ) - - # Calculate hedge ratio if we have both long and short positions - if long_value != 0 and short_value != 0: - hedge_ratio = 
min(abs(short_value), abs(long_value)) / max(abs(short_value), abs(long_value)) * 100 - st.info(f"📊 **Hedge Ratio**: {hedge_ratio:.1f}% (Higher = More Hedged)") - elif long_value > 0 and short_value == 0: - st.warning("⚠️ **Portfolio is fully LONG** - No short hedging") - elif short_value > 0 and long_value == 0: - st.warning("⚠️ **Portfolio is fully SHORT** - No long hedging") - - # Display positions table with enhanced formatting - st.dataframe( - df, - use_container_width=True, - hide_index=True, - column_config={ - "amount": st.column_config.NumberColumn( - "Amount", - format="%.6f", - help="Positive = Long, Negative = Short" - ), - "entry_price": st.column_config.NumberColumn( - "Entry Price", - format="$%.4f" - ), - "original_value": st.column_config.NumberColumn( - "Original Value", - format="$%.2f", - help="Amount × Entry Price" - ), - "mark_price": st.column_config.NumberColumn( - "Mark Price", - format="$%.4f" - ), - "unrealized_pnl": st.column_config.NumberColumn( - "Unrealized PnL", - format="$%.2f" - ) - } - ) - - # Show separate long/short breakdown if there are both types - if not long_positions.empty and not short_positions.empty: - st.divider() - - col1, col2 = st.columns(2) - - with col1: - st.subheader("🟢 Long Positions") - if not long_positions.empty: - long_pnl = long_positions['unrealized_pnl'].sum() if 'unrealized_pnl' in long_positions.columns else 0 - st.caption(f"PnL: ${long_pnl:,.2f}") - st.dataframe( - long_positions, - use_container_width=True, - hide_index=True, - column_config={ - "amount": st.column_config.NumberColumn("Amount", format="%.6f"), - "entry_price": st.column_config.NumberColumn("Entry Price", format="$%.4f"), - "unrealized_pnl": st.column_config.NumberColumn("PnL", format="$%.2f") - } - ) - - with col2: - st.subheader("🔴 Short Positions") - if not short_positions.empty: - short_pnl = short_positions['unrealized_pnl'].sum() if 'unrealized_pnl' in short_positions.columns else 0 - st.caption(f"PnL: ${short_pnl:,.2f}") - st.dataframe( - short_positions, - use_container_width=True, - hide_index=True, - column_config={ - "amount": st.column_config.NumberColumn("Amount", format="%.6f"), - "entry_price": st.column_config.NumberColumn("Entry Price", format="$%.4f"), - "unrealized_pnl": st.column_config.NumberColumn("PnL", format="$%.2f") - } - ) - elif not long_positions.empty: - st.info("📈 All positions are LONG") - elif not short_positions.empty: - st.info("📉 All positions are SHORT") - - -def render_orders_table(orders_data): - """Render active orders table.""" - if not orders_data: - st.info("No active orders found.") - return - - # Convert to DataFrame - df = pd.DataFrame(orders_data) - if df.empty: - st.info("No active orders found.") - return - - st.subheader("📋 Active Orders") - - # Add cancel column to dataframe - df_with_cancel = df.copy() - df_with_cancel["cancel"] = False - - # Create column configurations based on what's available in the data - column_config = { - "cancel": st.column_config.CheckboxColumn( - "Cancel", - help="Select orders to cancel", - default=False, - ), - "price": st.column_config.NumberColumn( - "Price", - format="$%.4f" - ), - "amount": st.column_config.NumberColumn( - "Amount", - format="%.6f" - ), - "executed_amount_base": st.column_config.NumberColumn( - "Executed (Base)", - format="%.6f" - ), - "executed_amount_quote": st.column_config.NumberColumn( - "Executed (Quote)", - format="%.6f" - ), - "last_update_timestamp": st.column_config.DatetimeColumn( - "Last Update", - format="DD/MM/YYYY HH:mm:ss" - ) - } - - # Add 
cancel button functionality - edited_df = st.data_editor( - df_with_cancel, - column_config=column_config, - disabled=[col for col in df_with_cancel.columns if col != "cancel"], - hide_index=True, - use_container_width=True, - key="orders_editor" - ) - - # Handle order cancellation - if "cancel" in edited_df.columns: - selected_orders = edited_df[edited_df["cancel"]] - if not selected_orders.empty and st.button(f"❌ Cancel Selected ({len(selected_orders)}) Orders", - type="secondary"): - with st.spinner("Cancelling orders..."): - for _, order in selected_orders.iterrows(): - cancel_order( - order.get("account_name", ""), - order.get("connector_name", ""), - order.get("client_order_id", "") - ) - st.rerun() - - -# Page Header -st.title("💹 Trading Hub") -st.caption("Execute trades, monitor positions, and analyze markets") - -# Get accounts and credentials -accounts_list, credentials_dict = get_accounts_and_credentials() -candles_connectors = get_candles_connectors() - -# Account and Trading Selection Section - Reorganized -selection_col, market_data_col = st.columns([1, 3]) - -with selection_col: - st.subheader("🏦 Account & Market") - - # All selection in one column - if accounts_list: - # Default to first account if not set - if st.session_state.selected_account is None: - st.session_state.selected_account = accounts_list[0] - - selected_account = st.selectbox( - "📱 Account", - accounts_list, - index=accounts_list.index( - st.session_state.selected_account) if st.session_state.selected_account in accounts_list else 0, - key="account_selector" - ) - st.session_state.selected_account = selected_account - else: - st.error("No accounts found") - st.stop() - - if selected_account and credentials_dict.get(selected_account): - credentials = credentials_dict[selected_account] - - # Handle different credential formats - if isinstance(credentials, list) and credentials: - # If credentials is a list of strings (connector names) - if isinstance(credentials[0], str): - # Convert string list to dict format - credentials = [{"connector_name": cred} for cred in credentials] - # If credentials is already a list of dicts, use as is - elif isinstance(credentials[0], dict): - credentials = credentials - elif isinstance(credentials, dict): - # If credentials is a dict, convert to list of dicts - credentials = [{"connector_name": k, **v} for k, v in credentials.items()] - else: - credentials = [] - - # For simplicity, just use the first credential available - default_cred = credentials[0] if credentials else None - - if default_cred and credentials: - connector = st.selectbox( - "📡 Exchange", - [cred["connector_name"] for cred in credentials], - index=0, - key="connector_selector" - ) - st.session_state.selected_connector = connector - else: - st.error("No credentials found for this account") - connector = None - else: - st.error("No credentials available") - connector = None - - trading_pair = st.text_input( - "💱 Trading Pair", - value="BTC-USDT", - key="trading_pair_input" - ) - - # Update selected market - if connector and trading_pair: - st.session_state.selected_market = {"connector": connector, "trading_pair": trading_pair} - -with market_data_col: - st.subheader("📊 Market Data") - - # Only show metrics if we have a selected market - if st.session_state.selected_market.get("connector") and st.session_state.selected_market.get("trading_pair"): - # Get market data for metrics - connector = st.session_state.selected_market["connector"] - trading_pair = st.session_state.selected_market["trading_pair"] - 
interval = st.session_state.chart_interval - max_candles = st.session_state.max_candles - candles_connector = st.session_state.candles_connector - - # Create sub-columns for organized display - price_col, depth_col, funding_col, controls_col = st.columns([1, 1, 1, 1]) - - with price_col: - candles, prices = get_market_data( - connector, trading_pair, interval, max_candles, candles_connector - ) - - # Get order book data for bid/ask prices and volumes - order_book = get_order_book(connector, trading_pair, depth=1000) - - if order_book and "bids" in order_book and "asks" in order_book: - bid_price = float(order_book["bids"][0]["price"]) if order_book["bids"] else 0 - ask_price = float(order_book["asks"][0]["price"]) if order_book["asks"] else 0 - mid_price = (bid_price + ask_price) / 2 if bid_price > 0 and ask_price > 0 else 0 - - st.metric(f"💰 {trading_pair}", f"${mid_price:.4f}") - st.metric("📈 Bid Price", f"${bid_price:.4f}") - st.metric("📉 Ask Price", f"${ask_price:.4f}") - else: - # Fallback to current price if no order book - if prices and trading_pair in prices: - current_price = prices[trading_pair] - st.metric( - f"💰 {trading_pair}", - f"${float(current_price):,.4f}" - ) - else: - st.metric(f"💰 {trading_pair}", "Loading...") - with depth_col: - # Order book depth configuration - depth_percentage = st.number_input( - "📊 Depth ±%", - min_value=0.1, - max_value=10.0, - value=1.0, - step=0.1, - format="%.1f", - key="depth_percentage" - ) - - # Calculate depth using the actual API method - if order_book and "bids" in order_book and "asks" in order_book: - bid_price = float(order_book["bids"][0]["price"]) if order_book["bids"] else 0 - ask_price = float(order_book["asks"][0]["price"]) if order_book["asks"] else 0 - - if bid_price > 0 and ask_price > 0: - # Calculate prices at depth percentage - depth_factor = depth_percentage / 100 - buy_price = bid_price * (1 - depth_factor) # Price below current bid - sell_price = ask_price * (1 + depth_factor) # Price above current ask - - try: - # Get buy depth (volume available when buying up to sell_price - hitting asks) - buy_response = backend_api_client.market_data.get_quote_volume_for_price( - connector_name=connector, - trading_pair=trading_pair, - price=sell_price, # Use sell_price for buying (hitting asks above current price) - is_buy=True - ) - - # Get sell depth (volume available when selling down to buy_price - hitting bids) - sell_response = backend_api_client.market_data.get_quote_volume_for_price( - connector_name=connector, - trading_pair=trading_pair, - price=buy_price, # Use buy_price for selling (hitting bids below current price) - is_buy=False - ) - - # Handle response format based on your example - buy_vol = 0 - sell_vol = 0 - - if isinstance(buy_response, dict) and "result_quote_volume" in buy_response: - buy_vol = buy_response["result_quote_volume"] - # Handle NaN values more robustly - import math - if buy_vol is None or (isinstance(buy_vol, float) and math.isnan(buy_vol)) or str(buy_vol).lower() == 'nan': - buy_vol = 0 - - if isinstance(sell_response, dict) and "result_quote_volume" in sell_response: - sell_vol = sell_response["result_quote_volume"] - # Handle NaN values more robustly - import math - if sell_vol is None or (isinstance(sell_vol, float) and math.isnan(sell_vol)) or str(sell_vol).lower() == 'nan': - sell_vol = 0 - - st.metric( - "📊 Buy Depth (USDT)", - f"${float(buy_vol):,.0f}" if buy_vol != 0 else "N/A", - help="Volume available when buying (hitting asks)" - ) - st.metric( - "📊 Sell Depth (USDT)", - 
f"${float(sell_vol):,.0f}" if sell_vol != 0 else "N/A", - help="Volume available when selling (hitting bids)" - ) - except Exception: - # Fallback to simple calculation if API fails - total_bid_volume = sum(float(bid["amount"] * bid["price"]) for bid in order_book["bids"]) - total_ask_volume = sum(float(ask["amount"] * ask["price"]) for ask in order_book["asks"]) - - st.metric( - "📊 Buy Depth (USDT)", - f"${total_ask_volume:,.0f}", - help="Total ask volume (for buying)" - ) - st.metric( - "📊 Sell Depth (USDT)", - f"${total_bid_volume:,.0f}", - help="Total bid volume (for selling)" - ) - else: - st.metric(f"📊 Depth ±{depth_percentage:.1f}%", "No data") - else: - st.metric(f"📊 Depth ±{depth_percentage:.1f}%", "No order book") - - with funding_col: - # Funding rate for perpetual contracts - if "perpetual" in connector.lower(): - funding_data = get_funding_rate(connector, trading_pair) - if funding_data and "funding_rate" in funding_data: - funding_rate = float(funding_data["funding_rate"]) * 100 - st.metric( - "💸 Funding Rate", - f"{funding_rate:.4f}%" - ) - else: - st.metric("💸 Funding Rate", "N/A") - else: - st.metric("💸 Funding Rate", "Spot") - - with controls_col: - # Show fetch time and refresh button together - if "last_fetch_time" in st.session_state: - fetch_time = st.session_state["last_fetch_time"] - st.caption(f"⚡ Fetch: {fetch_time:.0f}ms") - - # Auto-refresh toggle - auto_refresh = st.toggle( - "🔄 Auto-refresh", - value=st.session_state.auto_refresh_enabled, - help=f"Refresh data every {REFRESH_INTERVAL} seconds" - ) - st.session_state.auto_refresh_enabled = auto_refresh - - # Refresh button - if st.button("🔄 Refresh Now", use_container_width=True, type="primary"): - st.session_state.last_refresh_time = time.time() - st.rerun() - else: - st.info("Select account and pair to view extended market data") - - -# Main trading data display function -def show_trading_data(): - """Display trading data with chart controls.""" - - connector = st.session_state.selected_market.get("connector") - trading_pair = st.session_state.selected_market.get("trading_pair") - - if not connector or not trading_pair: - st.warning("Please select an account and trading pair") - return - - # Chart and Trade Execution section - st.divider() - chart_col, orderbook_col, trade_col = st.columns([3, 1, 1]) - - # Get market data first (needed for both charts) - candles, prices = get_market_data( - connector, trading_pair, st.session_state.chart_interval, - st.session_state.max_candles, st.session_state.candles_connector - ) - - # Get order book data - order_book = get_order_book(connector, trading_pair, depth=20) - - # Get current price and depth percentage - current_price = 0.0 - if prices and trading_pair in prices: - current_price = float(prices[trading_pair]) - depth_percentage = st.session_state.get("depth_percentage", 1.0) - - with chart_col: - st.subheader("📈 Price Chart") - - # Chart controls in the same fragment - controls_col1, controls_col2, controls_col3 = st.columns([1, 1, 1]) - - with controls_col1: - interval = st.selectbox( - "⏱️ Chart Interval", - ["1m", "3m", "5m", "15m", "1h", "4h", "1d"], - index=0, - key="chart_interval_selector" - ) - st.session_state.chart_interval = interval - - with controls_col2: - candles_connectors = get_candles_connectors() - if candles_connectors: - # Add option to use same connector as trading - candles_options = ["Same as trading"] + candles_connectors - selected_candles = st.selectbox( - "📊 Candles Source", - candles_options, - index=0, - 
key="chart_candles_connector_selector", - help="Some exchanges don't provide candles. Select an alternative source." - ) - st.session_state.candles_connector = None if selected_candles == "Same as trading" else selected_candles - else: - st.session_state.candles_connector = None - - with controls_col3: - max_candles = st.number_input( - "📈 Max Candles", - min_value=50, - max_value=500, - value=100, - step=50, - key="chart_max_candles_input" - ) - st.session_state.max_candles = max_candles - - # Get trade history for the selected account/connector/pair - trades = [] - if st.session_state.selected_account and st.session_state.selected_connector: - trades = get_trade_history( - st.session_state.selected_account, - st.session_state.selected_connector, - trading_pair - ) - - # Add small gap before chart - st.write("") - - # Create candlestick chart - candles_source = st.session_state.candles_connector if st.session_state.candles_connector else connector - candlestick_fig = create_candlestick_chart(candles, candles_source, trading_pair, interval, trades) - - with orderbook_col: - st.subheader("📊 Order Book") - - # Create and display order book chart - orderbook_fig, price_min, price_max = create_order_book_chart( - order_book, current_price, depth_percentage, trading_pair - ) - - # Display both charts - with chart_col: - st.plotly_chart(candlestick_fig, use_container_width=True) - # Show last update time - current_time = datetime.datetime.now().strftime("%H:%M:%S") - st.caption(f"🔄 Last updated: {current_time} (auto-refresh every 30s)") - - with orderbook_col: - st.plotly_chart(orderbook_fig, use_container_width=True) - - with trade_col: - st.subheader("💸 Execute Trade") - - if st.session_state.selected_account and st.session_state.selected_connector: - # Get current price for calculations - current_price = 0.0 - if prices and trading_pair in prices: - current_price = float(prices[trading_pair]) - - # Extract base and quote tokens from trading pair - base_token, quote_token = trading_pair.split('-') - - # Order type selection - order_type = st.selectbox( - "Order Type", - ["market", "limit"], - key="trade_order_type" - ) - - # Side selection - side = st.selectbox( - "Side", - ["buy", "sell"], - key="trade_side" - ) - - # Position mode selection - position_action = st.selectbox( - "Position Mode", - ["OPEN", "CLOSE"], - index=0, # Default to OPEN - key="trade_position_action", - help="OPEN creates new positions, CLOSE reduces existing positions" - ) - - # Amount input - amount = st.number_input( - "Amount", - min_value=0.0, - value=0.001, - format="%.6f", - key="trade_amount" - ) - - # Base/Quote toggle switch - is_quote = st.toggle( - f"Amount in {quote_token}", - value=False, - help=f"Toggle to enter amount in {quote_token} instead of {base_token}", - key="trade_is_quote" - ) - - # Show conversion line - if current_price > 0 and amount > 0: - if is_quote: - # User entered quote amount, show base equivalent - base_equivalent = amount / current_price - st.caption(f"≈ {base_equivalent:.6f} {base_token}") - else: - # User entered base amount, show quote equivalent - quote_equivalent = amount * current_price - st.caption(f"≈ {quote_equivalent:.2f} {quote_token}") - - # Price input for limit orders - if order_type == "limit": - # Check if order type changed or if user hasn't set a custom price - if (st.session_state.last_order_type != order_type or - not st.session_state.trade_price_set_by_user or - st.session_state.trade_custom_price is None): - # Only set default price when switching to limit or no 
custom price set - if current_price > 0: - st.session_state.trade_custom_price = current_price - else: - st.session_state.trade_custom_price = 0.0 - st.session_state.trade_price_set_by_user = False - - # Update last order type - st.session_state.last_order_type = order_type - - price = st.number_input( - "Price", - min_value=0.0, - value=st.session_state.trade_custom_price, - format="%.4f", - key="trade_price", - on_change=lambda: setattr(st.session_state, 'trade_price_set_by_user', True) - ) - - # Update custom price when user changes it - if price != st.session_state.trade_custom_price: - st.session_state.trade_custom_price = price - st.session_state.trade_price_set_by_user = True - - # Show updated conversion for limit orders - if price > 0 and amount > 0: - if is_quote: - base_equivalent = amount / price - st.caption(f"At limit price: ≈ {base_equivalent:.6f} {base_token}") - else: - quote_equivalent = amount * price - st.caption(f"At limit price: ≈ {quote_equivalent:.2f} {quote_token}") - else: - price = None - - # Submit button - st.write("") - if st.button("🚀 Place Order", type="primary", use_container_width=True, key="place_order_btn"): - if amount > 0: - # Convert amount to base if needed - final_amount = amount - conversion_price = price if order_type == "limit" and price else current_price - - if is_quote and conversion_price > 0: - # Convert quote amount to base amount - final_amount = amount / conversion_price - st.success(f"Converting {amount} {quote_token} to {final_amount:.6f} {base_token}") - - order_data = { - "account_name": st.session_state.selected_account, - "connector_name": st.session_state.selected_connector, - "trading_pair": st.session_state.selected_market["trading_pair"], - "order_type": order_type.upper(), - "trade_type": side.upper(), - "amount": final_amount, - "position_action": position_action - } - if order_type == "limit" and price: - order_data["price"] = price - - with st.spinner("Placing order..."): - place_order(order_data) - else: - st.error("Please enter a valid amount") - - st.write("") - st.info(f"🎯 {st.session_state.selected_connector}\n{st.session_state.selected_market['trading_pair']}") - else: - st.warning("Please select an account and exchange to execute trades") - - # Data tables section - st.divider() - - # Get positions, orders, and history - positions = get_positions() - orders = get_active_orders() - order_history = get_order_history() - - # Display in tabs - Balances first - tab1, tab2, tab3, tab4 = st.tabs(["💰 Balances", "📊 Positions", "📋 Active Orders", "📜 Order History"]) - - with tab1: - render_balances_table() - with tab2: - render_positions_table(positions) - with tab3: - render_orders_table(orders) - with tab4: - render_order_history_table(order_history) - - -def render_order_history_table(order_history): - """Render order history table.""" - if not order_history: - st.info("No order history found.") - return - - # Convert to DataFrame - df = pd.DataFrame(order_history) - if df.empty: - st.info("No order history found.") - return - - st.subheader("📜 Order History") - st.dataframe( - df, - use_container_width=True, - hide_index=True, - column_config={ - "price": st.column_config.NumberColumn( - "Price", - format="$%.4f" - ), - "amount": st.column_config.NumberColumn( - "Amount", - format="%.6f" - ), - "timestamp": st.column_config.DatetimeColumn( - "Time", - format="DD/MM/YYYY HH:mm:ss" - ) - } - ) - - -def get_balances(): - """Get account balances.""" - try: - if not st.session_state.selected_account: - return [] - - # Get 
portfolio state for the selected account - portfolio_state = backend_api_client.portfolio.get_state( - account_names=[st.session_state.selected_account] - ) - - # Extract balances - balances = [] - if st.session_state.selected_account in portfolio_state: - for exchange, tokens in portfolio_state[st.session_state.selected_account].items(): - for token_info in tokens: - balances.append({ - "exchange": exchange, - "token": token_info["token"], - "total": token_info["units"], - "available": token_info["available_units"], - "price": token_info["price"], - "value": token_info["value"] - }) - return balances - except Exception as e: - st.error(f"Failed to fetch balances: {e}") - return [] - - -def render_balances_table(): - """Render balances table.""" - balances = get_balances() - - if not balances: - st.info("No balances found.") - return - - # Convert to DataFrame - df = pd.DataFrame(balances) - if df.empty: - st.info("No balances found.") - return - - st.subheader(f"💰 Account Balances - {st.session_state.selected_account}") - - # Calculate total value - total_value = df['value'].sum() - st.metric("Total Portfolio Value", f"${total_value:,.2f}") - - st.dataframe( - df, - use_container_width=True, - hide_index=True, - column_config={ - "total": st.column_config.NumberColumn( - "Total Balance", - format="%.6f" - ), - "available": st.column_config.NumberColumn( - "Available", - format="%.6f" - ), - "price": st.column_config.NumberColumn( - "Price", - format="$%.4f" - ), - "value": st.column_config.NumberColumn( - "Value (USD)", - format="$%.2f" - ) - } - ) - - -# Auto-refresh logic - only if user is not actively trading -if st.session_state.auto_refresh_enabled and not st.session_state.trade_price_set_by_user: - # Check if it's time to refresh - current_time = time.time() - time_since_last_refresh = current_time - st.session_state.last_refresh_time - - if time_since_last_refresh >= REFRESH_INTERVAL: - # Update last refresh time and rerun - st.session_state.last_refresh_time = current_time - time.sleep(0.1) # Small delay to prevent rapid refreshes - st.rerun() - -# Display trading data -show_trading_data() diff --git a/pages/performance/__init__.py b/pages/performance/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/performance/bot_performance/README.md b/pages/performance/bot_performance/README.md deleted file mode 100644 index ca530fd9bf..0000000000 --- a/pages/performance/bot_performance/README.md +++ /dev/null @@ -1,5 +0,0 @@ -This page helps you analyze database files of several Hummingbot strategies and measure performance. - -#### Support - -For any inquiries, feedback, or assistance, please contact @drupman on Hummingbot's [Discord](https://discord.com/invite/hummingbot). 
\ No newline at end of file diff --git a/pages/performance/bot_performance/__init__.py b/pages/performance/bot_performance/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/pages/performance/bot_performance/app.py b/pages/performance/bot_performance/app.py deleted file mode 100755 index 1a99ed99ca..0000000000 --- a/pages/performance/bot_performance/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import asyncio - -import streamlit as st - -from backend.utils.performance_data_source import PerformanceDataSource -from frontend.st_utils import get_backend_api_client, initialize_st_page -from frontend.visualization.bot_performance import ( - display_execution_analysis, - display_global_results, - display_performance_summary_table, - display_tables_section, -) -from frontend.visualization.performance_etl import display_etl_section - - -async def main(): - initialize_st_page(title="Bot Performance", icon="🚀", initial_sidebar_state="collapsed") - st.session_state["default_config"] = {} - backend_api = get_backend_api_client() - - st.subheader("🔫 DATA SOURCE") - checkpoint_data = display_etl_section(backend_api) - data_source = PerformanceDataSource(checkpoint_data) - st.divider() - - st.subheader("📊 OVERVIEW") - display_performance_summary_table(data_source.get_executors_df(), data_source.executors_with_orders) - st.divider() - - st.subheader("🌎 GLOBAL RESULTS") - display_global_results(data_source) - st.divider() - - st.subheader("🧨 EXECUTION") - display_execution_analysis(data_source) - st.divider() - - st.subheader("💾 EXPORT") - display_tables_section(data_source) - - -if __name__ == "__main__": - asyncio.run(main()) diff --git a/pages/permissions.py b/pages/permissions.py deleted file mode 100644 index cc6648deb5..0000000000 --- a/pages/permissions.py +++ /dev/null @@ -1,39 +0,0 @@ -import streamlit as st - - -def main_page(): - return [st.Page("frontend/pages/landing.py", title="Hummingbot Dashboard", icon="📊", url_path="landing")] - - -def public_pages(): - return { - "Config Generator": [ - st.Page("frontend/pages/config/grid_strike/app.py", title="Grid Strike", icon="🎳", url_path="grid_strike"), - st.Page("frontend/pages/config/pmm_simple/app.py", title="PMM Simple", icon="👨‍🏫", url_path="pmm_simple"), - st.Page("frontend/pages/config/pmm_dynamic/app.py", title="PMM Dynamic", icon="👩‍🏫", url_path="pmm_dynamic"), - st.Page("frontend/pages/config/dman_maker_v2/app.py", title="D-Man Maker V2", icon="🤖", url_path="dman_maker_v2"), - st.Page("frontend/pages/config/bollinger_v1/app.py", title="Bollinger V1", icon="📈", url_path="bollinger_v1"), - st.Page("frontend/pages/config/macd_bb_v1/app.py", title="MACD_BB V1", icon="📊", url_path="macd_bb_v1"), - st.Page("frontend/pages/config/supertrend_v1/app.py", title="SuperTrend V1", icon="👨‍🔬", url_path="supertrend_v1"), - st.Page("frontend/pages/config/xemm_controller/app.py", title="XEMM Controller", icon="⚡️", url_path="xemm_controller"), - ], - "Data": [ - st.Page("frontend/pages/data/download_candles/app.py", title="Download Candles", icon="💹", url_path="download_candles"), - ], - "Community Pages": [ - st.Page("frontend/pages/data/tvl_vs_mcap/app.py", title="TVL vs Market Cap", icon="🦉", url_path="tvl_vs_mcap"), - ] - } - - -def private_pages(): - return { - "Bot Orchestration": [ - st.Page("frontend/pages/orchestration/instances/app.py", title="Instances", icon="🦅", url_path="instances"), - st.Page("frontend/pages/orchestration/launch_bot_v2/app.py", title="Deploy V2", icon="🚀", url_path="launch_bot_v2"), - 
st.Page("frontend/pages/orchestration/credentials/app.py", title="Credentials", icon="🔑", url_path="credentials"), - st.Page("frontend/pages/orchestration/portfolio/app.py", title="Portfolio", icon="💰", url_path="portfolio"), - st.Page("frontend/pages/orchestration/trading/app.py", title="Trading", icon="🪄", url_path="trading"), - st.Page("frontend/pages/orchestration/archived_bots/app.py", title="Archived Bots", icon="🗃️", url_path="archived_bots"), - ] - } diff --git a/setup.sh b/setup.sh old mode 100644 new mode 100755 index 77f2eb88e9..0b524240ed --- a/setup.sh +++ b/setup.sh @@ -1,173 +1,617 @@ #!/bin/bash - -# Hummingbot Deploy Setup Script -# This script sets up the deployment environment for Hummingbot Deploy -# with all necessary configuration options - -set -e # Exit on any error - -# Colors for better output +# +# Hummingbot Deploy Instance Installer +# +# This script sets up Hummingbot instances with optional Hummingbot API integration. +# Each repository manages its own docker-compose and setup via Makefile. +set -eu +# --- Configuration --- +CONDOR_REPO="https://github.com/hummingbot/condor.git" +API_REPO="https://github.com/hummingbot/hummingbot-api.git" +CONDOR_DIR="condor" +API_DIR="hummingbot-api" +DOCKER_COMPOSE="" # Will be set by detect_docker_compose() +# --- Color Codes --- RED='\033[0;31m' GREEN='\033[0;32m' YELLOW='\033[1;33m' BLUE='\033[0;34m' PURPLE='\033[0;35m' CYAN='\033[0;36m' -NC='\033[0m' # No Color - -echo "🚀 Hummingbot Deploy Setup" -echo "" - -echo -n "Config password [default: admin]: " -read CONFIG_PASSWORD -CONFIG_PASSWORD=${CONFIG_PASSWORD:-admin} - -echo -n "Dashboard username [default: admin]: " -read USERNAME -USERNAME=${USERNAME:-admin} - -echo -n "Dashboard password [default: admin]: " -read PASSWORD -PASSWORD=${PASSWORD:-admin} - -# Set paths and defaults -BOTS_PATH=$(pwd) - -# Use sensible defaults for deployment -DEBUG_MODE="false" -BROKER_HOST="localhost" -BROKER_PORT="1883" -BROKER_USERNAME="admin" -BROKER_PASSWORD="password" -DATABASE_URL="postgresql+asyncpg://hbot:hummingbot-api@localhost:5432/hummingbot_api" -CLEANUP_INTERVAL="300" -FEED_TIMEOUT="600" -AWS_API_KEY="" -AWS_SECRET_KEY="" -S3_BUCKET="" -LOGFIRE_ENV="prod" -BANNED_TOKENS='["NAV","ARS","ETHW","ETHF","NEWT"]' - -echo "" -echo -e "${GREEN}✅ Using sensible defaults for MQTT, Database, and other settings${NC}" - -echo "" -echo -e "${GREEN}📝 Creating .env file...${NC}" - -# Create .env file with proper structure and comments -cat > .env << EOF -# ================================================================= -# Hummingbot Deploy Environment Configuration -# Generated on: $(date) -# ================================================================= - -# ================================================================= -# 🔐 Security Configuration -# ================================================================= -USERNAME=$USERNAME -PASSWORD=$PASSWORD -DEBUG_MODE=$DEBUG_MODE -CONFIG_PASSWORD=$CONFIG_PASSWORD - -# ================================================================= -# 🔗 MQTT Broker Configuration (BROKER_*) -# ================================================================= -BROKER_HOST=$BROKER_HOST -BROKER_PORT=$BROKER_PORT -BROKER_USERNAME=$BROKER_USERNAME -BROKER_PASSWORD=$BROKER_PASSWORD - -# ================================================================= -# 💾 Database Configuration (DATABASE_*) -# ================================================================= -DATABASE_URL=$DATABASE_URL - -# ================================================================= 
-# 📊 Market Data Feed Manager Configuration (MARKET_DATA_*) -# ================================================================= -MARKET_DATA_CLEANUP_INTERVAL=$CLEANUP_INTERVAL -MARKET_DATA_FEED_TIMEOUT=$FEED_TIMEOUT - -# ================================================================= -# ☁️ AWS Configuration (AWS_*) - Optional -# ================================================================= -AWS_API_KEY=$AWS_API_KEY -AWS_SECRET_KEY=$AWS_SECRET_KEY -AWS_S3_DEFAULT_BUCKET_NAME=$S3_BUCKET - -# ================================================================= -# ⚙️ Application Settings -# ================================================================= -LOGFIRE_ENVIRONMENT=$LOGFIRE_ENV -BANNED_TOKENS=$BANNED_TOKENS - -# ================================================================= -# 📁 Application Paths -# ================================================================= -BOTS_PATH=$BOTS_PATH - -EOF - -echo -e "${GREEN}✅ .env file created successfully!${NC}" +NC='\033[0m' +# --- Track directories we create for cleanup --- +CREATED_DIRS=() +# --- Cleanup trap --- +cleanup() { + local exit_code=$? + # Remove any partial downloads + rm -f get-docker.sh 2>/dev/null || true + + # If we're exiting with an error, remove partial git clones we created + if [ $exit_code -ne 0 ]; then + for dir in "${CREATED_DIRS[@]}"; do + if [ -d "$dir" ]; then + msg_warn "Cleaning up partial installation: $dir" + rm -rf "$dir" 2>/dev/null || true + fi + done + fi +} +trap cleanup EXIT +# --- Helper Functions --- +msg_info() { + echo -e "${CYAN}[INFO] $1${NC}" +} +msg_ok() { + echo -e "${GREEN}[OK] $1${NC}" +} +msg_warn() { + echo -e "${YELLOW}[WARN] $1${NC}" >&2 +} +msg_error() { + echo -e "${RED}[ERROR] $1${NC}" >&2 +} +prompt() { + echo -en "${PURPLE}$1${NC}" > /dev/tty +} +prompt_visible() { + prompt "$1" + read -r val < /dev/tty || val="" + echo "$val" +} +prompt_default() { + prompt "$1 [$2]: " + read -r val < /dev/tty || val="" + echo "${val:-$2}" +} +prompt_yesno() { + while true; do + prompt "$1 (y/n): " + read -r val < /dev/tty || val="" + case "$val" in + [Yy]) echo "y"; return ;; + [Nn]) echo "n"; return ;; + *) msg_warn "Please answer 'y' or 'n'" ;; + esac + done +} +command_exists() { + command -v "$1" >/dev/null 2>&1 +} +# Check if running in interactive mode +is_interactive() { + [[ -t 0 ]] && [[ -t 1 ]] && [[ "${TERM:-dumb}" != "dumb" ]] +} +# Check if running inside a container +is_container() { + [ -f /.dockerenv ] || grep -q docker /proc/1/cgroup 2>/dev/null || grep -q containerd /proc/1/cgroup 2>/dev/null +} +# Escape special characters for .env file +escape_env_value() { + local value="$1" + # Escape backslashes, double quotes, and dollar signs + value="${value//\\/\\\\}" + value="${value//\"/\\\"}" + value="${value//\$/\\\$}" + echo "$value" +} +# --- Parse Command Line Arguments --- +UPGRADE_MODE="n" +while [[ $# -gt 0 ]]; do + case $1 in + --upgrade) + UPGRADE_MODE="y" + shift + ;; + -h|--help) + echo "Usage: $0 [OPTIONS]" + echo "" + echo "Options:" + echo " --upgrade Upgrade existing installation" + echo " -h, --help Show this help message" + exit 0 + ;; + *) + msg_error "Unknown option: $1" + echo "Use -h or --help for usage information." 
+ exit 1 + ;; + esac +done +# --- Installation and Setup Functions --- +detect_os_arch() { + OS=$(uname -s | tr '[:upper:]' '[:lower:]') + ARCH=$(uname -m) + case "$ARCH" in + x86_64|amd64) ARCH="amd64" ;; + aarch64|arm64) ARCH="arm64" ;; + armv7*|armv8*) ARCH="arm" ;; + armv*) ARCH="arm" ;; + *) msg_warn "Unknown architecture: $ARCH, defaulting to amd64"; ARCH="amd64" ;; + esac + msg_info "Detected OS: $OS, Architecture: $ARCH" + + # Detect Homebrew location for Apple Silicon + if [[ "$OS" == "darwin" ]]; then + if [[ -x "/opt/homebrew/bin/brew" ]]; then + eval "$(/opt/homebrew/bin/brew shellenv)" + elif [[ -x "/usr/local/bin/brew" ]]; then + eval "$(/usr/local/bin/brew shellenv)" + fi + fi +} +detect_docker_compose() { + if docker compose version >/dev/null 2>&1; then + DOCKER_COMPOSE="docker compose" + msg_info "Using Docker Compose plugin" + elif command_exists docker-compose; then + DOCKER_COMPOSE="docker-compose" + msg_info "Using standalone docker-compose" + else + msg_error "Neither docker-compose nor docker compose plugin found" + exit 1 + fi +} +check_docker_running() { + if ! docker info >/dev/null 2>&1; then + if [[ "$OS" == "darwin" ]]; then + msg_error "Docker Desktop is not running." + msg_info "Please start Docker Desktop and try again." + else + msg_error "Docker daemon is not running." + msg_info "Please start Docker and try again." + if command_exists systemctl; then + msg_info "Try: sudo systemctl start docker" + elif command_exists service; then + msg_info "Try: sudo service docker start" + fi + fi + exit 1 + fi + msg_ok "Docker daemon is running" +} +check_disk_space() { + local required_mb=2048 # 2GB minimum + local available_mb + + if [[ "$OS" == "linux" ]]; then + available_mb=$(df -m . 2>/dev/null | tail -1 | awk '{print $4}') + elif [[ "$OS" == "darwin" ]]; then + available_mb=$(df -m . 2>/dev/null | tail -1 | awk '{print $4}') + else + # Skip check on unknown OS + return + fi + + if [[ -n "$available_mb" ]] && [[ $available_mb -lt $required_mb ]]; then + msg_error "Insufficient disk space. Need ${required_mb}MB, have ${available_mb}MB" + exit 1 + fi + msg_ok "Sufficient disk space available (${available_mb}MB)" +} +install_dependencies() { + msg_info "Checking for dependencies..." + + MISSING_DEPS=() + + if ! command_exists git; then + MISSING_DEPS+=("git") + fi + if ! command_exists curl; then + MISSING_DEPS+=("curl") + fi + if ! command_exists docker; then + MISSING_DEPS+=("docker") + fi + + # Check for docker-compose (either standalone or as docker compose plugin) + if ! (command_exists docker-compose || (command_exists docker && docker compose version >/dev/null 2>&1)); then + MISSING_DEPS+=("docker-compose") + fi + + if ! command_exists make; then + MISSING_DEPS+=("make") + fi + if [ ${#MISSING_DEPS[@]} -eq 0 ]; then + msg_ok "All dependencies (git, curl, docker, docker-compose, make) are already installed." + return + fi + msg_warn "Missing dependencies: ${MISSING_DEPS[*]}" + msg_info "Attempting to install missing dependencies..." + case "$OS" in + linux) + if ! command_exists apt-get; then + msg_error "This script currently only supports Debian/Ubuntu-based Linux distributions for automatic dependency installation." + msg_info "Please install missing dependencies manually: ${MISSING_DEPS[*]}" + exit 1 + fi + export DEBIAN_FRONTEND=noninteractive + sudo apt-get update + sudo apt-get install -y git curl build-essential + if ! command_exists docker; then + msg_info "Installing Docker..." 
+ curl -fsSL https://get.docker.com -o get-docker.sh + sudo sh get-docker.sh + rm -f get-docker.sh + sudo usermod -aG docker "$USER" + msg_warn "Added $USER to docker group." + + # Try to use Docker with sg to get group permissions without re-login + msg_info "Attempting to apply Docker group permissions..." + fi + + # Start Docker daemon - handle both systemd and non-systemd systems + # Skip if running inside a container + if ! is_container; then + if command_exists systemctl; then + sudo systemctl start docker 2>/dev/null || true + sudo systemctl enable docker 2>/dev/null || true + elif command_exists service; then + sudo service docker start 2>/dev/null || true + else + msg_warn "Could not detect init system. Please start Docker daemon manually if needed." + fi + else + msg_info "Running inside container - Docker daemon should be managed by host" + fi + # Check if docker-compose is needed + if ! (command_exists docker-compose || (command_exists docker && docker compose version >/dev/null 2>&1)); then + msg_info "Installing Docker Compose..." + + # Try to get latest version, with fallback + LATEST_COMPOSE="" + if command_exists jq; then + LATEST_COMPOSE=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r '.tag_name' 2>/dev/null) || true + fi + + # Fallback to grep/sed if jq not available or failed + if [ -z "$LATEST_COMPOSE" ] || [ "$LATEST_COMPOSE" = "null" ]; then + LATEST_COMPOSE=$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep '"tag_name":' | sed -E 's/.*"tag_name": *"([^"]+)".*/\1/' | head -1) || true + fi + + # Final fallback to a known stable version + if [ -z "$LATEST_COMPOSE" ] || [ "$LATEST_COMPOSE" = "null" ]; then + LATEST_COMPOSE="v2.24.0" + msg_warn "Could not detect latest Docker Compose version, using fallback: $LATEST_COMPOSE" + fi + + COMPOSE_URL="https://github.com/docker/compose/releases/download/$LATEST_COMPOSE/docker-compose-$(uname -s)-$(uname -m)" + + if ! sudo curl -L "$COMPOSE_URL" -o /usr/local/bin/docker-compose; then + msg_error "Failed to download Docker Compose from $COMPOSE_URL" + exit 1 + fi + sudo chmod +x /usr/local/bin/docker-compose + msg_ok "Docker Compose installed." + fi + ;; + darwin) + if ! command_exists brew; then + msg_error "Homebrew is not installed. Please install it first by visiting https://brew.sh/" + exit 1 + fi + + if ! command_exists git; then + msg_info "Installing git..." + brew install git + fi + + if ! command_exists curl; then + msg_info "Installing curl..." + brew install curl + fi + + if ! command_exists make; then + # On macOS, make comes with Xcode Command Line Tools + if ! xcode-select -p >/dev/null 2>&1; then + msg_info "Installing Xcode Command Line Tools (includes make)..." + xcode-select --install 2>/dev/null || true + msg_warn "Xcode Command Line Tools installation started. Please complete the installation dialog, then re-run this script." + exit 0 + else + # CLT installed but make not found - try brew as fallback + msg_info "Installing make via Homebrew..." + brew install make + # Note: brew installs as 'gmake', warn user + if command_exists gmake && ! command_exists make; then + msg_warn "GNU Make installed as 'gmake'. You may need to create an alias or use 'gmake' directly." + msg_info "To use as 'make', add to your shell profile: alias make='gmake'" + fi + fi + fi + + if ! command_exists docker; then + msg_info "Installing Docker Desktop for Mac..." + brew install --cask docker + msg_warn "Docker Desktop installed. 
Please open Docker Desktop to start the Docker daemon, then re-run this script." + exit 0 + fi + ;; + *) + msg_error "Unsupported operating system: $OS" + msg_info "Please install dependencies manually: ${MISSING_DEPS[*]}" + exit 1 + ;; + esac + msg_ok "Dependency installation complete." +} +setup_condor_config() { + ENV_FILE="$CONDOR_DIR/.env" + + # Check if running in interactive mode + if ! is_interactive; then + msg_warn "Non-interactive mode detected. Skipping configuration prompts." + msg_info "Please configure $ENV_FILE manually after installation." + return + fi + + # 1. Check if .env already exists + if [ -f "$ENV_FILE" ]; then + echo "" + echo ">> Found existing $ENV_FILE file." + echo ">> Credentials already exist. Skipping setup params." + echo "" + return + fi + + # 2. Prompt for Telegram Bot Token with validation + echo "" + while true; do + prompt "Enter your Telegram Bot Token: " + read -r telegram_token < /dev/tty || telegram_token="" + telegram_token=$(echo "$telegram_token" | tr -d '[:space:]') + + if [ -z "$telegram_token" ]; then + msg_warn "Telegram Bot Token cannot be empty. Please try again." + continue + fi + + # Validate token format: digits:alphanumeric + if ! [[ "$telegram_token" =~ ^[0-9]+:[A-Za-z0-9_-]+$ ]]; then + msg_error "Invalid Telegram Bot Token format. Expected format: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz" + msg_info "Please enter a valid token." + continue + fi + + break + done + + # 3. Prompt for Admin User ID with validation + echo "" + echo "Enter your Telegram User ID (you will be the admin)." + echo "(Tip: Message @userinfobot on Telegram to get your ID)" + while true; do + prompt "Admin User ID: " + read -r admin_id < /dev/tty || admin_id="" + admin_id=$(echo "$admin_id" | tr -d '[:space:]') + + if [ -z "$admin_id" ]; then + msg_warn "Admin User ID cannot be empty. Please try again." + continue + fi + + # Validate user ID is numeric + if ! [[ "$admin_id" =~ ^[0-9]+$ ]]; then + msg_error "Invalid User ID. User ID should be numeric (e.g., 123456789)." + continue + fi + + break + done + + # 4. Prompt for OpenAI API Key (optional) + echo "" + echo "Enter your OpenAI API Key (optional, for AI features)." + echo "Press Enter to skip if not using AI features." + prompt "OpenAI API Key: " + read -r openai_key < /dev/tty || openai_key="" + openai_key=$(echo "$openai_key" | tr -d '[:space:]') + + # 5. Create .env file with escaped values + { + echo "TELEGRAM_TOKEN=$(escape_env_value "$telegram_token")" + echo "ADMIN_USER_ID=$(escape_env_value "$admin_id")" + if [ -n "$openai_key" ]; then + echo "OPENAI_API_KEY=$(escape_env_value "$openai_key")" + fi + } > "$ENV_FILE" + + msg_ok "Configuration saved to $ENV_FILE" +} +run_upgrade() { + msg_info "Existing installation detected. Starting upgrade/installation process..." + + # Upgrade Condor if it exists + if [ -d "$CONDOR_DIR" ]; then + msg_info "Upgrading Condor..." + if ! (cd "$CONDOR_DIR" && git pull); then + msg_error "Failed to update Condor repository" + exit 1 + fi + msg_ok "Condor repository updated." + else + msg_warn "Condor directory not found, skipping Condor upgrade." + fi + # Check if API needs to be installed (Condor exists but API doesn't) + if [ -d "$CONDOR_DIR" ] && [ ! -d "$API_DIR" ]; then + echo "" + msg_info "Hummingbot API is not installed yet." + INSTALL_API=$(prompt_yesno "Do you want to install Hummingbot API now?") + + if [ "$INSTALL_API" = "y" ]; then + echo "" + echo -e "${BLUE}Installing Hummingbot API:${NC}" + msg_info "Cloning Hummingbot API repository..." 
+ CREATED_DIRS+=("$API_DIR") + if ! git clone --depth 1 "$API_REPO" "$API_DIR"; then + msg_error "Failed to clone Hummingbot API repository" + exit 1 + fi + + msg_info "Setting up Hummingbot API (running: make setup)..." + if ! (cd "$API_DIR" && make setup); then + msg_error "Failed to run make setup for Hummingbot API" + exit 1 + fi + + msg_info "Deploying Hummingbot API (running: make deploy)..." + if ! (cd "$API_DIR" && make deploy); then + msg_error "Failed to deploy Hummingbot API" + exit 1 + fi + msg_ok "Hummingbot API installation complete!" + fi + # Upgrade Hummingbot API if it already exists + elif [ -d "$API_DIR" ]; then + msg_info "Upgrading Hummingbot API..." + if ! (cd "$API_DIR" && git pull); then + msg_error "Failed to update Hummingbot API repository" + exit 1 + fi + msg_ok "Hummingbot API repository updated." + fi + # Pull latest images for condor and hummingbot-api only + msg_info "Pulling latest Docker images (condor and hummingbot-api only)..." + + if [ -f "$CONDOR_DIR/docker-compose.yml" ]; then + msg_info "Updating Condor container..." + if ! (cd "$CONDOR_DIR" && $DOCKER_COMPOSE pull); then + msg_warn "Failed to pull Condor images, continuing anyway..." + fi + fi + if [ -f "$API_DIR/docker-compose.yml" ]; then + msg_info "Updating Hummingbot API container..." + if ! (cd "$API_DIR" && $DOCKER_COMPOSE pull); then + msg_warn "Failed to pull Hummingbot API images, continuing anyway..." + fi + fi + # Restart services + msg_info "Restarting services..." + if [ -f "$CONDOR_DIR/docker-compose.yml" ]; then + if ! (cd "$CONDOR_DIR" && $DOCKER_COMPOSE up -d --remove-orphans); then + msg_warn "Failed to restart Condor services" + fi + fi + if [ -f "$API_DIR/docker-compose.yml" ]; then + if ! (cd "$API_DIR" && $DOCKER_COMPOSE up -d --remove-orphans); then + msg_warn "Failed to restart Hummingbot API services" + fi + fi + msg_ok "Installation/upgrade complete!" + + echo "" + echo -e "${BLUE}Running services:${NC}" + msg_info "Check container status with: cd $CONDOR_DIR && $DOCKER_COMPOSE ps" + if [ -d "$API_DIR" ]; then + msg_info "Or: cd $API_DIR && $DOCKER_COMPOSE ps" + fi +} +run_installation() { + msg_info "Starting new installation..." + + SCRIPT_DIR="$(pwd)" + msg_ok "Installation directory: $SCRIPT_DIR" + # --- Clone and Setup Condor --- + echo "" + echo -e "${BLUE}Installing Condor Bot:${NC}" + msg_info "Cloning Condor repository..." + CREATED_DIRS+=("$CONDOR_DIR") + if ! git clone --depth 1 "$CONDOR_REPO" "$CONDOR_DIR"; then + msg_error "Failed to clone Condor repository" + exit 1 + fi + + # Setup Condor configuration + setup_condor_config + + msg_info "Setting up Condor (running: make setup)..." + if ! (cd "$CONDOR_DIR" && make setup); then + msg_error "Failed to run make setup for Condor" + exit 1 + fi + + msg_info "Deploying Condor (running: make deploy)..." + if ! (cd "$CONDOR_DIR" && make deploy); then + msg_error "Failed to deploy Condor" + exit 1 + fi + msg_ok "Condor installation complete!" + + # --- Prompt for API Installation --- + echo "" + INSTALL_API=$(prompt_yesno "Do you also want to install Hummingbot API on this machine?") + + if [ "$INSTALL_API" = "y" ]; then + echo "" + echo -e "${BLUE}Installing Hummingbot API:${NC}" + + msg_info "Cloning Hummingbot API repository..." + CREATED_DIRS+=("$API_DIR") + if ! git clone --depth 1 "$API_REPO" "$API_DIR"; then + msg_error "Failed to clone Hummingbot API repository" + exit 1 + fi + + msg_info "Setting up Hummingbot API (running: make setup)..." + if ! 
(cd "$API_DIR" && make setup); then + msg_error "Failed to run make setup for Hummingbot API" + exit 1 + fi + + msg_info "Deploying Hummingbot API (running: make deploy)..." + if ! (cd "$API_DIR" && make deploy); then + msg_error "Failed to deploy Hummingbot API" + exit 1 + fi + msg_ok "Hummingbot API installation complete!" + fi + # --- Summary --- + echo "" + echo -e "${GREEN}════════════════════════════════════════${NC}" + msg_ok "Installation Complete!" + echo -e "${GREEN}════════════════════════════════════════${NC}" + echo "" + echo -e "${BLUE}Installation Summary:${NC}" + msg_info "Installation directory: $SCRIPT_DIR" + msg_info "Condor is installed and running" + if [ "$INSTALL_API" = "y" ]; then + msg_info "Hummingbot API is installed and running" + fi + + echo "" + echo -e "${BLUE}Next Steps:${NC}" + msg_info "Check Condor status: cd $SCRIPT_DIR/$CONDOR_DIR && $DOCKER_COMPOSE ps" + if [ "$INSTALL_API" = "y" ]; then + msg_info "Check API status: cd $SCRIPT_DIR/$API_DIR && $DOCKER_COMPOSE ps" + fi + + echo "" + echo -e "${BLUE}To upgrade in the future:${NC}" + msg_info "Run this script with --upgrade flag from the deployment directory" + msg_info "Or re-run this script without flags if both repos exist" +} +# --- Main Execution --- +clear +echo -e "${GREEN}" +cat << "BANNER" + ██████╗ ██████╗ ███╗ ██╗██████╗ ██████╗ ██████╗ + ██╔════╝██╔═══██╗████╗ ██║██╔══██╗██╔═══██╗██╔══██╗ + ██║ ██║ ██║██╔██╗ ██║██║ ██║██║ ██║██████╔╝ + ██║ ██║ ██║██║╚██╗██║██║ ██║██║ ██║██╔══██╗ + ╚██████╗╚██████╔╝██║ ╚████║██████╔╝╚██████╔╝██║ ██║ + ╚═════╝ ╚═════╝ ╚═╝ ╚═══╝╚═════╝ ╚═════╝ ╚═╝ ╚═╝ +BANNER +echo -e "${NC}" +echo -e "${CYAN} Hummingbot Deploy Installer${NC}" echo "" - -# Display configuration summary -echo -e "${BLUE}📋 Configuration Summary${NC}" -echo "=======================" -echo -e "${CYAN}Security:${NC} Username: $USERNAME, Debug: $DEBUG_MODE" -echo -e "${CYAN}Broker:${NC} $BROKER_HOST:$BROKER_PORT" -echo -e "${CYAN}Database:${NC} ${DATABASE_URL%%@*}@[hidden]" -echo -e "${CYAN}Market Data:${NC} Cleanup: ${CLEANUP_INTERVAL}s, Timeout: ${FEED_TIMEOUT}s" -echo -e "${CYAN}Environment:${NC} $LOGFIRE_ENV" - -if [ -n "$AWS_API_KEY" ]; then - echo -e "${CYAN}AWS:${NC} Configured with S3 bucket: $S3_BUCKET" +detect_os_arch +check_disk_space +install_dependencies +check_docker_running +detect_docker_compose +# Determine installation or upgrade path +if [ "$UPGRADE_MODE" = "y" ] || ([ -d "$CONDOR_DIR" ] && [ -d "$API_DIR" ]) || ([ -d "$CONDOR_DIR" ] && [ -f "$CONDOR_DIR/docker-compose.yml" ]); then + run_upgrade else - echo -e "${CYAN}AWS:${NC} Not configured (optional)" -fi - -echo "" -echo -e "${GREEN}🐳 Pulling required Docker images...${NC}" - -# Pull Docker images in parallel -docker compose pull & -docker pull hummingbot/hummingbot:latest & - -# Wait for both operations to complete -wait - -echo -e "${GREEN}✅ All Docker images pulled successfully!${NC}" -echo "" - -# Check if password verification file exists -if [ ! 
-f "bots/credentials/master_account/.password_verification" ]; then - echo -e "${YELLOW}📌 Note:${NC} Password verification file will be created on first startup" - echo -e " Location: ${BLUE}bots/credentials/master_account/.password_verification${NC}" - echo "" + run_installation fi - -echo -e "${GREEN}🚀 Starting Hummingbot Deploy services...${NC}" - -# Start the deployment -docker compose up -d - -echo "" -echo -e "${GREEN}🎉 Deployment Complete!${NC}" -echo "" - -echo -e "Your services are now running:" -echo -e "📊 ${BLUE}Dashboard:${NC} http://localhost:8501" -echo -e "🔧 ${BLUE}API Docs:${NC} http://localhost:8000/docs" -echo -e "📡 ${BLUE}MQTT Broker:${NC} localhost:1883" -echo "" - -echo -e "Next steps:" -echo "1. Access the Dashboard: http://localhost:8501" -echo "2. Configure your trading strategies" -echo "3. Monitor logs: docker compose logs -f" -echo "" -echo -e "${PURPLE}💡 Pro tip:${NC} You can modify environment variables in .env file anytime" -echo -e "${PURPLE}📚 Documentation:${NC} Check CLAUDE.md for project guidance" -echo -e "${PURPLE}🔒 Security:${NC} The password verification file secures bot credentials" -echo "" -echo -e "${GREEN}Happy Trading! 🤖💰${NC}" diff --git a/setup_dydx.sh b/setup_dydx.sh deleted file mode 100644 index be70067072..0000000000 --- a/setup_dydx.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash - -# Pulling the required Docker images -docker compose pull -docker pull hummingbot/hummingbot:latest_dydx - -# Creating .env file with the required environment variables -echo "CONFIG_PASSWORD=a" > .env -echo "BOTS_PATH=$(pwd)" >> .env - -# Running docker-compose in detached mode -docker compose -f docker-compose-dydx.yml up -d \ No newline at end of file