A cloud provider integration plugin for IBM Spectrum Symphony Host Factory that enables dynamic provisioning of compute resources through a REST API and a cleanly layered architecture.
The Open Resource Broker integrates IBM Spectrum Symphony Host Factory with cloud providers, applying industry-standard patterns including Domain-Driven Design (DDD), Command Query Responsibility Segregation (CQRS), and clean architecture principles.
Currently Supported Providers:
- AWS - Amazon Web Services (RunInstances, EC2Fleet, SpotFleet, Auto Scaling Groups)
  - Context field support for EC2Fleet, SpotFleet, and Auto Scaling Groups

Key Features:
- HostFactory Compatible Output: Native compatibility with IBM Symphony Host Factory requirements
- Multi-Provider Architecture: Extensible provider system supporting multiple cloud platforms
- REST API Interface: OpenAPI/Swagger-documented REST endpoints (see the example after this list)
- Configuration-Driven: Dynamic provider selection and configuration through centralized config system
- Clean Architecture: Domain-driven design with clear separation of concerns
- CQRS Pattern: Command Query Responsibility Segregation for scalable operations
- Event-Driven Architecture: Domain events with optional event publishing for template operations
- Dependency Injection: Comprehensive DI container with automatic dependency resolution
- Strategy Pattern: Pluggable provider strategies with runtime selection
- Resilience Patterns: Built-in retry mechanisms, circuit breakers, and error handling
- Flexible Field Control: Configurable output fields for different use cases
- Multiple Output Formats: JSON, YAML, Table, and List formats
- Legacy Compatibility: Support for camelCase field naming conventions
- Professional Tables: Rich Unicode table formatting for CLI output
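Because the broker exposes these capabilities over HTTP, the API can be exercised from any client. The sketch below is illustrative only: it assumes the service is running locally on port 8000 (as in the Docker quick start that follows), uses the third-party requests library, and guesses at the response shape of the templates endpoint.

```python
# Illustrative sketch: calling the broker's REST API with the `requests` library.
# Endpoint paths (/health, /api/v1/templates) are taken from the curl examples
# later in this README; the response shape is an assumption.
import requests

BASE_URL = "http://localhost:8000"  # assumed default host and port

# Health check
health = requests.get(f"{BASE_URL}/health", timeout=10)
print("health status:", health.status_code)

# List available templates
response = requests.get(f"{BASE_URL}/api/v1/templates", timeout=10)
response.raise_for_status()
payload = response.json()
for template in payload.get("templates", []):  # key name assumed
    print(template.get("templateId"), template.get("maxNumber"))
```

The same endpoints are shown with curl in the REST API examples further below.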
To get started quickly with Docker Compose:
```bash
# Clone repository
git clone https://github.com/awslabs/open-resource-broker.git
cd open-resource-broker
# Configure environment
cp .env.example .env
# Edit .env with your configuration
# Start services
docker-compose up -d
# Verify deployment
curl http://localhost:8000/health
```

```bash
# Install from PyPI
pip install orb-py
# Verify installation
orb --version
orb --help
```

```bash
# Auto-detects best location (no sudo needed if not available)
make install-system
# Or install to custom directory (requires sudo if needed)
ORB_INSTALL_DIR=/opt/orb make install-system
# Installation will output the actual location and next steps
# Add to PATH as instructed by the installer
orb --version
```

```bash
# Clone repository
git clone https://github.com/awslabs/open-resource-broker.git
cd open-resource-broker
# Install local development environment
make dev-install
# Or full development workflow (recommended)
make dev
```

| Method | Location | Use Case | Command |
|---|---|---|---|
| PyPI | System Python | End users | pip install orb-py |
| System | /usr/local/orb/ or ~/.local/orb/ | Production deployment | make install-system |
| Local | ./.venv/ | Development | make dev-install |
For faster dependency resolution and installation, use uv:
```bash
# Install uv (if not already installed)
pip install uv
# Clone repository
git clone https://github.com/awslabs/open-resource-broker.git
cd open-resource-broker
# Fast development setup with uv
make dev-install
# Generate lock files for reproducible builds
make uv-lock
# Sync with lock files (fastest)
make uv-sync-dev
```

Alternatively, install the package in editable mode with development extras:
```bash
pip install -e ".[dev]"
```
## Usage Examples
### MCP Server Mode (AI Assistant Integration)
The plugin provides a Model Context Protocol (MCP) server for AI assistant integration:
```bash
# Start MCP server in stdio mode (recommended for AI assistants)
orb mcp serve --stdio
# Start MCP server as TCP server (for development/testing)
orb mcp serve --port 3000 --host localhost
# Configure logging level
orb mcp serve --stdio --log-level DEBUG
```

The MCP server exposes all CLI functionality as tools for AI assistants:
- Provider Management: check_provider_health, list_providers, get_provider_config, get_provider_metrics
- Template Operations: list_templates, get_template, validate_template
- Infrastructure Requests: request_machines, get_request_status, list_return_requests, return_machines
Access domain objects via MCP resource URIs:
- templates:// - Available compute templates
- requests:// - Provisioning requests
- machines:// - Compute instances
- providers:// - Cloud providers
Pre-built prompts for common infrastructure tasks:
- provision_infrastructure - Guide infrastructure provisioning workflows
- troubleshoot_deployment - Help diagnose deployment issues
- infrastructure_best_practices - Provide deployment best practices
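Given a connected ClientSession (set up as in the Python MCP Client example below), resources and prompts can be browsed with the standard mcp SDK calls. The sketch below is illustrative; the provision_infrastructure prompt may accept arguments not shown here.

```python
# Illustrative sketch: browsing ORB resources and prompts over an already
# connected MCP ClientSession (see the "Python MCP Client" example below).
async def explore_resources_and_prompts(session):
    # Enumerate resource URIs (templates://, requests://, machines://, providers://)
    resources = await session.list_resources()
    for resource in resources.resources:
        print(resource.uri, "-", resource.name)

    # Enumerate pre-built prompts and fetch one of them
    prompts = await session.list_prompts()
    print("prompts:", [p.name for p in prompts.prompts])

    prompt = await session.get_prompt("provision_infrastructure")
    for message in prompt.messages:
        print(message.role, ":", message.content)
```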
Claude Desktop Configuration:
```json
{
  "mcpServers": {
    "open-resource-broker": {
      "command": "orb",
      "args": ["mcp", "serve", "--stdio"]
    }
  }
}
```

Python MCP Client:
```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def use_hostfactory():
    server_params = StdioServerParameters(
        command="orb",
        args=["mcp", "serve", "--stdio"]
    )
    # Launch the server as a subprocess and connect over stdio
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()

            # Request infrastructure
            result = await session.call_tool(
                "request_machines",
                {"template_id": "EC2FleetInstant", "count": 3}
            )

asyncio.run(use_hostfactory())
```

```bash
# List available templates
orb templates list
orb templates list --long # Detailed information
orb templates list --format table # Table format
# Show specific template
orb templates show TEMPLATE_ID
# Create new template
orb templates create --file template.json
orb templates create --file template.yaml --validate-only
# Update existing template
orb templates update TEMPLATE_ID --file updated-template.json
# Delete template
orb templates delete TEMPLATE_ID
orb templates delete TEMPLATE_ID --force # Force without confirmation
# Validate template configuration
orb templates validate --file template.json
# Refresh template cache
orb templates refresh
orb templates refresh --force # Force complete refresh
```

```bash
# Request machines
orb requests create --template-id my-template --count 5
# Check request status
orb requests status --request-id req-12345
# List active machines
orb machines list
# Return machines
orb requests return --request-id req-12345
```

```bash
orb storage list # List available storage strategies
orb storage show # Show current storage configuration
orb storage health # Check storage health
orb storage validate # Validate storage configuration
orb storage test # Test storage connectivity
orb storage metrics # Show storage performance metrics
```

```bash
# Get available templates
curl -X GET "http://localhost:8000/api/v1/templates"
# Create machine request
curl -X POST "http://localhost:8000/api/v1/requests" \
-H "Content-Type: application/json" \
-d '{"templateId": "my-template", "maxNumber": 5}'
# Check request status
curl -X GET "http://localhost:8000/api/v1/requests/req-12345"The plugin implements Clean Architecture principles with the following layers:
- Domain Layer: Core business logic, entities, and domain services
- Application Layer: Use cases, command/query handlers, and application services
- Infrastructure Layer: External integrations, persistence, and technical concerns
- Interface Layer: REST API, CLI, and external interfaces
- Domain-Driven Design (DDD): Rich domain models with clear bounded contexts
- CQRS: Separate command and query responsibilities for scalability
- Ports and Adapters: Hexagonal architecture for testability and flexibility
- Strategy Pattern: Pluggable provider implementations
- Factory Pattern: Dynamic object creation based on configuration
- Repository Pattern: Data access abstraction with multiple storage strategies
- Clean Architecture: Strict layer separation with dependency inversion principles
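As an illustration of how the strategy and ports-and-adapters patterns above combine, the sketch below defines a provider port with an AWS adapter chosen by a small factory. All names here (ProviderPort, AwsEc2FleetAdapter, select_provider) are hypothetical and do not correspond to classes in this codebase.

```python
# Illustrative sketch only: hypothetical names, not the plugin's real classes.
from dataclasses import dataclass
from typing import Protocol


class ProviderPort(Protocol):
    """Port (interface) that the application layer depends on."""

    def request_machines(self, template_id: str, count: int) -> str:
        """Provision machines and return a request id."""
        ...


@dataclass
class AwsEc2FleetAdapter:
    """Adapter implementing the port with an AWS-specific strategy."""

    region: str

    def request_machines(self, template_id: str, count: int) -> str:
        # Real code would call the EC2 CreateFleet API here; this stub just echoes.
        return f"req-{template_id}-{count}-{self.region}"


def select_provider(config: dict) -> ProviderPort:
    """Factory: pick a provider strategy at runtime from configuration."""
    if config.get("type") == "aws":
        return AwsEc2FleetAdapter(region=config.get("region", "us-east-1"))
    raise ValueError(f"Unsupported provider type: {config.get('type')}")


provider = select_provider({"type": "aws", "region": "us-east-1"})
print(provider.request_machines("EC2FleetInstant", 3))
```

Because command and query handlers depend only on the port, provider strategies (RunInstances, EC2 Fleet, Spot Fleet, Auto Scaling Groups) can be swapped without touching the domain or application layers.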
Configuration is driven by environment variables:
```bash
# Provider configuration
PROVIDER_TYPE=aws
AWS_REGION=us-east-1
AWS_PROFILE=default
# API configuration
API_HOST=0.0.0.0
API_PORT=8000
# Storage configuration
STORAGE_TYPE=dynamodb
STORAGE_TABLE_PREFIX=hostfactory
# Scheduler directory configuration
# HostFactory scheduler
HF_PROVIDER_WORKDIR=/path/to/working/directory
HF_PROVIDER_CONFDIR=/path/to/config/directory
HF_PROVIDER_LOGDIR=/path/to/logs/directory
# Default scheduler
DEFAULT_PROVIDER_WORKDIR=/path/to/working/directory
DEFAULT_PROVIDER_CONFDIR=/path/to/config/directory
DEFAULT_PROVIDER_LOGDIR=/path/to/logs/directory
```

Providers can also be configured in YAML:
```yaml
# config/providers.yml
providers:
  - name: aws-primary
    type: aws
    config:
      region: us-east-1
      profile: default
    handlers:
      default: ec2_fleet
      spot_fleet:
        enabled: true
      auto_scaling_group:
        enabled: true
    template_defaults:
```

Prerequisites:
- Python 3.10+ (tested on 3.10, 3.11, 3.12, 3.13, 3.14)
- Docker and Docker Compose
- AWS CLI (for AWS provider)
```bash
# Clone repository
git clone https://github.com/awslabs/open-resource-broker.git
cd open-resource-broker
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
# Install development dependencies
pip install -r requirements-dev.txt
# Install in development mode
pip install -e .
# Run tests
make test
# Format code (Ruff replaces Black + isort)
make format
# Check code quality
make lint
# Run before committing (replaces pre-commit hooks)
make pre-commit
```

```bash
# Run all tests
make test
# Run with coverage
make test-coverage
# Run integration tests
make test-integration
# Run performance tests
make test-performance
```

These badges show real-time project health metrics including workflow success rates, test coverage, code quality indicators, and performance metrics. Dynamic badges are populated by automated workflows and may show "resource not found" until the first workflow runs complete.
The project uses semantic-release for automated version management:
```bash
# Create a new release
git commit -m "release: add new features and bug fixes"
git push origin main
```

Release Process:
- Uses conventional commits for version calculation:
  - feat: → minor version bump
  - fix: → patch version bump
  - BREAKING CHANGE: → major version bump
- Commit with "release:" prefix triggers semantic-release
- Automatically publishes to PyPI, builds containers, and deploys documentation
See Release Management Guide for complete documentation.
Comprehensive documentation is available at:
- Architecture Guide: Understanding the system design and patterns
- API Reference: Complete REST API documentation
- Configuration Guide: Detailed configuration options
- Developer Guide: Contributing and extending the plugin
- Deployment Guide: Production deployment scenarios
The plugin is designed for seamless integration with IBM Spectrum Symphony Host Factory:
- API Compatibility: Full compatibility with HostFactory API requirements
- Attribute Generation: Automatic CPU and RAM specifications based on AWS instance types
- Output Format Compliance: Native support for expected output formats with accurate resource specifications
- Configuration Integration: Easy integration with existing HostFactory configurations
- Monitoring Integration: Compatible with HostFactory monitoring and logging
The plugin generates HostFactory attributes based on AWS instance types:
```json
{
  "templates": [
    {
      "templateId": "t3-medium-template",
      "maxNumber": 5,
      "attributes": {
        "type": ["String", "X86_64"],
        "ncpus": ["Numeric", "2"],
        "nram": ["Numeric", "4096"]
      }
    },
    {
      "templateId": "m5-xlarge-template",
      "maxNumber": 3,
      "attributes": {
        "type": ["String", "X86_64"],
        "ncpus": ["Numeric", "4"],
        "nram": ["Numeric", "16384"]
      }
    }
  ]
}
```

Supported Instance Types: Common AWS instance types with appropriate CPU and RAM mappings.
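To make the attribute generation concrete, the sketch below derives ncpus and nram from an instance type with a small lookup table. The table and function are assumptions for illustration; the plugin itself resolves these values from AWS instance type specifications.

```python
# Illustrative sketch: deriving HostFactory template attributes from an AWS
# instance type. The lookup table and function name are assumptions; values
# for t3.medium and m5.xlarge mirror the JSON example above.
INSTANCE_SPECS = {
    "t3.medium": {"ncpus": 2, "nram_mb": 4096},
    "m5.xlarge": {"ncpus": 4, "nram_mb": 16384},
}


def hostfactory_attributes(instance_type: str) -> dict:
    spec = INSTANCE_SPECS[instance_type]
    return {
        "type": ["String", "X86_64"],
        "ncpus": ["Numeric", str(spec["ncpus"])],
        "nram": ["Numeric", str(spec["nram_mb"])],
    }


print(hostfactory_attributes("m5.xlarge"))
```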
Support:
- Documentation: Comprehensive guides and API reference
- Issues: GitHub Issues for bug reports and feature requests
- Discussions: Community discussions and questions
We welcome contributions! Please see our Contributing Guide for details on:
- Code style and standards
- Testing requirements
- Pull request process
- Development workflow
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
For security concerns, please see our Security Policy for responsible disclosure procedures.