Releases: aipartnerup/apflow
Release 0.10.0
Changed
- Refactor import paths from `aipartnerupflow` to `apflow` across test files
- Updated import statements in various test files to reflect the new module structure under `apflow`
- Ensured all references to `aipartnerupflow` are replaced with `apflow` in test modules related to extensions, including but not limited to:
  - crewai
  - docker
  - generate
  - grpc
  - http
  - llm
  - mcp
  - ssh
  - stdio
  - websocket
- Adjusted integration tests to align with the new import paths
Release 0.9.0
Added
- Hook Execution Context for Database Access (see the sketch after this list)
  - New `get_hook_repository()` function allows hooks to access the database within the task execution context
  - New `get_hook_session()` function provides direct access to the database session in hooks
  - Hooks now share the same database session/transaction as TaskManager (no need for separate sessions)
  - Auto-persistence for `task.inputs` modifications in pre-hooks (detected and saved automatically)
  - Explicit repository methods available for other field modifications (name, priority, status, etc.)
  - Thread-safe context isolation using Python's `ContextVar` (similar to Flask/Celery patterns)
  - Added `set_hook_context()` and `clear_hook_context()` internal functions for context management
  - Exported to public API: `aipartnerupflow.get_hook_repository` and `aipartnerupflow.get_hook_session`
  - Added comprehensive test coverage (16 tests):
    - Hook context basic operations and lifecycle
    - Multiple hooks sharing the same session instance
    - Hooks sharing transaction context and seeing uncommitted changes
    - Hooks cooperating via a shared session
    - Auto-persistence of `task.inputs` modifications
    - Explicit field updates via repository methods
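  A minimal pre-hook sketch of the above. Only `get_hook_repository()` and the auto-persistence of `task.inputs` are documented; the `(task, inputs)` hook signature and the `update_task()` repository method are illustrative assumptions:

  ```python
  # Hypothetical pre-hook; the hook signature and update_task() call are assumptions.
  from aipartnerupflow import get_hook_repository  # documented public export

  async def my_pre_hook(task, inputs):
      repo = get_hook_repository()          # shares TaskManager's session/transaction
      task.inputs["audit"] = "pre-checked"  # auto-persisted by the framework
      await repo.update_task(task.id, {"priority": 5})  # explicit update (illustrative)
  ```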
- Database Session Safety Enhancements
  - Added `flag_modified()` calls for all JSON fields (result, inputs, dependencies, params, schemas) to ensure SQLAlchemy detects in-place modifications (see the sketch after this list)
  - Added `db.refresh()` after critical status updates to ensure fresh data from the database
  - Added concurrent execution protection: the same task tree cannot run multiple times simultaneously
  - Returns `{"status": "already_running"}` when attempting concurrent execution of the same task tree
  - Added 12 new tests for concurrent protection and JSON field persistence
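  For context, `flag_modified()` is a standard SQLAlchemy call; the self-contained snippet below (with an assumed toy model, not apflow's actual `TaskModel`) illustrates why it is needed for JSON columns:

  ```python
  # Why flag_modified() matters: SQLAlchemy does not track in-place mutation of
  # JSON columns, so the change must be explicitly flagged before commit.
  from sqlalchemy import JSON, Column, Integer, create_engine
  from sqlalchemy.orm import Session, declarative_base
  from sqlalchemy.orm.attributes import flag_modified

  Base = declarative_base()

  class Task(Base):  # toy stand-in model for illustration
      __tablename__ = "tasks"
      id = Column(Integer, primary_key=True)
      inputs = Column(JSON, default=dict)

  engine = create_engine("sqlite://")
  Base.metadata.create_all(engine)

  with Session(engine) as session:
      task = Task(inputs={})
      session.add(task)
      session.commit()

      task.inputs["retries"] = 3      # in-place mutation: invisible to the unit of work
      flag_modified(task, "inputs")   # explicitly mark the JSON field as dirty
      session.commit()                # without the flag, this commit would persist nothing
  ```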
- CLI Extension Decorator (`cli_register`)
  - New `@cli_register()` decorator for registering CLI extensions, similar to `@executor_register()` (see the sketch after this list)
  - Decorator supports `name`, `help`, and `override` parameters
  - Auto-derives the command name from the class name (converts `_` to `-`)
  - New `get_cli_registry()` function to access registered CLI extensions
  - CLI extensions are loaded from the decorator registry before entry_points discovery
  - Added comprehensive test coverage (18 tests) for CLI extension decorators
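  A hedged sketch of the decorator in use; only the decorator name and its `name`/`help`/`override` parameters are documented above, while the import path and the class interface are assumptions:

  ```python
  # Hypothetical CLI extension registration; import path and class body are illustrative.
  from aipartnerupflow import cli_register  # assumed export location

  @cli_register(name="db-stats", help="Print database statistics")
  class DbStats:
      def run(self) -> None:
          print("tasks: 128")
  ```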
- Exception Handling Architecture
  - New exception hierarchy based on FastAPI/production framework best practices (see the sketch after this list)
  - `ApflowError` base exception for all framework-specific errors
  - `BusinessError` for expected user/configuration errors (logged without stack traces)
  - `ValidationError` for input validation failures
  - `ConfigurationError` for missing configuration/dependencies
  - `SystemError` for unexpected system-level errors (logged with stack traces)
  - `ExecutorError` for executor runtime failures
  - `StorageError` for database/storage failures
  - Created `core/execution/errors.py` with comprehensive exception documentation
  - Created `docs/development/exception-handling-standards.md` with implementation guidelines
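  The notes above imply a hierarchy roughly like the following; the parent-child relationships below `ApflowError` are assumptions (the authoritative definitions live in `core/execution/errors.py`):

  ```python
  # Assumed shape of the hierarchy; class names and purposes are documented,
  # the subclass relationships are illustrative.
  class ApflowError(Exception):
      """Base for all framework-specific errors."""

  class BusinessError(ApflowError):
      """Expected user/configuration errors; logged without stack traces."""

  class ValidationError(BusinessError):
      """Input validation failures."""

  class ConfigurationError(BusinessError):
      """Missing configuration or dependencies."""

  class SystemError(ApflowError):  # note: shadows the builtin of the same name
      """Unexpected system-level errors; logged with stack traces."""

  class ExecutorError(ApflowError):
      """Executor runtime failures."""

  class StorageError(ApflowError):
      """Database/storage failures."""
  ```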
Changed
- Executor Error Handling Refactoring
  - All executors now raise exceptions instead of returning error dictionaries
  - Technical exceptions (TimeoutError, ConnectionError, etc.) now propagate naturally to TaskManager
  - Executors validate inputs and raise `ValidationError` or `ConfigurationError` for expected failures
  - Updated executors: `DockerExecutor`, `GrpcExecutor`, `RestExecutor`, `SshExecutor`, `CommandExecutor`, `LLMExecutor`
  - TaskManager now catches all exceptions, marks tasks as failed, and logs appropriately based on exception type
  - `BusinessError` logged without stack trace (clean logs), other exceptions logged with full stack trace (see the sketch after this list)
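  A hedged sketch of that logging policy, not TaskManager's actual code; the import path is an assumption based on the file noted above:

  ```python
  import logging
  from aipartnerupflow.core.execution.errors import BusinessError  # assumed path

  logger = logging.getLogger("apflow")

  async def run_with_policy(executor, task, inputs):
      try:
          return await executor.execute(inputs)
      except BusinessError as exc:
          logger.error("Task %s failed: %s", task.id, exc)          # clean log, no stack trace
          raise
      except Exception:
          logger.exception("Task %s failed unexpectedly", task.id)  # full stack trace
          raise
  ```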
- CrewAI Executor/Batch Executor Renaming and Test Fixes
  - `crewai/crew_manager.py` → `crewai/crewai_executor.py`, with the class name updated to `CrewaiExecutor`
  - `crewai/batch_manager.py` → `crewai/batch_crewai_executor.py`, with the class name updated to `BatchCrewaiExecutor`
  - All related test cases (`test_crewai_executor.py`, `test_batch_crewai_executor.py`, etc.) have been batch-updated with corrected patch paths, mocks, imports, and class names to align with the new naming
  - Resolved `AttributeError: module 'aipartnerupflow.extensions.crewai' has no attribute 'crew_manager'` and similar test failures caused by the renaming
Release 0.8.0
Added
- LLM Executor Integration
  - Added `LLMExecutor` (`llm_executor`) for direct LLM interaction via LiteLLM (see the example after this list)
  - Supports a unified `model` parameter for 100+ providers (OpenAI, Anthropic, Gemini, etc.)
  - Support for `stream=True` in inputs or context metadata for Server-Sent Events (SSE)
  - Automatic API key handling via `LLMKeyConfigManager` or environment variables
  - Auto-registration via the extensions mechanism
  - Added `[llm]` optional dependency including `litellm`
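  A hypothetical task definition targeting the LLM executor; `model` and `stream` are documented above, while the `prompt` key and the LiteLLM-style model string are assumptions:

  ```python
  # Illustrative task dict; "schemas.method" must name a registered executor ID.
  task = {
      "name": "summarize release notes",
      "schemas": {"method": "llm_executor"},
      "inputs": {
          "model": "openai/gpt-4o-mini",  # unified LiteLLM-style model name (example)
          "stream": False,
          "prompt": "Summarize the 0.8.0 changes in one sentence.",  # assumed key
      },
  }
  ```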
- CLI: Plugin Mechanism for Extensions
  - Added `CLIExtension` class to facilitate creating CLI subcommands in external projects
  - Implemented dynamic subcommand discovery using Python `entry_points` (`aipartnerupflow.cli_plugins`)
  - Allows projects like `aipartnerupflow-demo` to register commands (e.g., `apflow users stat`) without modifying the core library
  - Supports both full `typer.Typer` apps and single-command callables as plugins (see the sketch after this list)
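  A sketch of a `typer.Typer` plugin under the documented entry-point group; the project and module names are illustrative:

  ```python
  # myproject/cli.py — a Typer app exposed as an apflow CLI plugin.
  # Registered in the consuming project's pyproject.toml (illustrative):
  #
  #   [project.entry-points."aipartnerupflow.cli_plugins"]
  #   users = "myproject.cli:users_app"
  import typer

  users_app = typer.Typer(help="User management commands")

  @users_app.command()
  def stat() -> None:
      """Invoked as: apflow users stat"""
      typer.echo("users: 42")
  ```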
- CLI: Improved Task Count Output
  - Changed the default output format of `apflow tasks count` from `json` to `table` for better terminal readability
Changed
- CLI: Simplified `apflow tasks` commands
  - `apflow tasks count` now defaults to providing comprehensive database statistics grouped by status
  - Removed redundant `--all` and `--status` flags from the `count` command (database statistics are now the default)
  - Renamed the `apflow tasks all` command to `apflow tasks list` for better alignment with API naming conventions
  - Removed the legacy `apflow tasks list` command (which only showed running tasks)
  - The new `apflow tasks list` command lists all tasks from the database with support for filtering and pagination
Fixed
- Tests: Infrastructure and LLM Integration
  - Updated `tests/conftest.py` to automatically load `.env` file environment variables at the start of the test session
  - Added auto-registration for `LLMExecutor` in the test `conftest.py` fixture
  - Fixed `LLMExecutor` integration tests to correctly use real API keys from `.env` when available
Release 0.7.3
Fixed
- CLI: event-loop handling for async database operations
  - Ensured async database sessions and repositories are created and closed inside the same event loop to avoid "no running event loop" and "Event loop is closed" errors
  - Updated `apflow tasks` commands to run async work in a safe context
  - Added `nest_asyncio` support for nested event loops in test environments
- Logging: clean CLI output by default
  - Default log level for the library is now `ERROR` to keep CLI output clean
  - Support `LOG_LEVEL` and `DEBUG` environment variables to override logging when needed
  - Debug logs can be enabled with `LOG_LEVEL=DEBUG apflow ...`
- Extensions registration noise reduced
  - Demoted expected registration instantiation messages to `DEBUG` (no longer printed by default)
  - This prevents benign initialization messages from appearing during normal CLI runs
- Miscellaneous
  - Added `nest_asyncio` to CLI optional dependencies to improve compatibility in nested-loop contexts
Release 0.7.2
Fixed
- Documentation Corrections for the `schemas.method` Field
  - Clarified that `schemas.method` is a required field when `schemas` is provided
  - Updated documentation to explicitly state that `schemas.method` must match an executor ID from the extensions registry (see the example after this list)
  - Fixed all documentation examples to use real executor IDs instead of placeholder values
  - Updated examples across all documentation files:
    - `docs/api/http.md`: Replaced generic `"executor_id"` with concrete IDs like `"system_info_executor"`, `"rest_executor"`, `"command_executor"`
    - `docs/getting-started/quick-start.md`: Updated all task examples to use valid executor IDs
    - `docs/guides/cli.md`: Fixed CLI command examples with correct executor IDs
    - `docs/development/design/cli-design.md`: Updated design documentation examples
    - `docs/development/setup.md`: Fixed setup guide examples
  - Fixed the `generate_executor.py` LLM prompt to correctly instruct the LLM to use `schemas.method` (not `name`) as the executor ID
  - Updated task structure examples in the LLM prompt to reflect correct usage
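  A minimal task illustrating the rule; `system_info_executor` is one of the concrete IDs named above, and the remaining fields follow the generic task format:

  ```python
  # "schemas.method" must match a registered executor ID, not a placeholder.
  task = {
      "name": "collect host facts",                   # free-form task name
      "schemas": {"method": "system_info_executor"},  # real executor ID
      "inputs": {},
  }
  ```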
- API Endpoint Test Coverage
  - Added missing test cases for API endpoints:
    - `test_jsonrpc_tasks_list`: Tests the `tasks.list` endpoint with pagination
    - `test_jsonrpc_tasks_running_status`: Tests the `tasks.running.status` endpoint with array format
    - `test_jsonrpc_tasks_running_count`: Tests the `tasks.running.count` endpoint
    - `test_jsonrpc_tasks_cancel`: Tests the `tasks.cancel` endpoint with array format
    - `test_jsonrpc_tasks_generate`: Tests the `tasks.generate` endpoint for task tree generation
  - Fixed test parameter format issues:
    - `tasks.running.status` and `tasks.cancel` now correctly use the `task_ids` array parameter instead of a single `task_id`
    - Tests now expect array responses instead of single-object responses
  - All API endpoints now have comprehensive test coverage
- CLI Command Test Coverage
  - Added `test_tasks_watch` test cases for the `tasks watch` CLI command
  - Uses mocks to avoid interactive `Live` display component issues in automated tests
  - Tests parameter validation and basic functionality
  - Properly handles error messages in stderr
- API Documentation Completeness
  - Added the missing response example for the `tasks.running.status` endpoint
  - Includes the complete response format with all fields (task_id, context_id, status, progress, error, is_running, timestamps)
  - Documents error cases (not_found, permission_denied)
  - Clarifies that the method returns an array format even for single-task queries
Added
- Comprehensive Documentation Review
  - Verified all documentation examples use valid executor IDs
  - Ensured all examples are functional and can be parsed correctly
  - Validated that all CLI commands have corresponding test cases
  - Confirmed API endpoint documentation matches the actual implementation
Release 0.7.1
Fixed
- DuckDB Custom Path Directory Creation
  - Fixed an issue where DuckDB would fail when using custom directory paths that don't exist
  - Added an `_ensure_database_directory_exists()` function to automatically create parent directories before creating DuckDB connections (see the sketch after this list)
  - Directory creation is now handled automatically in `create_session()`, `SessionPoolManager.initialize()`, and `PooledSessionContext.__init__()`
  - Skips directory creation for in-memory databases (`:memory:`) and handles errors gracefully with appropriate logging
  - Users can now specify custom DuckDB file paths without manually creating directories first
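  A minimal sketch of the described behavior, assuming a path-string argument; the real helper is private and may differ:

  ```python
  # Illustrative re-creation of the behavior, not the actual helper.
  from pathlib import Path

  def ensure_database_directory_exists(db_path: str) -> None:
      if db_path == ":memory:":  # in-memory databases need no directory
          return
      Path(db_path).expanduser().resolve().parent.mkdir(parents=True, exist_ok=True)
  ```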
- Missing Return Type Annotations
  - Added the missing return type annotation `-> None` to the `check_input_schema()` function in `core/utils/helpers.py`
  - Added the missing return type annotation `-> ParseResult` to the `validate_url()` function in `core/utils/helpers.py`
  - Fixed type checker errors and ensured 100% type annotation compliance as required by code quality rules
- Module-Level Resource Creation
  - Refactored `core/storage/factory.py` to eliminate module-level global variables for database sessions
  - Replaced the `_default_session` and `_session_pool_manager` module-level globals with a `SessionRegistry` class (see the sketch after this list)
  - Session state is now encapsulated in the `SessionRegistry` class, following dependency injection principles
  - All session management functions (`get_default_session()`, `set_default_session()`, `reset_default_session()`, `get_session_pool_manager()`, `reset_session_pool_manager()`) now use `SessionRegistry` class methods
  - Maintains full backward compatibility: all public APIs remain unchanged
  - Follows code quality rules requiring dependency injection instead of module-level resource creation
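  The shape of the refactor, as a hedged sketch (the actual class holds more state, including the pool manager):

  ```python
  # Illustrative shape only: state moves from module-level globals into a class
  # behind the unchanged public functions.
  class SessionRegistry:
      _default_session = None

      @classmethod
      def get_default_session(cls):
          return cls._default_session

      @classmethod
      def set_default_session(cls, session) -> None:
          cls._default_session = session

      @classmethod
      def reset_default_session(cls) -> None:
          cls._default_session = None

  def get_default_session():
      """Public API is unchanged; it now delegates to the registry."""
      return SessionRegistry.get_default_session()
  ```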
Release 0.7.0
Added
- Task Context Sharing and LLM Key Management
  - Task Context Sharing: TaskManager now passes the entire `task` object (TaskModel instance) to executors
    - Executors can access all task fields, including custom TaskModel fields, via `self.task` (see the sketch after this list)
    - Supports custom TaskModel classes with additional fields
    - Enables executors to modify task context (e.g., update status, progress, custom fields)
    - BaseTask uses weak references (`weakref.ref`) to store task objects, preventing memory leaks
    - Task context is automatically cleared after execution or cancellation
    - `task_id` is stored separately for future extension (e.g., Redis-based task storage)
  - Unified user_id Access: BaseTask provides a `user_id` property that automatically retrieves from `task.user_id`
    - Executors can use `self.user_id` instead of `inputs.get("user_id")`
    - Falls back to `_user_id` when the task is not available (for backward compatibility and testing)
    - All LLM executors (`generate_executor`, `crew_manager`) now use `self.user_id`
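  A hypothetical executor using the new context; the `execute()` signature and the `BaseTask` import path are assumptions, while the `self.task` and `self.user_id` semantics are documented above:

  ```python
  # Illustrative executor sketch.
  from aipartnerupflow.core.base import BaseTask  # assumed import path

  class AuditExecutor(BaseTask):
      async def execute(self, inputs: dict) -> dict:
          # self.task is the full TaskModel (held via weakref by the framework)
          name = self.task.name if self.task else "unknown"
          # self.user_id resolves task.user_id, falling back to _user_id
          return {"task": name, "user_id": self.user_id}
  ```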
  - Unified LLM Key Retrieval: Centralized LLM key management with a context-aware priority order
    - New `get_llm_key()` function with unified priority logic for API and CLI contexts
    - API context priority: header → LLMKeyConfigManager → environment variables
    - CLI context priority: params → LLMKeyConfigManager → environment variables
    - Auto-detection mode (`context="auto"`) automatically detects API or CLI context
    - All LLM executors now proactively retrieve keys using the unified mechanism
    - Removed hardcoded LLM key injection logic from TaskManager (separation of concerns)
  - LLM Key Context Optimization: Refactored `llm_key_context.py` to eliminate code duplication
    - Extracted a `_get_key_from_user_config()` helper function for user config lookup
    - Extracted a `_get_key_from_source()` helper function for header/CLI params retrieval
    - Reduced code duplication by ~40%, improved maintainability
    - All functionality preserved, backward compatible
- Enhanced Task Copy Functionality
  - UUID Generation for Task IDs: Task copy now always generates new UUIDs for copied tasks, regardless of the `save` parameter value
    - Ensures clear task tree relationships and prevents ID conflicts
    - All copied tasks receive unique IDs for proper dependency mapping
    - Compatible with the `tasks.create` API when `save=False` (returns a task array with complete data)
  - Save Parameter Support: New `save` parameter for the `create_task_copy()` method and `tasks.copy` API (see the example after this list)
    - `save=True` (default): Saves copied tasks to the database and returns a TaskTreeNode
    - `save=False`: Returns a task array without saving to the database, suitable for preview or direct use with `tasks.create`
    - The task array format includes all required fields (id, name, parent_id, dependencies) with new UUIDs
    - Dependencies correctly reference new task IDs within the copied tree
  - Parameter Renaming for Clarity: Renamed parameters in custom copy mode for better clarity
    - `task_ids` → `custom_task_ids` (required when `copy_mode="custom"`)
    - `include_children` → `custom_include_children` (used when `copy_mode="custom"`)
    - Old parameter names removed (no backward compatibility)
    - CLI: `--task-ids` → `--custom-task-ids`, `--include-children` → `--custom-include-children`
  - Improved Dependency Mapping: Fixed dependency resolution in copied task trees
    - Dependencies now correctly reference new task IDs within the copied tree
    - Original task IDs properly mapped to new IDs for all tasks in the tree
    - Circular dependency detection works correctly with new task IDs
    - `original_task_id` correctly points to each task's direct original counterpart (not the root)
  - Comprehensive Test Coverage: Added extensive test cases for API and CLI
    - API tests: 11 test cases covering all copy modes, the save parameter, and error handling
    - CLI tests: 7 test cases covering all copy modes, dry-run, and reset_fields
    - Tests verify UUID generation, dependency mapping, and database interaction
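  Hypothetical `tasks.copy` parameters combining the renamed custom-mode options with `save=False`, expressed as a Python dict; the task-ID key name and the placeholder IDs are assumptions:

  ```python
  # Illustrative params for a custom-mode copy that returns the array unsaved.
  params = {
      "task_id": "3f6c0a0e-0000-0000-0000-000000000001",  # root to copy (assumed key name)
      "copy_mode": "custom",
      "custom_task_ids": ["3f6c0a0e-0000-0000-0000-000000000001"],
      "custom_include_children": True,
      "save": False,  # preview: returns the task array instead of persisting
  }
  ```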
- API Module Refactoring for Better Library Usage
  - Split `api/main.py` into modular components for improved code organization
  - New `api/extensions.py`: Extension management module with `initialize_extensions()` and extension configuration
  - New `api/protocols.py`: Protocol management module with protocol selection and dependency checking
  - New `api/app.py`: Application creation module with `create_app_by_protocol()` and protocol-specific server creation functions
  - `api/main.py` now contains library-friendly entry points (`main()` and `create_runnable_app()` functions)
  - Benefits: Better separation of concerns, easier to use in external projects like aipartnerupflow-demo
  - Migration: Import paths updated:
    - `from aipartnerupflow.api.extensions import initialize_extensions`
    - `from aipartnerupflow.api.protocols import get_protocol_from_env, check_protocol_dependency`
    - `from aipartnerupflow.api.app import create_app_by_protocol, create_a2a_server, create_mcp_server`
  - All existing imports from `api/main` continue to work via re-exports for backward compatibility
- Enhanced Library Usage Support in `api/main.py`
  - New `create_runnable_app()` function: Replaces `create_app()` with clearer naming
    - Returns a fully initialized, runnable application instance
    - Handles all initialization steps: .env loading, extension initialization, custom TaskModel loading, examples initialization
    - Supports custom routes, middleware, and a TaskRoutes class via `**kwargs`
    - Can be used when you need the app object but want to run the server yourself (see the sketch after this list)
    - Usage: `from aipartnerupflow.api.main import create_runnable_app; app = create_runnable_app()`
  - Enhanced `main()` function: Now fully supports library usage
    - Can be called directly from external projects with custom configuration
    - Separates application configuration (passed to `create_runnable_app()`) from server configuration (uvicorn parameters)
    - Supports all uvicorn parameters: `host`, `port`, `workers`, `loop`, `limit_concurrency`, `limit_max_requests`, `access_log`
    - Usage: `from aipartnerupflow.api.main import main; main(custom_routes=[...], port=8080)`
  - Smart .env File Loading: New `_load_env_file()` function with priority-based discovery
    - Priority order: 1) current working directory, 2) main script's directory, 3) the library's own directory (development only)
    - Ensures that when used as a library, it loads `.env` from the consuming project, not from the library's installation directory
    - Respects existing environment variables (`override=False`)
    - Gracefully handles a missing `python-dotenv` package
  - Development Environment Setup: New `_setup_development_environment()` function
    - Only runs when executing the library's own `main.py` directly (not when installed as a package)
    - Suppresses specific warnings for cleaner output
    - Adds the project root to the Python path for development mode
    - Does not affect library usage in external projects
  - Backward Compatibility: All existing code continues to work
    - The `create_app()` name is deprecated but still available via an alias
    - All initialization steps remain the same, just better organized
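  The usage lines above, expanded into a self-contained script; the uvicorn settings are generic examples:

  ```python
  # Build the app with create_runnable_app() and serve it yourself.
  import uvicorn
  from aipartnerupflow.api.main import create_runnable_app

  app = create_runnable_app()  # loads .env, initializes extensions, etc.

  if __name__ == "__main__":
      uvicorn.run(app, host="127.0.0.1", port=8080)
  ```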
- Enhanced API Server Creation Functions
  - Added an `auto_initialize_extensions` parameter to `create_a2a_server()` in `api/a2a/server.py`
    - Matches the behavior of `create_app_by_protocol()` for a consistent API
    - Default: `False` (backward compatible)
  - Added a `task_routes_class` parameter to `create_app_by_protocol()` and server creation functions
    - Supports custom `TaskRoutes` class injection throughout the server creation chain
    - Enables aipartnerupflow-demo to use standard API functions directly without workarounds
  - All new parameters are optional with safe defaults for backward compatibility
- Executor Metadata API
  - New `get_executor_metadata(executor_id)` function to query executor metadata
  - New `validate_task_format(task, executor_id)` function to validate tasks against executor schemas
  - New `get_all_executor_metadata()` function to get metadata for all executors
  - Located in `aipartnerupflow.core.extensions.executor_metadata`
  - Used by demo applications to generate accurate demo tasks
  - Returns: id, name, description, input_schema, examples, tags (see the sketch after this list)
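  A sketch against the documented module path and signatures; whether the metadata comes back as a dict, and how validation failures are reported, are assumptions:

  ```python
  from aipartnerupflow.core.extensions.executor_metadata import (
      get_executor_metadata,
      validate_task_format,
  )

  meta = get_executor_metadata("rest_executor")
  print(meta["id"], meta["description"])  # assumes a dict-like return

  task = {"name": "ping", "schemas": {"method": "rest_executor"},
          "inputs": {"url": "https://example.com", "method": "GET"}}
  validate_task_format(task, "rest_executor")  # validates against the input schema
  ```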
Removed
- Examples Module Deprecation
  - Removed the `aipartnerupflow.examples` module from the core library
  - Removed the `examples` CLI command (`aipartnerupflow examples init`)
  - Removed the `examples = []` optional dependency from `pyproject.toml`
  - Migration: Demo task initialization has been moved to the aipartnerupflow-demo project
    - Demo task definitions are now managed separately from the core library
    - This keeps the core library focused on orchestration functionality
    - For demo tasks, please use aipartnerupflow-demo
- Examples API Methods
  - Removed the `examples.init` and `examples.status` API methods from system routes
  - These methods are no longer available in the API
  - Migration: Use aipartnerupflow-demo for demo task initialization
Changed
- Session Management Refactoring
  - Replaced `get_default_session()` with the `create_pooled_session()` context manager in all API routes (see the sketch below)
  - Renamed `create_task_tree_session` to `create_pooled_session` in `storage/factory.py`
  - Updated `TaskExecutor` to use `create_pooled_session` as a fallback
  - Improved concurrency safety for API requests
  - Breaking Change: `get_default_session()` is now deprecated for route handlers
- LLM Key Management Architecture
  - Executors now proactively retrieve LLM keys instead of receiving them via inputs
  - TaskManager no longer handles LLM key injection for specific executors
  - LLM key retrieval is now the executor's responsibility, following separation of concerns
  - All executors use the unified `get_llm_key()` function with a consistent priority order
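  A hedged sketch of the route-handler pattern from the session refactoring above; the function name and its home in `storage/factory.py` are documented, while the full import path and the context manager being async are assumptions:

  ```python
  from aipartnerupflow.core.storage.factory import create_pooled_session  # assumed path

  async def handle_request():
      # Each request acquires its own pooled session for concurrency safety,
      # instead of sharing the deprecated get_default_session().
      async with create_pooled_session() as session:
          ...  # run repository operations with this session
  ```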
Release 0.6.1
Added
- JWT Token Generation Support
  - New `generate_token()` function in `aipartnerupflow.api.a2a.server` for generating JWT tokens
  - Supports custom payload, secret key, algorithm (default: HS256), and expiration (default: 30 days)
  - Uses `python-jose[cryptography]` for token generation and verification
  - Complements the existing `verify_token()` function for complete JWT token lifecycle management
  - Usage: `from aipartnerupflow.api.a2a.server import generate_token; token = generate_token({"user_id": "user123"}, secret_key)` (expanded in the sketch after this list)
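  The usage line expanded into a round trip; `verify_token()`'s exact signature is not shown in these notes, so the call below is an assumption:

  ```python
  from aipartnerupflow.api.a2a.server import generate_token, verify_token

  secret_key = "change-me"  # illustrative secret; load from configuration in practice
  token = generate_token({"user_id": "user123"}, secret_key)  # HS256, 30-day expiry by default
  payload = verify_token(token, secret_key)                   # assumed signature
  ```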
- Cookie-based JWT Authentication
  - Support for JWT token extraction from `request.cookies.get("Authorization")` in addition to the Authorization header
  - Priority: the Authorization header is checked first, then falls back to the cookie if the header is not present
  - Enables cookie-based authentication for web applications and browser-based clients
  - Maintains security: only JWT tokens are trusted (no fallback to HTTP headers for user identification)
  - Updated the `_extract_user_id_from_request()` method in `BaseRouteHandler` to support both header and cookie sources
- Dependency Updates
  - Added `python-jose[cryptography]>=3.3.0` to the `[a2a]` optional dependencies in `pyproject.toml`
  - Required for JWT token generation and verification functionality
Release 0.6.0
Added
- TaskRoutes Extension Mechanism
  - Added a `task_routes_class` parameter to `create_a2a_server()` and `_create_request_handler()` for custom TaskRoutes injection
  - Eliminates the need for monkey patching when extending TaskRoutes functionality
  - Supports custom routes via the `custom_routes` parameter in `CustomA2AStarletteApplication`
  - Backward compatible: optional parameter with the default `TaskRoutes` class
  - Usage: `create_a2a_server(task_routes_class=CustomTaskRoutes, custom_routes=[...])`
- Task Tree Lifecycle Hooks
  - New `register_task_tree_hook()` decorator for task tree lifecycle events
  - Four hook types: `on_tree_created`, `on_tree_started`, `on_tree_completed`, `on_tree_failed`
  - Explicit lifecycle tracking without manual root task detection
  - Hooks receive the root task and relevant context (status, error message)
  - Usage: `@register_task_tree_hook("on_tree_completed") async def on_completed(root_task, status): ...` (expanded in the sketch after this list)
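  The usage line expanded; the decorator name, hook types, and the `(root_task, status)` shape come from the notes above, while the import path and the failed-hook signature are assumptions:

  ```python
  from aipartnerupflow import register_task_tree_hook  # assumed export location

  @register_task_tree_hook("on_tree_completed")
  async def on_completed(root_task, status):
      print(f"tree {root_task.id} completed with status={status}")

  @register_task_tree_hook("on_tree_failed")
  async def on_failed(root_task, status, error_message=None):  # assumed signature
      print(f"tree {root_task.id} failed: {error_message}")
  ```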
- Executor-Specific Hooks
  - Added `pre_hook` and `post_hook` parameters to the `@executor_register()` decorator
  - Runtime hook registration via `add_executor_hook(executor_id, hook_type, hook_func)`
  - Inject custom logic (e.g., quota checks, demo data fallback) for specific executors
  - `pre_hook` can return a result to skip executor execution (useful for demo mode; see the sketch after this list)
  - `post_hook` receives executor, task, inputs, and result for post-processing
  - Supports both decorator-based and runtime registration for existing executors
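  A hedged sketch of runtime registration; the pre-hook signature mirrors the post-hook description above but remains an assumption, and `over_quota()` is a hypothetical stand-in:

  ```python
  from aipartnerupflow import add_executor_hook  # assumed export location

  def over_quota(user_id) -> bool:  # hypothetical quota check
      return False

  async def quota_pre_hook(executor, task, inputs):  # assumed signature
      if over_quota(task.user_id):
          return {"error": "quota exceeded"}  # returning a result skips the executor
      return None                             # fall through to normal execution

  add_executor_hook("rest_executor", "pre_hook", quota_pre_hook)
  ```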
- Automatic user_id Extraction
  - Automatic `user_id` extraction from the JWT token in `TaskRoutes.handle_task_generate` and `handle_task_create`
  - Only extracts from the JWT token payload for security (HTTP headers can be spoofed)
  - Supports a `user_id` field or the standard JWT `sub` claim in the token payload
  - The extracted `user_id` is automatically set on the task data
  - Simplifies custom route implementations and ensures consistent user identification
  - Security: only trusted JWT tokens are used, with no fallback to HTTP headers
- Demo Mode Support
  - Built-in demo mode via the `use_demo` parameter in task inputs
  - CLI support: `--use-demo` flag for the `apflow run flow` command
  - API support: `use_demo` parameter in task creation and execution
  - Executors can override the `get_demo_result()` method in `BaseTask` for custom demo data (see the sketch after this list)
  - Default demo data format: `{"result": "Demo execution result", "demo_mode": True}`
  - All built-in executors now implement the `get_demo_result()` method:
    - `SystemInfoExecutor`, `CommandExecutor`, `AggregateResultsExecutor`
    - `RestExecutor`, `GenerateExecutor`, `ApiExecutor`
    - `SshExecutor`, `GrpcExecutor`, `WebSocketExecutor`
    - `McpExecutor`, `DockerExecutor`
    - `CrewManager`, `BatchManager` (CrewAI executors)
  - Realistic Demo Execution Timing: All executors include `_demo_sleep` values to simulate real execution time:
    - Network operations (HTTP, SSH, API): 0.2-0.5 seconds
    - Container operations (Docker): 1.0 second
    - LLM operations (CrewAI, Generate): 1.0-1.5 seconds
    - Local operations (SystemInfo, Command, Aggregate): 0.05-0.1 seconds
  - Global Demo Sleep Scale: Configurable via the `AIPARTNERUPFLOW_DEMO_SLEEP_SCALE` environment variable (default: 1.0)
    - Allows adjusting demo execution speed globally (e.g., `0.5` for faster, `2.0` for slower)
    - API: `set_demo_sleep_scale(scale)` and `get_demo_sleep_scale()` functions
  - CrewAI Demo Support: `CrewManager` and `BatchManager` generate realistic demo results:
    - Based on the `works` definition (agents and tasks) from task params or schemas
    - Includes simulated `token_usage` matching real LLM execution patterns
    - `BatchManager` aggregates token usage across multiple works
  - Demo mode helps developers test workflows without external dependencies
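  A sketch of a custom demo payload; the `get_demo_result()` override point and the `demo_mode` key are documented above, while the import path and the rest of the return shape are illustrative:

  ```python
  from aipartnerupflow.core.base import BaseTask  # assumed import path

  class WeatherExecutor(BaseTask):
      def get_demo_result(self) -> dict:
          # Returned instead of real execution when use_demo is set
          return {"result": {"temp_c": 21.5, "city": "demo"}, "demo_mode": True}
  ```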
- TaskModel Customization Improvements
  - Enhanced `set_task_model_class()` with improved validation and error messages
  - New `@task_model_register()` decorator for convenient TaskModel registration (see the sketch after this list)
  - Validation ensures custom classes inherit from `TaskModel`, with helpful error messages
  - Supports `__table_args__ = {'extend_existing': True}` for extending existing table definitions
  - Better support for user-defined `MyTaskModel(TaskModel)` with additional fields
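  A hedged sketch of a custom model; the decorator and `__table_args__` usage are documented above, while the import locations and the extra column are illustrative:

  ```python
  from sqlalchemy import Column, String
  from aipartnerupflow import TaskModel, task_model_register  # assumed exports

  @task_model_register()
  class MyTaskModel(TaskModel):
      __table_args__ = {"extend_existing": True}  # extend the existing table definition
      tenant = Column(String, nullable=True)      # example of an additional field
  ```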
- Documentation for Hook Types
  - Added comprehensive documentation explaining the differences between hook types
  - `pre_hook`/`post_hook`: Task-level hooks for individual task execution
  - `task_tree_hook`: Task tree-level hooks for tree lifecycle events
  - Clear usage scenarios and examples in `docs/development/extending.md`
Changed
- LLM Model Parameter Naming: Unified LLM model parameter naming to `model` across all components
  - Breaking Change: the `llm_model` parameter in `generate_executor` has been renamed to `model`
  - GenerateExecutor: `inputs["llm_model"]` → `inputs["model"]`
  - API Routes: `params["llm_model"]` → `params["model"]`
  - CLI: the `--model` parameter remains unchanged (internal mapping updated)
  - New Feature: Support for `schemas["model"]` configuration for the CrewAI executor
    - Model configuration can now be specified in task schemas and will be passed to CrewManager
    - Priority: `schemas["model"]` > `params.works.agents[].llm` (CrewAI standard)
  - Impact: Only affects the generate functionality introduced in 0.5.0; minimal breaking change
  - Migration: Update any code using the `llm_model` parameter to use `model` instead
Removed
- Redundant decorators.py file
  - Removed `src/aipartnerupflow/decorators.py` as it was no longer used
  - Functionality superseded by `src/aipartnerupflow/core/decorators.py`
  - No impact on existing code (the file was not imported by any other modules)
Release 0.5.0
Added
- Extended Executor Framework with Mainstream Execution Methods
  - HTTP/REST API Executor (`rest_executor`) (see the example after this list)
    - Support for all HTTP methods (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS)
    - Authentication support (Bearer token, Basic auth, API keys)
    - Configurable timeout and retry mechanisms
    - Request/response headers and body handling
    - JSON and form data support
    - Comprehensive error handling for HTTP status codes
    - Full test coverage with 15+ test cases
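    A hypothetical task targeting the REST executor; `schemas.method` must name the executor ID, and the input keys shown are assumptions:

    ```python
    task = {
        "name": "fetch users",
        "schemas": {"method": "rest_executor"},
        "inputs": {
            "method": "GET",                                 # assumed key names
            "url": "https://api.example.com/users",
            "headers": {"Authorization": "Bearer <token>"},
            "timeout": 30,
        },
    }
    ```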
  - SSH Remote Executor (`ssh_executor`)
    - Execute commands on remote servers via SSH
    - Support for password and key-based authentication
    - Key file validation with security checks (permissions, existence)
    - Environment variable injection
    - Custom SSH port configuration
    - Timeout and cancellation support
    - Comprehensive error handling for connection and execution failures
    - Full test coverage with 12+ test cases
  - Docker Container Executor (`docker_executor`)
    - Execute commands in isolated Docker containers
    - Support for custom Docker images
    - Volume mounts for data persistence
    - Environment variable configuration
    - Resource limits (CPU, memory)
    - Container lifecycle management (create, start, wait, remove)
    - Timeout handling with automatic container cleanup
    - Option to keep containers after execution
    - Comprehensive error handling for image-not-found and execution failures
    - Full test coverage with 13+ test cases
  - gRPC Executor (`grpc_executor`)
    - Call gRPC services and microservices
    - Support for dynamic proto file loading
    - Method invocation with parameter serialization
    - Metadata and timeout configuration
    - Error handling for gRPC status codes
    - Support for unary, server streaming, client streaming, and bidirectional streaming
    - Full test coverage with 10+ test cases
  - WebSocket Executor (`websocket_executor`)
    - Bidirectional WebSocket communication
    - Send and receive messages in real time
    - Support for JSON and text messages
    - Custom headers for authentication
    - Optional response waiting with timeout
    - Connection error handling (invalid URI, connection closed, timeout)
    - Cancellation support
    - Full test coverage with 13+ test cases
  - aipartnerupflow API Executor (`apflow_api_executor`)
    - Call other aipartnerupflow API instances for distributed execution
    - Support for all task management methods (tasks.execute, tasks.create, tasks.get, etc.)
    - Authentication via JWT tokens
    - Task completion polling with production-grade retry logic (see the sketch after this list):
      - Exponential backoff on failures (1s → 2s → 4s → 8s → 30s max)
      - Circuit breaker pattern (stops after 10 consecutive failures)
      - Error classification (retryable vs. non-retryable)
      - Total failure threshold (20 failures across all polls)
      - Detailed logging for debugging
    - Timeout protection and cancellation support
    - Comprehensive error handling for network, server, and client errors
    - Full test coverage with 12+ test cases
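    A toy reconstruction of the backoff schedule quoted above (1s → 2s → 4s → 8s, capped at 30s); this is not the executor's actual code:

    ```python
    def backoff_delay(consecutive_failures: int, base: float = 1.0, cap: float = 30.0) -> float:
        """Delay before the next poll: 1, 2, 4, 8, 16, then capped at 30 seconds."""
        return min(cap, base * (2 ** consecutive_failures))

    assert [backoff_delay(n) for n in range(6)] == [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
    ```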
  - Dependency Management
    - Optional dependencies for the new executors:
      - `[ssh]`: asyncssh for the SSH executor
      - `[docker]`: docker for the Docker executor
      - `[grpc]`: grpcio, grpcio-tools for the gRPC executor
      - `[all]`: Includes all optional dependencies
    - Graceful handling when optional dependencies are not installed
    - Clear error messages with installation instructions
  - Documentation
    - Comprehensive usage examples for all new executors in `docs/guides/custom-tasks.md`
    - Configuration parameters and examples for each executor
    - Best practices and common patterns
    - Error handling guidelines
  - Auto-discovery
    - All new executors are automatically registered via the extension system
    - Auto-imported on API service startup
    - Available immediately after installation
  - MCP (Model Context Protocol) Executor (`mcp_executor`)
    - Interact with MCP servers to access external tools and data sources
    - Support for stdio and HTTP transport modes
    - Operations: list_tools, call_tool, list_resources, read_resource (see the example after this list)
    - JSON-RPC 2.0 protocol compliance
    - Environment variable injection for stdio mode
    - Custom headers support for HTTP mode
    - Timeout and cancellation support
    - Comprehensive error handling for MCP protocol errors
    - Full test coverage with 20+ test cases
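    A hypothetical task for the MCP executor; the operation names are documented above, but every input key here is an assumption:

    ```python
    task = {
        "name": "list MCP tools",
        "schemas": {"method": "mcp_executor"},
        "inputs": {
            "transport": "stdio",                          # or "http" (assumed key)
            "command": ["python", "-m", "my_mcp_server"],  # stdio server command (assumed)
            "operation": "list_tools",                     # documented operation name
        },
    }
    ```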
  - MCP (Model Context Protocol) Server (`api/mcp/`)
    - Exposes aipartnerupflow task orchestration capabilities as MCP tools and resources
    - Support for stdio and HTTP/SSE transport modes
    - MCP Tools (8 tools):
      - `execute_task` - Execute tasks or task trees
      - `create_task` - Create new tasks or task trees
      - `get_task` - Get task details by ID
      - `update_task` - Update existing tasks
      - `delete_task` - Delete tasks (if all pending)
      - `list_tasks` - List tasks with filtering
      - `get_task_status` - Get the status of running tasks
      - `cancel_task` - Cancel running tasks
    - MCP Resources:
      - `task://{task_id}` - Access individual task data
      - `tasks://` - Access the task list with query parameters
    - JSON-RPC 2.0 protocol compliance
    - Integration with existing TaskRoutes for protocol-agnostic design
    - HTTP mode: FastAPI/Starlette integration with a `/mcp` endpoint
    - stdio mode: Standalone process for local integration
    - Comprehensive error handling with proper HTTP status codes
    - Full test coverage with 45+ test cases across all components
    - Protocol selection via the `AIPARTNERUPFLOW_API_PROTOCOL=mcp` environment variable
    - CLI protocol selection: `--protocol` parameter for the `serve` and `daemon` commands
      - Default protocol: `a2a`
      - Supported protocols: `a2a`, `mcp`
      - Usage: `apflow serve --protocol mcp` or `apflow daemon start --protocol mcp`
  - Task Tree Generator Executor (`generate_executor`)
    - Generates valid task tree JSON arrays from natural-language requirements using an LLM
    - Automatically collects available executors and their input schemas for LLM context
    - Loads framework documentation (task orchestration, examples, concepts) as LLM context
    - Supports multiple LLM providers (OpenAI, Anthropic) via a configurable backend
    - Comprehensive validation ensures generated tasks conform to TaskCreator requirements:
      - Validates task structure (name, id consistency, parent_id, dependencies)
      - Detects circular dependencies
      - Ensures a single root task
      - Validates that all references exist in the array
    - LLM prompt engineering with framework context, executor information, and examples
    - JSON response parsing with markdown code block support
    - Can be used through both API and CLI as a standard executor
    - API Endpoint: `tasks.generate` method via the JSON-RPC `/tasks` endpoint
      - Supports all LLM configuration parameters (provider, model, temperature, max_tokens)
      - Optional `save` parameter to automatically save generated tasks to the database
      - Returns the generated task tree JSON array with a count and status message
      - Full test coverage with 8 API endpoint test cases
    - CLI Command: `apflow generate task-tree` for direct task tree generation
      - Supports output to file or stdout
      - Optional database persistence with the `--save` flag
      - Comprehensive test command examples in documentation
    - Configuration via environment variables or input parameters:
      - `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` for LLM authentication
      - `AIPARTNERUPFLOW_LLM_PROVIDER` for provider selection (default: openai)
      - `AIPARTNERUPFLOW_LLM_MODEL` for model selection
    - Full test coverage with 28+ executor test cases and 8 API endpoint test cases
    - Usage examples:

      Python API:

      ```python
      task = await task_manager.task_repository.create_task(
          name="generate_executor",
          inputs={"requirement": "Fetch data from API, process it, and save to database"}
      )
      ```

      JSON-RPC API:

      ```json
      {
        "jsonrpc": "2.0",
        "method": "tasks.generate",
        "params": {
          "requirement": "Fetch data from API and process it",
          "save": true
        }
      }
      ```

      CLI:

      ```bash
      apflow generate task-tree "Fetch data from API and process it" --save
      ```