Releases: aipartnerup/apflow
Release 0.6.0
Added
- **TaskRoutes Extension Mechanism**
  - Added `task_routes_class` parameter to `create_a2a_server()` and `_create_request_handler()` for custom TaskRoutes injection
  - Eliminates the need for monkey patching when extending TaskRoutes functionality
  - Supports custom routes via the `custom_routes` parameter in `CustomA2AStarletteApplication`
  - Backward compatible: optional parameter defaulting to the `TaskRoutes` class
  - Usage: `create_a2a_server(task_routes_class=CustomTaskRoutes, custom_routes=[...])` (see the sketch below)
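A minimal sketch of the injection pattern; the import paths below are assumptions, and the handler name follows the `handle_task_create` hook mentioned later in these notes:

```python
from aipartnerupflow.api import create_a2a_server   # assumed import path
from aipartnerupflow.api.routes import TaskRoutes   # assumed import path

class CustomTaskRoutes(TaskRoutes):
    """Extend TaskRoutes without monkey patching."""

    async def handle_task_create(self, request):
        # Attach project-specific behavior, then delegate to the stock handler.
        return await super().handle_task_create(request)

server = create_a2a_server(task_routes_class=CustomTaskRoutes)
```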
- **Task Tree Lifecycle Hooks**
  - New `register_task_tree_hook()` decorator for task tree lifecycle events
  - Four hook types: `on_tree_created`, `on_tree_started`, `on_tree_completed`, `on_tree_failed`
  - Explicit lifecycle tracking without manual root task detection
  - Hooks receive the root task and relevant context (status, error message)
  - Usage (a failure-side sketch follows this list):

    ```python
    @register_task_tree_hook("on_tree_completed")
    async def on_completed(root_task, status): ...
    ```
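A sketch of the failure-side hook under the same decorator; the import path and the error parameter name are assumptions:

```python
from aipartnerupflow.core.decorators import register_task_tree_hook  # assumed path

@register_task_tree_hook("on_tree_failed")
async def on_failed(root_task, error_message):
    # Failure hooks receive the root task plus error context per the notes above;
    # the parameter name here is illustrative.
    print(f"Tree {root_task.id} failed: {error_message}")
```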
- **Executor-Specific Hooks**
  - Added `pre_hook` and `post_hook` parameters to the `@executor_register()` decorator
  - Runtime hook registration via `add_executor_hook(executor_id, hook_type, hook_func)`
  - Inject custom logic (e.g., quota checks, demo data fallback) for specific executors
  - `pre_hook` can return a result to skip executor execution (useful for demo mode)
  - `post_hook` receives the executor, task, inputs, and result for post-processing
  - Supports both decorator-based and runtime registration for existing executors (see the sketch below)
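A sketch of both registration styles; the import paths and exact hook signatures are assumptions based on the descriptions above:

```python
from aipartnerupflow.core.decorators import executor_register, add_executor_hook  # assumed paths

async def quota_check(executor, task, inputs):
    # Returning a non-None value would skip the executor (demo-mode style);
    # returning None lets the real executor run.
    return None

@executor_register(pre_hook=quota_check)
class QuotaCheckedExecutor:  # hypothetical executor
    id = "quota_checked_executor"

async def audit(executor, task, inputs, result):
    # post_hook receives executor, task, inputs, and result per the notes above.
    print(f"{executor.id} -> {result!r}")

# Runtime registration against an existing executor:
add_executor_hook("rest_executor", "post_hook", audit)
```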
- **Automatic user_id Extraction**
  - Automatic `user_id` extraction from the JWT token in `TaskRoutes.handle_task_generate` and `handle_task_create`
  - Extracts only from the JWT token payload, for security (HTTP headers can be spoofed)
  - Supports a `user_id` field or the standard JWT `sub` claim in the token payload
  - The extracted `user_id` is automatically set on the task data
  - Simplifies custom route implementations and ensures consistent user identification
  - Security: only trusted JWT tokens are used, with no fallback to HTTP headers
- **Demo Mode Support**
  - Built-in demo mode via the `use_demo` parameter in task inputs
  - CLI support: `--use-demo` flag for the `apflow run flow` command
  - API support: `use_demo` parameter in task creation and execution
  - Executors can override the `get_demo_result()` method in `BaseTask` for custom demo data
  - Default demo data format: `{"result": "Demo execution result", "demo_mode": True}`
  - All built-in executors now implement the `get_demo_result()` method:
    - `SystemInfoExecutor`, `CommandExecutor`, `AggregateResultsExecutor`
    - `RestExecutor`, `GenerateExecutor`, `ApiExecutor`
    - `SshExecutor`, `GrpcExecutor`, `WebSocketExecutor`
    - `McpExecutor`, `DockerExecutor`
    - `CrewManager`, `BatchManager` (CrewAI executors)
  - Realistic demo execution timing: all executors include `_demo_sleep` values to simulate real execution time:
    - Network operations (HTTP, SSH, API): 0.2-0.5 seconds
    - Container operations (Docker): 1.0 second
    - LLM operations (CrewAI, Generate): 1.0-1.5 seconds
    - Local operations (SystemInfo, Command, Aggregate): 0.05-0.1 seconds
  - Global demo sleep scale: configurable via the `AIPARTNERUPFLOW_DEMO_SLEEP_SCALE` environment variable (default: 1.0)
  - Allows adjusting demo execution speed globally (e.g., `0.5` for faster, `2.0` for slower)
  - API: `set_demo_sleep_scale(scale)` and `get_demo_sleep_scale()` functions
  - CrewAI demo support: `CrewManager` and `BatchManager` generate realistic demo results:
    - Based on the `works` definition (agents and tasks) from task params or schemas
    - Includes simulated `token_usage` matching real LLM execution patterns
    - `BatchManager` aggregates token usage across multiple works
  - Demo mode helps developers test workflows without external dependencies (see the sketch below)
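A sketch of custom demo data plus the sleep-scale API; the import paths and a synchronous `get_demo_result()` signature are assumptions:

```python
from aipartnerupflow.core.base_task import BaseTask         # assumed import path
from aipartnerupflow.core.demo import set_demo_sleep_scale  # assumed import path

class WeatherExecutor(BaseTask):  # hypothetical executor
    id = "weather_executor"

    def get_demo_result(self):
        # Returned instead of running the real executor when use_demo is set.
        return {"result": {"temp_c": 21, "condition": "sunny"}, "demo_mode": True}

# Double all _demo_sleep delays (equivalent to AIPARTNERUPFLOW_DEMO_SLEEP_SCALE=2.0).
set_demo_sleep_scale(2.0)
```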
- **TaskModel Customization Improvements**
  - Enhanced `set_task_model_class()` with improved validation and error messages
  - New `@task_model_register()` decorator for convenient TaskModel registration
  - Validation ensures custom classes inherit from `TaskModel`, with helpful error messages
  - Supports `__table_args__ = {'extend_existing': True}` for extending existing table definitions
  - Better support for user-defined `MyTaskModel(TaskModel)` with additional fields (see the sketch below)
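A registration sketch combining the pieces above; the import paths are assumptions, while `project_id` and `department` echo the custom-field example from the 0.1.0 notes:

```python
from sqlalchemy import Column, String
from aipartnerupflow.core.models import TaskModel              # assumed import path
from aipartnerupflow.core.decorators import task_model_register  # assumed import path

@task_model_register()
class MyTaskModel(TaskModel):
    __table_args__ = {"extend_existing": True}   # extend the existing table definition
    project_id = Column(String, nullable=True)   # example custom fields
    department = Column(String, nullable=True)
```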
- **Documentation for Hook Types**
  - Added comprehensive documentation explaining the differences between hook types
  - `pre_hook`/`post_hook`: task-level hooks for individual task execution
  - `task_tree_hook`: task tree-level hooks for tree lifecycle events
  - Clear usage scenarios and examples in `docs/development/extending.md`
Changed
- **LLM Model Parameter Naming**: Unified LLM model parameter naming to `model` across all components
  - Breaking change: the `llm_model` parameter in `generate_executor` has been renamed to `model`
  - GenerateExecutor: `inputs["llm_model"]` → `inputs["model"]`
  - API routes: `params["llm_model"]` → `params["model"]`
  - CLI: the `--model` parameter remains unchanged (internal mapping updated)
  - New feature: support for `schemas["model"]` configuration for the CrewAI executor; model configuration can now be specified in task schemas and is passed to `CrewManager`
  - Priority: `schemas["model"]` > `params.works.agents[].llm` (the CrewAI standard)
  - Impact: only affects the generate functionality introduced in 0.5.0, so the break is minimal
  - Migration: update any code using the `llm_model` parameter to use `model` instead (see the sketch below)
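A before/after sketch of the rename (the model name is illustrative):

```python
# Before (0.5.x):
inputs = {"requirement": "Fetch data from API and process it", "llm_model": "gpt-4o"}
# After (0.6.0) - same call, renamed key:
inputs = {"requirement": "Fetch data from API and process it", "model": "gpt-4o"}
```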
Removed
- **Redundant decorators.py file**
  - Removed `src/aipartnerupflow/decorators.py` as it was no longer used
  - Functionality superseded by `src/aipartnerupflow/core/decorators.py`
  - No impact on existing code (the file was not imported by any other modules)
Release 0.5.0
Added
- **Extended Executor Framework with Mainstream Execution Methods**
  - **HTTP/REST API Executor (`rest_executor`)**
    - Support for all HTTP methods (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS)
    - Authentication support (Bearer token, Basic auth, API keys)
    - Configurable timeout and retry mechanisms
    - Request/response header and body handling
    - JSON and form data support
    - Comprehensive error handling for HTTP status codes
    - Full test coverage with 15+ test cases (an input sketch follows this list)
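A sketch of plausible `rest_executor` inputs based on the capabilities listed; the input field names are assumptions rather than a confirmed schema, and `task_manager` follows the Python API example shown under the 0.5.0 generator executor:

```python
task = await task_manager.task_repository.create_task(
    name="rest_executor",
    inputs={
        "url": "https://api.example.com/items",          # target endpoint
        "method": "POST",
        "headers": {"Authorization": "Bearer <token>"},  # bearer-token auth
        "json": {"name": "example"},                     # request body
        "timeout": 10,                                   # seconds
    },
)
```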
  - **SSH Remote Executor (`ssh_executor`)**
    - Execute commands on remote servers via SSH
    - Support for password and key-based authentication
    - Key file validation with security checks (permissions, existence)
    - Environment variable injection
    - Custom SSH port configuration
    - Timeout and cancellation support
    - Comprehensive error handling for connection and execution failures
    - Full test coverage with 12+ test cases
  - **Docker Container Executor (`docker_executor`)**
    - Execute commands in isolated Docker containers
    - Support for custom Docker images
    - Volume mounts for data persistence
    - Environment variable configuration
    - Resource limits (CPU, memory)
    - Container lifecycle management (create, start, wait, remove)
    - Timeout handling with automatic container cleanup
    - Option to keep containers after execution
    - Comprehensive error handling for image-not-found and execution failures
    - Full test coverage with 13+ test cases
  - **gRPC Executor (`grpc_executor`)**
    - Call gRPC services and microservices
    - Support for dynamic proto file loading
    - Method invocation with parameter serialization
    - Metadata and timeout configuration
    - Error handling for gRPC status codes
    - Support for unary, server streaming, client streaming, and bidirectional streaming
    - Full test coverage with 10+ test cases
  - **WebSocket Executor (`websocket_executor`)**
    - Bidirectional WebSocket communication
    - Send and receive messages in real time
    - Support for JSON and text messages
    - Custom headers for authentication
    - Optional response waiting with timeout
    - Connection error handling (invalid URI, connection closed, timeout)
    - Cancellation support
    - Full test coverage with 13+ test cases
  - **aipartnerupflow API Executor (`apflow_api_executor`)**
    - Call other aipartnerupflow API instances for distributed execution
    - Support for all task management methods (`tasks.execute`, `tasks.create`, `tasks.get`, etc.)
    - Authentication via JWT tokens
    - Task completion polling with production-grade retry logic:
      - Exponential backoff on failures (1s → 2s → 4s → 8s → 30s max)
      - Circuit breaker pattern (stops after 10 consecutive failures)
      - Error classification (retryable vs. non-retryable)
      - Total failure threshold (20 failures across all polls)
      - Detailed logging for debugging
    - Timeout protection and cancellation support
    - Comprehensive error handling for network, server, and client errors
    - Full test coverage with 12+ test cases
- **Dependency Management**
  - Optional dependencies for the new executors:
    - `[ssh]`: asyncssh for the SSH executor
    - `[docker]`: docker for the Docker executor
    - `[grpc]`: grpcio and grpcio-tools for the gRPC executor
    - `[all]`: includes all optional dependencies
  - Graceful handling when optional dependencies are not installed
  - Clear error messages with installation instructions
- **Documentation**
  - Comprehensive usage examples for all new executors in `docs/guides/custom-tasks.md`
  - Configuration parameters and examples for each executor
  - Best practices and common patterns
  - Error handling guidelines
- **Auto-discovery**
  - All new executors automatically registered via the extension system
  - Auto-imported on API service startup
  - Available immediately after installation
- **MCP (Model Context Protocol) Executor (`mcp_executor`)**
  - Interact with MCP servers to access external tools and data sources
  - Support for stdio and HTTP transport modes
  - Operations: `list_tools`, `call_tool`, `list_resources`, `read_resource`
  - JSON-RPC 2.0 protocol compliance
  - Environment variable injection for stdio mode
  - Custom headers support for HTTP mode
  - Timeout and cancellation support
  - Comprehensive error handling for MCP protocol errors
  - Full test coverage with 20+ test cases
- **MCP (Model Context Protocol) Server (`api/mcp/`)**
  - Expose aipartnerupflow task orchestration capabilities as MCP tools and resources
  - Support for stdio and HTTP/SSE transport modes
  - MCP tools (8 tools):
    - `execute_task` - execute tasks or task trees
    - `create_task` - create new tasks or task trees
    - `get_task` - get task details by ID
    - `update_task` - update existing tasks
    - `delete_task` - delete tasks (if all are pending)
    - `list_tasks` - list tasks with filtering
    - `get_task_status` - get the status of running tasks
    - `cancel_task` - cancel running tasks
  - MCP resources:
    - `task://{task_id}` - access individual task data
    - `tasks://` - access the task list with query parameters
  - JSON-RPC 2.0 protocol compliance
  - Integration with the existing TaskRoutes for protocol-agnostic design
  - HTTP mode: FastAPI/Starlette integration with the `/mcp` endpoint
  - stdio mode: standalone process for local integration
  - Comprehensive error handling with proper HTTP status codes
  - Full test coverage with 45+ test cases across all components
  - Protocol selection via the `AIPARTNERUPFLOW_API_PROTOCOL=mcp` environment variable
  - CLI protocol selection: `--protocol` parameter for the `serve` and `daemon` commands
  - Default protocol: `a2a`
  - Supported protocols: `a2a`, `mcp`
  - Usage: `apflow serve --protocol mcp` or `apflow daemon start --protocol mcp`
- **Task Tree Generator Executor (`generate_executor`)**
  - Generates valid task tree JSON arrays from natural language requirements using an LLM
  - Automatically collects available executors and their input schemas for LLM context
  - Loads framework documentation (task orchestration, examples, concepts) as LLM context
  - Supports multiple LLM providers (OpenAI, Anthropic) via a configurable backend
  - Comprehensive validation ensures generated tasks conform to TaskCreator requirements:
    - Validates task structure (name, id consistency, parent_id, dependencies)
    - Detects circular dependencies
    - Ensures a single root task
    - Validates that all references exist in the array
  - LLM prompt engineering with framework context, executor information, and examples
  - JSON response parsing with markdown code block support
  - Can be used through both API and CLI as a standard executor
  - API endpoint: `tasks.generate` method via the JSON-RPC `/tasks` endpoint
  - Supports all LLM configuration parameters (provider, model, temperature, max_tokens)
  - Optional `save` parameter to automatically save generated tasks to the database
  - Returns the generated task tree JSON array with a count and status message
  - CLI command: `apflow generate task-tree` for direct task tree generation
  - Supports output to file or stdout
  - Optional database persistence with the `--save` flag
  - Comprehensive test command examples in the documentation
  - Configuration via environment variables or input parameters:
    - `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` for LLM authentication
    - `AIPARTNERUPFLOW_LLM_PROVIDER` for provider selection (default: openai)
    - `AIPARTNERUPFLOW_LLM_MODEL` for model selection
  - Full test coverage with 28+ executor test cases and 8 API endpoint test cases
  - Usage examples (Python API, JSON-RPC, and CLI):

    ```python
    # Python API
    task = await task_manager.task_repository.create_task(
        name="generate_executor",
        inputs={"requirement": "Fetch data from API, process it, and save to database"}
    )
    ```

    ```json
    // JSON-RPC API
    {
      "jsonrpc": "2.0",
      "method": "tasks.generate",
      "params": {
        "requirement": "Fetch data from API and process it",
        "save": true
      }
    }
    ```

    ```bash
    # CLI
    apflow generate task-tree "Fetch data from API and process it" --save
    ```

Release 0.4.0
Added
- **CLI Task Commands Synchronization with API**
  - Complete synchronization of CLI task commands with the API task routes
  - Unified data format: CLI commands now return the same data structure as API endpoints using `task.to_dict()`
  - New CLI commands matching API functionality:
    - `tasks get <task_id>` - get task details (equivalent to the `tasks.get` API)
    - `tasks create --file <file>` or `--stdin` - create a task tree from a JSON file or stdin (equivalent to the `tasks.create` API)
    - `tasks update <task_id> [options]` - update task fields (equivalent to the `tasks.update` API)
    - `tasks delete <task_id> [--force]` - delete a task with validation (equivalent to the `tasks.delete` API)
    - `tasks tree <task_id>` - get the task tree structure (equivalent to the `tasks.tree` API)
    - `tasks children --parent-id <id>` or `--task-id <id>` - get child tasks (equivalent to the `tasks.children` API)
    - `tasks all [options]` - list all tasks from the database with filters (equivalent to the `tasks.list` API)
  - Enhanced existing commands:
    - `tasks list` - now returns the full `task.to_dict()` format matching the API `tasks.running.list`
    - `tasks status` - now includes `context_id`, `started_at`, and `updated_at` fields matching the API format
  - All CLI commands maintain API compatibility for consistent data formats
  - Comprehensive test coverage with 43 test cases covering all CLI task commands
- **Unified A2A Protocol Task Management**
  - All task management operations are now fully supported through the A2A Protocol `/` route
  - Standardized method naming: `tasks.execute`, `tasks.create`, `tasks.get`, `tasks.update`, `tasks.delete`, `tasks.detail`, `tasks.tree`, `tasks.list`, `tasks.children`, `tasks.running.list`, `tasks.running.status`, `tasks.running.count`, `tasks.cancel`, `tasks.copy`
  - `TaskRoutesAdapter` component bridges the A2A Protocol `RequestContext`/`EventQueue` with existing `TaskRoutes` handlers
  - Automatic conversion between the A2A Protocol format and the internal task representation
  - Real-time task status updates via `TaskStatusUpdateEvent` for all task management operations
  - Backward compatibility: the `execute_task_tree` skill ID is still supported (maps to `tasks.execute`)
  - All 14 task management skills registered in the Agent Card for protocol compliance
  - Comprehensive test coverage with 17+ test cases for all task management methods
  - Fixed `MessageSendConfiguration` access error: properly handles Pydantic model attributes instead of dictionary methods
Fixed
- **A2A Protocol Configuration Access**
  - Fixed `AttributeError: 'MessageSendConfiguration' object has no attribute 'get'` in `TaskRoutesAdapter`
  - Properly handles Pydantic model attributes using `getattr()` and `model_dump()` instead of dictionary methods
  - Compatible with both Pydantic v1 and v2
  - All 5 previously failing integration tests now pass
- **A2A Protocol Cancel Method Implementation**
  - Complete implementation of the `AgentExecutor.cancel()` method for the A2A Protocol
  - Task ID extraction with priority: `task_id` > `context_id` > `metadata.task_id` > `metadata.context_id`
  - Support for custom error messages via `metadata.error_message`
  - Graceful cancellation by calling the executor's `cancel()` method if supported
  - Token usage and partial results preserved during cancellation
  - Proper `TaskStatusUpdateEvent` generation with A2A Protocol compliance
  - Comprehensive error handling for task-not-found, already-completed, and exception scenarios
  - Complete test coverage with 13 test cases covering all scenarios
  - Documentation updates in the HTTP API and Python API references
- **Enhanced Task Copy Functionality**
  - `children` parameter for `create_task_copy()`: when `True`, also copies each direct child task with its dependencies
  - Deduplication ensures tasks depending on multiple copied tasks are only copied once
  - `children` parameter for the `tasks.copy` API endpoint
  - `--children` flag for the CLI `tasks copy` command
- **Copy Before Execution**
  - `copy_execution` parameter for the `tasks.execute` API: copies the task before execution to preserve the original task's history
  - `copy_children` parameter: when `True` together with `copy_execution=True`, also copies each direct child task with its dependencies
  - The response includes both `task_id` (the copied task) and `original_task_id` (the original task) when `copy_execution=True`
  - Combines `tasks.copy` and `tasks.execute` into a single API call for a better user experience (see the request sketch below)
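A request sketch combining copy and execute, sent as JSON-RPC over HTTP with httpx; the endpoint URL and the `task_id` parameter name are assumptions:

```python
import httpx

response = httpx.post(
    "http://localhost:8000/tasks",  # assumed endpoint
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks.execute",
        "params": {"task_id": "<task-id>", "copy_execution": True, "copy_children": True},
    },
)
result = response.json()
# Per the notes above, the result carries task_id (the copy) and original_task_id.
```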
- **Enhanced Task Deletion with Validation**
  - Physical deletion: tasks are now physically removed from the database (not soft-deleted)
  - Conditional deletion: a task can only be deleted if all tasks involved (the task itself plus all children) are in `pending` status
  - Recursive child deletion: when deletion is allowed, all child tasks (including grandchildren) are automatically deleted
  - Dependency validation: deletion is prevented if other tasks depend on the task being deleted
  - Detailed error messages: returns specific error information when deletion fails:
    - Lists non-pending children with their statuses
    - Lists tasks that depend on the task being deleted
    - Indicates if the task itself is not pending
  - New TaskRepository methods:
    - `get_all_children_recursive()`: recursively get all child tasks
    - `find_dependent_tasks()`: find all tasks that depend on a given task (reverse dependencies)
    - `delete_task()`: physically delete a task from the database
  - Comprehensive test coverage with 16 test cases covering all scenarios
- **Enhanced Task Update with Critical Field Validation**
  - Critical field protection: three critical fields (`parent_id`, `user_id`, `dependencies`) are now strictly validated to prevent fatal errors
  - `parent_id` and `user_id`: always rejected - these fields cannot be modified after task creation (task hierarchy and ownership are fixed)
  - `dependencies`: conditional validation with four critical checks:
    - Status check: can only be updated when the task is in `pending` status
    - Reference validation: all dependency references must exist in the same task tree
    - Circular dependency detection: uses a DFS algorithm to detect and prevent circular dependencies (see the sketch after this list)
    - Execution check: prevents updates if any dependent tasks are currently executing
  - Other fields can be updated freely without status restrictions (inputs, name, priority, params, schemas, status, result, error, progress, timestamps)
  - Comprehensive error reporting: all validation errors are collected and returned in a single response
  - New TaskRepository methods:
    - `update_task_dependencies()`: update task dependencies with validation
    - `update_task_name()`: update the task name
    - `update_task_priority()`: update the task priority
    - `update_task_params()`: update executor parameters
    - `update_task_schemas()`: update validation schemas
  - New utility module: `dependency_validator.py` with reusable validation functions
  - Comprehensive test coverage with 23 test cases covering all validation scenarios
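For reference, a standalone sketch of the DFS check named above; this is an illustrative version, not the library's `dependency_validator.py` code:

```python
def has_cycle(dependencies: dict[str, list[str]]) -> bool:
    """dependencies maps task_id -> list of task_ids it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited, on current DFS path, finished
    color = {task: WHITE for task in dependencies}

    def dfs(task: str) -> bool:
        color[task] = GRAY
        for dep in dependencies.get(task, []):
            if color.get(dep, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(dep, WHITE) == WHITE and dfs(dep):
                return True
        color[task] = BLACK
        return False

    return any(color[task] == WHITE and dfs(task) for task in dependencies)

# has_cycle({"a": ["b"], "b": ["a"]}) -> True
# has_cycle({"a": ["b"], "b": []})    -> False
```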
Release 0.3.0
Added
- **Webhook Support for Task Execution**
  - Webhook callbacks for the `tasks.execute` JSON-RPC endpoint (similar to A2A Protocol push notifications)
  - `WebhookStreamingContext` class for sending HTTP callbacks during task execution
  - Real-time progress updates sent to the configured webhook URL
  - Configurable webhook settings:
    - `url` (required): webhook callback URL
    - `headers` (optional): custom HTTP headers (e.g., Authorization)
    - `method` (optional): HTTP method (default: POST)
    - `timeout` (optional): request timeout in seconds (default: 30.0)
    - `max_retries` (optional): maximum retry attempts (default: 3)
  - Automatic retry mechanism with exponential backoff for failed requests
  - Error handling: client errors (4xx) are not retried; server errors (5xx) and network errors are retried
  - Webhook payload includes: protocol identifier, task status, progress, result, error, and timestamp
  - Update types: `task_start`, `progress`, `task_completed`, `task_failed`, `final`
  - Comprehensive test coverage for webhook functionality (a configuration sketch follows this list)
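A sketch of the webhook settings above attached to a `tasks.execute` request; the `webhook` parameter name and the `task_id` key are assumptions:

```python
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks.execute",
    "params": {
        "task_id": "<task-id>",  # assumed parameter name
        "webhook": {
            "url": "https://example.com/apflow-callback",
            "headers": {"Authorization": "Bearer <token>"},
            "method": "POST",      # defaults per the notes above
            "timeout": 30.0,
            "max_retries": 3,
        },
    },
}
```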
- **Protocol Identification**
  - Added a `protocol` field to all API responses to distinguish between execution modes
  - JSON-RPC endpoints return `"protocol": "jsonrpc"` in the response
  - A2A Protocol endpoints include `"protocol": "a2a"` in metadata and event data
  - Enables clients to identify which protocol was used for task execution
  - Consistent protocol identification across streaming updates and webhook callbacks
- **Streaming Mode for JSON-RPC**
  - Streaming mode support for the `tasks.execute` endpoint via the `use_streaming` parameter
  - Real-time progress updates via Server-Sent Events (SSE) - returns a `StreamingResponse` directly when `use_streaming=true`
  - `TaskStreamingContext` class for in-memory event storage
  - Consistent behavior with the A2A Protocol streaming mode
  - Asynchronous task execution with an immediate response (see the client sketch below)
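A client-side sketch of consuming the SSE stream with httpx; the endpoint URL and parameter names are assumptions consistent with the notes above:

```python
import httpx

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks.execute",
    "params": {"task_id": "<task-id>", "use_streaming": True},
}
with httpx.stream("POST", "http://localhost:8000/tasks", json=request) as response:
    for line in response.iter_lines():
        if line.startswith("data:"):
            print(line[5:].strip())  # each SSE event carries status/progress updates
```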
- **Examples Module**
  - New `examples` module for initializing example task data to help beginners get started
  - CLI command `examples init` for initializing example tasks in the database
  - `--force` flag to re-initialize examples even if they already exist
  - Example tasks demonstrate various features:
    - Tasks with different statuses (completed, failed, pending, in_progress)
    - Task trees with parent-child relationships
    - Tasks with different priorities
    - Tasks with dependencies
    - A CrewAI task example (requires an LLM key)
  - Auto-initialization: the API server automatically initializes examples if the database is empty on startup
  - Example user ID: `example_user` for demo tasks
- **LLM API Key Management**
  - Request header support: `X-LLM-API-KEY` header for demo/one-time usage
  - Simple format: `X-LLM-API-KEY: <api-key>` (provider auto-detected from the model name)
  - Provider-specific format: `X-LLM-API-KEY: <provider>:<api-key>` (e.g., `openai:sk-xxx`, `anthropic:sk-ant-xxx`)
  - User configuration support via the `llm-key-config` extension for multi-user scenarios
  - In-memory storage of user-specific LLM keys (never stored in the database)
  - Support for multiple providers per user (provider-specific keys)
  - `LLMKeyConfigManager` singleton for managing user keys
  - LLM key context manager: thread-local context for LLM keys during task execution
  - LLM key injector: automatic provider detection from model names and works configuration
  - Priority order: request header > user config > environment variables
  - Supported providers: OpenAI, Anthropic, Google/Gemini, Mistral, Groq, Cohere, Together, and more
  - Automatic environment variable injection for CrewAI/LiteLLM compatibility
  - Keys are never stored in the database and are only used during task execution (see the request sketch below)
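A sketch of passing a one-time key via the header described above; the endpoint URL and params are assumptions:

```python
import httpx

httpx.post(
    "http://localhost:8000/tasks",  # assumed endpoint
    headers={"X-LLM-API-KEY": "openai:sk-xxx"},  # provider-specific format
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks.execute",
        "params": {"task_id": "<task-id>"},  # assumed parameter name
    },
)
```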
- **Documentation Updates**
  - Added `examples` command documentation in the CLI guide
  - Added LLM API key management documentation in the API server guide
  - Added LLM key header documentation in the HTTP API reference
  - Updated the examples documentation with a quick start guide and example task structure
  - Comprehensive examples for using LLM keys with CrewAI tasks
  - Added webhook configuration documentation in the HTTP API reference
  - Added protocol identification documentation
Changed
- **Unified Task Execution Architecture**
  - Refactored task execution logic to unify behavior across the A2A Protocol and JSON-RPC endpoints
  - Added the `TaskExecutor.execute_task_by_id()` method for executing tasks by ID with automatic dependency handling
  - Moved dependency collection and subtree building logic from protocol handlers into the core `TaskExecutor` layer
  - Unified execution modes:
    - Root task execution: executes the entire task tree when a root task is specified
    - Child task execution: automatically collects all dependencies (including transitive ones) and executes the task with its required dependencies
  - Both the A2A Protocol and JSON-RPC now use the same core execution methods for consistent behavior
  - Protocol handlers are now thin wrappers around core execution logic, making the system more library-friendly
  - Improved code reusability and maintainability by centralizing execution logic in `TaskExecutor`
- **Task Re-execution Support**
  - Added support for re-executing failed tasks via the `tasks.execute` endpoint
  - Failed tasks can now be re-executed by calling `tasks.execute` with the failed task's ID
  - When a task is re-executed, all of its dependency tasks are also re-executed (even if completed) to ensure consistency
  - Only `pending` and `failed` tasks are executed; `completed` and `in_progress` tasks are skipped unless marked for re-execution
  - Dependency satisfaction logic updated so that completed tasks marked for re-execution can satisfy dependencies (their results are available)
  - Newly created tasks (status: `pending`) are not marked for re-execution and execute normally
- **Task Execution Order Clarification**
  - Important: parent-child relationships (`parent_id`) are now explicitly documented as organizational only and do NOT affect execution order
  - Only dependencies (`dependencies`) determine execution order - a task executes when its dependencies are satisfied
  - This clarification ensures developers understand that task tree structure (parent-child) is separate from execution order (dependencies)
  - Updated all documentation to clearly distinguish organizational relationships from execution dependencies (illustrated below)
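A small sketch of the distinction using the task-array format from these notes; executor names other than `rest_executor` are illustrative:

```python
# parent_id groups tasks into a tree; dependencies alone drive execution order.
tasks = [
    {"id": "root", "name": "aggregate_results_executor", "dependencies": ["fetch", "process"]},
    {"id": "fetch", "parent_id": "root", "name": "rest_executor", "dependencies": []},
    {"id": "process", "parent_id": "root", "name": "command_executor", "dependencies": ["fetch"]},
]
# "process" waits for "fetch" because of its dependencies entry; sharing a
# parent_id with "fetch" has no effect on when either task runs.
```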
- **CLI Command Improvements**
  - The `serve` command now accepts options directly (e.g., `apflow serve --port 8000`) without requiring the `start` subcommand
  - The `serve start` subcommand still works for backward compatibility
  - Improved command structure and user experience
- **CORS Support**
  - Added CORS middleware to the API server for cross-origin requests
  - Default configuration allows `localhost:3000`, `localhost:3001`, and common development ports
  - Configurable via the `AIPARTNERUPFLOW_CORS_ORIGINS` environment variable (comma-separated list)
  - Development mode: `AIPARTNERUPFLOW_CORS_ALLOW_ALL=true` to allow all origins
  - Supports credentials, all HTTP methods, and all headers
- **API Architecture Refactoring**
  - Moved documentation routes (`/docs`, `/openapi.json`) to `api/routes/docs.py` for better code organization
  - Consolidated OpenAPI schema generation logic into the `DocsRoutes` class
  - Improved separation of concerns: route handlers in `api/routes/`, documentation tools in `api/docs/`
  - All custom routes (tasks, system, docs) are now defined in a unified structure
- **SSE Streaming Simplification**
  - SSE streaming is now handled directly by `tasks.execute` with `use_streaming=true`
  - When `use_streaming=true`, `tasks.execute` returns a `StreamingResponse` with Server-Sent Events
  - Simplified API design: one endpoint (`tasks.execute`) handles both regular POST and SSE streaming modes
  - Webhook callbacks remain independent and can be used with either response mode
Fixed
- **Test Infrastructure**
  - Consolidated database session management in `conftest.py` with a global `use_test_db_session` fixture
  - All tests now use isolated in-memory DuckDB databases to prevent data pollution
  - Removed the dependency on persistent database files for testing
  - Improved test isolation and reliability
  - Fixed the `test_webhook_config_validation` test by using `patch.object()` for the TaskTracker singleton
  - Updated `test_jsonrpc_tasks_execute_with_streaming` to correctly parse SSE stream responses
  - Added comprehensive test coverage for documentation routes (`/docs` and `/openapi.json`) with 14 test cases
- **Documentation Corrections**
  - Fixed incorrect command examples in README.md and docs:
    - Corrected `run my_batch` to the proper `run flow --tasks` or `run flow executor_id` format
    - Corrected `run flow example_flow` to the proper executor ID format
    - Removed the non-existent `list-flows` command from documentation
  - Ensured all command examples in the documentation are accurate and testable
  - Updated the SSE streaming documentation to reflect direct integration with `tasks.execute`
Release 0.2.0
Added
- **Task Tree Validation**
  - Circular dependency detection using a DFS algorithm in `TaskCreator.create_task_tree_from_array()`
  - Single task tree validation ensuring all tasks are in the same tree structure
  - Validation that exactly one root task exists
  - Verification that all tasks are reachable from the root task via the parent_id chain
  - Dependent task inclusion validation (ensures all tasks that depend on tasks in the tree are included)
  - Comprehensive test coverage for circular dependency scenarios
- **Task Copy Functionality**
  - `TaskCreator.create_task_copy()` method for creating executable copies of task trees
  - Automatic inclusion of dependent tasks (including transitive dependencies) when copying
  - Special handling for failed leaf nodes (filters out pending dependents)
  - Minimal subtree construction to include only required tasks
  - Task copy fields in `TaskModel`: `original_task_id` (links a copy to its original) and `has_copy` (indicates whether a task has copies)
  - API endpoint `tasks.copy` via the JSON-RPC `/tasks` endpoint
  - CLI command `tasks copy <task_id>` for copying task trees
  - Comprehensive test coverage for task copy functionality
Release 0.1.0
[0.1.0] - 2025-11-19
Added
- **Task Orchestration Engine**
  - `TaskManager`: core task orchestration with dependency management and tree execution
  - `TaskRepository`: data access layer for task CRUD operations
  - `TaskModel`: task definition model with support for custom fields via inheritance
  - Task tree structure with parent-child relationships
  - Priority-based task execution
  - Dependency resolution and satisfaction checking
- **A2A Protocol Integration**
  - A2A Protocol as the standard communication protocol
  - Separation of task definitions (TaskModel) from execution instances (A2A Task)
  - Context ID mapping: TaskModel.id → A2A Task.context_id
  - Uses the A2A SDK's InMemoryTaskStore for task execution instances
  - TaskModel persistence handled by TaskRepository (separate from the A2A TaskStore)
- **API Service Layer**
  - A2A server implementation with Starlette
  - Custom A2A Starlette application with system routes
  - Task management APIs: `/system/task_create`, `/system/task_get`, `/system/task_update`, `/system/task_delete`
  - Optional JWT authentication middleware
  - Support for a custom TaskModel via the `task_model_class` parameter
  - Environment variable support for custom TaskModel loading (`AIPARTNERUPFLOW_TASK_MODEL_CLASS`)
- **Storage Module**
  - SQLAlchemy-based storage with DuckDB (default) and PostgreSQL support
  - Automatic table creation on first use
  - Configurable table name via `AIPARTNERUPFLOW_TASK_TABLE_NAME` (default: `apflow_tasks`)
  - Session factory with sync/async support
- **Custom TaskModel Support**
  - Users can define custom TaskModel subclasses with additional fields
  - TaskRepository supports custom TaskModel classes
  - Custom fields are automatically handled in the task creation APIs
  - Example: `MyTaskModel(TaskModel)` with `project_id` and `department` fields
- **Event Queue Bridge**
  - Streaming callbacks integrated with the A2A Protocol EventQueue
  - Real-time task execution progress updates
- **Base Task Infrastructure**
  - `BaseTask`: optional base class for executable tasks with common implementations
  - Input validation utilities (`get_input_schema`, `validate_input_schema`, `check_input_schema`)
  - Cancellation support via the `cancellation_checker` callback
  - Streaming context support for progress updates
  - Supports a Pydantic BaseModel or a JSON schema dict for input validation (see the executor sketch below)
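A sketch of a minimal executor built on these pieces; the import paths, the `execute()` signature, and the executor id are assumptions:

```python
from pydantic import BaseModel
from aipartnerupflow.core.base_task import BaseTask            # assumed import path
from aipartnerupflow.core.decorators import executor_register  # assumed import path

class EchoInputs(BaseModel):
    message: str

@executor_register()
class EchoExecutor(BaseTask):
    id = "echo_executor"  # hypothetical executor id

    def get_input_schema(self):
        # Pydantic model (or a JSON schema dict) used for input validation.
        return EchoInputs

    async def execute(self, inputs: dict) -> dict:  # signature is an assumption
        return {"result": inputs["message"]}
```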
- **Executor Extension System**
  - Unified `ExtensionRegistry` for executor registration and discovery
  - `@executor_register()` decorator for automatic registration
  - Category- and type-based executor discovery
  - Extension system supporting executors, storage, hooks, and tools
  - Globally unique ID-based extension lookup
- **Built-in Executors**
  - `AggregateResultsExecutor`: aggregates dependency task results into a structured format
  - `SystemInfoExecutor`: safe system resource queries (CPU, memory, disk) with predefined commands
  - `CommandExecutor`: shell command execution (stdio extension)
  - `CrewManager`: LLM-based agent crew execution via CrewAI with token usage tracking
  - `BatchManager`: atomic batch execution of multiple crews with result merging
Infrastructure
- Project structure with src-layout
- `pyproject.toml` with optional dependencies (`[a2a]`, `[crewai]`, `[cli]`, `[postgres]`, `[all]`)
- Comprehensive documentation in the `docs/` directory
- Test suite with pytest fixtures
- Modular architecture separating core from optional features
Notes
- Project is currently in active development
- No official release has been made yet
- Breaking changes may occur before v0.1.0 release