
Releases: aipartnerup/apflow

Release 0.10.0

01 Jan 07:56


Changed

  • Refactored import paths from aipartnerupflow to apflow across test files
  • Updated import statements in various test files to reflect the new module structure under apflow.
  • Ensured all references to aipartnerupflow are replaced with apflow in test modules related to extensions, including but not limited to:
  • crewai
  • docker
  • generate
  • grpc
  • http
  • llm
  • mcp
  • ssh
  • stdio
  • websocket
  • Adjusted integration tests to align with the new import paths.

Release 0.9.0

28 Dec 02:52


Added

  • Hook Execution Context for Database Access

  • New get_hook_repository() function allows hooks to access database within task execution context

  • New get_hook_session() function provides direct access to database session in hooks

  • Hooks now share the same database session/transaction as TaskManager (no need for separate sessions)

  • Auto-persistence for task.inputs modifications in pre-hooks (detected and saved automatically)

  • Explicit repository methods available for other field modifications (name, priority, status, etc.)

  • Thread-safe context isolation using Python's ContextVar (similar to Flask/Celery patterns)

  • Added set_hook_context() and clear_hook_context() internal functions for context management

  • Exported to public API: aipartnerupflow.get_hook_repository and aipartnerupflow.get_hook_session

  • Added comprehensive test coverage (16 tests):

  • Hook context basic operations and lifecycle

  • Multiple hooks sharing same session instance

  • Hooks sharing transaction context and seeing uncommitted changes

  • Hooks cooperating via shared session

  • Auto-persistence of task.inputs modifications

  • Explicit field updates via repository methods
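
The ContextVar-based isolation mentioned above can be illustrated with a minimal stdlib sketch. The function names follow this changelog, but the bodies are illustrative only, not the library's actual implementation:

```python
from contextvars import ContextVar
from typing import Optional

# The session bound to the current task execution; isolated per thread and
# per async context, the same pattern Flask and Celery use for request state.
_hook_session: ContextVar[Optional[object]] = ContextVar("hook_session", default=None)

def set_hook_context(session: object) -> None:
    """Called by the task manager before hooks run, binding its session."""
    _hook_session.set(session)

def clear_hook_context() -> None:
    """Called after execution/cancellation so stale sessions never leak."""
    _hook_session.set(None)

def get_hook_session() -> object:
    """What a hook calls to reach the shared session/transaction."""
    session = _hook_session.get()
    if session is None:
        raise RuntimeError("get_hook_session() called outside a task execution context")
    return session
```

Because every hook in the same execution context reads the same `ContextVar`, all hooks share one session and transaction, which is how they can see each other's uncommitted changes.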

  • Database Session Safety Enhancements

  • Added flag_modified() calls for all JSON fields (result, inputs, dependencies, params, schemas) to ensure SQLAlchemy detects in-place modifications

  • Added db.refresh() after critical status updates to ensure fresh data from database

  • Added concurrent execution protection: same task_tree cannot run multiple times simultaneously

  • Returns {"status": "already_running"} when attempting concurrent execution of same task tree

  • Added 12 new tests for concurrent protection and JSON field persistence

  • CLI Extension Decorator (cli_register)

  • New @cli_register() decorator for registering CLI extensions, similar to @executor_register()

  • Decorator supports name, help, and override parameters

  • Auto-derives command name from class name (converts _ to -)

  • New get_cli_registry() function to access registered CLI extensions

  • CLI extensions are loaded from decorator registry before entry_points discovery

  • Added comprehensive test coverage (18 tests) for CLI extension decorators
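
A rough sketch of how such a decorator registry might look. The decorator name and parameters match this changelog, but the storage and error behavior are assumptions:

```python
from typing import Callable, Dict, Optional, Type

_cli_registry: Dict[str, Type] = {}

def cli_register(name: Optional[str] = None, help: str = "", override: bool = False) -> Callable[[Type], Type]:
    """Register a CLI extension class, mirroring @executor_register()."""
    def decorator(cls: Type) -> Type:
        # Auto-derive the command name from the class name, converting _ to -
        cmd = name or cls.__name__.lower().replace("_", "-")
        if cmd in _cli_registry and not override:
            raise ValueError(f"CLI extension {cmd!r} is already registered")
        cls.cli_name = cmd    # illustrative attributes, not confirmed API
        cls.cli_help = help
        _cli_registry[cmd] = cls
        return cls
    return decorator

def get_cli_registry() -> Dict[str, Type]:
    return dict(_cli_registry)
```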

  • Exception Handling Architecture

  • New exception hierarchy based on FastAPI/production framework best practices

  • ApflowError base exception for all framework-specific errors

  • BusinessError for expected user/configuration errors (logged without stack traces)

  • ValidationError for input validation failures

  • ConfigurationError for missing configuration/dependencies

  • SystemError for unexpected system-level errors (logged with stack traces)

  • ExecutorError for executor runtime failures

  • StorageError for database/storage failures

  • Created core/execution/errors.py with comprehensive exception documentation

  • Created docs/development/exception-handling-standards.md with implementation guidelines
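
A minimal sketch of the hierarchy. Only ApflowError as the common base is stated above, so the intermediate parent classes chosen here (ValidationError/ConfigurationError under BusinessError, ExecutorError/StorageError under SystemError) are assumptions:

```python
class ApflowError(Exception):
    """Base for all framework-specific errors."""

class BusinessError(ApflowError):
    """Expected user/configuration errors; logged without stack traces."""

class ValidationError(BusinessError):
    """Input validation failures."""

class ConfigurationError(BusinessError):
    """Missing configuration or dependencies."""

class SystemError(ApflowError):  # note: shadows the Python builtin inside this module
    """Unexpected system-level errors; logged with stack traces."""

class ExecutorError(SystemError):
    """Executor runtime failures."""

class StorageError(SystemError):
    """Database/storage failures."""
```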

Changed

  • Executor Error Handling Refactoring

  • All executors now raise exceptions instead of returning error dictionaries

  • Technical exceptions (TimeoutError, ConnectionError, etc.) now propagate naturally to TaskManager

  • Executors validate inputs and raise ValidationError or ConfigurationError for expected failures

  • Updated executors: DockerExecutor, GrpcExecutor, RestExecutor, SshExecutor, CommandExecutor, LLMExecutor

  • TaskManager now catches all exceptions, marks tasks as failed, and logs appropriately based on exception type

  • BusinessError logged without stack trace (clean logs), other exceptions logged with full stack trace
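
The type-aware logging rule can be sketched as follows. This is a simplified stand-in, not TaskManager's actual code:

```python
import logging

class BusinessError(Exception):
    """Stand-in for the framework's expected-error base class."""

def log_task_failure(logger: logging.Logger, exc: Exception) -> str:
    """Log a failed task with or without a traceback, based on exception type."""
    if isinstance(exc, BusinessError):
        logger.error("Task failed: %s", exc)   # expected error: clean one-line log
        return "no-traceback"
    logger.exception("Task failed unexpectedly")  # unexpected: full stack trace
    return "traceback"
```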

  • CrewAI Executor/Batch Executor Renaming and Test Fixes

  • crewai/crew_manager.py → crewai/crewai_executor.py, with the class name updated to CrewaiExecutor

  • crewai/batch_manager.py → crewai/batch_crewai_executor.py, with the class name updated to BatchCrewaiExecutor

  • All related test cases (test_crewai_executor.py, test_batch_crewai_executor.py, etc.) have been batch-updated with corrected patch paths, mocks, imports, and class names to align with the new naming

  • Resolved AttributeError: module 'aipartnerupflow.extensions.crewai' has no attribute 'crew_manager' and similar test failures caused by the renaming

Release 0.8.0

25 Dec 06:57
af9719c


Added

  • LLM Executor Integration

  • Added LLMExecutor (llm_executor) for direct LLM interaction via LiteLLM

  • Supports unified model parameter for 100+ providers (OpenAI, Anthropic, Gemini, etc.)

  • Support for stream=True in inputs or context metadata for Server-Sent Events (SSE)

  • Automatic API key handling via LLMKeyConfigManager or environment variables

  • Auto-registration via extensions mechanism

  • Added [llm] optional dependency including litellm
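
A hypothetical task payload for the new executor. The `model` and `stream` fields are documented above; the task name and the specific model string are illustrative:

```python
# Hypothetical task targeting llm_executor; schemas.method selects the executor.
llm_task = {
    "name": "summarize-report",
    "schemas": {"method": "llm_executor"},
    "inputs": {
        "model": "openai/gpt-4o-mini",  # LiteLLM-style provider/model string
        "stream": True,                 # request Server-Sent Events streaming
    },
}
```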

  • CLI: Plugin Mechanism for Extensions

  • Added CLIExtension class to facilitate creating CLI subcommands in external projects.

  • Implemented dynamic subcommands discovery using Python entry_points (aipartnerupflow.cli_plugins).

  • Allows projects like aipartnerupflow-demo to register commands (e.g., apflow users stat) without modifying the core library.

  • Supports both full typer.Typer apps and single-command callables as plugins.

  • CLI: Improved Task Count Output

  • Changed default output format of apflow tasks count from json to table for better terminal readability.

Changed

  • CLI: Simplified apflow tasks commands
  • apflow tasks count now defaults to providing comprehensive database statistics grouped by status.
  • Removed redundant --all and --status flags from count command (database statistics are now the default).
  • Renamed apflow tasks all command to apflow tasks list for better alignment with API naming conventions.
  • Removed the legacy apflow tasks list command (which only showed running tasks).
  • The new apflow tasks list command now lists all tasks from the database with support for filtering and pagination.

Fixed

  • Tests: Infrastructure and LLM Integration
  • Updated tests/conftest.py to automatically load .env file environment variables at the start of the test session.
  • Added auto-registration for LLMExecutor in the test conftest.py fixture.
  • Fixed LLMExecutor integration tests to correctly use real API keys from .env when available.

Release 0.7.3

22 Dec 09:40


Fixed

  • CLI: event-loop handling for async database operations

  • Ensured async database sessions and repositories are created and closed inside
    the same event loop to avoid "no running event loop" and "Event loop is closed" errors

  • Updated apflow tasks commands to run async work in a safe context

  • Added nest_asyncio support for nested event loops in test environments

  • Logging: clean CLI output by default

  • Default log level for the library is now ERROR to keep CLI output clean

  • Support LOG_LEVEL and DEBUG environment variables to override logging when needed

  • Debug logs can be enabled with LOG_LEVEL=DEBUG apflow ...

  • Extensions registration noise reduced

  • Demoted expected registration instantiation messages to DEBUG (no longer printed by default)

  • This prevents benign initialization messages from appearing during normal CLI runs

  • Miscellaneous

  • Added nest_asyncio to CLI optional dependencies to improve compatibility in nested-loop contexts

Release 0.7.2

21 Dec 08:36


Fixed

  • Documentation Corrections for schemas.method Field

  • Clarified that schemas.method is a required field when schemas is provided

  • Updated documentation to explicitly state that schemas.method must match an executor ID from the extensions registry

  • Fixed all documentation examples to use real executor IDs instead of placeholder values

  • Updated examples across all documentation files:

  • docs/api/http.md: Replaced generic "executor_id" with concrete IDs like "system_info_executor", "rest_executor", "command_executor"

  • docs/getting-started/quick-start.md: Updated all task examples to use valid executor IDs

  • docs/guides/cli.md: Fixed CLI command examples with correct executor IDs

  • docs/development/design/cli-design.md: Updated design documentation examples

  • docs/development/setup.md: Fixed setup guide examples

  • Fixed generate_executor.py LLM prompt to correctly instruct LLM to use schemas.method (not name) as executor ID

  • Updated task structure examples in LLM prompt to reflect correct usage
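
The rule that schemas.method must name a registered executor can be expressed as a small validation sketch (a hypothetical helper, not the library's implementation):

```python
def validate_schemas_method(task: dict, executor_registry: set) -> None:
    """Reject tasks whose schemas.method is missing or not a registered executor id."""
    schemas = task.get("schemas")
    if schemas is None:
        return  # schemas itself is optional
    method = schemas.get("method")
    if not method:
        raise ValueError("schemas.method is required when schemas is provided")
    if method not in executor_registry:
        raise ValueError(f"schemas.method {method!r} does not match any registered executor id")
```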

  • API Endpoint Test Coverage

  • Added missing test cases for API endpoints:

  • test_jsonrpc_tasks_list: Tests tasks.list endpoint with pagination

  • test_jsonrpc_tasks_running_status: Tests tasks.running.status endpoint with array format

  • test_jsonrpc_tasks_running_count: Tests tasks.running.count endpoint

  • test_jsonrpc_tasks_cancel: Tests tasks.cancel endpoint with array format

  • test_jsonrpc_tasks_generate: Tests tasks.generate endpoint for task tree generation

  • Fixed test parameter format issues:

  • tasks.running.status and tasks.cancel now correctly use task_ids array parameter instead of single task_id

  • Tests now expect array responses instead of single object responses

  • All API endpoints now have comprehensive test coverage

  • CLI Command Test Coverage

  • Added test_tasks_watch test cases for tasks watch CLI command

  • Uses mock to avoid interactive Live display component issues in automated tests

  • Tests parameter validation and basic functionality

  • Properly handles error messages in stderr

  • API Documentation Completeness

  • Added missing response example for tasks.running.status endpoint

  • Includes complete response format with all fields (task_id, context_id, status, progress, error, is_running, timestamps)

  • Documents error cases (not_found, permission_denied)

  • Clarifies that method returns array format even for single task queries

Added

  • Comprehensive Documentation Review
  • Verified all documentation examples use valid executor IDs
  • Ensured all examples are functional and can be parsed correctly
  • Validated that all CLI commands have corresponding test cases
  • Confirmed API endpoint documentation matches actual implementation

Release 0.7.1

20 Dec 02:50


Fixed

  • DuckDB Custom Path Directory Creation
  • Fixed issue where DuckDB would fail when using custom directory paths that don't exist
  • Added _ensure_database_directory_exists() function to automatically create parent directories before creating DuckDB connections
  • Directory creation is now handled automatically in create_session(), SessionPoolManager.initialize(), and PooledSessionContext.__init__()
  • Skips directory creation for in-memory databases (:memory:) and handles errors gracefully with appropriate logging
  • Users can now specify custom DuckDB file paths without manually creating directories first
  • Missing Return Type Annotations
  • Added missing return type annotation -> None to check_input_schema() function in core/utils/helpers.py
  • Added missing return type annotation -> ParseResult to validate_url() function in core/utils/helpers.py
  • Fixed type checker errors and ensured 100% type annotation compliance as required by code quality rules
  • Module-Level Resource Creation
  • Refactored core/storage/factory.py to eliminate module-level global variables for database sessions
  • Replaced _default_session and _session_pool_manager module-level globals with SessionRegistry class
  • Session state is now encapsulated in SessionRegistry class following dependency injection principles
  • All session management functions (get_default_session(), set_default_session(), reset_default_session(), get_session_pool_manager(), reset_session_pool_manager()) now use SessionRegistry class methods
  • Maintains full backward compatibility - all public APIs remain unchanged
  • Follows code quality rules requiring dependency injection instead of module-level resource creation

Release 0.7.0

20 Dec 01:53


Added

  • Task Context Sharing and LLM Key Management

  • Task Context Sharing: TaskManager now passes the entire task object (TaskModel instance) to executors

  • Executors can access all task fields including custom TaskModel fields via self.task

  • Supports custom TaskModel classes with additional fields

  • Enables executors to modify task context (e.g., update status, progress, custom fields)

  • BaseTask uses weak references (weakref.ref) to store task objects, preventing memory leaks

  • Task context is automatically cleared after execution or cancellation

  • task_id is stored separately for future extension (e.g., Redis-based task storage)

  • Unified user_id Access: BaseTask provides user_id property that automatically retrieves from task.user_id

  • Executors can use self.user_id instead of inputs.get("user_id")

  • Falls back to _user_id when task is not available (for backward compatibility and testing)

  • All LLM executors (generate_executor, crew_manager) now use self.user_id

  • Unified LLM Key Retrieval: Centralized LLM key management with context-aware priority order

  • New get_llm_key() function with unified priority logic for API and CLI contexts

  • API context priority: header → LLMKeyConfigManager → environment variables

  • CLI context priority: params → LLMKeyConfigManager → environment variables

  • Auto-detection mode (context="auto") automatically detects API or CLI context

  • All LLM executors now proactively retrieve keys using unified mechanism

  • Removed hardcoded LLM key injection logic from TaskManager (separation of concerns)

  • LLM Key Context Optimization: Refactored llm_key_context.py to eliminate code duplication

  • Extracted _get_key_from_user_config() helper function for user config lookup

  • Extracted _get_key_from_source() helper function for header/CLI params retrieval

  • Reduced code duplication by ~40%, improved maintainability

  • All functionality preserved, backward compatible
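
The priority order can be sketched as a pure function. The parameter names and the LLM_API_KEY environment variable name here are assumptions for illustration:

```python
from typing import Mapping, Optional

def get_llm_key(
    context: str,                      # "api" or "cli"
    header_key: Optional[str] = None,  # API context: key from the request header
    cli_key: Optional[str] = None,     # CLI context: key from command params
    user_config: Optional[Mapping[str, str]] = None,  # LLMKeyConfigManager lookup
    env: Optional[Mapping[str, str]] = None,
) -> Optional[str]:
    """First match wins: context-specific source -> user config -> environment."""
    source = header_key if context == "api" else cli_key
    configured = (user_config or {}).get("llm_key")
    env_key = (env or {}).get("LLM_API_KEY")  # hypothetical variable name
    return source or configured or env_key
```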

  • Enhanced Task Copy Functionality

  • UUID Generation for Task IDs: Task copy now always generates new UUIDs for copied tasks, regardless of save parameter value

  • Ensures clear task tree relationships and prevents ID conflicts

  • All copied tasks receive unique IDs for proper dependency mapping

  • Compatible with tasks.create API when save=False (returns task array with complete data)

  • Save Parameter Support: New save parameter for create_task_copy() method and tasks.copy API

  • save=True (default): Saves copied tasks to database and returns TaskTreeNode

  • save=False: Returns task array without saving to database, suitable for preview or direct use with tasks.create

  • Task array format includes all required fields (id, name, parent_id, dependencies) with new UUIDs

  • Dependencies correctly reference new task IDs within the copied tree

  • Parameter Renaming for Clarity: Renamed parameters in custom copy mode for better clarity

  • task_ids → custom_task_ids (required when copy_mode="custom")

  • include_children → custom_include_children (used when copy_mode="custom")

  • Old parameter names removed (no backward compatibility)

  • CLI: --task-ids → --custom-task-ids, --include-children → --custom-include-children

  • Improved Dependency Mapping: Fixed dependency resolution in copied task trees

  • Dependencies now correctly reference new task IDs within the copied tree

  • Original task IDs properly mapped to new IDs for all tasks in the tree

  • Circular dependency detection works correctly with new task IDs

  • original_task_id correctly points to each task's direct original counterpart (not root)

  • Comprehensive Test Coverage: Added extensive test cases for API and CLI

  • API tests: 11 test cases covering all copy modes, save parameter, error handling

  • CLI tests: 7 test cases covering all copy modes, dry-run, reset_fields

  • Tests verify UUID generation, dependency mapping, and database interaction
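
The UUID generation and dependency remapping behavior can be sketched as follows (an illustrative helper, not the library's create_task_copy):

```python
import uuid
from typing import Dict, List

def remap_copied_task_ids(tasks: List[dict]) -> List[dict]:
    """Assign fresh UUIDs to a copied task tree and rewrite internal references."""
    id_map: Dict[str, str] = {t["id"]: str(uuid.uuid4()) for t in tasks}
    copied = []
    for t in tasks:
        c = dict(t)
        c["original_task_id"] = t["id"]  # direct original counterpart, not the root
        c["id"] = id_map[t["id"]]
        if c.get("parent_id") in id_map:
            c["parent_id"] = id_map[c["parent_id"]]
        # Dependencies inside the copied tree point at the new ids
        c["dependencies"] = [id_map.get(d, d) for d in t.get("dependencies", [])]
        copied.append(c)
    return copied
```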

  • API Module Refactoring for Better Library Usage

  • Split api/main.py into modular components for improved code organization

  • New api/extensions.py: Extension management module with initialize_extensions() and extension configuration

  • New api/protocols.py: Protocol management module with protocol selection and dependency checking

  • New api/app.py: Application creation module with create_app_by_protocol() and protocol-specific server creation functions

  • api/main.py now contains library-friendly entry points (main() and create_runnable_app() functions)

  • Benefits: Better separation of concerns, easier to use in external projects like aipartnerupflow-demo

  • Migration: Import paths updated:

  • from aipartnerupflow.api.extensions import initialize_extensions

  • from aipartnerupflow.api.protocols import get_protocol_from_env, check_protocol_dependency

  • from aipartnerupflow.api.app import create_app_by_protocol, create_a2a_server, create_mcp_server

  • All existing imports from api/main continue to work via re-exports for backward compatibility

  • Enhanced Library Usage Support in api/main.py

  • New create_runnable_app() function: Replaces create_app() with clearer naming

  • Returns a fully initialized, runnable application instance

  • Handles all initialization steps: .env loading, extension initialization, custom TaskModel loading, examples initialization

  • Supports custom routes, middleware, and TaskRoutes class via **kwargs

  • Can be used when you need the app object but want to run the server yourself

  • Usage: from aipartnerupflow.api.main import create_runnable_app; app = create_runnable_app()

  • Enhanced main() function: Now fully supports library usage

  • Can be called directly from external projects with custom configuration

  • Separates application configuration (passed to create_runnable_app()) from server configuration (uvicorn parameters)

  • Supports all uvicorn parameters: host, port, workers, loop, limit_concurrency, limit_max_requests, access_log

  • Usage: from aipartnerupflow.api.main import main; main(custom_routes=[...], port=8080)

  • Smart .env File Loading: New _load_env_file() function with priority-based discovery

  • Priority order: 1) Current working directory, 2) Main script's directory, 3) Library's own directory (development only)

  • Ensures that when used as a library, it loads .env from the consuming project, not from the library's installation directory

  • Respects existing environment variables (override=False)

  • Gracefully handles missing python-dotenv package

  • Development Environment Setup: New _setup_development_environment() function

  • Only runs when executing library's own main.py directly (not when installed as package)

  • Suppresses specific warnings for cleaner output

  • Adds project root to Python path for development mode

  • Does not affect library usage in external projects

  • Backward Compatibility: All existing code continues to work

  • create_app() name deprecated but still available via alias

  • All initialization steps remain the same, just better organized

  • Enhanced API Server Creation Functions

  • Added auto_initialize_extensions parameter to create_a2a_server() in api/a2a/server.py

  • Matches behavior of create_app_by_protocol() for consistent API

  • Default: False (backward compatible)

  • Added task_routes_class parameter to create_app_by_protocol() and server creation functions

  • Supports custom TaskRoutes class injection throughout the server creation chain

  • Enables aipartnerupflow-demo to use standard API functions directly without workarounds

  • All new parameters are optional with safe defaults for backward compatibility

  • Executor Metadata API

  • New get_executor_metadata(executor_id) function to query executor metadata

  • New validate_task_format(task, executor_id) function to validate tasks against executor schemas

  • New get_all_executor_metadata() function to get metadata for all executors

  • Located in aipartnerupflow.core.extensions.executor_metadata

  • Used by demo applications to generate accurate demo tasks

  • Returns: id, name, description, input_schema, examples, tags

Removed

  • Examples Module Deprecation

  • Removed aipartnerupflow.examples module from core library

  • Removed examples CLI command (aipartnerupflow examples init)

  • Removed examples = [] optional dependency from pyproject.toml

  • Migration: Demo task initialization has been moved to the aipartnerupflow-demo project

  • Demo task definitions are now managed separately from the core library

  • This keeps the core library focused on orchestration functionality

  • For demo tasks, please use aipartnerupflow-demo

  • Examples API Methods

  • Removed examples.init and examples.status API methods from system routes

  • These methods are no longer available in the API

  • Migration: Use aipartnerupflow-demo for demo task initialization

Changed

  • Session Management Refactoring
  • Replaced get_default_session() with create_pooled_session() context manager in all API routes
  • Renamed create_task_tree_session to create_pooled_session in storage/factory.py
  • Updated TaskExecutor to use create_pooled_session as fallback
  • Improved concurrency safety for API requests
  • Breaking Change: get_default_session() is now deprecated for route handlers
  • LLM Key Management Architecture
  • Executors now proactively retrieve LLM keys instead of receiving them via inputs
  • TaskManager no longer handles LLM key injection for specific executors
  • LLM key retrieval is now executor responsibility, following separation of concerns
  • All executors use unified get_llm_key() function with consistent priority order

Release 0.6.1

11 Dec 08:45


Added

  • JWT Token Generation Support

  • New generate_token() function in aipartnerupflow.api.a2a.server for generating JWT tokens

  • Supports custom payload, secret key, algorithm (default: HS256), and expiration (default: 30 days)

  • Uses python-jose[cryptography] for token generation and verification

  • Complements existing verify_token() function for complete JWT token lifecycle management

  • Usage: from aipartnerupflow.api.a2a.server import generate_token; token = generate_token({"user_id": "user123"}, secret_key)

  • Cookie-based JWT Authentication

  • Support for JWT token extraction from request.cookies.get("Authorization") in addition to Authorization header

  • Priority: Authorization header is checked first, then falls back to cookie if header is not present

  • Enables cookie-based authentication for web applications and browser-based clients

  • Maintains security: Only JWT tokens are trusted (no fallback to HTTP headers for user identification)

  • Updated _extract_user_id_from_request() method in BaseRouteHandler to support both header and cookie sources

  • Dependency Updates

  • Added python-jose[cryptography]>=3.3.0 to [a2a] optional dependencies in pyproject.toml

  • Required for JWT token generation and verification functionality

Release 0.6.0

10 Dec 09:02


Added

  • TaskRoutes Extension Mechanism

  • Added task_routes_class parameter to create_a2a_server() and _create_request_handler() for custom TaskRoutes injection

  • Eliminates the need for monkey patching when extending TaskRoutes functionality

  • Supports custom routes via custom_routes parameter in CustomA2AStarletteApplication

  • Backward compatible: optional parameter with default TaskRoutes class

  • Usage: create_a2a_server(task_routes_class=CustomTaskRoutes, custom_routes=[...])

  • Task Tree Lifecycle Hooks

  • New register_task_tree_hook() decorator for task tree lifecycle events

  • Four hook types: on_tree_created, on_tree_started, on_tree_completed, on_tree_failed

  • Explicit lifecycle tracking without manual root task detection

  • Hooks receive root task and relevant context (status, error message)

  • Usage: @register_task_tree_hook("on_tree_completed") async def on_completed(root_task, status): ...

  • Executor-Specific Hooks

  • Added pre_hook and post_hook parameters to @executor_register() decorator

  • Runtime hook registration via add_executor_hook(executor_id, hook_type, hook_func)

  • Inject custom logic (e.g., quota checks, demo data fallback) for specific executors

  • pre_hook can return a result to skip executor execution (useful for demo mode)

  • post_hook receives executor, task, inputs, and result for post-processing

  • Supports both decorator-based and runtime registration for existing executors
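
The short-circuit semantics can be sketched with simplified hook signatures (per the notes, the real hooks also receive the executor and task objects):

```python
from typing import Any, Callable, Optional

def run_with_hooks(
    execute: Callable[[dict], Any],
    inputs: dict,
    pre_hook: Optional[Callable[[dict], Any]] = None,
    post_hook: Optional[Callable[[dict, Any], Any]] = None,
) -> Any:
    """Pre-hook may short-circuit (e.g. demo data); post-hook may rewrite the result."""
    if pre_hook is not None:
        early = pre_hook(inputs)
        if early is not None:
            return early  # pre_hook returned a result: skip the executor entirely
    result = execute(inputs)
    if post_hook is not None:
        result = post_hook(inputs, result)
    return result
```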

  • Automatic user_id Extraction

  • Automatic user_id extraction from JWT token in TaskRoutes.handle_task_generate and handle_task_create

  • Only extracts from JWT token payload for security (HTTP headers can be spoofed)

  • Supports user_id field or standard JWT sub claim in token payload

  • Extracted user_id automatically set on task data

  • Simplifies custom route implementations and ensures consistent user identification

  • Security: Only trusted JWT tokens are used, no fallback to HTTP headers

  • Demo Mode Support

  • Built-in demo mode via use_demo parameter in task inputs

  • CLI support: --use-demo flag for apflow run flow command

  • API support: use_demo parameter in task creation and execution

  • Executors can override get_demo_result() method in BaseTask for custom demo data

  • Default demo data format: {"result": "Demo execution result", "demo_mode": True}

  • All built-in executors now implement get_demo_result() method:

  • SystemInfoExecutor, CommandExecutor, AggregateResultsExecutor

  • RestExecutor, GenerateExecutor, ApiExecutor

  • SshExecutor, GrpcExecutor, WebSocketExecutor

  • McpExecutor, DockerExecutor

  • CrewManager, BatchManager (CrewAI executors)

  • Realistic Demo Execution Timing: All executors include _demo_sleep values to simulate real execution time:

  • Network operations (HTTP, SSH, API): 0.2-0.5 seconds

  • Container operations (Docker): 1.0 second

  • LLM operations (CrewAI, Generate): 1.0-1.5 seconds

  • Local operations (SystemInfo, Command, Aggregate): 0.05-0.1 seconds

  • Global Demo Sleep Scale: Configurable via AIPARTNERUPFLOW_DEMO_SLEEP_SCALE environment variable (default: 1.0)

  • Allows adjusting demo execution speed globally (e.g., 0.5 for faster, 2.0 for slower)

  • API: set_demo_sleep_scale(scale) and get_demo_sleep_scale() functions

  • CrewAI Demo Support: CrewManager and BatchManager generate realistic demo results:

  • Based on works definition (agents and tasks) from task params or schemas

  • Includes simulated token_usage matching real LLM execution patterns

  • BatchManager aggregates token usage across multiple works

  • Demo mode helps developers test workflows without external dependencies

  • TaskModel Customization Improvements

  • Enhanced set_task_model_class() with improved validation and error messages

  • New @task_model_register() decorator for convenient TaskModel registration

  • Validation ensures custom classes inherit from TaskModel with helpful error messages

  • Supports __table_args__ = {'extend_existing': True} for extending existing table definitions

  • Better support for user-defined MyTaskModel(TaskModel) with additional fields

  • Documentation for Hook Types

  • Added comprehensive documentation explaining differences between hook types

  • pre_hook / post_hook: Task-level hooks for individual task execution

  • task_tree_hook: Task tree-level hooks for tree lifecycle events

  • Clear usage scenarios and examples in docs/development/extending.md

Changed

  • LLM Model Parameter Naming: Unified LLM model parameter naming to model across all components
  • Breaking Change: llm_model parameter in generate_executor has been renamed to model
  • GenerateExecutor: inputs["llm_model"] → inputs["model"]
  • API Routes: params["llm_model"] → params["model"]
  • CLI: --model parameter remains unchanged (internal mapping updated)
  • New Feature: Support for schemas["model"] configuration for CrewAI executor
  • Model configuration can now be specified in task schemas and will be passed to CrewManager
  • Priority: schemas["model"] > params.works.agents[].llm (CrewAI standard)
  • Impact: Only affects generate functionality introduced in 0.5.0, minimal breaking change
  • Migration: Update any code using llm_model parameter to use model instead

Removed

  • Redundant decorators.py file
  • Removed src/aipartnerupflow/decorators.py as it was no longer used
  • Functionality superseded by src/aipartnerupflow/core/decorators.py
  • No impact on existing code (file was not imported by any other modules)

Release 0.5.0

07 Dec 09:00


Added

  • Extended Executor Framework with Mainstream Execution Methods

  • HTTP/REST API Executor (rest_executor)

  • Support for all HTTP methods (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS)

  • Authentication support (Bearer token, Basic auth, API keys)

  • Configurable timeout and retry mechanisms

  • Request/response headers and body handling

  • JSON and form data support

  • Comprehensive error handling for HTTP status codes

  • Full test coverage with 15+ test cases

  • SSH Remote Executor (ssh_executor)

  • Execute commands on remote servers via SSH

  • Support for password and key-based authentication

  • Key file validation with security checks (permissions, existence)

  • Environment variable injection

  • Custom SSH port configuration

  • Timeout and cancellation support

  • Comprehensive error handling for connection and execution failures

  • Full test coverage with 12+ test cases

  • Docker Container Executor (docker_executor)

  • Execute commands in isolated Docker containers

  • Support for custom Docker images

  • Volume mounts for data persistence

  • Environment variable configuration

  • Resource limits (CPU, memory)

  • Container lifecycle management (create, start, wait, remove)

  • Timeout handling with automatic container cleanup

  • Option to keep containers after execution

  • Comprehensive error handling for image not found, execution failures

  • Full test coverage with 13+ test cases

  • gRPC Executor (grpc_executor)

  • Call gRPC services and microservices

  • Support for dynamic proto file loading

  • Method invocation with parameter serialization

  • Metadata and timeout configuration

  • Error handling for gRPC status codes

  • Support for unary, server streaming, client streaming, and bidirectional streaming

  • Full test coverage with 10+ test cases

  • WebSocket Executor (websocket_executor)

  • Bidirectional WebSocket communication

  • Send and receive messages in real-time

  • Support for JSON and text messages

  • Custom headers for authentication

  • Optional response waiting with timeout

  • Connection error handling (invalid URI, connection closed, timeout)

  • Cancellation support

  • Full test coverage with 13+ test cases

  • aipartnerupflow API Executor (apflow_api_executor)

  • Call other aipartnerupflow API instances for distributed execution

  • Support for all task management methods (tasks.execute, tasks.create, tasks.get, etc.)

  • Authentication via JWT tokens

  • Task completion polling with production-grade retry logic:

  • Exponential backoff on failures (1s → 2s → 4s → 8s → 30s max)

  • Circuit breaker pattern (stops after 10 consecutive failures)

  • Error classification (retryable vs non-retryable)

  • Total failure threshold (20 failures across all polls)

  • Detailed logging for debugging

  • Timeout protection and cancellation support

  • Comprehensive error handling for network, server, and client errors

  • Full test coverage with 12+ test cases
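The retry policy above can be sketched as two small functions: exponential backoff capped at 30s, and a circuit breaker that trips on 10 consecutive or 20 total failures. The function names are illustrative, not the executor's internals.

```python
# Sketch of the polling retry policy described above (names are illustrative).
MAX_CONSECUTIVE = 10   # circuit breaker: consecutive-failure limit
MAX_TOTAL = 20         # total failures across all polls
BACKOFF_CAP = 30.0     # seconds

def backoff_delay(consecutive_failures: int) -> float:
    """Exponential backoff: 1s -> 2s -> 4s -> 8s -> ... capped at 30s."""
    return min(2 ** (consecutive_failures - 1), BACKOFF_CAP)

def should_stop(consecutive: int, total: int) -> bool:
    """Stop polling once either failure threshold is reached."""
    return consecutive >= MAX_CONSECUTIVE or total >= MAX_TOTAL
```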

  • Dependency Management

  • Optional dependencies for new executors:

  • [ssh]: asyncssh for SSH executor

  • [docker]: docker for Docker executor

  • [grpc]: grpcio, grpcio-tools for gRPC executor

  • [all]: Includes all optional dependencies

  • Graceful handling when optional dependencies are not installed

  • Clear error messages with installation instructions
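The graceful-degradation pattern described above might look roughly like this (a sketch; the helper name is hypothetical, not part of the package's API):

```python
import importlib

def require_optional(module_name: str, extra: str):
    """Import an optional dependency, or fail with an install hint.

    Hypothetical helper illustrating the behaviour described above.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"This executor requires the optional dependency '{module_name}'. "
            f"Install it with: pip install 'apflow[{extra}]'"
        ) from exc
```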

  • Documentation

  • Comprehensive usage examples for all new executors in docs/guides/custom-tasks.md

  • Configuration parameters and examples for each executor

  • Best practices and common patterns

  • Error handling guidelines

  • Auto-discovery

  • All new executors automatically registered via extension system

  • Auto-imported in API service startup

  • Available immediately after installation

  • MCP (Model Context Protocol) Executor (mcp_executor)

  • Interact with MCP servers to access external tools and data sources

  • Support for stdio and HTTP transport modes

  • Operations: list_tools, call_tool, list_resources, read_resource

  • JSON-RPC 2.0 protocol compliance

  • Environment variable injection for stdio mode

  • Custom headers support for HTTP mode

  • Timeout and cancellation support

  • Comprehensive error handling for MCP protocol errors

  • Full test coverage with 20+ test cases
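An illustrative inputs payload for an MCP tool call over stdio, combining the transport, operation, and env-injection options above (field names are assumptions, not a documented schema):

```python
# Hypothetical inputs payload for mcp_executor; field names are assumptions
# drawn from the feature list above.
mcp_inputs = {
    "transport": "stdio",                            # or "http"
    "command": ["python", "-m", "my_mcp_server"],    # stdio mode: server process to spawn
    "env": {"API_TOKEN": "example-token"},           # env injection for stdio mode
    "operation": "call_tool",                        # also: list_tools, list_resources, read_resource
    "tool_name": "search",
    "arguments": {"query": "task orchestration"},
    "timeout": 30,
}
```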

  • MCP (Model Context Protocol) Server (api/mcp/)

  • Expose aipartnerupflow task orchestration capabilities as MCP tools and resources

  • Support for stdio and HTTP/SSE transport modes

  • MCP Tools (8 tools):

  • execute_task - Execute tasks or task trees

  • create_task - Create new tasks or task trees

  • get_task - Get task details by ID

  • update_task - Update existing tasks

  • delete_task - Delete tasks (if all pending)

  • list_tasks - List tasks with filtering

  • get_task_status - Get status of running tasks

  • cancel_task - Cancel running tasks

  • MCP Resources:

  • task://{task_id} - Access individual task data

  • tasks:// - Access task list with query parameters

  • JSON-RPC 2.0 protocol compliance

  • Integration with existing TaskRoutes for protocol-agnostic design

  • HTTP mode: FastAPI/Starlette integration with /mcp endpoint

  • stdio mode: Standalone process for local integration

  • Comprehensive error handling with proper HTTP status codes

  • Full test coverage with 45+ test cases across all components

  • Protocol selection via AIPARTNERUPFLOW_API_PROTOCOL=mcp environment variable

  • CLI protocol selection: --protocol parameter for serve and daemon commands

  • Default protocol: a2a

  • Supported protocols: a2a, mcp

  • Usage: apflow serve --protocol mcp or apflow daemon start --protocol mcp
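Assuming the server follows standard MCP JSON-RPC framing (tool invocation via the `tools/call` method), a request to the `/mcp` endpoint invoking the `execute_task` tool might look like this sketch; the argument names are assumptions:

```python
# Hypothetical MCP request to the /mcp endpoint; "tools/call" is the standard
# MCP tool-invocation method, but the argument names are assumptions.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_task",
        "arguments": {"task_id": "task-123"},
    },
}
```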

  • Task Tree Generator Executor (generate_executor)

  • Generate valid task tree JSON arrays from natural language requirements using LLM

  • Automatically collects available executors and their input schemas for LLM context

  • Loads framework documentation (task orchestration, examples, concepts) as LLM context

  • Supports multiple LLM providers (OpenAI, Anthropic) via configurable backend

  • Comprehensive validation ensures generated tasks conform to TaskCreator requirements:

  • Validates task structure (name, id consistency, parent_id, dependencies)

  • Detects circular dependencies

  • Ensures single root task

  • Validates all references exist in the array

  • LLM prompt engineering with framework context, executor information, and examples

  • JSON response parsing with markdown code block support

  • Can be used through both API and CLI as a standard executor

  • API Endpoint: tasks.generate method via JSON-RPC /tasks endpoint

  • Supports all LLM configuration parameters (provider, model, temperature, max_tokens)

  • Optional save parameter to automatically save generated tasks to database

  • Returns generated task tree JSON array with count and status message

  • Full test coverage with 8 API endpoint test cases

  • CLI Command: apflow generate task-tree for direct task tree generation

  • Supports output to file or stdout

  • Optional database persistence with --save flag

  • Comprehensive test command examples in documentation

  • Configuration via environment variables or input parameters:

  • OPENAI_API_KEY or ANTHROPIC_API_KEY for LLM authentication

  • AIPARTNERUPFLOW_LLM_PROVIDER for provider selection (default: openai)

  • AIPARTNERUPFLOW_LLM_MODEL for model selection

  • Full test coverage with 28+ executor test cases and 8 API endpoint test cases

  • Usage examples:

```python
# Python API
task = await task_manager.task_repository.create_task(
    name="generate_executor",
    inputs={"requirement": "Fetch data from API, process it, and save to database"}
)
```

JSON-RPC API:

```json
{
  "jsonrpc": "2.0",
  "method": "tasks.generate",
  "params": {
    "requirement": "Fetch data from API and process it",
    "save": true
  }
}
```

```shell
# CLI
apflow generate task-tree "Fetch data from API and process it" --save
```
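The validation rules listed for generated task trees (single root, all references resolve, no circular dependencies) can be sketched as follows; the field names follow the usage examples above, and the checks are illustrative rather than the framework's actual implementation:

```python
# Sketch of task-tree validation as described above: single root, all
# references resolve within the array, and no dependency cycles.
def validate_task_tree(tasks: list[dict]) -> list[str]:
    errors = []
    ids = {t["id"] for t in tasks}

    # Exactly one root task (no parent_id).
    roots = [t for t in tasks if not t.get("parent_id")]
    if len(roots) != 1:
        errors.append(f"expected exactly one root task, found {len(roots)}")

    # Every parent/dependency reference must exist in the array.
    for t in tasks:
        for ref in [t.get("parent_id"), *t.get("dependencies", [])]:
            if ref and ref not in ids:
                errors.append(f"{t['id']}: unknown reference {ref}")

    # Cycle detection over the dependency graph via depth-first search.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {tid: WHITE for tid in ids}

    def has_cycle(tid: str) -> bool:
        color[tid] = GRAY
        node = next(t for t in tasks if t["id"] == tid)
        for dep in node.get("dependencies", []):
            if dep not in ids:
                continue
            if color[dep] == GRAY or (color[dep] == WHITE and has_cycle(dep)):
                return True
        color[tid] = BLACK
        return False

    for tid in ids:
        if color[tid] == WHITE and has_cycle(tid):
            errors.append("circular dependency detected")
            break
    return errors
```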