Commit 8ad40ce

refined conditional task handling using graph theory and edge conditions, deleted conditional task class, refined structuring response schema, added task docs, refined callback funcs
1 parent c197821 commit 8ad40ce

30 files changed (+1163 −656 lines)

README.md

Lines changed: 5 additions & 3 deletions
````diff
@@ -3,7 +3,7 @@
 [![DL](https://img.shields.io/badge/Download-15K+-red)](https://clickpy.clickhouse.com/dashboard/versionhq)
 ![MIT license](https://img.shields.io/badge/License-MIT-green)
 [![Publisher](https://github.com/versionHQ/multi-agent-system/actions/workflows/publish.yml/badge.svg)](https://github.com/versionHQ/multi-agent-system/actions/workflows/publish.yml)
-![PyPI](https://img.shields.io/badge/PyPI-v1.1.12+-blue)
+![PyPI](https://img.shields.io/badge/PyPI-v1.2.0+-blue)
 ![python ver](https://img.shields.io/badge/Python-3.11+-purple)
 ![pyenv ver](https://img.shields.io/badge/pyenv-2.5.0-orange)
 
@@ -185,15 +185,14 @@ By default, agents prioritize JSON over plane text outputs.
 
 agent = vhq.Agent(role="demo", goal="amazing project goal")
-
 task = vhq.Task(
     description="Amazing task",
     pydantic_output=CustomOutput,
     callback=dummy_func,
     callback_kwargs=dict(message="Hi! Here is the result: ")
 )
 
-res = task.execute_sync(agent=agent, context="amazing context to consider.")
+res = task.execute(agent=agent, context="amazing context to consider.")
 print(res)
 ```
@@ -294,6 +293,7 @@ Tasks can be delegated to a team manager, peers in the team, or completely new a
 └── workflows/ # Github actions
 
 docs/ # Documentation built by MkDocs
+mkdocs.yml # MkDocs config
 
 src/
 └── versionhq/ # Orchestration framework package
@@ -309,6 +309,8 @@ src/
 │ └── ...
 
 └── uploads/ [.gitignore] # Local directory to store uploaded files such as graphviz diagrams generatd by `Network` class
+|
+pyproject.toml # Project config
 
 ```
````
docs/core/Agent.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -367,7 +367,7 @@ task = Task(
     description="Answer the following question: What is Kuriko's favorite color?"
 )
 
-res = task.execute_sync(agent=agent)
+res = task.execute(agent=agent)
 assert "gold" in res.raw == True
 ```
````
docs/core/task.md

Lines changed: 282 additions & 0 deletions
---
tags:
  - HTML5
  - JavaScript
  - CSS
---

# Task

<class>`class` versionhq.task.model.<bold>Task</bold></class>

A class to store and manage information for individual tasks, including their assignment to agents or teams, and dependencies via a node-based system that tracks conditions and status.

Ref. Node / Edge / TaskGraph class

<hr />

## Core usage

Create a task by defining its description in one simple sentence. The `description` will be used for task prompting later.

Each task is assigned a unique ID as an identifier.

```python
import versionhq as vhq

task = vhq.Task(description="MY AMAZING TASK")

import uuid
assert uuid.UUID(str(task.id), version=4)
```

And you can execute the task simply by calling the `.execute()` method.

```python
import versionhq as vhq

task = vhq.Task(description="MY AMAZING TASK")
res = task.execute()

assert isinstance(res, vhq.TaskOutput)   # Generates a TaskOutput object.
assert res.raw and res.json              # By default, TaskOutput stores output in plain-text and JSON formats.
assert task.processed_agents is not None # Agents are automatically assigned to the given task.
```

<hr />
## Customizing tasks

### Structured outputs

By default, agents generate plain-text and JSON outputs and store them in the `TaskOutput` object.

* Ref. <a href="/core/task/task-output">`TaskOutput`</a> class

But you can also choose to generate a Pydantic class or a specific JSON object as the response.

<hr />

**1. Pydantic**

`[var]`<bold>`pydantic_output: Optional[Type[BaseModel]] = None`</bold>

Create a custom Pydantic class and set it on the `pydantic_output` field as the structured response format.

The custom class can accept **one layer of a nested child**, as the following code snippet shows:

```python
import versionhq as vhq
from pydantic import BaseModel
from typing import Any


# 1. Define a Pydantic class using a description (optional), annotations, and field names.
class Demo(BaseModel):
    """
    A demo pydantic class to validate the outcome with various nested data types.
    """
    demo_1: int
    demo_2: float
    demo_3: str
    demo_4: bool
    demo_5: list[str]
    demo_6: dict[str, Any]
    demo_nest_1: list[dict[str, Any]]  # One layer of nested child is OK.
    demo_nest_2: list[list[str]]
    demo_nest_3: dict[str, list[str]]
    demo_nest_4: dict[str, dict[str, Any]]
    # error_1: list[list[dict[str, list[str]]]]        # <- Triggers a 400 error due to 2+ layers of nested children.
    # error_2: InstanceOf[AnotherPydanticClass]        # <- Triggers a 400 error due to a non-typing annotation.
    # error_3: list[InstanceOf[AnotherPydanticClass]]  # <- Same as above.

# 2. Define a task
task = vhq.Task(
    description="generate random output that strictly follows the given format",
    pydantic_output=Demo,
)

# 3. Execute
res = task.execute()

assert isinstance(res, vhq.TaskOutput)
assert res.raw and res.json
assert isinstance(res.raw, str) and isinstance(res.json_dict, dict)
assert [
    getattr(res.pydantic, k) and v.annotation == Demo.model_fields[k].annotation
    for k, v in res.pydantic.model_fields.items()
]
```
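The one-layer rule above can be checked mechanically before sending a schema off to the provider. The helper below is hypothetical (not part of versionhq): it measures how deeply container types are nested in an annotation, so that fields like `error_1` can be rejected locally instead of triggering a 400 error.

```python
from typing import Any, get_args, get_origin

def container_depth(annotation) -> int:
    """Count nested container (list/dict) layers in a typing annotation."""
    origin = get_origin(annotation)
    if origin not in (list, dict):
        return 0  # scalars like int/str/bool/Any contribute no depth
    return 1 + max((container_depth(a) for a in get_args(annotation)), default=0)

# Accepted by the structured-output API: a container of containers (depth <= 2).
assert container_depth(list[str]) == 1
assert container_depth(list[dict[str, Any]]) == 2
assert container_depth(dict[str, list[str]]) == 2

# Rejected per the notes above: 2+ layers of nested children (depth >= 3).
assert container_depth(list[list[dict[str, list[str]]]]) == 4
```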
**2. JSON**

`[var]`<bold>`response_fields: List[InstanceOf[ResponseField]] = None`</bold>

Similar to Pydantic, a JSON output structure can be defined using a list of `ResponseField` objects.

The following code snippet demonstrates how to use `ResponseField` to generate output with a maximum of one level of nesting.

Custom JSON outputs can accept **one layer of a nested child**.

**[NOTES]**

- `demo_response_fields` in the following case is identical to the previous `Demo` class, except that titles are specified for nested fields.

- Agents generate JSON output by default, whether or not `response_fields` are used.

- However, `response_fields` are REQUIRED to specify JSON key titles and data types.

```python
import versionhq as vhq

# 1. Define a list of ResponseField objects.
demo_response_fields = [
    vhq.ResponseField(title="demo_1", data_type=int),
    vhq.ResponseField(title="demo_2", data_type=float),
    vhq.ResponseField(title="demo_3", data_type=str),
    vhq.ResponseField(title="demo_4", data_type=bool),
    vhq.ResponseField(title="demo_5", data_type=list, items=str),
    vhq.ResponseField(title="demo_6", data_type=dict, properties=[vhq.ResponseField(title="demo-item", data_type=str)]),
    vhq.ResponseField(title="demo_nest_1", data_type=list, items=str, properties=([
        vhq.ResponseField(title="nest1", data_type=dict, properties=[vhq.ResponseField(title="nest11", data_type=str)])
    ])),  # you can specify the field titles of nested items
    vhq.ResponseField(title="demo_nest_2", data_type=list, items=list),
    vhq.ResponseField(title="demo_nest_3", data_type=dict, properties=[
        vhq.ResponseField(title="nest1", data_type=list, items=str)
    ]),
    vhq.ResponseField(title="demo_nest_4", data_type=dict, properties=[
        vhq.ResponseField(title="nest1", data_type=dict, properties=[vhq.ResponseField(title="nest12", data_type=str)])
    ])
]


# 2. Define a task
task = vhq.Task(
    description="Output random values strictly following the data types defined in the given response format.",
    response_fields=demo_response_fields
)


# 3. Execute
res = task.execute()

assert isinstance(res, vhq.TaskOutput) and res.task_id is task.id
assert res.raw and res.json and res.pydantic is None
assert [v and type(v) == task.response_fields[i].data_type for i, (k, v) in enumerate(res.json_dict.items())]
```
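To make the `ResponseField` mechanics concrete, here is a hypothetical sketch (not versionhq's actual implementation) of how a flat list of title/data-type specs could be rendered into the kind of JSON-schema `properties` mapping that structured-output APIs consume. All names here (`fields_to_schema`, `PY_TO_JSON`) are illustrative assumptions.

```python
# Map Python types to JSON-schema type names.
PY_TO_JSON = {int: "integer", float: "number", str: "string",
              bool: "boolean", list: "array", dict: "object"}

def fields_to_schema(fields: list[dict]) -> dict:
    """Render (title, data_type[, items]) specs into a JSON-schema object."""
    props = {}
    for f in fields:
        spec = {"type": PY_TO_JSON[f["data_type"]]}
        if f["data_type"] is list:
            # One layer of nesting: array items get their own scalar type.
            spec["items"] = {"type": PY_TO_JSON[f.get("items", str)]}
        props[f["title"]] = spec
    return {"type": "object", "properties": props, "required": list(props)}

schema = fields_to_schema([
    {"title": "demo_1", "data_type": int},
    {"title": "demo_5", "data_type": list, "items": str},
])
assert schema["properties"]["demo_1"] == {"type": "integer"}
assert schema["properties"]["demo_5"]["items"] == {"type": "string"}
```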
* Ref. <a href="/core/task/response-field">`ResponseField`</a> class


**Structuring the response format**

- We highly recommend assigning agents optimized for `gemini-x` or `gpt-x` to produce structured outputs with nested items.

- To generate a response with more than two layers of nested items, separate them into multiple tasks or utilize nodes.

The following case demonstrates returning a `Main` class that contains a nested `Sub` class.

**[NOTES]**

- A `callback` function is used to format the final response. (You can try other functions suitable for your use case.)

- The parameter `sub` is passed to the callback function via the `callback_kwargs` variable.

- By default, the outputs of `main_task` are automatically passed to the callback function; you do NOT need to explicitly define them.

- Callback results are stored in the `callback_output` field of the `TaskOutput` class.


```python
import versionhq as vhq
from pydantic import BaseModel, InstanceOf
from typing import Any

# 1. Define and execute a sub task with Pydantic output.
class Sub(BaseModel):
    sub1: list[dict[str, Any]]
    sub2: dict[str, Any]

sub_task = vhq.Task(
    description="generate random values that strictly follow the given format.",
    pydantic_output=Sub
)
sub_res = sub_task.execute()

# 2. Define a main task and a callback function to format the final response.
class Main(BaseModel):
    main1: list[Any]  # <= assume we expect to store a Sub object in this field.
    # error_main1: list[InstanceOf[Sub]]  # <- this would trigger a 400 error!
    main2: dict[str, Any]

def format_response(sub: InstanceOf[Sub], main1: list[Any], main2: dict[str, Any]) -> Main:
    main1.append(sub)
    main = Main(main1=main1, main2=main2)
    return main

# 3. Execute
main_task = vhq.Task(
    description="generate random values that strictly follow the given format.",
    pydantic_output=Main,
    callback=format_response,
    callback_kwargs=dict(sub=Sub(sub1=sub_res.pydantic.sub1, sub2=sub_res.pydantic.sub2)),
)
res = main_task.execute(context=sub_res.raw)  # [Optional] Add the sub task's response as context.

assert [item for item in res.callback_output.main1 if isinstance(item, Sub)]
```
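The wiring behind the notes above (user-supplied `callback_kwargs` merged with the task's auto-passed output fields) can be sketched in plain Python. This is an assumption about the mechanism, not versionhq internals, and uses plain dicts in place of Pydantic models for brevity.

```python
def run_callback(callback, callback_kwargs: dict, output_fields: dict):
    """Invoke a callback with user kwargs plus the task's generated output fields."""
    return callback(**callback_kwargs, **output_fields)

def format_response(sub: dict, main1: list, main2: dict) -> dict:
    main1.append(sub)  # fold the sub-task result into the final shape
    return {"main1": main1, "main2": main2}

result = run_callback(
    format_response,
    callback_kwargs={"sub": {"sub1": [], "sub2": {}}},  # user-passed parameter
    output_fields={"main1": [1], "main2": {"k": "v"}},  # auto-passed task output
)
assert result["main1"] == [1, {"sub1": [], "sub2": {}}]
assert result["main2"] == {"k": "v"}
```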
To skip these manual setups, refer to the Node / Graph pages.


<!-- ### Context
# task setup
context: Optional[List["Task"]] = Field(default=None, description="other tasks whose outputs should be used as context")
prompt_context: Optional[str] = Field(default=None)


### Execution rules
EXECUTION type
allow_delegation: bool = Field(default=False, description="ask other agents for help and run the task instead")

callback: Optional[Callable] = Field(default=None, description="callback to be executed after the task is completed.")
callback_kwargs: Optional[Dict[str, Any]] = Field(default_factory=dict, description="kwargs for the callback when the callback is callable")


### tools
tools: Optional[List[ToolSet | Tool | Any]] = Field(default_factory=list, description="tools that the agent can use aside from their tools")
can_use_agent_tools: bool = Field(default=False, description="whether the agent can use their own tools when executing the task")
tool_res_as_final: bool = Field(default=False, description="when set True, tools res will be stored in the `TaskOutput`") -->


<hr />

## Executing tasks

### Sync

<hr />

### Async

<hr />

### Assigning agents

<hr />

### Context


## Evaluating task outputs
<!--
# evaluation
should_evaluate: bool = Field(default=False, description="True to run the evaluation flow")
eval_criteria: Optional[List[str]] = Field(default_factory=list, description="criteria to evaluate the outcome. i.e., fit to the brand tone") -->


## Recording

<!-- output: Optional[TaskOutput] = Field(default=None, description="store the final task output in TaskOutput class") -->

docs/core/task/response-field.md

Lines changed: 14 additions & 0 deletions
---
tags:
  - HTML5
  - JavaScript
  - CSS
---

# ResponseField

<class>`class` versionhq.task.model.<bold>ResponseField</bold></class>

A class to store response formats used to create a JSON response schema.

<hr/>
docs/core/task/task-output.md

Lines changed: 14 additions & 0 deletions
---
tags:
  - HTML5
  - JavaScript
  - CSS
---

# TaskOutput

<class>`class` versionhq.task.model.<bold>TaskOutput</bold></class>

A class to store and manage output from the `Task` object.

<hr />
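The fields that the Task docs exercise on `TaskOutput` (`task_id`, `raw`, `json_dict`, `pydantic`, `callback_output`) can be sketched as a plain dataclass. This is a hypothetical shape inferred from those examples, not the real class definition.

```python
from dataclasses import dataclass, field
from typing import Any, Optional
import uuid

@dataclass
class TaskOutputSketch:
    """Hypothetical sketch of the fields TaskOutput exposes, per the Task docs."""
    task_id: uuid.UUID                                   # ID of the task that produced this output
    raw: str = ""                                        # plain-text output (default format)
    json_dict: dict[str, Any] = field(default_factory=dict)  # JSON output (default format)
    pydantic: Optional[Any] = None                       # populated when pydantic_output is set
    callback_output: Optional[Any] = None                # populated when a callback is set

out = TaskOutputSketch(task_id=uuid.uuid4(), raw='{"demo": 1}', json_dict={"demo": 1})
assert out.raw and out.json_dict["demo"] == 1
assert out.pydantic is None and out.callback_output is None
```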
