
Commit ccd164d

completed docs task, refined pytest config, default setting agent tools
1 parent 26ef0b9 commit ccd164d

12 files changed (+274 / -87 lines)


README.md

Lines changed: 4 additions & 11 deletions
@@ -1,6 +1,6 @@
 # Overview
 
-[![DL](https://img.shields.io/badge/Download-17K+-red)](https://clickpy.clickhouse.com/dashboard/versionhq)
+[![DL](https://img.shields.io/badge/Download-20K+-red)](https://clickpy.clickhouse.com/dashboard/versionhq)
 ![MIT license](https://img.shields.io/badge/License-MIT-green)
 [![Publisher](https://github.com/versionHQ/multi-agent-system/actions/workflows/publish.yml/badge.svg)](https://github.com/versionHQ/multi-agent-system/actions/workflows/publish.yml)
 ![PyPI](https://img.shields.io/badge/PyPI-v1.2.1+-blue)
@@ -12,11 +12,10 @@ Agentic orchestration framework for multi-agent networks and task graphs for com
 
 **Visit:**
 
-- [PyPI](https://pypi.org/project/versionhq/)
+- [Playground](https://versi0n.io/playground)
 - [Docs](https://docs.versi0n.io)
 - [Github Repository](https://github.com/versionHQ/multi-agent-system)
-- [Playground](https://versi0n.io/)
-
+- [PyPI](https://pypi.org/project/versionhq/)
 
 <hr />
 

@@ -513,10 +512,4 @@ Common issues and solutions:
 ## Frequently Asked Questions (FAQ)
 **Q. Where can I see if the agent is working?**
 
-A. Visit [playground](https://versi0n.io).
-
-
-**Q. How do you analyze the customer?**
-
-A. We employ soft clustering for each customer.
-<img width="200" src="https://res.cloudinary.com/dfeirxlea/image/upload/v1732732628/pj_m_agents/ito937s5d5x0so8isvw6.png">
+A. Visit [playground](https://versi0n.io/playground).

docs/core/task-graph.md

Lines changed: 2 additions & 2 deletions
@@ -23,7 +23,7 @@ The following example demonstrates a simple concept of a `supervising` agent net
 You can define nodes and edges mannually by creating nodes from tasks, and defining edges.
 
 
-### Generating TaskGraph
+### Generating
 
 ```python
 import versionhq as vhq
@@ -57,7 +57,7 @@ assert critical_path and duration and paths
 ```
 
 
-### Activating TaskGraph
+### Activating
 
 Calling `.activate()` begins execution of the graph's nodes, respecting dependencies [`dependency-met`] and prioritizing the critical path.
 
docs/core/task.md

Lines changed: 145 additions & 23 deletions
@@ -9,7 +9,7 @@ tags:
 
 A class to store and manage information for individual tasks, including their assignment to agents or agent networks, and dependencies via a node-based system that tracks conditions and status.
 
-Ref. Node / Edge / TaskGraph class
+Ref. Node / Edge / <a href="/core/task-graph">TaskGraph</a> class
 
 <hr />
 
@@ -284,6 +284,7 @@ Context can consist of `Task` objects, `TaskOutput` objects, plain text `strings
 
 In this scenario, `sub_task_2` executes before the main task. Its string output is then incorporated into the main task's context prompt on top of other context before the main task is executed.
 
+<hr>
 
 ## Executing
 
@@ -298,7 +299,6 @@ import versionhq as vhq
 
 task = vhq.Task(
     description="return the output following the given prompt.",
-    response_fields=[vhq.ResponseField(title="test1", data_type=str, required=True)],
     allow_delegation=True
 )
 task.execute()
@@ -308,47 +308,169 @@ assert "vhq-Delegated-Agent" in task.processed_agents # delegated agent
 assert task.delegations ==1
 ```
 
+<hr>
 
-<!--
+**SYNC - ASYNC**
 
-## Callbacks
-callback: Optional[Callable] = Field(default=None, description="callback to be executed after the task is completed.")
-callback_kwargs: Optional[Dict[str, Any]] = Field(default_factory=dict, description="kwargs for the callback when the callback is callable")
+`[var]`<bold>`type: bool = False`</bold>
 
+You can specify whether the task will be executed asynchronously.
 
-### tools
-tools: Optional[List[ToolSet | Tool | Any]] = Field(default_factory=list, description="tools that the agent can use aside from their tools")
-can_use_agent_tools: bool = Field(default=False, description="whether the agent can use their own tools when executing the task")
-tool_res_as_final: bool = Field(default=False, description="when set True, tools res will be stored in the `TaskOutput`")
+```python
+import versionhq as vhq
 
+task = vhq.Task(
+    description="Return a word: 'test'",
+    type=vhq.TaskExecutionType.ASYNC # default: vhq.TaskExecutionType.SYNC
+)
 
+from unittest.mock import patch
+with patch.object(vhq.Agent, "execute_task", return_value="test") as execute:
+    res = task.execute()
+    assert res.raw == "test"
+    execute.assert_called_once_with(task=task, context=None, task_tools=list())
+```
 
+<hr>
 
-## Executing tasks
-EXECUTION type
+**Using tools**
 
-### Sync
+`[var]`<bold>`tools: Optional[List[ToolSet | Tool | Any]] = None`</bold>
 
-<hr />
+`[var]`<bold>`tool_res_as_final: bool = False`</bold>
 
-### Async
 
-<hr />
+Tasks can directly store tools explicitly called by the agent.
 
-### Assigning agents
+If the results from the tool should be the final results, set `tool_res_as_final` True.
 
-<hr />
+This will allow the agent to store the tool results in the `tool_output` field of `TaskOutput` object.
 
-### Context
 
+```python
+import versionhq as vhq
+from typing import Callable
+
+def random_func(message: str) -> str:
+    return message + "_demo"
+
+tool = vhq.Tool(name="tool", func=random_func)
+tool_set = vhq.ToolSet(tool=tool, kwargs=dict(message="empty func"))
+task = vhq.Task(
+    description="execute the given tools",
+    tools=[tool_set,], # stores tools
+    tool_res_as_final=True, # stores tool results in TaskOutput object
+)
+
+res = task.execute()
+assert res.tool_output == "empty func_demo"
+```
+
+Ref. <a href="/core/tool">Tool</a> class / <a href="/core/task/task-output">TaskOutput</a> class
+
+<hr>
+
+**Using agents' tools**
+
+`[var]`<bold>`can_use_agent_tools: bool = True`</bold>
+
+Tasks can explicitly stop/start using agent tools on top of the tools stored in the task object.
+
+```python
+import versionhq as vhq
+
+simple_tool = vhq.Tool(name="simple tool", func=lambda x: "simple func")
+agent = vhq.Agent(role="demo", goal="execute tools", tools=[simple_tool,])
+task = vhq.Task(
+    description="execute tools",
+    can_use_agent_tools=True, # Flagged
+    tool_res_as_final=True
+)
+res = task.execute(agent=agent)
+assert res.tool_output == "simple func"
+```
 
+<hr>
+
+## Callback
+
+`[var]`<bold>`callback: Optional[Callable] = None`</bold>
+
+`[var]`<bold>`callback_kwargs: Optional[Dict[str, Any]] = dict()`</bold>
+
+After executing the task, you can run a `callback` function with `callback_kwargs` and task output as parameters.
+
+Callback results will be stored in `callback_output` filed of the `TaskOutput` object.
+
+```python
+import versionhq as vhq
+
+def callback_func(condition: str, test1: str):
+    return f"Result: {test1}, condition added: {condition}"
+
+task = vhq.Task(
+    description="return the output following the given prompt.",
+    callback=callback_func,
+    callback_kwargs=dict(condition="demo for pytest")
+)
+res = task.execute()
+
+assert res and isinstance(res, vhq.TaskOutput)
+assert res.task_id is task.id
+assert "demo for pytest" in res.callback_output
+```
+
+<hr>
 
 ## Evaluating
 
-should_evaluate: bool = Field(default=False, description="True to run the evaluation flow")
-eval_criteria: Optional[List[str]] = Field(default_factory=list, description="criteria to evaluate the outcome. i.e., fit to the brand tone")
+`[var]`<bold>`should_evaluate: bool = False`</bold>
+
+`[var]`<bold>`eval_criteria: Optional[List[str]] = list()`</bold>
+
+You can turn on customized evaluations using the given criteria.
+
+Refer <a href="/core/task/task-output">TaskOutput</a> class for details.
+
+<hr>
+
+
+## Ref
+
+### Variables
+
+| <div style="width:160px">**Variable**</div> | **Data Type** | **Default** | **Nullable** | **Description** |
+| :--- | :--- | :--- | :--- | :--- |
+| **`id`** | UUID | uuid.uuid4() | False | Stores task `id` as an identifier. |
+| **`name`** | Optional[str] | None | True | Stores a task name (Inherited as `node` identifier if the task is dependent) |
+| **`description`** | str | None | False | Required field to store a concise task description |
+| **`pydantic_output`** | Optional[Type[BaseModel]] | None | True | Stores pydantic custom output class for structured response |
+| **`response_fields`** | Optional[List[ResponseField]] | list() | True | Stores JSON formats for stuructured response |
+| **`tools`** | Optional[List[ToolSet | Tool | Any]] | None | True | Stores tools to be called when the agent executes the task. |
+| **`can_use_agent_tools`** | bool | True | - | Whether to use the agent tools |
+| **`tool_res_as_final`** | bool | False | - | Whether to make the tool response a final response from the agent |
+| **`execution_type`** | TaskExecutionType | TaskExecutionType.SYNC | - | Sync or async execution |
+| **`allow_delegation`** | bool | False | - | Whether to allow the agent to delegate the task to another agent |
+| **`callback`** | Optional[Callable] | None | True | Callback function to be executed after LLM calling |
+| **`callback_kwargs`** | Optional[Dict[str, Any]] | dict() | True | Args for the callback function (if any)|
+| **`should_evaluate`** | bool | False | - | Whether to evaluate the task output using eval criteria |
+| **`eval_criteria`** | Optional[List[str]] | list() | True | Evaluation criteria given by the human client |
+| **`processed_agents`** | Set[str] | set() | True | [Ops] Stores roles of the agents executed the task |
+| **`tool_errors`** | int | 0 | True | [Ops] Stores number of tool errors |
+| **`delegation`** | int | 0 | True | [Ops] Stores number of agent delegations |
+| **`output`** | Optional[TaskOutput] | None | True | [Ops] Stores `TaskOutput` object after the execution |
+
+
+### Class Methods
+
+| <div style="width:120px">**Method**</div> | <div style="width:300px">**Params**</div> | **Returns** | **Description** |
+| :--- | :--- | :--- | :--- |
+| **`execute`** | <p>type: TaskExecutionType = None<br>agent: Optional["vhq.Agent"] = None<br>context: Optional[Any] = None</p> | InstanceOf[`TaskOutput`] or None (error) | A main method to handle task execution. Auto-build an agent when the agent is not given. |
 
 
-## Recording
+### Properties
 
-output: Optional[TaskOutput] = Field(default=None, description="store the final task output in TaskOutput class") -->
+| <div style="width:120px">**Property**</div> | **Returns** | **Description** |
+| :--- | :--- | :--- |
+| **`key`** | str | Returns task key based on its description and output format. |
+| **`summary`** | str | Returns a summary of the task based on its id, description and tools. |
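
Note: the Evaluating section added above documents `should_evaluate` and `eval_criteria` without a snippet in this hunk. A minimal sketch of that flow, assuming only the field names listed in the variable table (the stored evaluation itself lives on `TaskOutput` and is not shown here), might look like:

```python
import versionhq as vhq

# Sketch of the evaluation flow described above; `should_evaluate` and
# `eval_criteria` are taken from the variable table in this commit.
task = vhq.Task(
    description="Draft a short product announcement.",
    should_evaluate=True,                      # turn on the evaluation flow
    eval_criteria=["fit to the brand tone"],   # criterion wording borrowed from the docs
)

res = task.execute()
assert res and isinstance(res, vhq.TaskOutput)  # evaluation details are stored on the TaskOutput object
```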

docs/core/task/response-field.md

Lines changed: 5 additions & 5 deletions
@@ -7,7 +7,7 @@ tags:
 
 <class>`class` versionhq.task.model.<bold>ResponseField<bold></class>
 
-A Pydantic class to store response formats to create JSON response schema.
+A Pydantic class to store response formats to generate a structured response in JSON.
 
 <hr/>
 
@@ -105,7 +105,7 @@ Agents can handle **one layer** of nested items usign `properties` and `items` f
 
 We highly recommend to use `gemini-x` or `gpt-x` to get stable results.
 
-### List with Object
+### Object in List
 
 ```python
 import versionhq as vhq
@@ -129,7 +129,7 @@ list_with_objects = vhq.ResponseField(
 
 <hr />
 
-### List with List
+### List in List
 
 ```python
 import versionhq as vhq
@@ -150,7 +150,7 @@ list_with_list = vhq.ResponseField(
 
 <hr />
 
-### Object with List
+### List in Object
 
 ```python
 import versionhq as vhq
@@ -173,7 +173,7 @@ dict_with_list = vhq.ResponseField(
 
 <hr />
 
-### Object with Object
+### Object in Object
 
 ```python
 import versionhq as vhq
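
Note: the renamed headings above (`Object in List`, `List in List`, `List in Object`, `Object in Object`) cover one layer of nesting via the `properties` and `items` fields, but the snippets are truncated in this view. A rough sketch of the `Object in List` case, with the exact `items`/`properties` usage treated as an assumption, could look like:

```python
import versionhq as vhq

# Assumed usage of `items` / `properties` for a list whose elements are objects.
list_with_objects = vhq.ResponseField(
    title="research_items",
    data_type=list,    # outer container
    items=dict,        # each element is an object (assumption)
    properties=[       # one layer of nested fields, per the note above
        vhq.ResponseField(title="title", data_type=str, required=True),
        vhq.ResponseField(title="relevance_score", data_type=int),
    ],
)

task = vhq.Task(
    description="Return three research items, each with a title and a relevance score.",
    response_fields=[list_with_objects],
)
res = task.execute()
```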

docs/index.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 # Overview
 
-[![DL](https://img.shields.io/badge/Download-17K+-red)](https://clickpy.clickhouse.com/dashboard/versionhq)
+[![DL](https://img.shields.io/badge/Download-20K+-red)](https://clickpy.clickhouse.com/dashboard/versionhq)
 ![MIT license](https://img.shields.io/badge/License-MIT-green)
 [![Publisher](https://github.com/versionHQ/multi-agent-system/actions/workflows/publish.yml/badge.svg)](https://github.com/versionHQ/multi-agent-system/actions/workflows/publish.yml)
 ![PyPI](https://img.shields.io/badge/PyPI-v1.2.1+-blue)
@@ -11,9 +11,9 @@ A Python framework for agentic orchestration that handles complex task automatio
 
 **Visit:**
 
+- [Playground](https://versi0n.io/playground)
 - [PyPI](https://pypi.org/project/versionhq/)
 - [Docs](https://docs.versi0n.io)
-- [Playground](https://versi0n.io/)
 
 **Contribute:**
 
@@ -259,7 +259,7 @@ Common issues and solutions:
 ## FAQ
 **Q. Where can I see if the agent is working?**
 
-A. Visit [playground](https://versi0n.io).
+A. Visit [playground](https://versi0n.io/playground).
 
 <hr />
 
mkdocs.yml

Lines changed: 16 additions & 3 deletions
@@ -83,7 +83,7 @@ theme:
         icon: material/brightness-7
         name: Switch to light mode
   features:
-    - announce.dismiss
+    # - announce.dismiss
     - content.action.edit
     - content.action.view
     - content.code.annotate
@@ -92,6 +92,7 @@ theme:
     - content.tabs.link
     - content.tooltips
     - header.autohide
+    - navigation.tabs
     - navigation.path
     - navigation.top
     - navigation.footer
@@ -117,10 +118,9 @@ nav:
 - TaskOutput: 'core/task/task-output.md'
 - Evaluation: 'core/task/evaluation.md'
 - Tool: 'core/tool.md'
-# - Compoio Tools: 'core/composio-tool.md'
 - Tags: 'tags.md'
 - Examples:
-  - Playground: https://versi0n.io
+  - Playground: https://versi0n.io/playground
   - Experiment - Agent Performance: https://github.com/versionHQ/exp-agent-performance
   - Change Log: https://github.com/versionHQ/multi-agent-system/releases
 

@@ -137,6 +137,19 @@ extra:
   analytics:
     provider: google
     property: G-E19K228ENL
+    feedback:
+      title: Was this page helpful?
+      ratings:
+        - icon: material/emoticon-happy-outline
+          name: This page was helpful
+          data: 1
+          note: >-
+            Thanks for your feedback!
+        - icon: material/emoticon-sad-outline
+          name: This page could be improved
+          data: 0
+          note: >-
+            Thanks for your feedback! Help us improve this page by using our <a href="https://github.com/versionhq/multi-agent-system/issues/new/?title=[Feedback]+{title}+-+{url}" target="_blank" rel="noopener">feedback form</a>.
   social:
     - icon: fontawesome/brands/github
       link: https://github.com/versionHQ/multi-agent-system

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ exclude = ["test*", "__pycache__", "*.egg-info"]
 
 [project]
 name = "versionhq"
-version = "1.2.1.7"
+version = "1.2.1.8"
 authors = [{ name = "Kuriko Iwai", email = "kuriko@versi0n.io" }]
 description = "An agentic orchestration framework for building agent networks that handle task automation."
 readme = "README.md"
