As multi-agent frameworks evolve, tool execution is becoming a critical attack surface.
Problem:
Many agents currently execute code or tools with broad permissions. Sandboxing backends exist (Docker, etc.), but there is no standardized, framework-agnostic way to declare safety constraints per tool or per agent.
Proposal:
Introduce a ToolSafetyPolicy interface that can be implemented by different executors (Docker, Firecracker, WASM).
- Allow defining allow_network, allow_filesystem, max_memory, etc. at the tool definition level.
- Ensure these policies are enforced by the executor before the tool runs (see the sketch below).
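To make this concrete, here is a minimal Python sketch. All names (ToolSafetyPolicy, ToolExecutor, docker_run_kwargs) are hypothetical and not existing AutoGen APIs; the Docker mapping uses real docker-py `containers.run` options (`network_mode`, `read_only`, `mem_limit`) to illustrate how a backend could enforce the policy rather than trusting the tool:

```python
from dataclasses import dataclass
from typing import Any, Protocol


@dataclass(frozen=True)
class ToolSafetyPolicy:
    """Declarative safety constraints attached to a tool definition."""
    allow_network: bool = False
    allow_filesystem: bool = False
    max_memory_mb: int = 256
    timeout_seconds: float = 30.0


class ToolExecutor(Protocol):
    """Contract each backend (Docker, Firecracker, WASM) would implement."""
    def execute(self, tool_name: str, args: dict[str, Any],
                policy: ToolSafetyPolicy) -> Any:
        ...


def docker_run_kwargs(policy: ToolSafetyPolicy) -> dict[str, Any]:
    """Translate a policy into docker-py ``containers.run`` keyword
    arguments, so limits are enforced by the sandbox, not the tool."""
    return {
        # network_mode="none" removes the container's network stack.
        "network_mode": "bridge" if policy.allow_network else "none",
        # A read-only root filesystem blocks writes unless allowed.
        "read_only": not policy.allow_filesystem,
        "mem_limit": f"{policy.max_memory_mb}m",
    }


# Example: a locked-down default policy for an untrusted code tool.
policy = ToolSafetyPolicy(allow_network=False, max_memory_mb=128)
print(docker_run_kwargs(policy))
# {'network_mode': 'none', 'read_only': True, 'mem_limit': '128m'}
```

Because the policy is plain data, the same tool definition could run under Docker today and Firecracker or WASM later without changing the agent code.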
This would align AutoGen with emerging AI safety standards.