
[pull] main from ag2ai:main #61

Merged

3 commits merged into Stars1233:main from ag2ai:main on Feb 25, 2025

Conversation

pull[bot]

@pull pull bot commented Feb 25, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

Summary by Sourcery

This pull request enhances the ReasoningAgent with code execution capabilities and the ability to use a separate model for grading. It also includes updates to the Anthropic client for better tool handling and a new test case for code execution.

New Features:

  • Adds code execution capability to the ReasoningAgent, allowing it to execute code blocks during the reasoning process using a child UserProxyAgent.
  • Introduces the ability to use a different model for grading reasoning paths by passing a grader_llm_config argument when initializing the ReasoningAgent.
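For orientation, here is a minimal sketch of how these two options might be wired together at construction time. The config-dict shapes follow common AG2 conventions, but the exact keys `ReasoningAgent` accepts are not shown in this PR, so the model names and keys below are illustrative assumptions:

```python
# Illustrative configuration only -- the dict shapes follow common AG2
# conventions, but the exact keys ReasoningAgent accepts may differ.

# Main reasoning model.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}]}

# Cheaper model used only to grade reasoning paths (new in this PR).
grader_llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "sk-..."}]}

# Enables the child UserProxyAgent that runs fenced code blocks.
code_execution_config = {"use_docker": False, "work_dir": "coding"}

# Hypothetical construction call; the parameter names are taken from the
# PR summary, everything else is a placeholder.
# agent = ReasoningAgent(
#     name="reasoner",
#     llm_config=llm_config,
#     grader_llm_config=grader_llm_config,
#     code_execution_config=code_execution_config,
# )
```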

Enhancements:

  • Updates the Tree of Thoughts message to omit the Python-execution instructions when code_execution_config is not provided.
  • Improves the Anthropic client to convert tool definitions into Anthropic-compatible functions, updating nested $ref paths in property schemas.
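The $ref rewrite can be illustrated with a small recursive walk over a JSON schema. The concrete prefixes used by autogen/oai/anthropic.py are not shown here, so the "#/definitions/" to "#/$defs/" mapping below is an assumption; the recursive-traversal pattern is the point:

```python
from typing import Any


def update_refs(schema: Any, old_prefix: str = "#/definitions/",
                new_prefix: str = "#/$defs/") -> Any:
    """Recursively rewrite $ref values in a nested JSON schema.

    The actual prefixes used by autogen/oai/anthropic.py may differ;
    this only sketches the recursive-walk pattern the PR describes.
    """
    if isinstance(schema, dict):
        return {
            k: (new_prefix + v[len(old_prefix):]
                if k == "$ref" and isinstance(v, str) and v.startswith(old_prefix)
                else update_refs(v, old_prefix, new_prefix))
            for k, v in schema.items()
        }
    if isinstance(schema, list):
        return [update_refs(item, old_prefix, new_prefix) for item in schema]
    return schema
```

Because the walk descends into every dict and list, $refs nested arbitrarily deep in property schemas (e.g. inside `items` or `anyOf`) are rewritten too.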

Tests:

  • Adds a test case to verify that the ReasoningAgent properly executes code in responses when code execution is enabled.

marufaytekin and others added 3 commits February 25, 2025 15:30
* [Bug Fix] Tool signature error for Anthropic #1091

* tests added

* updated tests to reflect test coverage

* pre-commit checks

---------

Co-authored-by: Davor Runje <[email protected]>
* Enable code execution in reasoning agent

1. enable code execution
2. remove the "verbose" parameter, and replace with AG2's default "silent" parameter.

* init _user_proxy

* notebook update

* notebook update

* add test case for running code in reasoning agent

* use mock credential for reasoning test

* reasoning prompt update

* mock credentials for more tests

* Update .secrets.baseline

* clear notebook outputs

* remove variable F

gitnotebooks bot commented Feb 25, 2025

Found 1 changed notebook. Review the changes at https://app.gitnotebooks.com/Stars1233/ag2/pull/61

@pull pull bot added the ⤵️ pull label Feb 25, 2025

gitstream-cm bot commented Feb 25, 2025

🚨 gitStream Monthly Automation Limit Reached 🚨

Your organization has exceeded the number of pull requests allowed for automation with gitStream.
Monthly PRs automated: 2273/250

To continue automating your PR workflows and unlock additional features, please contact LinearB.


sourcery-ai bot commented Feb 25, 2025

Reviewer's Guide by Sourcery

This pull request introduces several enhancements to the ReasoningAgent, including code execution during reasoning, deprecation of the verbose parameter, and new examples in the notebook. It also updates the Anthropic client to correctly convert tool definitions.

Sequence diagram for code execution during reasoning

sequenceDiagram
    participant ReasoningAgent
    participant UserProxyAgent
    participant ThinkerAgent

    ReasoningAgent->>ThinkerAgent: _expand(node)
    ThinkerAgent-->>ReasoningAgent: Returns options with code
    alt code execution enabled
        ReasoningAgent->>UserProxyAgent: send(code)
        UserProxyAgent->>UserProxyAgent: Execute code
        UserProxyAgent-->>ReasoningAgent: Returns code execution result
        ReasoningAgent->>ThinkerAgent: node.content += result
    end
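Outside mermaid, the alt-branch above can be sketched in plain Python. The run_code_blocks helper below is hypothetical — the real ReasoningAgent delegates execution to its child UserProxyAgent — but it shows the extract-execute-append flow:

```python
import contextlib
import io
import re

# Build the fence marker programmatically so this sketch stays readable.
FENCE = "`" * 3
CODE_BLOCK = re.compile(FENCE + r"python\n(.*?)" + FENCE, re.DOTALL)


def run_code_blocks(option: str) -> str:
    """Stand-in for the child UserProxyAgent: execute fenced Python
    blocks found in a thinker option and return their combined stdout.
    The real agent sandboxes execution via its configured executor."""
    out = io.StringIO()
    for block in CODE_BLOCK.findall(option):
        with contextlib.redirect_stdout(out):
            exec(block, {})  # illustration only; never exec untrusted code
    return out.getvalue()


# A thinker option containing a fenced code block, as in the diagram.
option = f"Compute the sum first.\n{FENCE}python\nprint(sum(range(5)))\n{FENCE}\n"
result = run_code_blocks(option)

# node.content += result, per the sequence diagram.
node_content = option + "\nExecution result:\n" + result
```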

Updated class diagram for ReasoningAgent

classDiagram
    class ReasoningAgent {
        - _verbose: bool
        - _llm_config: dict
        - _grader_llm_config: dict
        - _reason_config: dict
        - _root: ThinkNode
        - _thinker: AssistantAgent
        - _grader: AssistantAgent
        + __init__(
            name: str,
            llm_config: dict[str, Any],
            grader_llm_config: Optional[dict[str, Any]] = None,
            max_depth: int = 4,
            beam_size: int = 3,
            answer_approach: str = "pool",
            reason_config: Optional[dict[str, Any]] = None,
            **kwargs: Any
        ) : None
        + generate_forest_response(
            messages: Optional[List[Dict]] = None,
            sender: Optional[Agent] = None,
            config: Optional[Dict] = None
        ) : Tuple[bool, Union[str, Dict, None]]
        + rate_node(node: ThinkNode, ground_truth: str = None, is_outcome: bool = False) : float
        - _beam_reply(prompt: str, ground_truth: str = "") : str
        - _mtcs_reply(prompt: str, ground_truth: str = "") : str
        - _expand(node: ThinkNode) : list
        - _is_terminal(node: ThinkNode) : bool
    }
    class AssistantAgent
    class ThinkNode
    ReasoningAgent -- AssistantAgent : has _thinker
    ReasoningAgent -- AssistantAgent : has _grader
    ReasoningAgent -- ThinkNode : has _root
    note for ReasoningAgent "verbose parameter deprecated, use silent instead"

File-Level Changes

Change Details Files
The notebook now includes a code execution feature during reasoning, allowing the agent to execute Python code within fenced blocks and incorporate the results into its reasoning process.
  • Added a child user agent to execute code automatically during reasoning.
  • Modified the TreeofThought_message to conditionally include or exclude instructions related to Python execution based on the code_execution_config.
  • Implemented code execution using self._user_proxy when code_execution_config is enabled.
  • Added a code_execution_config parameter to the ReasoningAgent class.
  • The agent will now print the results of code execution.
notebook/agentchat_reasoning_agent.ipynb
autogen/agentchat/contrib/reasoning_agent.py
test/agentchat/contrib/test_reasoning_agent.py
The verbose parameter in ReasoningAgent has been deprecated in favor of silent to align with other AG2 agents.
  • Introduced a deprecation warning for the verbose parameter.
  • Replaced usages of verbose with silent throughout the ReasoningAgent class.
autogen/agentchat/contrib/reasoning_agent.py
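The deprecation shim might look like the following sketch; ReasoningAgentSketch and the exact warning text are illustrative, not the library's actual code:

```python
import warnings


class ReasoningAgentSketch:
    """Minimal sketch of the verbose -> silent deprecation; the real
    ReasoningAgent does this inside __init__ alongside its other setup."""

    def __init__(self, silent: bool = True, **kwargs):
        if "verbose" in kwargs:
            warnings.warn(
                "'verbose' is deprecated; use 'silent' instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            # Map the old flag onto the new one: verbose=True meant chatty
            # output, which corresponds to silent=False.
            silent = not kwargs.pop("verbose")
        self.silent = silent
```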
The notebook has been updated to include examples of using a different model for grading reasoning paths, saving and recovering reasoning trees, and extracting datasets for RLHF and SFT training.
  • Added a section on using a different model for grading by passing the grader_llm_config argument.
  • Included code to save and recover the reasoning tree to/from a JSON file.
  • Added code to extract RLHF preference datasets and SFT datasets from the reasoning tree.
  • Added a section on utilizing ground truth to enhance training data generation.
notebook/agentchat_reasoning_agent.ipynb
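The save/recover and dataset-extraction steps can be sketched with the standard json module. The tree layout and the preference_pairs helper below are assumptions — the real ThinkNode serialization in reasoning_agent.py may use a different shape — but they show one plausible way sibling scores become RLHF preference triples:

```python
import json
import os
import tempfile

# Hypothetical minimal tree shape; the real ThinkNode serialization
# in reasoning_agent.py may differ.
tree = {
    "content": "How many primes below 10?",
    "value": None,
    "children": [
        {"content": "List them: 2, 3, 5, 7 (count = 4)", "value": 0.9, "children": []},
        {"content": "Guess 5 without checking", "value": 0.2, "children": []},
    ],
}

# Save the reasoning tree to a JSON file, then recover it.
path = os.path.join(tempfile.gettempdir(), "reasoning_tree.json")
with open(path, "w") as f:
    json.dump(tree, f)
with open(path) as f:
    recovered = json.load(f)


def preference_pairs(node):
    """Emit (prompt, chosen, rejected) triples from sibling scores --
    one plausible way to build an RLHF preference dataset."""
    pairs = []
    kids = sorted(node["children"], key=lambda c: c["value"] or 0, reverse=True)
    for worse in kids[1:]:
        pairs.append((node["content"], kids[0]["content"], worse["content"]))
    for child in node["children"]:
        pairs.extend(preference_pairs(child))
    return pairs
```

An SFT dataset could be derived similarly by keeping only the highest-scoring path from root to leaf.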
The Anthropic client has been updated to correctly convert tool definitions into Anthropic-compatible functions, including updating nested $ref paths in property schemas.
  • Added a convert_tools_to_functions method to the Anthropic class.
  • Implemented recursive updating of $ref values in nested property schemas.
  • Added a test case to verify the correct conversion of tools to functions.
autogen/oai/anthropic.py
test/oai/test_anthropic.py

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!
  • Generate a plan of action for an issue: Comment @sourcery-ai plan on
    an issue to generate a plan of action for it.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@pull pull bot merged commit 465eb81 into Stars1233:main Feb 25, 2025

coderabbitai bot commented Feb 25, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?


Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR. (Beta)
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


codecov bot commented Feb 26, 2025

Codecov Report

Attention: Patch coverage is 0% with 22 lines in your changes missing coverage. Please review.

Files with missing lines            | Patch % | Lines
autogen/oai/anthropic.py            | 0.00%   | 19 Missing ⚠️
autogen/_website/generate_mkdocs.py | 0.00%   | 3 Missing ⚠️

Files with missing lines            | Coverage        | Δ
autogen/_website/generate_mkdocs.py | 82.27% <0.00%>  | (ø)
autogen/oai/anthropic.py            | 19.71% <0.00%>  | (-1.34%) ⬇️

3 participants