
When using SemanticKernel adapter with AWS Bedrock Claude, got Tool Call Error: The tool 'autogen-tools_get_weather' is not available. #5439

Closed
ekzhu opened this issue Feb 7, 2025 · 3 comments

ekzhu (Collaborator) commented Feb 7, 2025

Discussed in #5420

Originally posted by GxWwT February 7, 2025

```python
import boto3
import asyncio
from botocore.config import Config

from autogen_core.models import ModelInfo, ModelFamily
from autogen_ext.models.semantic_kernel import SKChatCompletionAdapter

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.bedrock import BedrockChatCompletion, BedrockChatPromptExecutionSettings
from semantic_kernel.memory.null_memory import NullMemory

my_config = Config(
    region_name = 'us-east-1',
    signature_version = 'v4',
    retries = {
        'max_attempts': 10,
        'mode': 'standard'
    },
)

# Create the custom boto3 clients
bedrock_runtime_client = boto3.client(service_name='bedrock-runtime', config=my_config)
bedrock_client = boto3.client("bedrock", config=my_config)

sk_client = BedrockChatCompletion(
    model_id='anthropic.claude-3-5-sonnet-20240620-v1:0',
    runtime_client=bedrock_runtime_client,
    client=bedrock_client,
)

# Configure execution settings
settings = BedrockChatPromptExecutionSettings(
    temperature=0.7,
    max_tokens=1000,
)

model_info = ModelInfo(vision=False, function_calling=True, json_output=True, family=ModelFamily.UNKNOWN)
model_client = SKChatCompletionAdapter(
    sk_client,
    kernel=Kernel(memory=NullMemory()),
    prompt_settings=settings,
    model_info=model_info,
)

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken

async def get_weather(city: str) -> str:
    """Get the current weather for a given city"""
    return f"The weather in {city} is 73 degrees and Sunny."

async def main() -> None:
    weather_agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[get_weather],
        system_message="You are a helpful AI assistant that can provide weather information.",
        model_client_stream=True,
    )
    # print("Registered tools:", [tool.name for tool in weather_agent._tools])

    stream = weather_agent.on_messages_stream(
        [TextMessage(content="Weather in Shanghai", source="user")], CancellationToken()
    )
    async for response in stream:
        print(response)

asyncio.run(main())
```

Error

```
source='assistant' models_usage=None content="Certainly! I'd be happy to" type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' provide you with the current weather' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' information for Shanghai. To get the most' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' up-to-date an' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content="d accurate weather data, I'll nee" type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='d to use the weather' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' tool. Let me fetch' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' that information for you right' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' away.' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) content=[FunctionCall(id='tooluse_VN13BhhhTLuwwFxmtQUdqA', arguments='{}', name='autogen-tools_get_weather')] type='ToolCallRequestEvent'
source='assistant' models_usage=None content=[FunctionExecutionResult(content="Error: The tool 'autogen-tools_get_weather' is not available.", call_id='tooluse_VN13BhhhTLuwwFxmtQUdqA')] type='ToolCallExecutionEvent'
Response(chat_message=ToolCallSummaryMessage(source='assistant', models_usage=None, content="**Error: The tool 'autogen-tools_get_weather' is not available.**", type='ToolCallSummaryMessage'), inner_messages=[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content=[FunctionCall(id='tooluse_VN13BhhhTLuwwFxmtQUdqA', arguments='{}', name='autogen-tools_get_weather')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content="Error: The tool 'autogen-tools_get_weather' is not available.", call_id='tooluse_VN13BhhhTLuwwFxmtQUdqA')], type='ToolCallExecutionEvent')])
```
@ekzhu ekzhu added this to the python-v0.4.x milestone Feb 7, 2025
ekzhu (Collaborator) commented Feb 7, 2025

Related #5413. cc @lspinheiro

@ekzhu ekzhu closed this as completed Feb 10, 2025
ekzhu (Collaborator) commented Feb 10, 2025

Resolved now; the fix will be released in v0.4.6.
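For anyone checking whether their installed build includes the fix, a simple dotted-version comparison against 0.4.6 is enough (a sketch; `autogen-ext` is assumed to be the relevant distribution name, per the version list below):

```python
from importlib.metadata import PackageNotFoundError, version

def has_fix(ver: str, fixed_in: str = "0.4.6") -> bool:
    """Return True if a dotted version string is at or above the fix release."""
    def to_tuple(v: str) -> tuple[int, ...]:
        return tuple(int(part) for part in v.split("."))
    return to_tuple(ver) >= to_tuple(fixed_in)

try:
    installed = version("autogen-ext")
    print(f"autogen-ext {installed} contains fix: {has_fix(installed)}")
except PackageNotFoundError:
    print("autogen-ext is not installed")
```

On the 0.4.7 environment reported below this returns True, so a lingering error there would point at a different cause than this issue.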

asheeshgarg commented

@ekzhu @lspinheiro
Is this released? I have the following:
opentelemetry-semantic-conventions 0.51b0
semantic-kernel 1.17.1
semantic-version 2.10.0
autogen-agentchat 0.4.7
autogen-core 0.4.7
autogen-ext 0.4.7
Still getting this error:

```
---------- weather_agent ----------
Error: The tool 'get-weather' is not available.
```
