From f1b1b6c382e81010ca527244ea4b0bae61c48e40 Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Thu, 13 Feb 2025 11:54:20 -0800 Subject: [PATCH 1/3] Fix spurious zone-end tag --- .../get-started/quick-start-guide.md | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/semantic-kernel/get-started/quick-start-guide.md b/semantic-kernel/get-started/quick-start-guide.md index 767fba6f..ed58aa72 100644 --- a/semantic-kernel/get-started/quick-start-guide.md +++ b/semantic-kernel/get-started/quick-start-guide.md @@ -300,11 +300,25 @@ To make it easier to get started building enterprise apps with Semantic Kernel, In the following sections, we'll unpack the above sample by walking through steps **1**, **2**, **3**, **4**, **6**, **9**, and **10**. Everything you need to build a simple agent that is powered by an AI service and can run your code. + ::: zone pivot="programming-language-csharp,programming-language-python" + - [Import packages](#1-import-packages) - [Add AI services](#2-add-ai-services) - ::: zone pivot="programming-language-csharp,programming-language-python" - [Enterprise components](#3-add-enterprise-services) +- [Build the kernel](#4-build-the-kernel-and-retrieve-services) +- Add memory (skipped) +- [Add plugins](#6-add-plugins) +- Create kernel arguments (skipped) +- Create prompts (skipped) +- [Planning](#9-planning) +- [Invoke](#10-invoke) + ::: zone-end + + ::: zone pivot="programming-language-java" + +- [Import packages](#1-import-packages) +- [Add AI services](#2-add-ai-services) - [Build the kernel](#4-build-the-kernel-and-retrieve-services) - Add memory (skipped) - [Add plugins](#6-add-plugins) @@ -313,6 +327,8 @@ In the following sections, we'll unpack the above sample by walking through step - [Planning](#9-planning) - [Invoke](#10-invoke) + ::: zone-end + ### 1) Import packages For this sample, we first started by importing the following packages: From b5eb15b8dc2746c227e9994936c4c8ba3d182bbc Mon Sep 17 00:00:00 2001 From: Genevieve Warren <24882762+gewarren@users.noreply.github.com> Date: Thu, 13 Feb 2025 12:09:40 -0800 Subject: [PATCH 2/3] Ingestion -> injection --- semantic-kernel/concepts/kernel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/semantic-kernel/concepts/kernel.md b/semantic-kernel/concepts/kernel.md index 69de031c..a3758b3c 100644 --- a/semantic-kernel/concepts/kernel.md +++ b/semantic-kernel/concepts/kernel.md @@ -32,7 +32,7 @@ Before building a kernel, you should first understand the two types of component | | Components | Description | |---|---|---| -| 1 | **Services** | These consist of both AI services (e.g., chat completion) and other services (e.g., logging and HTTP clients) that are necessary to run your application. This was modelled after the Service Provider pattern in .NET so that we could support dependency ingestion across all languages. | +| 1 | **Services** | These consist of both AI services (e.g., chat completion) and other services (e.g., logging and HTTP clients) that are necessary to run your application. This was modelled after the Service Provider pattern in .NET so that we could support dependency injection across all languages. | | 2 | **Plugins** | These are the components that are used by your AI services and prompt templates to perform work. AI services, for example, can use plugins to retrieve data from a database or call an external API to perform actions. 
| ::: zone pivot="programming-language-csharp" From 39f9534ac03a308084509fa21ac7608a18d2c37a Mon Sep 17 00:00:00 2001 From: Evan Mattson <35585003+moonbox3@users.noreply.github.com> Date: Tue, 18 Feb 2025 08:17:22 +0900 Subject: [PATCH 3/3] Python: merge Python docs updates from live to main (#464) * Improve Python agent learn site samples. * Include links to repo code. * Remove fixed locale from link * Fix python sample resource link * Use site relative links for learn site links. They don't need to be absolute. * Fix media link * Scope link per language * More cleanup * Add prompt template config import. Remove view from link in Python code pivot. * Updates to callout reserved param names with Python function calling. --- README.md | 6 +- .../Frameworks/agent/agent-templates.md | 2 + .../examples/example-agent-collaboration.md | 425 ++++++++++-------- .../agent/examples/example-assistant-code.md | 97 ++-- .../examples/example-assistant-search.md | 7 +- .../agent/examples/example-chat-agent.md | 6 +- .../Frameworks/process/process-deployment.md | 2 +- .../chat-completion/function-calling/index.md | 35 ++ .../ai-services/chat-completion/index.md | 6 +- .../observability/index.md | 10 +- .../observability/telemetry-advanced.md | 4 +- .../telemetry-with-aspire-dashboard.md | 10 +- ...telemetry-with-azure-ai-foundry-tracing.md | 6 +- .../observability/telemetry-with-console.md | 2 +- 14 files changed, 370 insertions(+), 248 deletions(-) diff --git a/README.md b/README.md index 75500b2b..e6f7a7f1 100644 --- a/README.md +++ b/README.md @@ -1,12 +1,12 @@ # Microsoft Semantic Kernel Documentation -This is the GitHub repository for the technical product documentation for **Semantic Kernel**. This documentation is published at [Microsoft Semantic Kernel documentation](https://learn.microsoft.com/semantic-kernel). +This is the GitHub repository for the technical product documentation for **Semantic Kernel**. This documentation is published at [Microsoft Semantic Kernel documentation](/semantic-kernel). ## How to contribute -Thanks for your interest in [contributing](https://learn.microsoft.com/), home of technical content for Microsoft products and services. +Thanks for your interest in contributing to the home of technical content for Microsoft products and services. -To learn how to make contributions to the content in this repository, start with our [Docs contributor guide](https://learn.microsoft.com/contribute). +To learn how to make contributions to the content in this repository, start with our [Docs contributor guide](/contribute). 
## Code of conduct diff --git a/semantic-kernel/Frameworks/agent/agent-templates.md b/semantic-kernel/Frameworks/agent/agent-templates.md index 49cb8ae2..36d0ad38 100644 --- a/semantic-kernel/Frameworks/agent/agent-templates.md +++ b/semantic-kernel/Frameworks/agent/agent-templates.md @@ -179,6 +179,8 @@ ChatCompletionAgent agent = ```python import yaml +from semantic_kernel.prompt_template import PromptTemplateConfig + # Read the YAML file with open("./GenerateStory.yaml", "r", encoding="utf-8") as file: generate_story_yaml = file.read() diff --git a/semantic-kernel/Frameworks/agent/examples/example-agent-collaboration.md b/semantic-kernel/Frameworks/agent/examples/example-agent-collaboration.md index 3206ad89..87fe4167 100644 --- a/semantic-kernel/Frameworks/agent/examples/example-agent-collaboration.md +++ b/semantic-kernel/Frameworks/agent/examples/example-agent-collaboration.md @@ -28,6 +28,9 @@ Before proceeding with feature coding, make sure your development environment is ::: zone pivot="programming-language-csharp" +> [!TIP] +> This sample uses an optional text file as part of processing. If you'd like to use it, you may download it [here](https://github.com/microsoft/semantic-kernel/blob/main/dotnet/samples/LearnResources/Resources/WomensSuffrage.txt). Place the file in your code working directory. + Start by creating a _Console_ project. Then, include the following package references to ensure all required dependencies are available. To add package dependencies from the command-line use the `dotnet` command: @@ -68,11 +71,20 @@ The _Agent Framework_ is experimental and requires warning suppression. This ma ::: zone-end ::: zone pivot="programming-language-python" + +> [!TIP] +> This sample uses an optional text file as part of processing. If you'd like to use it, you may download it [here](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/learn_resources/resources/WomensSuffrage.txt). Place the file in your code working directory. + +Start by installing the Semantic Kernel Python package. + +```bash +pip install semantic-kernel +``` + ```python import asyncio import os import copy -import pyperclip # Install via pip from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent from semantic_kernel.agents.strategies.selection.kernel_function_selection_strategy import ( @@ -167,7 +179,7 @@ Configure the following settings in your `.env` file for either Azure OpenAI or ```python AZURE_OPENAI_API_KEY="..." -AZURE_OPENAI_ENDPOINT="https://..." +AZURE_OPENAI_ENDPOINT="https://.openai.azure.com/" AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="..." AZURE_OPENAI_API_VERSION="..." @@ -254,11 +266,6 @@ toolKernel.Plugins.AddFromType(); ``` ::: zone-end -::: zone pivot="programming-language-python" -```python -tool_kernel = copy.deepcopy(kernel) -tool_kernel.add_plugin(ClipboardAccess(), plugin_name="clipboard") -``` ::: zone-end ::: zone pivot="programming-language-java" @@ -267,9 +274,9 @@ tool_kernel.add_plugin(ClipboardAccess(), plugin_name="clipboard") ::: zone-end +::: zone pivot="programming-language-csharp" The _Clipboard_ plugin may be defined as part of the sample. -::: zone pivot="programming-language-csharp" ```csharp private sealed class ClipboardAccess { @@ -297,21 +304,6 @@ private sealed class ClipboardAccess ``` ::: zone-end -::: zone pivot="programming-language-python" - -Note: we are leveraging a Python package called pyperclip. Please install is using pip. 
- -```python -class ClipboardAccess: - @kernel_function - def set_clipboard(content: str): - if not content.strip(): - return - - pyperclip.copy(content) -``` -::: zone-end - ::: zone pivot="programming-language-java" > Agents are currently unavailable in Java. @@ -320,9 +312,9 @@ class ClipboardAccess: ### Agent Definition +::: zone pivot="programming-language-csharp" Let's declare the agent names as `const` so they might be referenced in _Agent Group Chat_ strategies: -::: zone pivot="programming-language-csharp" ```csharp const string ReviewerName = "Reviewer"; const string WriterName = "Writer"; @@ -330,6 +322,9 @@ const string WriterName = "Writer"; ::: zone-end ::: zone pivot="programming-language-python" + +We will declare the agent names as "Reviewer" and "Writer." + ```python REVIEWER_NAME = "Reviewer" COPYWRITER_NAME = "Writer" @@ -379,23 +374,20 @@ ChatCompletionAgent agentReviewer = ::: zone pivot="programming-language-python" ```python agent_reviewer = ChatCompletionAgent( - service_id=REVIEWER_NAME, - kernel=_create_kernel_with_chat_completion(REVIEWER_NAME), - name=REVIEWER_NAME, - instructions=""" - Your responsiblity is to review and identify how to improve user provided content. - If the user has providing input or direction for content already provided, specify how to - address this input. - Never directly perform the correction or provide example. - Once the content has been updated in a subsequent response, you will review the content - again until satisfactory. - Always copy satisfactory content to the clipboard using available tools and inform user. - - RULES: - - Only identify suggestions that are specific and actionable. - - Verify previous suggestions have been addressed. - - Never repeat previous suggestions. - """, + service_id=REVIEWER_NAME, + kernel=kernel, + name=REVIEWER_NAME, + instructions=""" +Your responsibility is to review and identify how to improve user provided content. +If the user has provided input or direction for content already provided, specify how to address this input. +Never directly perform the correction or provide an example. +Once the content has been updated in a subsequent response, review it again until it is satisfactory. + +RULES: +- Only identify suggestions that are specific and actionable. +- Verify previous suggestions have been addressed. +- Never repeat previous suggestions. +""", ) ``` ::: zone-end @@ -406,11 +398,11 @@ agent_reviewer = ChatCompletionAgent( ::: zone-end -The _Writer_ agent is is similiar, but doesn't require the specification of _Execution Settings_ since it isn't configured with a plug-in. +::: zone pivot="programming-language-csharp" +The _Writer_ agent is similiar, but doesn't require the specification of _Execution Settings_ since it isn't configured with a plug-in. Here the _Writer_ is given a single-purpose task, follow direction and rewrite the content. -::: zone pivot="programming-language-csharp" ```csharp ChatCompletionAgent agentWriter = new() @@ -430,19 +422,19 @@ ChatCompletionAgent agentWriter = ::: zone-end ::: zone pivot="programming-language-python" +The _Writer_ agent is similiar. It is given a single-purpose task, follow direction and rewrite the content. ```python agent_writer = ChatCompletionAgent( - service_id=COPYWRITER_NAME, - kernel=_create_kernel_with_chat_completion(COPYWRITER_NAME), - name=COPYWRITER_NAME, - instructions=""" - Your sole responsiblity is to rewrite content according to review suggestions. - - - Always apply all review direction. 
- - Always revise the content in its entirety without explanation. - - Never address the user. - """, -) + service_id=WRITER_NAME, + kernel=kernel, + name=WRITER_NAME, + instructions=""" +Your sole responsibility is to rewrite content according to review suggestions. +- Always apply all review directions. +- Always revise the content in its entirety without explanation. +- Never address the user. +""", + ) ``` ::: zone-end @@ -489,25 +481,25 @@ KernelFunction selectionFunction = ::: zone pivot="programming-language-python" ```python selection_function = KernelFunctionFromPrompt( - function_name="selection", - prompt=f""" - Determine which participant takes the next turn in a conversation based on the the most recent participant. - State only the name of the participant to take the next turn. - No participant should take more than one turn in a row. - - Choose only from these participants: - - {REVIEWER_NAME} - - {COPYWRITER_NAME} - - Always follow these rules when selecting the next participant: - - After user input, it is {COPYWRITER_NAME}'s turn. - - After {COPYWRITER_NAME} replies, it is {REVIEWER_NAME}'s turn. - - After {REVIEWER_NAME} provides feedback, it is {COPYWRITER_NAME}'s turn. - - History: - {{{{$history}}}} - """, -) + function_name="selection", + prompt=f""" +Examine the provided RESPONSE and choose the next participant. +State only the name of the chosen participant without explanation. +Never choose the participant named in the RESPONSE. + +Choose only from these participants: +- {REVIEWER_NAME} +- {WRITER_NAME} + +Rules: +- If RESPONSE is user input, it is {REVIEWER_NAME}'s turn. +- If RESPONSE is by {REVIEWER_NAME}, it is {WRITER_NAME}'s turn. +- If RESPONSE is by {WRITER_NAME}, it is {REVIEWER_NAME}'s turn. + +RESPONSE: +{{{{$lastmessage}}}} +""" + ) ``` ::: zone-end @@ -540,20 +532,20 @@ KernelFunction terminationFunction = ::: zone pivot="programming-language-python" ```python -TERMINATION_KEYWORD = "yes" - -termination_function = KernelFunctionFromPrompt( - function_name="termination", - prompt=f""" - Examine the RESPONSE and determine whether the content has been deemed satisfactory. - If content is satisfactory, respond with a single word without explanation: {TERMINATION_KEYWORD}. - If specific suggestions are being provided, it is not satisfactory. - If no correction is suggested, it is satisfactory. + termination_keyword = "yes" - RESPONSE: - {{{{$history}}}} - """, -) + termination_function = KernelFunctionFromPrompt( + function_name="termination", + prompt=f""" +Examine the RESPONSE and determine whether the content has been deemed satisfactory. +If the content is satisfactory, respond with a single word without explanation: {termination_keyword}. +If specific suggestions are being provided, it is not satisfactory. +If no correction is suggested, it is satisfactory. + +RESPONSE: +{{{{$lastmessage}}}} +""" + ) ``` ::: zone-end @@ -573,7 +565,7 @@ ChatHistoryTruncationReducer historyReducer = new(1); ::: zone pivot="programming-language-python" ```python -**ChatHistoryReducer is coming soon to Python.** +history_reducer = ChatHistoryTruncationReducer(target_count=1) ``` ::: zone-end @@ -644,26 +636,28 @@ Creating `AgentGroupChat` involves: Notice that each strategy is responsible for parsing the `KernelFunction` result. 
```python chat = AgentGroupChat( - agents=[agent_writer, agent_reviewer], + agents=[agent_reviewer, agent_writer], selection_strategy=KernelFunctionSelectionStrategy( + initial_agent=agent_reviewer, function=selection_function, - kernel=_create_kernel_with_chat_completion("selection"), - result_parser=lambda result: str(result.value[0]) if result.value is not None else COPYWRITER_NAME, - agent_variable_name="agents", - history_variable_name="history", + kernel=kernel, + result_parser=lambda result: str(result.value[0]).strip() if result.value[0] is not None else WRITER_NAME, + history_variable_name="lastmessage", history_reducer=history_reducer, ), termination_strategy=KernelFunctionTerminationStrategy( agents=[agent_reviewer], function=termination_function, - kernel=_create_kernel_with_chat_completion("termination"), - result_parser=lambda result: TERMINATION_KEYWORD in str(result.value[0]).lower(), - history_variable_name="history", + kernel=kernel, + result_parser=lambda result: termination_keyword in str(result.value[0]).lower(), + history_variable_name="lastmessage", maximum_iterations=10, history_reducer=history_reducer, ), ) ``` + +The `lastmessage` `history_variable_name` corresponds with the `KernelFunctionSelectionStrategy` and the `KernelFunctionTerminationStrategy` prompt that was defined above. This is where the last message is placed when rendering the prompt. ::: zone-end ::: zone pivot="programming-language-java" @@ -702,15 +696,14 @@ while not is_complete: ::: zone-end +::: zone pivot="programming-language-csharp" Now let's capture user input within the previous loop. In this case: - Empty input will be ignored - The term `EXIT` will signal that the conversation is completed - The term `RESET` will clear the _Agent Group Chat_ history - Any term starting with `@` will be treated as a file-path whose content will be provided as input -- Valid input will be added to the _Agent Group Chaty_ as a _User_ message. - +- Valid input will be added to the _Agent Group Chat_ as a _User_ message. -::: zone pivot="programming-language-csharp" ```csharp Console.WriteLine(); Console.Write("> "); @@ -757,8 +750,18 @@ chat.AddChatMessage(new ChatMessageContent(AuthorRole.User, input)); ::: zone-end ::: zone pivot="programming-language-python" +Now let's capture user input within the previous loop. In this case: +- Empty input will be ignored. +- The term `exit` will signal that the conversation is complete. +- The term `reset` will clear the _Agent Group Chat_ history. +- Any term starting with `@` will be treated as a file-path whose content will be provided as input. +- Valid input will be added to the _Agent Group Chat_ as a _User_ message. 
+ +The operation logic inside the while loop looks like: + ```python -user_input = input("User:> ") +print() +user_input = input("User > ").strip() if not user_input: continue @@ -771,18 +774,22 @@ if user_input.lower() == "reset": print("[Conversation has been reset]") continue -if user_input.startswith("@") and len(input) > 1: - file_path = input[1:] +# Try to grab files from the script's current directory +if user_input.startswith("@") and len(user_input) > 1: + file_name = user_input[1:] + script_dir = os.path.dirname(os.path.abspath(__file__)) + file_path = os.path.join(script_dir, file_name) try: if not os.path.exists(file_path): print(f"Unable to access file: {file_path}") continue - with open(file_path) as file: + with open(file_path, "r", encoding="utf-8") as file: user_input = file.read() except Exception: print(f"Unable to access file: {file_path}") continue +# Add the current user_input to the chat await chat.add_chat_message(ChatMessageContent(role=AuthorRole.USER, content=user_input)) ``` ::: zone-end @@ -826,13 +833,17 @@ catch (HttpOperationException exception) ::: zone pivot="programming-language-python" ```python -chat.is_complete = False -async for response in chat.invoke(): - print(f"# {response.role} - {response.name or '*'}: '{response.content}'") +try: + async for response in chat.invoke(): + if response is None or not response.name: + continue + print() + print(f"# {response.name.upper()}:\n{response.content}") +except Exception as e: + print(f"Error during chat invocation: {e}") -if chat.is_complete: - is_complete = True - break +# Reset the chat's complete flag for the new conversation round. +chat.is_complete = False ``` ::: zone-end @@ -845,6 +856,8 @@ if chat.is_complete: ## Final +::: zone pivot="programming-language-csharp" + Bringing all the steps together, we have the final code for this example. The complete implementation is provided below. Try using these suggested inputs: @@ -852,14 +865,12 @@ Try using these suggested inputs: 1. Hi 2. {"message: "hello world"} 3. {"message": "hello world"} -4. Semantic Kernel (SK) is an open-source SDK that enables developers to build and orchestrate complex AI workflows that involve natural language processing (NLP) and machine learning models. It provies a flexible platform for integrating AI capabilities such as semantic search, text summarization, and dialogue systems into applications. With SK, you can easily combine different AI services and models, define thei relationships, and orchestrate interactions between them. +4. Semantic Kernel (SK) is an open-source SDK that enables developers to build and orchestrate complex AI workflows that involve natural language processing (NLP) and machine learning models. It provies a flexible platform for integrating AI capabilities such as semantic search, text summarization, and dialogue systems into applications. With SK, you can easily combine different AI services and models, define their relationships, and orchestrate interactions between them. 5. make this two paragraphs 6. thank you 7. @.\WomensSuffrage.txt 8. its good, but is it ready for my college professor? - -::: zone pivot="programming-language-csharp" ```csharp // Copyright (c) Microsoft. All rights reserved. @@ -1114,12 +1125,28 @@ public static class Program ::: zone-end ::: zone pivot="programming-language-python" + +Bringing all the steps together, we now have the final code for this example. The complete implementation is shown below. + +You can try using one of the suggested inputs. 
As the agent chat begins, the agents will exchange messages for several iterations until the reviewer agent is satisfied with the copywriter's work. The `while` loop ensures the conversation continues, even if the chat is initially considered complete, by resetting the `is_complete` flag to `False`. + +1. Rozes are red, violetz are blue. +2. Semantic Kernel (SK) is an open-source SDK that enables developers to build and orchestrate complex AI workflows that involve natural language processing (NLP) and machine learning models. It provies a flexible platform for integrating AI capabilities such as semantic search, text summarization, and dialogue systems into applications. With SK, you can easily combine different AI services and models, define their relationships, and orchestrate interactions between them. +4. Make this two paragraphs +5. thank you +7. @WomensSuffrage.txt +8. It's good, but is it ready for my college professor? + +> [!TIP] +> You can reference any file by providing `@`. To reference the "WomensSuffrage" text from above, download it [here](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/learn_resources/resources/WomensSuffrage.txt) and place it in your current working directory. You can then reference it with `@WomensSuffrage.txt`. + ```python # Copyright (c) Microsoft. All rights reserved. import asyncio import os +from semantic_kernel import Kernel from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent from semantic_kernel.agents.strategies.selection.kernel_function_selection_strategy import ( KernelFunctionSelectionStrategy, @@ -1128,12 +1155,10 @@ from semantic_kernel.agents.strategies.termination.kernel_function_termination_s KernelFunctionTerminationStrategy, ) from semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion import AzureChatCompletion -from semantic_kernel.contents import ChatHistoryTruncationReducer from semantic_kernel.contents.chat_message_content import ChatMessageContent +from semantic_kernel.contents.history_reducer.chat_history_truncation_reducer import ChatHistoryTruncationReducer from semantic_kernel.contents.utils.author_role import AuthorRole -from semantic_kernel.functions.kernel_function_decorator import kernel_function from semantic_kernel.functions.kernel_function_from_prompt import KernelFunctionFromPrompt -from semantic_kernel.kernel import Kernel ################################################################### # The following sample demonstrates how to create a simple, # @@ -1142,122 +1167,123 @@ from semantic_kernel.kernel import Kernel # complete a user's task. # ################################################################### - -class ClipboardAccess: - @kernel_function - def set_clipboard(content: str): - if not content.strip(): - return - - pyperclip.copy(content) - - +# Define agent names REVIEWER_NAME = "Reviewer" -COPYWRITER_NAME = "Writer" +WRITER_NAME = "Writer" -def _create_kernel_with_chat_completion(service_id: str) -> Kernel: +def create_kernel() -> Kernel: + """Creates a Kernel instance with an Azure OpenAI ChatCompletion service.""" kernel = Kernel() - kernel.add_service(AzureChatCompletion(service_id=service_id)) + kernel.add_service(service=AzureChatCompletion()) return kernel async def main(): + # Create a single kernel instance for all agents. + kernel = create_kernel() + + # Create ChatCompletionAgents using the same kernel. 
agent_reviewer = ChatCompletionAgent( service_id=REVIEWER_NAME, - kernel=_create_kernel_with_chat_completion(REVIEWER_NAME), + kernel=kernel, name=REVIEWER_NAME, instructions=""" - Your responsiblity is to review and identify how to improve user provided content. - If the user has providing input or direction for content already provided, specify how to - address this input. - Never directly perform the correction or provide example. - Once the content has been updated in a subsequent response, you will review the content - again until satisfactory. - Always copy satisfactory content to the clipboard using available tools and inform user. - - RULES: - - Only identify suggestions that are specific and actionable. - - Verify previous suggestions have been addressed. - - Never repeat previous suggestions. - """, +Your responsibility is to review and identify how to improve user provided content. +If the user has provided input or direction for content already provided, specify how to address this input. +Never directly perform the correction or provide an example. +Once the content has been updated in a subsequent response, review it again until it is satisfactory. + +RULES: +- Only identify suggestions that are specific and actionable. +- Verify previous suggestions have been addressed. +- Never repeat previous suggestions. +""", ) agent_writer = ChatCompletionAgent( - service_id=COPYWRITER_NAME, - kernel=_create_kernel_with_chat_completion(COPYWRITER_NAME), - name=COPYWRITER_NAME, + service_id=WRITER_NAME, + kernel=kernel, + name=WRITER_NAME, instructions=""" - Your sole responsiblity is to rewrite content according to review suggestions. - - - Always apply all review direction. - - Always revise the content in its entirety without explanation. - - Never address the user. - """, +Your sole responsibility is to rewrite content according to review suggestions. +- Always apply all review directions. +- Always revise the content in its entirety without explanation. +- Never address the user. +""", ) + # Define a selection function to determine which agent should take the next turn. selection_function = KernelFunctionFromPrompt( function_name="selection", prompt=f""" - Determine which participant takes the next turn in a conversation based on the the most recent participant. - State only the name of the participant to take the next turn. - No participant should take more than one turn in a row. - - Choose only from these participants: - - {REVIEWER_NAME} - - {COPYWRITER_NAME} - - Always follow these rules when selecting the next participant: - - After user input, it is {COPYWRITER_NAME}'s turn. - - After {COPYWRITER_NAME} replies, it is {REVIEWER_NAME}'s turn. - - After {REVIEWER_NAME} provides feedback, it is {COPYWRITER_NAME}'s turn. - - History: - {{{{$history}}}} - """, +Examine the provided RESPONSE and choose the next participant. +State only the name of the chosen participant without explanation. +Never choose the participant named in the RESPONSE. + +Choose only from these participants: +- {REVIEWER_NAME} +- {WRITER_NAME} + +Rules: +- If RESPONSE is user input, it is {REVIEWER_NAME}'s turn. +- If RESPONSE is by {REVIEWER_NAME}, it is {WRITER_NAME}'s turn. +- If RESPONSE is by {WRITER_NAME}, it is {REVIEWER_NAME}'s turn. + +RESPONSE: +{{{{$lastmessage}}}} +""", ) - TERMINATION_KEYWORD = "yes" + # Define a termination function where the reviewer signals completion with "yes". 
+ termination_keyword = "yes" termination_function = KernelFunctionFromPrompt( function_name="termination", prompt=f""" - Examine the RESPONSE and determine whether the content has been deemed satisfactory. - If content is satisfactory, respond with a single word without explanation: {TERMINATION_KEYWORD}. - If specific suggestions are being provided, it is not satisfactory. - If no correction is suggested, it is satisfactory. - - RESPONSE: - {{{{$history}}}} - """, +Examine the RESPONSE and determine whether the content has been deemed satisfactory. +If the content is satisfactory, respond with a single word without explanation: {termination_keyword}. +If specific suggestions are being provided, it is not satisfactory. +If no correction is suggested, it is satisfactory. + +RESPONSE: +{{{{$lastmessage}}}} +""", ) - history_reducer = ChatHistoryTruncationReducer(target_count=1) + history_reducer = ChatHistoryTruncationReducer(target_count=5) + # Create the AgentGroupChat with selection and termination strategies. chat = AgentGroupChat( - agents=[agent_writer, agent_reviewer], + agents=[agent_reviewer, agent_writer], selection_strategy=KernelFunctionSelectionStrategy( + initial_agent=agent_reviewer, function=selection_function, - kernel=_create_kernel_with_chat_completion("selection"), - result_parser=lambda result: str(result.value[0]) if result.value is not None else COPYWRITER_NAME, - agent_variable_name="agents", - history_variable_name="history", + kernel=kernel, + result_parser=lambda result: str(result.value[0]).strip() if result.value[0] is not None else WRITER_NAME, + history_variable_name="lastmessage", history_reducer=history_reducer, ), termination_strategy=KernelFunctionTerminationStrategy( agents=[agent_reviewer], function=termination_function, - kernel=_create_kernel_with_chat_completion("termination"), - result_parser=lambda result: TERMINATION_KEYWORD in str(result.value[0]).lower(), - history_variable_name="history", + kernel=kernel, + result_parser=lambda result: termination_keyword in str(result.value[0]).lower(), + history_variable_name="lastmessage", maximum_iterations=10, history_reducer=history_reducer, ), ) - is_complete: bool = False + print( + "Ready! Type your input, or 'exit' to quit, 'reset' to restart the conversation. " + "You may pass in a file path using @." 
+ ) + + is_complete = False while not is_complete: - user_input = input("User:> ") + print() + user_input = input("User > ").strip() if not user_input: continue @@ -1270,31 +1296,42 @@ async def main(): print("[Conversation has been reset]") continue - if user_input.startswith("@") and len(input) > 1: - file_path = input[1:] + # Try to grab files from the script's current directory + if user_input.startswith("@") and len(user_input) > 1: + file_name = user_input[1:] + script_dir = os.path.dirname(os.path.abspath(__file__)) + file_path = os.path.join(script_dir, file_name) try: if not os.path.exists(file_path): print(f"Unable to access file: {file_path}") continue - with open(file_path) as file: + with open(file_path, "r", encoding="utf-8") as file: user_input = file.read() except Exception: print(f"Unable to access file: {file_path}") continue + # Add the current user_input to the chat await chat.add_chat_message(ChatMessageContent(role=AuthorRole.USER, content=user_input)) - async for response in chat.invoke(): - print(f"# {response.role} - {response.name or '*'}: '{response.content}'") + try: + async for response in chat.invoke(): + if response is None or not response.name: + continue + print() + print(f"# {response.name.upper()}:\n{response.content}") + except Exception as e: + print(f"Error during chat invocation: {e}") - if chat.is_complete: - is_complete = True - break + # Reset the chat's complete flag for the new conversation round. + chat.is_complete = False if __name__ == "__main__": asyncio.run(main()) ``` + +You may find the full [code](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/learn_resources/agent_docs/agent_collaboration.py), as shown above, in our repo. ::: zone-end ::: zone pivot="programming-language-java" diff --git a/semantic-kernel/Frameworks/agent/examples/example-assistant-code.md b/semantic-kernel/Frameworks/agent/examples/example-assistant-code.md index 67aa0290..c9b79287 100644 --- a/semantic-kernel/Frameworks/agent/examples/example-assistant-code.md +++ b/semantic-kernel/Frameworks/agent/examples/example-assistant-code.md @@ -92,7 +92,7 @@ from semantic_kernel.contents.utils.author_role import AuthorRole from semantic_kernel.kernel import Kernel ``` -Additionally, copy the `PopulationByAdmin1.csv` and `PopulationByCountry.csv` data files from [_Semantic Kernel_ `LearnResources` Project](https://github.com/microsoft/semantic-kernel/tree/main/python/samples/learn_resources/resources). Add these files in your project folder. +Additionally, copy the `PopulationByAdmin1.csv` and `PopulationByCountry.csv` data files from the [_Semantic Kernel_ `learn_resources/resources` directory](https://github.com/microsoft/semantic-kernel/tree/main/python/samples/learn_resources/resources). Add these files to your working directory. ::: zone-end ::: zone pivot="programming-language-java" @@ -172,7 +172,7 @@ Configure the following settings in your `.env` file for either Azure OpenAI or ```python AZURE_OPENAI_API_KEY="..." -AZURE_OPENAI_ENDPOINT="https://..." +AZURE_OPENAI_ENDPOINT="https://.openai.azure.com/" AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="..." AZURE_OPENAI_API_VERSION="..." @@ -181,6 +181,9 @@ OPENAI_ORG_ID="" OPENAI_CHAT_MODEL_ID="" ``` +[!TIP] +Azure Assistants require an API version of at least 2024-05-01-preview. As new features are introduced, API versions are updated accordingly. As of this writing, the latest version is 2025-01-01-preview. 
For the most up-to-date versioning details, refer to the [Azure OpenAI API preview lifecycle](/azure/ai-services/openai/api-version-deprecation). + Once configured, the respective AI service classes will pick up the required variables and use them during instantiation. ::: zone-end @@ -235,18 +238,26 @@ OpenAIFile fileDataCountryList = await fileClient.UploadFileAsync("PopulationByC ::: zone-end ::: zone pivot="programming-language-python" + +> [!TIP] +> You may need to adjust the file paths depending upon where your files are located. + ```python # Let's form the file paths that we will later pass to the assistant csv_file_path_1 = os.path.join( os.path.dirname(os.path.dirname(os.path.realpath(__file__))), + "resources", "PopulationByAdmin1.csv", ) csv_file_path_2 = os.path.join( os.path.dirname(os.path.dirname(os.path.realpath(__file__))), + "resources", "PopulationByCountry.csv", ) ``` +You may need to modify the path creation code based on the storage location of your CSV files. + ::: zone-end ::: zone pivot="programming-language-java" @@ -257,9 +268,10 @@ csv_file_path_2 = os.path.join( ### Agent Definition +::: zone pivot="programming-language-csharp" + We are now ready to instantiate an _OpenAI Assistant Agent_. The agent is configured with its target model, _Instructions_, and the _Code Interpreter_ tool enabled. Additionally, we explicitly associate the two data files with the _Code Interpreter_ tool. -::: zone pivot="programming-language-csharp" ```csharp Console.WriteLine("Defining agent..."); OpenAIAssistantAgent agent = @@ -283,20 +295,23 @@ OpenAIAssistantAgent agent = ::: zone-end ::: zone pivot="programming-language-python" + +We are now ready to instantiate an _Azure Assistant Agent_. The agent is configured with its target model, _Instructions_, and the _Code Interpreter_ tool enabled. Additionally, we explicitly associate the two data files with the _Code Interpreter_ tool. + ```python agent = await AzureAssistantAgent.create( - kernel=Kernel(), - service_id="agent", - name="SampleAssistantAgent", - instructions=""" - Analyze the available data to provide an answer to the user's question. - Always format response using markdown. - Always include a numerical index that starts at 1 for any lists or tables. - Always sort lists in ascending order. - """, - enable_code_interpreter=True, - code_interpreter_filenames=[csv_file_path_1, csv_file_path_2], - ) + kernel=Kernel(), + service_id="agent", + name="SampleAssistantAgent", + instructions=""" + Analyze the available data to provide an answer to the user's question. + Always format response using markdown. + Always include a numerical index that starts at 1 for any lists or tables. + Always sort lists in ascending order. + """, + enable_code_interpreter=True, + code_interpreter_filenames=[csv_file_path_1, csv_file_path_2], +) ``` ::: zone-end @@ -722,7 +737,10 @@ public static class Program ::: zone pivot="programming-language-python" ```python +# Copyright (c) Microsoft. All rights reserved. 
+ import asyncio +import logging import os from semantic_kernel.agents.open_ai.azure_assistant_agent import AzureAssistantAgent @@ -731,19 +749,29 @@ from semantic_kernel.contents.streaming_file_reference_content import StreamingF from semantic_kernel.contents.utils.author_role import AuthorRole from semantic_kernel.kernel import Kernel +logging.basicConfig(level=logging.ERROR) + +################################################################### +# The following sample demonstrates how to create a simple, # +# OpenAI assistant agent that utilizes the code interpreter # +# to analyze uploaded files. # +################################################################### + # Let's form the file paths that we will later pass to the assistant csv_file_path_1 = os.path.join( os.path.dirname(os.path.dirname(os.path.realpath(__file__))), + "resources", "PopulationByAdmin1.csv", ) csv_file_path_2 = os.path.join( os.path.dirname(os.path.dirname(os.path.realpath(__file__))), + "resources", "PopulationByCountry.csv", ) -async def download_file_content(agent, file_id: str): +async def download_file_content(agent: AzureAssistantAgent, file_id: str): try: # Fetch the content of the file using the provided method response_content = await agent.client.files.content(file_id) @@ -766,7 +794,7 @@ async def download_file_content(agent, file_id: str): print(f"An error occurred while downloading file {file_id}: {str(e)}") -async def download_response_image(agent, file_ids: list[str]): +async def download_response_image(agent: AzureAssistantAgent, file_ids: list[str]): if file_ids: # Iterate over file_ids and download each one for file_id in file_ids: @@ -801,30 +829,41 @@ async def main(): if user_input.lower() == "exit": is_complete = True - break await agent.add_chat_message( thread_id=thread_id, message=ChatMessageContent(role=AuthorRole.USER, content=user_input) ) - is_code: bool = False - async for response in agent.invoke_stream(thread_id=thread_id): - if is_code != response.metadata.get("code"): - print() - is_code = not is_code - - print(f"{response.content}", end="", flush=True) + is_code = False + last_role = None + async for response in agent.invoke_stream(thread_id=thread_id): + current_is_code = response.metadata.get("code", False) + + if current_is_code: + if not is_code: + print("\n\n```python") + is_code = True + print(response.content, end="", flush=True) + else: + if is_code: + print("\n```") + is_code = False + last_role = None + if hasattr(response, "role") and response.role is not None and last_role != response.role: + print(f"\n# {response.role}: ", end="", flush=True) + last_role = response.role + print(response.content, end="", flush=True) file_ids.extend([ item.file_id for item in response.items if isinstance(item, StreamingFileReferenceContent) ]) - - print() + if is_code: + print("```\n") await download_response_image(agent, file_ids) file_ids.clear() finally: - print("Cleaning up resources...") + print("\nCleaning up resources...") if agent is not None: [await agent.delete_file(file_id) for file_id in agent.code_interpreter_file_ids] await agent.delete_thread(thread_id) @@ -834,6 +873,8 @@ async def main(): if __name__ == "__main__": asyncio.run(main()) ``` + +You may find the full [code](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/learn_resources/agent_docs/assistant_code.py), as shown above, in our repo. 
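If you save the script locally (for example, as `assistant_code.py`, matching the sample name in the repo link above) and have configured the `.env` settings described earlier, a minimal way to run it might be:

```bash
# Assumes the semantic-kernel package is installed and that the two CSV files
# are reachable at the paths computed in the script (adjust the paths if needed).
python assistant_code.py
```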
::: zone-end ::: zone pivot="programming-language-java" diff --git a/semantic-kernel/Frameworks/agent/examples/example-assistant-search.md b/semantic-kernel/Frameworks/agent/examples/example-assistant-search.md index ca3c70b3..f38a28d4 100644 --- a/semantic-kernel/Frameworks/agent/examples/example-assistant-search.md +++ b/semantic-kernel/Frameworks/agent/examples/example-assistant-search.md @@ -174,7 +174,7 @@ Configure the following settings in your `.env` file for either Azure OpenAI or ```python AZURE_OPENAI_API_KEY="..." -AZURE_OPENAI_ENDPOINT="https://..." +AZURE_OPENAI_ENDPOINT="https://.openai.azure.com/" AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="..." AZURE_OPENAI_API_VERSION="..." @@ -183,6 +183,9 @@ OPENAI_ORG_ID="" OPENAI_CHAT_MODEL_ID="" ``` +> [!TIP] +> Azure Assistants require an API version of at least 2024-05-01-preview. As new features are introduced, API versions are updated accordingly. As of this writing, the latest version is 2025-01-01-preview. For the most up-to-date versioning details, refer to the [Azure OpenAI API preview lifecycle](/azure/ai-services/openai/api-version-deprecation). + Once configured, the respective AI service classes will pick up the required variables and use them during instantiation. ::: zone-end @@ -780,6 +783,8 @@ async def main(): if __name__ == "__main__": asyncio.run(main()) ``` + +You may find the full [code](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/learn_resources/agent_docs/assistant_search.py), as shown above, in our repo. ::: zone-end ::: zone pivot="programming-language-java" diff --git a/semantic-kernel/Frameworks/agent/examples/example-chat-agent.md b/semantic-kernel/Frameworks/agent/examples/example-chat-agent.md index 7870d9ed..589526ef 100644 --- a/semantic-kernel/Frameworks/agent/examples/example-chat-agent.md +++ b/semantic-kernel/Frameworks/agent/examples/example-chat-agent.md @@ -177,7 +177,7 @@ Configure the following settings in your `.env` file for either Azure OpenAI or ```python AZURE_OPENAI_API_KEY="..." -AZURE_OPENAI_ENDPOINT="https://..." +AZURE_OPENAI_ENDPOINT="https://.openai.azure.com/" AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="..." AZURE_OPENAI_API_VERSION="..." @@ -343,7 +343,7 @@ agent = ChatCompletionAgent( The current date and time is: {{$now}}. """, arguments=KernelArguments( - settings=AzureAIPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto()), + settings=AzureChatPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto()), repository="microsoft/semantic-kernel", ), ) @@ -669,6 +669,8 @@ async def main(): if __name__ == "__main__": asyncio.run(main()) ``` + +You may find the full [code](https://github.com/microsoft/semantic-kernel/blob/main/python/samples/learn_resources/agent_docs/chat_agent.py), as shown above, in our repo. ::: zone-end ::: zone pivot="programming-language-java" diff --git a/semantic-kernel/Frameworks/process/process-deployment.md b/semantic-kernel/Frameworks/process/process-deployment.md index 3be467a4..8ba8aa0e 100644 --- a/semantic-kernel/Frameworks/process/process-deployment.md +++ b/semantic-kernel/Frameworks/process/process-deployment.md @@ -18,7 +18,7 @@ The Process Framework provides an in-process runtime that allows developers to r ## Cloud Runtimes -For scenarios requiring scalability and distributed processing, the Process Framework supports cloud runtimes such as [**Orleans**](https://learn.microsoft.com/dotnet/orleans/overview) and [**Dapr**](https://dapr.io/). 
These options empower developers to deploy processes in a distributed manner, facilitating high availability and load balancing across multiple instances. By leveraging these cloud runtimes, organizations can streamline their operations and manage substantial workloads with ease. +For scenarios requiring scalability and distributed processing, the Process Framework supports cloud runtimes such as [**Orleans**](/dotnet/orleans/overview) and [**Dapr**](https://dapr.io/). These options empower developers to deploy processes in a distributed manner, facilitating high availability and load balancing across multiple instances. By leveraging these cloud runtimes, organizations can streamline their operations and manage substantial workloads with ease. - **Orleans Runtime:** This framework provides a programming model for building distributed applications and is particularly well-suited for handling virtual actors in a resilient manner, complementing the Process Framework’s event-driven architecture. diff --git a/semantic-kernel/concepts/ai-services/chat-completion/function-calling/index.md b/semantic-kernel/concepts/ai-services/chat-completion/function-calling/index.md index f11b8df5..f122da5b 100644 --- a/semantic-kernel/concepts/ai-services/chat-completion/function-calling/index.md +++ b/semantic-kernel/concepts/ai-services/chat-completion/function-calling/index.md @@ -207,6 +207,41 @@ kernel.add_plugin(OrderPizzaPlugin(pizza_service, user_context, payment_service) > [!NOTE] > Only functions with the `kernel_function` decorator will be serialized and sent to the model. This allows you to have helper functions that are not exposed to the model. +## Reserved Parameter Names for Auto Function Calling + +When using auto function calling in KernelFunctions, certain parameter names are **reserved** and receive special handling. These reserved names allow you to automatically access key objects required for function execution. + +### Reserved Names + +The following parameter names are reserved: +- `kernel` +- `service` +- `execution_settings` +- `arguments` + +### How They Work + +During function invocation, the method [`gather_function_parameters`](https://github.com/microsoft/semantic-kernel/blob/main/python/semantic_kernel/functions/kernel_function_from_method.py#L148) inspects each parameter. If the parameter's name matches one of the reserved names, it is populated with specific objects: + +- **`kernel`**: Injected with the kernel object. +- **`service`**: Populated with the AI service selected based on the provided arguments. +- **`execution_settings`**: Contains settings pertinent to the function's execution. +- **`arguments`**: Receives the entire set of kernel arguments passed during invocation. + +This design ensures that these parameters are automatically managed, eliminating the need for manual extraction or assignment. + +### Example Usage + +Consider the following example: + +```python +class SimplePlugin: + @kernel_function(name="GetWeather", description="Get the weather for a location.") + async def get_the_weather(self, location: str, arguments: KernelArguments) -> str: + # The 'arguments' parameter is reserved and automatically populated with KernelArguments. + return f"Received user input: {location}, the weather is nice!" 
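
# A minimal usage sketch (an illustration added here, not part of the original
# sample): only `location` is exposed to the model, while `arguments` is
# populated automatically because its name is reserved. The plugin name
# "simple" and the value "Seattle" are assumptions for illustration; Kernel and
# KernelArguments are assumed to be imported from the semantic_kernel package.
async def demo() -> None:
    kernel = Kernel()
    kernel.add_plugin(SimplePlugin(), plugin_name="simple")
    # `location` is supplied explicitly; `arguments` is filled in by the kernel at invocation time.
    result = await kernel.invoke(
        plugin_name="simple",
        function_name="GetWeather",
        arguments=KernelArguments(location="Seattle"),
    )
    print(result)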
+``` + ::: zone-end ::: zone pivot="programming-language-java" diff --git a/semantic-kernel/concepts/ai-services/chat-completion/index.md b/semantic-kernel/concepts/ai-services/chat-completion/index.md index c8218409..88d3eaea 100644 --- a/semantic-kernel/concepts/ai-services/chat-completion/index.md +++ b/semantic-kernel/concepts/ai-services/chat-completion/index.md @@ -920,7 +920,7 @@ chat_completion_service = AzureChatCompletion(service_id="my-service-id") ``` > [!NOTE] -> The `AzureChatCompletion` service also supports [Microsoft Entra](https://learn.microsoft.com/en-us/entra/identity/authentication/overview-authentication) authentication. If you don't provide an API key, the service will attempt to authenticate using the Entra token. +> The `AzureChatCompletion` service also supports [Microsoft Entra](/entra/identity/authentication/overview-authentication) authentication. If you don't provide an API key, the service will attempt to authenticate using the Entra token. # [OpenAI](#tab/python-OpenAI) @@ -967,7 +967,7 @@ chat_completion_service = AzureAIInferenceChatCompletion( ``` > [!NOTE] -> The `AzureAIInferenceChatCompletion` service also supports [Microsoft Entra](https://learn.microsoft.com/en-us/entra/identity/authentication/overview-authentication) authentication. If you don't provide an API key, the service will attempt to authenticate using the Entra token. +> The `AzureAIInferenceChatCompletion` service also supports [Microsoft Entra](/entra/identity/authentication/overview-authentication) authentication. If you don't provide an API key, the service will attempt to authenticate using the Entra token. # [Anthropic](#tab/python-Anthropic) @@ -1274,7 +1274,7 @@ execution_settings = OnnxGenAIPromptExecutionSettings() --- > [!TIP] -> To see what you can configure in the execution settings, you can check the class definition in the [source code](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai) or check out the [API documentation](https://learn.microsoft.com/en-us/python/api/semantic-kernel/semantic_kernel.connectors.ai?view=semantic-kernel-python). +> To see what you can configure in the execution settings, you can check the class definition in the [source code](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai) or check out the [API documentation](/python/api/semantic-kernel/semantic_kernel.connectors.ai). ::: zone-end diff --git a/semantic-kernel/concepts/enterprise-readiness/observability/index.md b/semantic-kernel/concepts/enterprise-readiness/observability/index.md index 41583f59..c88edbe5 100644 --- a/semantic-kernel/concepts/enterprise-readiness/observability/index.md +++ b/semantic-kernel/concepts/enterprise-readiness/observability/index.md @@ -20,8 +20,8 @@ Observability is typically achieved through logging, metrics, and tracing. 
They Useful materials for further reading: - [Observability defined by Cloud Native Computing Foundation](https://glossary.cncf.io/observability/) -- [Distributed tracing](https://learn.microsoft.com/dotnet/core/diagnostics/distributed-tracing) -- [Observability in .Net](https://learn.microsoft.com/dotnet/core/diagnostics/observability-with-otel) +- [Distributed tracing](/dotnet/core/diagnostics/distributed-tracing) +- [Observability in .Net](/dotnet/core/diagnostics/observability-with-otel) - [OpenTelemetry](https://opentelemetry.io/docs/what-is-opentelemetry/) ## Observability in Semantic Kernel @@ -33,18 +33,18 @@ Specifically, Semantic Kernel provides the following observability features: - **Logging**: Semantic Kernel logs meaningful events and errors from the kernel, kernel plugins and functions, as well as the AI connectors. ![Logs and events](../../../media/telemetry-log-events-overview-app-insights.png) > [!IMPORTANT] - > [Traces in Application Insights](https://learn.microsoft.com/azure/azure-monitor/app/data-model-complete#trace) represent traditional log entries and [OpenTelemetry span events](https://opentelemetry.io/docs/concepts/signals/traces/#span-events). They are not the same as distributed traces. + > [Traces in Application Insights](/azure/azure-monitor/app/data-model-complete#trace) represent traditional log entries and [OpenTelemetry span events](https://opentelemetry.io/docs/concepts/signals/traces/#span-events). They are not the same as distributed traces. - **Metrics**: Semantic Kernel emits metrics from kernel functions and AI connectors. You will be able to monitor metrics such as the kernel function execution time, the token consumption of AI connectors, etc. ![Metrics](../../../media/telemetry-metrics-overview-app-insights.png) - **Tracing**: Semantic Kernel supports distributed tracing. You can track activities across different services and within Semantic Kernel. - ![Complete end-to-end transaction of a request](../../media/telemetry-trace-overview-app-insights.png) + ![Complete end-to-end transaction of a request](../../../media/telemetry-trace-overview-app-insights.png) ::: zone pivot="programming-language-csharp" | Telemetry | Description | |-----------|---------------------------------------| -| Log | Logs are recorded throughout the Kernel. For more information on Logging in .Net, please refer to this [document](https://learn.microsoft.com/dotnet/core/extensions/logging). Sensitive data, such as kernel function arguments and results, are logged at the trace level. Please refer to this [table](https://learn.microsoft.com/dotnet/core/extensions/logging?tabs=command-line#log-level) for more information on log levels. | +| Log | Logs are recorded throughout the Kernel. For more information on Logging in .Net, please refer to this [document](/dotnet/core/extensions/logging). Sensitive data, such as kernel function arguments and results, are logged at the trace level. Please refer to this [table](/dotnet/core/extensions/logging?tabs=command-line#log-level) for more information on log levels. | | Activity | Each kernel function execution and each call to an AI model are recorded as an activity. All activities are generated by an activity source named "Microsoft.SemanticKernel". | | Metric | Semantic Kernel captures the following metrics from kernel functions:
  • `semantic_kernel.function.invocation.duration` (Histogram) - function execution time (in seconds)
  • `semantic_kernel.function.streaming.duration` (Histogram) - function streaming execution time (in seconds)
  • `semantic_kernel.function.invocation.token_usage.prompt` (Histogram) - number of prompt token usage (only for `KernelFunctionFromPrompt`)
  • `semantic_kernel.function.invocation.token_usage.completion` (Histogram) - number of completion token usage (only for `KernelFunctionFromPrompt`)
  • | diff --git a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-advanced.md b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-advanced.md index bb604e6b..2efebe69 100644 --- a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-advanced.md +++ b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-advanced.md @@ -12,7 +12,7 @@ ms.service: semantic-kernel # More advanced scenarios for telemetry > [!NOTE] -> This article will use [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/overview?tabs=bash) for illustration. If you prefer to use other tools, please refer to the documentation of the tool you are using on setup instructions. +> This article will use [Aspire Dashboard](/dotnet/aspire/fundamentals/dashboard/overview?tabs=bash) for illustration. If you prefer to use other tools, please refer to the documentation of the tool you are using on setup instructions. ## Auto Function Calling @@ -375,7 +375,7 @@ Please refer to this [article](./telemetry-with-console.md#environment-variables ### Start the Aspire Dashboard -Follow the instructions [here](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash#start-the-dashboard) to start the dashboard. Once the dashboard is running, open a browser and navigate to `http://localhost:18888` to access the dashboard. +Follow the instructions [here](/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash#start-the-dashboard) to start the dashboard. Once the dashboard is running, open a browser and navigate to `http://localhost:18888` to access the dashboard. ### Run diff --git a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-aspire-dashboard.md b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-aspire-dashboard.md index b83abbfb..6bd39dfe 100644 --- a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-aspire-dashboard.md +++ b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-aspire-dashboard.md @@ -11,9 +11,9 @@ ms.service: semantic-kernel # Inspection of telemetry data with Aspire Dashboard -[Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/overview?tabs=bash) is part of the [.NET Aspire](https://learn.microsoft.com/dotnet/aspire/get-started/aspire-overview) offering. The dashboard allows developers to monitor and inspect their distributed applications. +[Aspire Dashboard](/dotnet/aspire/fundamentals/dashboard/overview?tabs=bash) is part of the [.NET Aspire](/dotnet/aspire/get-started/aspire-overview) offering. The dashboard allows developers to monitor and inspect their distributed applications. -In this example, we will use the [standalone mode](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) and learn how to export telemetry data to Aspire Dashboard, and inspect the data there. +In this example, we will use the [standalone mode](/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash) and learn how to export telemetry data to Aspire Dashboard, and inspect the data there. ## Exporter @@ -330,7 +330,7 @@ Please refer to this [article](./telemetry-with-console.md#add-telemetry-1) for ## Start the Aspire Dashboard -Follow the instructions [here](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash#start-the-dashboard) to start the dashboard. 
Once the dashboard is running, open a browser and navigate to `http://localhost:18888` to access the dashboard. +Follow the instructions [here](/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash#start-the-dashboard) to start the dashboard. Once the dashboard is running, open a browser and navigate to `http://localhost:18888` to access the dashboard. ## Run @@ -366,7 +366,7 @@ python telemetry_aspire_dashboard_quickstart.py After running the application, head over to the dashboard to inspect the telemetry data. > [!TIP] -> Follow this [guide](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/explore) to explore the Aspire Dashboard interface. +> Follow this [guide](/dotnet/aspire/fundamentals/dashboard/explore) to explore the Aspire Dashboard interface. ### Traces @@ -383,7 +383,7 @@ In the trace details, you can see the span that represents the prompt function a ### Logs -Head over to the `Structured` tab to view the logs emitted by the application. Please refer to this [guide](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/explore#structured-logs-page) on how to work with structured logs in the dashboard. +Head over to the `Structured` tab to view the logs emitted by the application. Please refer to this [guide](/dotnet/aspire/fundamentals/dashboard/explore#structured-logs-page) on how to work with structured logs in the dashboard. ## Next steps diff --git a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-azure-ai-foundry-tracing.md b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-azure-ai-foundry-tracing.md index 36153162..00ab843c 100644 --- a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-azure-ai-foundry-tracing.md +++ b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-azure-ai-foundry-tracing.md @@ -10,7 +10,7 @@ ms.service: semantic-kernel # Visualize traces on Azure AI Foundry Tracing UI -[Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-studio/) Tracing UI is a web-based user interface that allows you to visualize traces and logs generated by your applications. This article provides a step-by-step guide on how to visualize traces on Azure AI Foundry Tracing UI. +[Azure AI Foundry](/azure/ai-studio/) Tracing UI is a web-based user interface that allows you to visualize traces and logs generated by your applications. This article provides a step-by-step guide on how to visualize traces on Azure AI Foundry Tracing UI. > [!IMPORTANT] > Before you start, make sure you have completed the tutorial on [inspecting telemetry data with Application Insights](./telemetry-with-app-insights.md). @@ -20,8 +20,8 @@ ms.service: semantic-kernel Prerequisites: -- An Azure AI Foundry project. Follow this [guide](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/create-projects) to create one if you don't have one. -- A serverless inference API. Follow this [guide](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-serverless) to create one if you don't have one. +- An Azure AI Foundry project. Follow this [guide](/azure/ai-studio/how-to/create-projects) to create one if you don't have one. +- A serverless inference API. Follow this [guide](/azure/ai-studio/how-to/deploy-models-serverless) to create one if you don't have one. - Alternatively, you can attach an Azure OpenAI resource to the project, in which case you don't need to create a serverless API. 
## Attach an Application Insights resource to the project diff --git a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-console.md b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-console.md index 56354c16..f066c170 100644 --- a/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-console.md +++ b/semantic-kernel/concepts/enterprise-readiness/observability/telemetry-with-console.md @@ -478,7 +478,7 @@ Value: 16 Here you can see the name, the description, the unit, the time range, the type, the value of the metric, and the meter that the metric belongs to. > [!NOTE] -> The above metric is a Counter metric. For a full list of metric types, see [here](https://learn.microsoft.com/dotnet/core/diagnostics/metrics-instrumentation#types-of-instruments). Depending on the type of metric, the output may vary. +> The above metric is a Counter metric. For a full list of metric types, see [here](/dotnet/core/diagnostics/metrics-instrumentation#types-of-instruments). Depending on the type of metric, the output may vary. ::: zone-end