
Commit edb367a

Authored by dmytrostruk, Copilot, moonbox3, crickman, and ReubenBond
Python: Azure AI client based on new azure-ai-projects package (#1910)
* Added changes (#1909)
* Python: [Feature Branch] Renamed Azure AI agent and small fixes (#1919)
* Renaming
* Small fixes
* Update python/packages/core/agent_framework/openai/_shared.py
Co-authored-by: Copilot <[email protected]>
---------
Co-authored-by: Copilot <[email protected]>
* Small fix
* Python: [Feature Branch] Added use_latest_version parameter to AzureAIClient (#1959)
* Added use_latest_version parameter to AzureAIClient
* Added unit tests
* Update python/samples/getting_started/agents/azure_ai/azure_ai_use_latest_version.py
Co-authored-by: Copilot <[email protected]>
* Update python/packages/azure-ai/agent_framework_azure_ai/_client.py
Co-authored-by: Evan Mattson <[email protected]>
---------
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Evan Mattson <[email protected]>
* Python: [Feature Branch] Structured Outputs and more examples for AzureAIClient (#1987)
* Small updates
* Added support for structured outputs
* Added code interpreter example
* More examples and fixes
* Added more examples and README
* Small fix
* Addressed PR feedback
* Removed optional ID from FunctionResultContent (#2011)
* Added hosted MCP support (#2018)
* Python: [Feature Branch] Fixed "store" parameter handling (#2069)
* Fixed store parameter handling
* Small fix
* Python: [Feature Branch] Added more examples and fixes for Azure AI agent (#2077)
* Updated azure-ai-projects package version
* Added an example of hosted MCP with approval required
* Updated code interpreter example
* Added file search example
* Update python/samples/getting_started/agents/azure_ai/azure_ai_with_file_search.py
Co-authored-by: Copilot <[email protected]>
* Update python/samples/getting_started/agents/azure_ai/azure_ai_with_file_search.py
Co-authored-by: Copilot <[email protected]>
* Small fix
---------
Co-authored-by: Copilot <[email protected]>
* Added handling for conversation_id (#2098)
* Merge from main
* Revert "Merge from main"
This reverts commit b8206a8.
* Python: [Feature Branch] Merge from main to Azure AI branch (#2111)
* Do not build DevUI assets during .NET project build (#2010)
* .NET: Add unit tests for declarative executor SetMultipleVariables (#2016)
* Add unit tests for create conversation executor
* Update indentation and comment typo.
* Added unit tests for declarative executor SetMultipleVariablesExecutor
* Updated comments and syntactic sugar
* Python: DevUI: Use metadata.entity_id instead of model field (#1984)
* DevUI: Use metadata.entity_id for agent/workflow name instead of model field
* OpenAI Responses: add explicit request validation
* Review feedback
* .NET: DevUI - Do not automatically add/map OpenAI services/endpoints (#2014)
* Don't add OpenAIResponses as part of Dev UI
You should be able to add and remove Dev UI without impacting your other production endpoints.
* Remove `AddDevUI()` and do not map OpenAI endpoints from `MapDevUI()`
* Fix comment wording
* Revise documentation
---------
Co-authored-by: Daniel Roth <[email protected]>
* Python: DevUI: Add OpenAI Responses API proxy support + HIL for Workflows (#1737)
* DevUI: Add OpenAI Responses API proxy support with enhanced UI features
This commit adds support for proxying requests to OpenAI's Responses API, allowing DevUI to route conversations to OpenAI models when configured to enable testing.
Backend changes:
- Add OpenAI proxy executor with conversation routing logic
- Enhance event mapper to support OpenAI Responses API format
- Extend server endpoints to handle OpenAI proxy mode
- Update models with OpenAI-specific response types
- Remove emojis from logging and CLI output for cleaner text
Frontend changes:
- Add settings modal with OpenAI proxy configuration UI
- Enhance agent and workflow views with improved state management
- Add new UI components (separator, switch) for settings
- Update debug panel with better event filtering
- Improve message renderers for OpenAI content types
- Update types and API client for OpenAI integration
* update ui, settings modal and workflow input form, add register cleanup hooks.
* add workflow HIL support, user mode, other fixes
* feat(devui): add human-in-the-loop (HIL) support with dynamic response schemas
Implement HIL workflow support allowing workflows to pause for user input with dynamically generated JSON schemas based on response handler type hints.
Key Features:
- Automatic response schema extraction from @response_handler decorators
- Dynamic form generation in UI based on Pydantic/dataclass response types
- Checkpoint-based conversation storage for HIL requests/responses
- Resume workflow execution after user provides HIL response
Backend Changes:
- Add extract_response_type_from_executor() to introspect response handlers
- Enrich RequestInfoEvent with response_schema via _enrich_request_info_event_with_response_schema()
- Map RequestInfoEvent to response.input.requested OpenAI event format
- Store HIL responses in conversation history and restore checkpoints
Frontend Changes:
- Add HILInputModal component with SchemaFormRenderer for dynamic forms
- Support Pydantic BaseModel and dataclass response types
- Render enum fields as dropdowns, strings as text/textarea, numbers, booleans, arrays, objects
- Display original request context alongside response form
Testing:
- Add tests for checkpoint storage (test_checkpoints.py)
- Add schema generation tests for all input types (test_schema_generation.py)
- Validate end-to-end HIL flow with spam workflow sample
This enables workflows to seamlessly pause execution and request structured user input with type-safe, validated forms generated automatically from response type annotations.
* improve HIL support, improve workflow execution view
* ui updates
* ui updates
* improve HIL for workflows, add auth and view modes
* update workflow
* security improvements, ui fixes
* fix mypy error
* update loading spinner in ui
---------
Co-authored-by: Mark Wallace <[email protected]>
* .NET: Remove launchSettings.json from .gitignore in dotnet/samples (#2006)
* Remove launchSettings.json from .gitignore in dotnet/samples
* Update dotnet/samples/GettingStarted/DevUI/DevUI_Step01_BasicUsage/Properties/launchSettings.json
Co-authored-by: Copilot <[email protected]>
* Update dotnet/samples/AGUIClientServer/AGUIServer/Properties/launchSettings.json
Co-authored-by: Copilot <[email protected]>
---------
Co-authored-by: Copilot <[email protected]>
* DevUI: Serialize workflow input as string to maintain conformance with OpenAI Responses format (#2021)
Co-authored-by: Victor Dibia <[email protected]>
* Add Microsoft Agent Framework logo to assets (#2007)
* Updated package versions (#2027)
* DevUI: Prevent line breaks within words in the agent view (#2024)
Co-authored-by: Victor Dibia <[email protected]>
* .NET [AG-UI]: Adds support for shared state. (#1996)
* Product changes
* Tests
* Dojo project
* Cleanups
* Python: Fix underlying tool choice bug and all for return to previous Handoff subagent (#2037)
* Fix tool_choice override bug and add enable_return_to_previous support
* Add unit test for handoff checkpointing
* Handle tools when we have them
* added missing chatAgent params (#2044)
* .NET: fix ChatCompletions Tools serialization (#2043)
* fix serialization in chat completions on tools
* nit
* .NET: assign AgentCard's URL to mapped-endpoint if not defined explicitly (#2047)
* fix serialization in chat completions on tools
* nit
* write e2e test for agent card resolve + adjust behavior
* nit
* Version 1.0.0-preview.251110.1 (#2048)
* .NET: Remove moved OpenAPI sample and point to SK one. (#1997)
* Remove moved OpenAPI sample and point to SK one.
* Update dotnet/samples/GettingStarted/Agents/README.md
Co-authored-by: Copilot <[email protected]>
---------
Co-authored-by: Copilot <[email protected]>
* Bump AWSSDK.Extensions.Bedrock.MEAI from 4.0.4.2 to 4.0.4.6 (#2031)
---
updated-dependencies:
- dependency-name: AWSSDK.Extensions.Bedrock.MEAI
  dependency-version: 4.0.4.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* .NET: Separate all memory and rag samples into their own folders (#2000)
* Separate all memory and rag samples into their own folders
* Fix broken link.
* Python: .Net: Dotnet devui compatibility fixes (#2026)
* DevUI: Add OpenAI Responses API proxy support with enhanced UI features
This commit adds support for proxying requests to OpenAI's Responses API, allowing DevUI to route conversations to OpenAI models when configured to enable testing.
Backend changes:
- Add OpenAI proxy executor with conversation routing logic
- Enhance event mapper to support OpenAI Responses API format
- Extend server endpoints to handle OpenAI proxy mode
- Update models with OpenAI-specific response types
- Remove emojis from logging and CLI output for cleaner text
Frontend changes:
- Add settings modal with OpenAI proxy configuration UI
- Enhance agent and workflow views with improved state management
- Add new UI components (separator, switch) for settings
- Update debug panel with better event filtering
- Improve message renderers for OpenAI content types
- Update types and API client for OpenAI integration
* update ui, settings modal and workflow input form, add register cleanup hooks.
* add workflow HIL support, user mode, other fixes
* feat(devui): add human-in-the-loop (HIL) support with dynamic response schemas
Implement HIL workflow support allowing workflows to pause for user input with dynamically generated JSON schemas based on response handler type hints.
Key Features:
- Automatic response schema extraction from @response_handler decorators
- Dynamic form generation in UI based on Pydantic/dataclass response types
- Checkpoint-based conversation storage for HIL requests/responses
- Resume workflow execution after user provides HIL response
Backend Changes:
- Add extract_response_type_from_executor() to introspect response handlers
- Enrich RequestInfoEvent with response_schema via _enrich_request_info_event_with_response_schema()
- Map RequestInfoEvent to response.input.requested OpenAI event format
- Store HIL responses in conversation history and restore checkpoints
Frontend Changes:
- Add HILInputModal component with SchemaFormRenderer for dynamic forms
- Support Pydantic BaseModel and dataclass response types
- Render enum fields as dropdowns, strings as text/textarea, numbers, booleans, arrays, objects
- Display original request context alongside response form
Testing:
- Add tests for checkpoint storage (test_checkpoints.py)
- Add schema generation tests for all input types (test_schema_generation.py)
- Validate end-to-end HIL flow with spam workflow sample
This enables workflows to seamlessly pause execution and request structured user input with type-safe, validated forms generated automatically from response type annotations.
* improve HIL support, improve workflow execution view
* ui updates
* ui updates
* improve HIL for workflows, add auth and view modes
* update workflow
* security improvements, ui fixes
* fix mypy error
* update loading spinner in ui
* DevUI: Serialize workflow input as string to maintain conformance with OpenAI Responses format
* Phase 1: Add /meta endpoint and fix workflow event naming for .NET DevUI compatibility
* additional fixes for .NET DevUI workflow visualization item ID tracking
**Problem:** .NET DevUI was generating different item IDs for ExecutorInvokedEvent and ExecutorCompletedEvent, causing only the first executor to highlight in the workflow graph. Long executor names and error messages also broke UI layout.
**Changes:**
- Add ExecutorActionItemResource to match Python DevUI implementation
- Track item IDs per executor using dictionary in AgentRunResponseUpdateExtensions
- Reuse same item ID across invoked/completed/failed events for proper pairing
- Add truncateText() utility to workflow-utils.ts
- Truncate executor names to 35 chars in execution timeline
- Truncate error messages to 150 chars in workflow graph nodes
**Details:**
- ExecutorActionItemResource registered with JSON source generation context
- Dictionary cleaned up after executor completion/failure to prevent memory leaks
- Frontend item tracking by unique item.id supports multiple executor runs
- All changes follow existing codebase patterns and conventions
Tested with review-workflow showing correct executor highlighting and state transitions for sequential and concurrent executors.
* format fixes, remove cors tests
* remove unecessary attributes
---------
Co-authored-by: Mark Wallace <[email protected]>
Co-authored-by: Reuben Bond <[email protected]>
* DevUI: support having both an agent and a workflow with the same id in discovery (#2023)
* Python: Fix Model ID attribute not showing up in `invoke_agent` span (#2061)
* Best effort to surface the model id to invoke agent span
* Fix tests
* Fix tests
* Version 1.0.0-preview.251107.2 (#2065)
* Version 1.0.0-preview.251110.2 (#2067)
* Update README.md to change Grafana links to Azure portal links for dashboard access (#1983)
* .NET - Enable build & test on branch `feature-foundry-agents` (#2068)
* Tests good, mkay
* Update .github/workflows/dotnet-build-and-test.yml
Co-authored-by: Copilot <[email protected]>
* Enable feature build pipelines
---------
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Roger Barreto <[email protected]>
* Python: Add concrete AGUIChatClient (#2072)
* Add concrete AGUIChatClient
* Update logging docstrings and conventions
* PR feedback
* Updates to support client-side tool calls
* .NET: Move catalog samples to the HostedAgents folder (#2090)
* move catalog samples to the HostedAgents folder
* move the catalog samples' projects to the HostedAgents folder
* Bump OpenTelemetry.Instrumentation.Runtime from 1.12.0 to 1.13.0 (#1856)
---
updated-dependencies:
- dependency-name: OpenTelemetry.Instrumentation.Runtime
  dependency-version: 1.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* .NET: Bump Microsoft.SemanticKernel.Agents.Abstractions from 1.66.0 to 1.67.0 (#1962)
* Bump Microsoft.SemanticKernel.Agents.Abstractions from 1.66.0 to 1.67.0
---
updated-dependencies:
- dependency-name: Microsoft.SemanticKernel.Agents.Abstractions
  dependency-version: 1.67.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <[email protected]>
* .NET: Bump all Microsoft.SemanticKernel packages from 1.66.* to 1.67.* (#1969)
* Initial plan
* Update all Microsoft.SemanticKernel packages to 1.67.*
Co-authored-by: rogerbarreto <[email protected]>
* Remove unrelated changes to package-lock.json and yarn.lock
Co-authored-by: markwallace-microsoft <[email protected]>
---------
Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: rogerbarreto <[email protected]>
Co-authored-by: markwallace-microsoft <[email protected]>
---------
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: rogerbarreto <[email protected]>
Co-authored-by: markwallace-microsoft <[email protected]>
* .NET: fix: WorkflowAsAgent Sample (#1787)
* fix: WorkflowAsAgent Sample
* Also makes ChatForwardingExecutor public
* feat: Expand ChatForwardingExecutor handled types
Make ChatForwardingExecutor match the input types of ChatProtocolExecutor.
* fix: Update for the new AgentRunResponseUpdate merge logic
AIAgent always sends out List<ChatMessage> now.
* Updated (#2076)
* Bump vite in /python/samples/demos/chatkit-integration/frontend (#1918)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 7.1.9 to 7.1.12.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v7.1.12/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v7.1.12/packages/vite)
---
updated-dependencies:
- dependency-name: vite
  dependency-version: 7.1.12
  dependency-type: direct:development
...
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Bump Roslynator.Analyzers from 4.14.0 to 4.14.1 (#1857)
---
updated-dependencies:
- dependency-name: Roslynator.Analyzers
  dependency-version: 4.14.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Bump MishaKav/pytest-coverage-comment from 1.1.57 to 1.1.59 (#2034)
Bumps [MishaKav/pytest-coverage-comment](https://github.com/mishakav/pytest-coverage-comment) from 1.1.57 to 1.1.59.
- [Release notes](https://github.com/mishakav/pytest-coverage-comment/releases)
- [Changelog](https://github.com/MishaKav/pytest-coverage-comment/blob/main/CHANGELOG.md)
- [Commits](MishaKav/pytest-coverage-comment@v1.1.57...v1.1.59)
---
updated-dependencies:
- dependency-name: MishaKav/pytest-coverage-comment
  dependency-version: 1.1.59
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Chris <[email protected]>
* Python: Handle agent user input request in AgentExecutor (#2022)
* Handle agent user input request in AgentExecutor
* fix test
* Address comments
* Fix tests
* Fix tests
* Address comments
* Address comments
* Python: OpenAI Responses Image Generation Stream Support, Sample and Unit Tests (#1853)
* support for image gen streaming
* small fixes
* fixes
* added comment
* Python: Fix MCP Tool Parameter Descriptions Not Propagated to LLMs (#1978)
* mcp tool description fix
* small fix
* .NET: Allow extending agent run options via additional properties (#1872)
* Allow extending agent run options via additional properties
This mirrors the M.E.AI model in ChatOptions.AdditionalProperties which is very useful when building functionality pipelines.
Fixes #1815
* Expand XML documentation
Co-authored-by: Copilot <[email protected]>
* Add AdditionalProperties tests to AgentRunOptions
Co-authored-by: kzu <[email protected]>
---------
Co-authored-by: Copilot <[email protected]>
Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: kzu <[email protected]>
* Python: Use the last entry in the task history to avoid empty responses (#2101)
* Use the last entry in the task history to avoid empty responses
* History only contains Messages
* Updated package versions (#2104)
---------
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Reuben Bond <[email protected]>
Co-authored-by: Peter Ibekwe <[email protected]>
Co-authored-by: Jeff Handley <[email protected]>
Co-authored-by: Daniel Roth <[email protected]>
Co-authored-by: Victor Dibia <[email protected]>
Co-authored-by: Mark Wallace <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Shawn Henry <[email protected]>
Co-authored-by: Javier Calvarro Nelson <[email protected]>
Co-authored-by: Evan Mattson <[email protected]>
Co-authored-by: Eduard van Valkenburg <[email protected]>
Co-authored-by: Korolev Dmitry <[email protected]>
Co-authored-by: westey <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reuben Bond <[email protected]>
Co-authored-by: Tao Chen <[email protected]>
Co-authored-by: wuweng <[email protected]>
Co-authored-by: Chris <[email protected]>
Co-authored-by: Roger Barreto <[email protected]>
Co-authored-by: SergeyMenshykh <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Jacob Alber <[email protected]>
Co-authored-by: Giles Odigwe <[email protected]>
Co-authored-by: Daniel Cazzulino <[email protected]>
Co-authored-by: kzu <[email protected]>
* Updated azure-ai-projects package version and small fixes (#2139)
* Python: [Feature Branch] Resolve CI issues (#2143)
* Small documentation and code fixes
* Small fix in documentation
* Addressed PR feedback
* Added AI Search example
---------
Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Evan Mattson <[email protected]>
Co-authored-by: Chris <[email protected]>
Co-authored-by: Reuben Bond <[email protected]>
Co-authored-by: Peter Ibekwe <[email protected]>
Co-authored-by: Jeff Handley <[email protected]>
Co-authored-by: Daniel Roth <[email protected]>
Co-authored-by: Victor Dibia <[email protected]>
Co-authored-by: Mark Wallace <[email protected]>
Co-authored-by: Shawn Henry <[email protected]>
Co-authored-by: Javier Calvarro Nelson <[email protected]>
Co-authored-by: Eduard van Valkenburg <[email protected]>
Co-authored-by: Korolev Dmitry <[email protected]>
Co-authored-by: westey <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reuben Bond <[email protected]>
Co-authored-by: Tao Chen <[email protected]>
Co-authored-by: wuweng <[email protected]>
Co-authored-by: Roger Barreto <[email protected]>
Co-authored-by: SergeyMenshykh <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Jacob Alber <[email protected]>
Co-authored-by: Giles Odigwe <[email protected]>
Co-authored-by: Daniel Cazzulino <[email protected]>
Co-authored-by: kzu <[email protected]>
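
For orientation before the diffs, here is a minimal, hedged construction sketch of the new client described above. The endpoint, deployment name, and agent name are placeholders; the import path and keyword arguments are taken from the _client.py diff and its docstring further down, not invented here.

import asyncio

from agent_framework.azure import AzureAIClient
from azure.identity.aio import DefaultAzureCredential


async def main() -> None:
    async with DefaultAzureCredential() as credential:
        async with AzureAIClient(
            project_endpoint="https://your-project.cognitiveservices.azure.com",  # placeholder endpoint
            model_deployment_name="gpt-4",  # placeholder deployment
            async_credential=credential,
            agent_name="MyAgent",  # placeholder agent name
            use_latest_version=True,  # reuse the latest service-side agent version if one exists
        ) as client:
            # Optional: route traces to the project's Application Insights resource, if configured.
            await client.setup_azure_ai_observability()


asyncio.run(main())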
1 parent 4bffe1e commit edb367a


42 files changed · +2536 / -486 lines changed

python/packages/azure-ai/agent_framework_azure_ai/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -3,6 +3,7 @@
 import importlib.metadata
 
 from ._chat_client import AzureAIAgentClient
+from ._client import AzureAIClient
 from ._shared import AzureAISettings
 
 try:
@@ -12,6 +13,7 @@
 
 __all__ = [
     "AzureAIAgentClient",
+    "AzureAIClient",
     "AzureAISettings",
     "__version__",
 ]
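
With this export in place, AzureAIClient becomes part of the package's public surface. As a quick illustrative sketch (both paths come from this diff and the class docstring in the new module below, nothing new is assumed):

# Import via the package published from this directory:
from agent_framework_azure_ai import AzureAIClient

# Or via the framework namespace used in the docstring below:
# from agent_framework.azure import AzureAIClient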
python/packages/azure-ai/agent_framework_azure_ai/_client.py

Lines changed: 354 additions & 0 deletions (new file; shown below without diff markers)
@@ -0,0 +1,354 @@
# Copyright (c) Microsoft. All rights reserved.

import sys
from collections.abc import MutableSequence
from typing import Any, ClassVar, TypeVar

from agent_framework import (
    AGENT_FRAMEWORK_USER_AGENT,
    ChatMessage,
    ChatOptions,
    HostedMCPTool,
    TextContent,
    get_logger,
    use_chat_middleware,
    use_function_invocation,
)
from agent_framework.exceptions import ServiceInitializationError
from agent_framework.observability import use_observability
from agent_framework.openai._responses_client import OpenAIBaseResponsesClient
from azure.ai.projects.aio import AIProjectClient
from azure.ai.projects.models import (
    MCPTool,
    PromptAgentDefinition,
    PromptAgentDefinitionText,
    ResponseTextFormatConfigurationJsonSchema,
)
from azure.core.credentials_async import AsyncTokenCredential
from azure.core.exceptions import ResourceNotFoundError
from openai.types.responses.parsed_response import (
    ParsedResponse,
)
from openai.types.responses.response import Response as OpenAIResponse
from pydantic import BaseModel, ValidationError

from ._shared import AzureAISettings

if sys.version_info >= (3, 11):
    from typing import Self  # pragma: no cover
else:
    from typing_extensions import Self  # pragma: no cover


logger = get_logger("agent_framework.azure")


TAzureAIClient = TypeVar("TAzureAIClient", bound="AzureAIClient")


@use_function_invocation
@use_observability
@use_chat_middleware
class AzureAIClient(OpenAIBaseResponsesClient):
    """Azure AI Agent client."""

    OTEL_PROVIDER_NAME: ClassVar[str] = "azure.ai"  # type: ignore[reportIncompatibleVariableOverride, misc]

    def __init__(
        self,
        *,
        project_client: AIProjectClient | None = None,
        agent_name: str | None = None,
        agent_version: str | None = None,
        conversation_id: str | None = None,
        project_endpoint: str | None = None,
        model_deployment_name: str | None = None,
        async_credential: AsyncTokenCredential | None = None,
        use_latest_version: bool | None = None,
        env_file_path: str | None = None,
        env_file_encoding: str | None = None,
        **kwargs: Any,
    ) -> None:
        """Initialize an Azure AI Agent client.

        Keyword Args:
            project_client: An existing AIProjectClient to use. If not provided, one will be created.
            agent_name: The name to use when creating new agents.
            agent_version: The version of the agent to use.
            conversation_id: Default conversation ID to use for conversations. Can be overridden by
                the conversation_id property when making a request.
            project_endpoint: The Azure AI Project endpoint URL.
                Can also be set via the AZURE_AI_PROJECT_ENDPOINT environment variable.
                Ignored when a project_client is passed.
            model_deployment_name: The model deployment name to use for agent creation.
                Can also be set via the AZURE_AI_MODEL_DEPLOYMENT_NAME environment variable.
            async_credential: Azure async credential to use for authentication.
            use_latest_version: Boolean flag that indicates whether to use the latest agent version
                if it exists in the service.
            env_file_path: Path to environment file for loading settings.
            env_file_encoding: Encoding of the environment file.
            kwargs: Additional keyword arguments passed to the parent class.

        Examples:
            .. code-block:: python

                from agent_framework.azure import AzureAIClient
                from azure.identity.aio import DefaultAzureCredential

                # Using environment variables
                # Set AZURE_AI_PROJECT_ENDPOINT=https://your-project.cognitiveservices.azure.com
                # Set AZURE_AI_MODEL_DEPLOYMENT_NAME=gpt-4
                credential = DefaultAzureCredential()
                client = AzureAIClient(async_credential=credential)

                # Or passing parameters directly
                client = AzureAIClient(
                    project_endpoint="https://your-project.cognitiveservices.azure.com",
                    model_deployment_name="gpt-4",
                    async_credential=credential,
                )

                # Or loading from a .env file
                client = AzureAIClient(async_credential=credential, env_file_path="path/to/.env")
        """
        try:
            azure_ai_settings = AzureAISettings(
                project_endpoint=project_endpoint,
                model_deployment_name=model_deployment_name,
                env_file_path=env_file_path,
                env_file_encoding=env_file_encoding,
            )
        except ValidationError as ex:
            raise ServiceInitializationError("Failed to create Azure AI settings.", ex) from ex

        # If no project_client is provided, create one
        should_close_client = False
        if project_client is None:
            if not azure_ai_settings.project_endpoint:
                raise ServiceInitializationError(
                    "Azure AI project endpoint is required. Set via 'project_endpoint' parameter "
                    "or 'AZURE_AI_PROJECT_ENDPOINT' environment variable."
                )

            # Use provided credential
            if not async_credential:
                raise ServiceInitializationError("Azure credential is required when project_client is not provided.")
            project_client = AIProjectClient(
                endpoint=azure_ai_settings.project_endpoint,
                credential=async_credential,
                user_agent=AGENT_FRAMEWORK_USER_AGENT,
            )
            should_close_client = True

        # Initialize parent
        super().__init__(
            **kwargs,
        )

        # Initialize instance variables
        self.agent_name = agent_name
        self.agent_version = agent_version
        self.use_latest_version = use_latest_version
        self.project_client = project_client
        self.credential = async_credential
        self.model_id = azure_ai_settings.model_deployment_name
        self.conversation_id = conversation_id
        self._should_close_client = should_close_client  # Track whether we should close the client connection

    async def setup_azure_ai_observability(self, enable_sensitive_data: bool | None = None) -> None:
        """Use this method to set up tracing in your Azure AI Project.

        This will take the connection string from the project_client.
        It will override any connection string that is set in the environment variables.
        It will disable any OTLP endpoint that might have been set.
        """
        try:
            conn_string = await self.project_client.telemetry.get_application_insights_connection_string()
        except ResourceNotFoundError:
            logger.warning(
                "No Application Insights connection string found for the Azure AI Project, "
                "please call setup_observability() manually."
            )
            return
        from agent_framework.observability import setup_observability

        setup_observability(
            applicationinsights_connection_string=conn_string, enable_sensitive_data=enable_sensitive_data
        )

    async def __aenter__(self) -> "Self":
        """Async context manager entry."""
        return self

    async def __aexit__(self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: Any) -> None:
        """Async context manager exit."""
        await self.close()

    async def close(self) -> None:
        """Close the project_client."""
        await self._close_client_if_needed()

    async def _get_agent_reference_or_create(
        self, run_options: dict[str, Any], messages_instructions: str | None
    ) -> dict[str, str]:
        """Determine which agent to use and create it if needed.

        Returns:
            dict[str, str]: The agent reference (name, version, and type) to use.
        """
        agent_name = self.agent_name or "UnnamedAgent"

        # If no agent_version is provided, either use the latest version or create a new agent:
        if self.agent_version is None:
            # Try to use the latest version if requested and the agent exists
            if self.use_latest_version:
                try:
                    existing_agent = await self.project_client.agents.get(agent_name)
                    self.agent_name = existing_agent.name
                    self.agent_version = existing_agent.versions.latest.version
                    return {"name": self.agent_name, "version": self.agent_version, "type": "agent_reference"}
                except ResourceNotFoundError:
                    # Agent doesn't exist, fall through to creation logic
                    pass

            if "model" not in run_options or not run_options["model"]:
                raise ServiceInitializationError(
                    "Model deployment name is required for agent creation, "
                    "can also be passed to the get_response methods."
                )

            args: dict[str, Any] = {"model": run_options["model"]}

            if "tools" in run_options:
                args["tools"] = run_options["tools"]

            if "response_format" in run_options:
                response_format = run_options["response_format"]
                args["text"] = PromptAgentDefinitionText(
                    format=ResponseTextFormatConfigurationJsonSchema(
                        name=response_format.__name__,
                        schema=response_format.model_json_schema(),
                    )
                )

            # Combine instructions from messages and options
            combined_instructions = [
                instructions
                for instructions in [messages_instructions, run_options.get("instructions")]
                if instructions
            ]
            if combined_instructions:
                args["instructions"] = "".join(combined_instructions)

            created_agent = await self.project_client.agents.create_version(
                agent_name=agent_name, definition=PromptAgentDefinition(**args)
            )

            self.agent_name = created_agent.name
            self.agent_version = created_agent.version

        return {"name": agent_name, "version": self.agent_version, "type": "agent_reference"}

    async def _close_client_if_needed(self) -> None:
        """Close the project_client session if we created it."""
        if self._should_close_client:
            await self.project_client.close()

    def _prepare_input(self, messages: MutableSequence[ChatMessage]) -> tuple[list[ChatMessage], str | None]:
        """Prepare input from messages and convert system/developer messages to instructions."""
        result: list[ChatMessage] = []
        instructions_list: list[str] = []
        instructions: str | None = None

        # System/developer messages are turned into instructions, since there are no such message roles in Azure AI.
        for message in messages:
            if message.role.value in ["system", "developer"]:
                for text_content in [content for content in message.contents if isinstance(content, TextContent)]:
                    instructions_list.append(text_content.text)
            else:
                result.append(message)

        if len(instructions_list) > 0:
            instructions = "".join(instructions_list)

        return result, instructions

    async def prepare_options(
        self, messages: MutableSequence[ChatMessage], chat_options: ChatOptions
    ) -> dict[str, Any]:
        chat_options.store = bool(chat_options.store or chat_options.store is None)
        prepared_messages, instructions = self._prepare_input(messages)
        run_options = await super().prepare_options(prepared_messages, chat_options)
        agent_reference = await self._get_agent_reference_or_create(run_options, instructions)

        run_options["extra_body"] = {"agent": agent_reference}

        conversation_id = chat_options.conversation_id or self.conversation_id

        # Handle different conversation ID formats
        if conversation_id:
            if conversation_id.startswith("resp_"):
                # For response IDs, set previous_response_id and remove the conversation property
                run_options.pop("conversation", None)
                run_options["previous_response_id"] = conversation_id
            elif conversation_id.startswith("conv_"):
                # For conversation IDs, set conversation and remove the previous_response_id property
                run_options.pop("previous_response_id", None)
                run_options["conversation"] = conversation_id

        # Remove properties that are not supported at the request level
        # but were configured at the agent level
        exclude = ["model", "tools", "response_format"]

        for property in exclude:
            run_options.pop(property, None)

        return run_options

    async def initialize_client(self) -> None:
        """Initialize the OpenAI client asynchronously."""
        self.client = await self.project_client.get_openai_client()  # type: ignore

    def _update_agent_name(self, agent_name: str | None) -> None:
        """Update the agent name in the chat client.

        Args:
            agent_name: The new name for the agent.
        """
        # This is a no-op in the base class, but can be overridden by subclasses
        # to update the agent name in the client.
        if agent_name and not self.agent_name:
            self.agent_name = agent_name

    def get_mcp_tool(self, tool: HostedMCPTool) -> Any:
        """Get an MCP tool from a HostedMCPTool."""
        mcp = MCPTool(server_label=tool.name.replace(" ", "_"), server_url=str(tool.url))

        if tool.allowed_tools:
            mcp["allowed_tools"] = list(tool.allowed_tools)

        if tool.approval_mode:
            match tool.approval_mode:
                case str():
                    mcp["require_approval"] = "always" if tool.approval_mode == "always_require" else "never"
                case _:
                    if always_require_approvals := tool.approval_mode.get("always_require_approval"):
                        mcp["require_approval"] = {"always": {"tool_names": list(always_require_approvals)}}
                    if never_require_approvals := tool.approval_mode.get("never_require_approval"):
                        mcp["require_approval"] = {"never": {"tool_names": list(never_require_approvals)}}

        return mcp

    def get_conversation_id(
        self, response: OpenAIResponse | ParsedResponse[BaseModel], store: bool | None
    ) -> str | None:
        """Get the conversation ID from the response if store is True."""
        if store:
            # If a conversation ID exists, it means that we operate with a conversation,
            # so we use the conversation ID as input and output.
            if response.conversation and response.conversation.id:
                return response.conversation.id
            # If a conversation ID doesn't exist, we operate with responses,
            # so we use the response ID as input and output.
            return response.id
        return None
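
To make the approval-mode handling concrete, here is an illustrative sketch of the HostedMCPTool-to-MCPTool translation that get_mcp_tool above performs. The tool label, URL, and tool names are placeholders, and the HostedMCPTool keyword names are assumed to match the attributes the method reads (name, url, allowed_tools, approval_mode).

from agent_framework import HostedMCPTool

hosted = HostedMCPTool(
    name="docs search",  # placeholder label; spaces become underscores in the server_label
    url="https://example.com/mcp",  # placeholder MCP server URL
    approval_mode={"always_require_approval": ["lookup_docs"]},  # placeholder tool name
)

# Following the branches above, get_mcp_tool(hosted) should produce an
# azure.ai.projects.models.MCPTool equivalent to:
#   server_label     = "docs_search"
#   server_url       = "https://example.com/mcp"
#   require_approval = {"always": {"tool_names": ["lookup_docs"]}}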

python/packages/azure-ai/pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ classifiers = [
 ]
 dependencies = [
     "agent-framework-core",
-    "azure-ai-projects >= 1.0.0b11",
+    "azure-ai-projects >= 2.0.0b1",
     "azure-ai-agents == 1.2.0b5",
     "aiohttp",
 ]
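
Because the client targets the 2.x beta line of azure-ai-projects, a runtime sanity check such as the following sketch can catch stale environments; the packaging dependency used here is an assumption for illustration, not something this commit adds.

from importlib.metadata import version

from packaging.version import Version

# Fail fast if the installed SDK predates the floor raised in this diff.
if Version(version("azure-ai-projects")) < Version("2.0.0b1"):
    raise RuntimeError("AzureAIClient requires azure-ai-projects >= 2.0.0b1")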
