
feat: add azure ai llm provider #191

Open

alpott-cot wants to merge 1 commit into minitap-ai:main from alpott-cot:feat/add-azure-provider

Conversation

@alpott-cot

@alpott-cot alpott-cot commented Mar 19, 2026

🚀 What's new?

Adds Azure AI Foundry as a new LLM provider (azure), enabling models deployed on Azure to be used for any agent node via llm-config.override.jsonc.

Changes

  • config.py — Added "azure" to LLMProvider, new AZURE_API_KEY and
    AZURE_BASE_URL settings, provider validation
  • llm.py — Added get_azure_llm() using AzureAIOpenAIApiChatModel from
    langchain-azure-ai, wired into get_llm() dispatch
  • .env.example — Documented Azure env vars with endpoint format for both auth methods

Auth support

  • API key — Set AZURE_API_KEY and AZURE_BASE_URL pointing to the OpenAI-compatible endpoint
  • Entra ID / managed identity — Omit AZURE_API_KEY and the provider falls back to DefaultAzureCredential, using the project endpoint format
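The fallback between the two auth methods can be sketched as follows. This is a minimal, hypothetical stand-in for illustration only: the real `get_azure_llm()` constructs an `AzureAIOpenAIApiChatModel` from langchain-azure-ai, while the dict returned here is just a placeholder so the sketch runs without Azure dependencies.

```python
def pick_azure_auth(env: dict) -> dict:
    """Sketch of the credential selection described above.

    Returns a plain dict naming which credential path would be used;
    the real implementation returns a configured chat model instead.
    """
    base_url = env.get("AZURE_BASE_URL")
    if not base_url:
        raise ValueError("AZURE_BASE_URL must be set for the azure provider")
    if env.get("AZURE_API_KEY"):
        # API key auth against the OpenAI-compatible endpoint.
        return {"endpoint": base_url, "auth": "api-key"}
    # No key set: fall back to Entra ID / managed identity via
    # DefaultAzureCredential, using the project endpoint format.
    return {"endpoint": base_url, "auth": "DefaultAzureCredential"}
```
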

What kind of change is this? Mark with an x

  • [ ] Bug fix (non-breaking change that solves an issue)
  • [x] New feature (non-breaking change that adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to change)
  • [ ] Documentation (update to the docs)

✅ Checklist

Before you submit, please make sure you've done the following. If you have any questions, we're here to help!

  • I have read the Contributing Guide.
  • My code follows the project's style guidelines (ruff check . and ruff format . pass).
  • I have added necessary documentation (if applicable).

💬 Any questions or comments?

Have a question or need some help? Join us on Discord!

Summary by CodeRabbit

  • New Features

    • Azure AI added as a supported LLM provider with both API-key and managed-identity authentication; provider validation now requires a configured Azure base URL.
  • Documentation

    • .env example updated with optional Azure entries and two example Azure URL formats.
  • Tests

    • Test setup updated to mock Azure-related imports to avoid loading real Azure chat modules.
  • Chores

    • Project updated to include Azure client support.

@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6ed2a3c6-6e19-4d56-be3b-25934f37b913

📥 Commits

Reviewing files that changed from the base of the PR and between efd50b3 and efca1f4.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • .env.example
  • llm-config.override.template.jsonc
  • minitap/mobile_use/agents/outputter/test_outputter.py
  • minitap/mobile_use/config.py
  • minitap/mobile_use/services/llm.py
  • pyproject.toml
✅ Files skipped from review due to trivial changes (4)
  • llm-config.override.template.jsonc
  • pyproject.toml
  • minitap/mobile_use/agents/outputter/test_outputter.py
  • .env.example
🚧 Files skipped from review as they are similar to previous changes (2)
  • minitap/mobile_use/services/llm.py
  • minitap/mobile_use/config.py

📝 Walkthrough

Walkthrough

Adds Azure AI Foundry support: env examples, settings fields and provider enum update, Azure LLM factory selecting API key or DefaultAzureCredential, test import-mocking for Azure modules, and a runtime dependency on langchain-azure-ai.

Changes

  • Environment & Template (.env.example, llm-config.override.template.jsonc): Added commented "Azure AI Foundry" block with AZURE_BASE_URL and optional AZURE_API_KEY; added "azure" to the provider list in the LLM override template.
  • Configuration Model (minitap/mobile_use/config.py): Added AZURE_API_KEY and AZURE_BASE_URL settings fields, added "azure" to the LLMProvider enum, and added provider validation requiring a configured AZURE_BASE_URL.
  • LLM Service (minitap/mobile_use/services/llm.py): Added get_azure_llm(model_name, temperature) constructing AzureAIOpenAIApiChatModel; selects the API key when set, or DefaultAzureCredential otherwise; get_llm() dispatches to Azure for provider "azure".
  • Tests (minitap/mobile_use/agents/outputter/test_outputter.py): Mocked langchain_azure_ai and langchain_azure_ai.chat_models in sys.modules to avoid importing real Azure modules during tests.
  • Dependencies (pyproject.toml): Added runtime dependency langchain-azure-ai>=1.1.1.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant LLMService as "LLM Service\n(minitap/mobile_use/services/llm.py)"
  participant Settings as "Settings\n(minitap/mobile_use/config.py)"
  participant AzureSDK as "langchain_azure_ai\nAzureAIOpenAIApiChatModel"
  participant Creds as "Credentials\n(DefaultAzureCredential / API Key)"

  Client->>LLMService: request LLM(provider="azure", model)
  LLMService->>Settings: read AZURE_BASE_URL, AZURE_API_KEY
  alt AZURE_API_KEY set
    Settings-->>LLMService: api key present
    LLMService->>AzureSDK: instantiate with api_key
  else AZURE_API_KEY missing
    Settings-->>LLMService: no api key
    LLMService->>Creds: import DefaultAzureCredential()
    Creds-->>LLMService: credential instance
    LLMService->>AzureSDK: instantiate with DefaultAzureCredential + endpoint
  end
  AzureSDK-->>LLMService: chat model ready
  LLMService-->>Client: returns configured chat model
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • cguiguet

Poem

🐰 I hopped through env and config fields,
Brought Azure endpoints, keys unsealed,
A chat model bloomed with creds to choose,
Tests stayed quiet, no import blues,
Cloud carrots found — hooray, I squealed! 🥕🐇

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The pull request title 'feat: add azure ai llm provider' directly and clearly describes the main change: adding Azure AI as a new LLM provider to the codebase.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@alpott-cot alpott-cot force-pushed the feat/add-azure-provider branch from afad776 to eb03d80 Compare March 19, 2026 09:31

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
minitap/mobile_use/services/llm.py (2)

7-7: Top-level import adds startup cost even when Azure isn't used.

This unconditional import means langchain_azure_ai is loaded at module import time, even for users who only use other providers. Consider lazy importing inside get_azure_llm() similar to how DefaultAzureCredential is handled, especially if this package has heavy dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@minitap/mobile_use/services/llm.py` at line 7, The top-level import of
AzureAIOpenAIApiChatModel causes langchain_azure_ai to load on module import;
move that import into get_azure_llm() and perform a lazy import there (similar
to how DefaultAzureCredential is handled) so the module is only imported when
Azure is actually used; update get_azure_llm() to import
AzureAIOpenAIApiChatModel locally and preserve existing error handling and
credential logic.
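The lazy-import pattern the comment suggests can be demonstrated with a stdlib stand-in. Here `decimal` plays the role of `langchain_azure_ai.chat_models` purely for illustration; the point is that the import only runs when the provider is actually requested, not when the module is loaded.

```python
import sys


def build_azure_model():
    # Deferred import: the heavy dependency loads on first call, not at
    # module import time. "decimal" stands in for the Azure package here.
    from decimal import Decimal
    return Decimal("1.0")

# Until build_azure_model() is called, the stand-in module stays
# unloaded, so importing this module costs nothing for users of
# other providers.
```
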

243-257: Consider using elif for consistent control flow.

The provider dispatch switches from elif to standalone if statements for azure and minitap. While this works because earlier matches return early, the inconsistent style makes the flow harder to reason about. The else clause only pairs with the final if, which could confuse readers.

Proposed refactor for a consistent elif chain:

```diff
     elif llm.provider == "xai":
         return get_grok_llm(llm.model, temperature)
-    if llm.provider == "azure":
+    elif llm.provider == "azure":
         return get_azure_llm(llm.model, temperature)
-    if llm.provider == "minitap":
+    elif llm.provider == "minitap":
         remote_tracing = False
         if ctx.execution_setup:
             remote_tracing = ctx.execution_setup.enable_remote_tracing
         return get_minitap_llm(
             trace_id=ctx.trace_id,
             remote_tracing=remote_tracing,
             model=llm.model,
             temperature=temperature,
             api_key=ctx.minitap_api_key,
         )
     else:
         raise ValueError(f"Unsupported provider: {llm.provider}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@minitap/mobile_use/services/llm.py` around lines 243 - 257, The provider
dispatch uses a standalone second `if` which breaks the intended
mutually-exclusive chain; update the conditional in the function handling
`llm.provider` so the `minitap` branch is `elif llm.provider == "minitap":`
(keeping the remote_tracing extraction from `ctx.execution_setup`, and the
returns calling `get_azure_llm` and `get_minitap_llm` with `ctx.trace_id` and
`ctx.minitap_api_key`) so the final `else` correctly handles unsupported
providers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@minitap/mobile_use/agents/outputter/test_outputter.py`:
- Line 13: The test currently inserts a mock only for
sys.modules["langchain_azure_ai.chat_models"] which can fail because Python
imports the parent package first; update the setup to also insert a Mock for
sys.modules["langchain_azure_ai"] before setting
sys.modules["langchain_azure_ai.chat_models"] (mirroring the pattern used for
the other LangChain top-level mocks on lines above) so the parent package exists
and import-time side effects are avoided.

---

Nitpick comments:
In `@minitap/mobile_use/services/llm.py`:
- Line 7: The top-level import of AzureAIOpenAIApiChatModel causes
langchain_azure_ai to load on module import; move that import into
get_azure_llm() and perform a lazy import there (similar to how
DefaultAzureCredential is handled) so the module is only imported when Azure is
actually used; update get_azure_llm() to import AzureAIOpenAIApiChatModel
locally and preserve existing error handling and credential logic.
- Around line 243-257: The provider dispatch uses a standalone second `if` which
breaks the intended mutually-exclusive chain; update the conditional in the
function handling `llm.provider` so the `minitap` branch is `elif llm.provider
== "minitap":` (keeping the remote_tracing extraction from
`ctx.execution_setup`, and the returns calling `get_azure_llm` and
`get_minitap_llm` with `ctx.trace_id` and `ctx.minitap_api_key`) so the final
`else` correctly handles unsupported providers.
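The test-mock fix described above can be sketched as follows: registering the parent package in `sys.modules` before the submodule, so the import machinery resolves both to mocks. Module names follow the PR; `MagicMock` stands in for the real Azure classes.

```python
import sys
from unittest.mock import MagicMock

# Register the parent package first, then the submodule, so that
# `from langchain_azure_ai.chat_models import ...` resolves to mocks
# without importing the real Azure SDK at test time.
azure_pkg = MagicMock()
sys.modules["langchain_azure_ai"] = azure_pkg
sys.modules["langchain_azure_ai.chat_models"] = azure_pkg.chat_models

# This import now succeeds even when langchain-azure-ai is not installed.
from langchain_azure_ai.chat_models import AzureAIOpenAIApiChatModel
```

Without the parent-package entry, Python would try to import the real `langchain_azure_ai` package first and fail (or trigger its import-time side effects), which is exactly the failure mode the review comment flags.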

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ccce16f9-292e-43c4-9424-079d725f8896

📥 Commits

Reviewing files that changed from the base of the PR and between 1fd5723 and afad776.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • .env.example
  • llm-config.override.template.jsonc
  • minitap/mobile_use/agents/outputter/test_outputter.py
  • minitap/mobile_use/config.py
  • minitap/mobile_use/services/llm.py
  • pyproject.toml

@alpott-cot alpott-cot force-pushed the feat/add-azure-provider branch from eb03d80 to efd50b3 Compare March 19, 2026 09:44
@alpott-cot alpott-cot force-pushed the feat/add-azure-provider branch from efd50b3 to efca1f4 Compare March 24, 2026 09:48
