Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration — Configuration used: defaults | Review profile: CHILL | Plan: Pro | Run ID:
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (6)
✅ Files skipped from review due to trivial changes (4)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough

Adds Azure AI Foundry support: env examples, settings fields and provider enum update, Azure LLM factory selecting API key or DefaultAzureCredential, test import-mocking for Azure modules, and a runtime dependency on `langchain-azure-ai`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LLMService as "LLM Service\n(minitap/mobile_use/services/llm.py)"
    participant Settings as "Settings\n(minitap/mobile_use/config.py)"
    participant AzureSDK as "langchain_azure_ai\nAzureAIOpenAIApiChatModel"
    participant Creds as "Credentials\n(DefaultAzureCredential / API Key)"
    Client->>LLMService: request LLM(provider="azure", model)
    LLMService->>Settings: read AZURE_BASE_URL, AZURE_API_KEY
    alt AZURE_API_KEY set
        Settings-->>LLMService: api key present
        LLMService->>AzureSDK: instantiate with api_key
    else AZURE_API_KEY missing
        Settings-->>LLMService: no api key
        LLMService->>Creds: import DefaultAzureCredential()
        Creds-->>LLMService: credential instance
        LLMService->>AzureSDK: instantiate with DefaultAzureCredential + endpoint
    end
    AzureSDK-->>LLMService: chat model ready
    LLMService-->>Client: returns configured chat model
```
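The credential selection in the diagram can be sketched in plain Python. This is an illustrative stand-in, not the repo's actual factory: `select_azure_auth` and its return dict are hypothetical, and the real code would instantiate `AzureAIOpenAIApiChatModel` with the chosen credential instead of returning a description.

```python
def select_azure_auth(env: dict) -> dict:
    """Mirror the diagram's logic: prefer an API key when present,
    otherwise fall back to DefaultAzureCredential (sketch only)."""
    base_url = env.get("AZURE_BASE_URL")
    if not base_url:
        raise ValueError("AZURE_BASE_URL must be set for the azure provider")
    api_key = env.get("AZURE_API_KEY")
    if api_key:
        # Key-based auth against the OpenAI-compatible endpoint.
        return {"endpoint": base_url, "credential": "api_key"}
    # No key configured: defer to Microsoft Entra ID via DefaultAzureCredential.
    return {"endpoint": base_url, "credential": "default_azure_credential"}
```

The point of the sketch is that the branch is decided purely by whether `AZURE_API_KEY` is set; everything downstream receives the same endpoint either way.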
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
afad776 to eb03d80 (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (2)
minitap/mobile_use/services/llm.py (2)
7-7: Top-level import adds startup cost even when Azure isn't used.

This unconditional import means `langchain_azure_ai` is loaded at module import time, even for users who only use other providers. Consider lazy importing inside `get_azure_llm()`, similar to how `DefaultAzureCredential` is handled, especially if this package has heavy dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@minitap/mobile_use/services/llm.py` at line 7, The top-level import of AzureAIOpenAIApiChatModel causes langchain_azure_ai to load on module import; move that import into get_azure_llm() and perform a lazy import there (similar to how DefaultAzureCredential is handled) so the module is only imported when Azure is actually used; update get_azure_llm() to import AzureAIOpenAIApiChatModel locally and preserve existing error handling and credential logic.
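A minimal sketch of the suggested lazy import. The signature, import path, and error message are illustrative, not the repo's actual code; the real function would go on to instantiate and return the chat model:

```python
def get_azure_llm(model: str, temperature: float = 0.7):
    # Import only when the azure provider is actually selected, so users of
    # other providers don't pay the langchain_azure_ai import cost at startup.
    try:
        from langchain_azure_ai.chat_models import AzureAIOpenAIApiChatModel
    except ImportError as exc:
        raise RuntimeError(
            "provider 'azure' requires the langchain-azure-ai package"
        ) from exc
    # The real implementation would instantiate the model here with the
    # endpoint, credential, and temperature; returning the class is a stub.
    return AzureAIOpenAIApiChatModel
```

A side benefit of the try/except wrapper is a clear, provider-specific error when the optional dependency is missing, instead of a bare ImportError at module load.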
243-257: Consider using `elif` for consistent control flow.

The provider dispatch switches from `elif` to standalone `if` statements for `azure` and `minitap`. While this works because earlier matches return early, the inconsistent style makes the flow harder to reason about. The `else` clause only pairs with the final `if`, which could confuse readers.

Proposed refactor for a consistent elif chain:

```diff
     elif llm.provider == "xai":
         return get_grok_llm(llm.model, temperature)
-    if llm.provider == "azure":
+    elif llm.provider == "azure":
         return get_azure_llm(llm.model, temperature)
-    if llm.provider == "minitap":
+    elif llm.provider == "minitap":
         remote_tracing = False
         if ctx.execution_setup:
             remote_tracing = ctx.execution_setup.enable_remote_tracing
         return get_minitap_llm(
             trace_id=ctx.trace_id,
             remote_tracing=remote_tracing,
             model=llm.model,
             temperature=temperature,
             api_key=ctx.minitap_api_key,
         )
     else:
         raise ValueError(f"Unsupported provider: {llm.provider}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@minitap/mobile_use/services/llm.py` around lines 243 - 257, The provider dispatch uses a standalone second `if` which breaks the intended mutually-exclusive chain; update the conditional in the function handling `llm.provider` so the `minitap` branch is `elif llm.provider == "minitap":` (keeping the remote_tracing extraction from `ctx.execution_setup`, and the returns calling `get_azure_llm` and `get_minitap_llm` with `ctx.trace_id` and `ctx.minitap_api_key`) so the final `else` correctly handles unsupported providers.
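The intended mutually-exclusive chain can be shown with a stripped-down dispatcher. The provider names come from the PR; the string returns are stubs standing in for the real factory calls:

```python
def get_llm(provider: str, model: str) -> str:
    # Stubs standing in for get_grok_llm / get_azure_llm / get_minitap_llm.
    if provider == "xai":
        return f"grok:{model}"
    elif provider == "azure":
        return f"azure:{model}"
    elif provider == "minitap":
        return f"minitap:{model}"
    else:
        # With a proper elif chain, this branch fires for every unmatched
        # provider, not only when the last standalone `if` happens to fail.
        raise ValueError(f"Unsupported provider: {provider}")
```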
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@minitap/mobile_use/agents/outputter/test_outputter.py`:
- Line 13: The test currently inserts a mock only for
sys.modules["langchain_azure_ai.chat_models"] which can fail because Python
imports the parent package first; update the setup to also insert a Mock for
sys.modules["langchain_azure_ai"] before setting
sys.modules["langchain_azure_ai.chat_models"] (mirroring the pattern used for
the other LangChain top-level mocks on lines above) so the parent package exists
and import-time side effects are avoided.
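The fix described above can be sketched as follows; the mocked module names come from the review comment, and `MagicMock` is the usual stand-in for this pattern:

```python
import sys
from unittest.mock import MagicMock

# Register the parent package first: Python resolves "langchain_azure_ai"
# before "langchain_azure_ai.chat_models", so a missing parent entry could
# trigger a real (and possibly failing) import of the distribution.
sys.modules["langchain_azure_ai"] = MagicMock()
sys.modules["langchain_azure_ai.chat_models"] = MagicMock()

# The import under test now resolves entirely against the mocks.
from langchain_azure_ai.chat_models import AzureAIOpenAIApiChatModel
```

Because both entries are in `sys.modules`, the import machinery never consults the filesystem, so the test runs even when `langchain-azure-ai` is not installed.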
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ccce16f9-292e-43c4-9424-079d725f8896
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- .env.example
- llm-config.override.template.jsonc
- minitap/mobile_use/agents/outputter/test_outputter.py
- minitap/mobile_use/config.py
- minitap/mobile_use/services/llm.py
- pyproject.toml
eb03d80 to efd50b3 (Compare)
efd50b3 to efca1f4 (Compare)
🚀 What's new?
Adds Azure AI Foundry as a new LLM provider (`azure`), enabling models deployed on Azure to be used for any agent node via `llm-config.override.jsonc`.

Changes

- `config.py` — Added `"azure"` to `LLMProvider`, new `AZURE_API_KEY` and `AZURE_BASE_URL` settings, provider validation
- `llm.py` — Added `get_azure_llm()` using `AzureAIOpenAIApiChatModel` from `langchain-azure-ai`, wired into `get_llm()` dispatch
- `.env.example` — Documented Azure env vars with endpoint format for both auth methods

Auth support
- API key: set `AZURE_API_KEY` and `AZURE_BASE_URL` pointing to the OpenAI-compatible endpoint
- Entra ID: omit `AZURE_API_KEY` and the provider falls back to `DefaultAzureCredential`, using the project endpoint format

What kind of change is this? Mark with an `x`.

✅ Checklist
Before you submit, please make sure you've done the following. If you have any questions, we're here to help!
(`ruff check .` and `ruff format .` pass).

💬 Any questions or comments?
Have a question or need some help? Join us on Discord!
Summary by CodeRabbit
New Features
Documentation
Tests
Chores