
Conversation


@markshao markshao commented Nov 1, 2025

Fixes #396.

@mdrxy mdrxy changed the title bufix: add bind_tools impl for MoonshotChat fix(chat_models): add bind_tools impl for MoonshotChat Nov 10, 2025
@github-actions github-actions bot added the fix label Nov 10, 2025
TBice123123 and others added 14 commits November 22, 2025 23:30
…streaming output is enabled. (langchain-ai#111)

The latest Alibaba Cloud Qwen models, such as Qwen-Plus-latest and
Qwen3-235B-A22B, must use `incremental_output` for streaming output in
reasoning mode. However, issues arise when `parallel_tool_calls` is used
at the same time. The sample code is as follows:

```python
import asyncio
from langchain_community.chat_models import ChatTongyi
from langchain_core.tools import tool


@tool
def get_weather(city: str):
    """Get the weather of a city"""
    return f"{city} is sunny"


model = ChatTongyi(
    model="qwen-plus-latest",
    streaming=True,
    model_kwargs={"incremental_output": True, "enable_thinking": True},
)  # pyright:ignore

model = model.bind_tools([get_weather])

print(
    model.invoke(
        "Check the weather in San Francisco and New York", parallel_tool_calls=True
    )
)

```



![image](https://github.com/user-attachments/assets/8ed24300-aa37-43c7-b781-fa96b8005a38)


The following output will be generated, showing that there is a problem
with tool_calls.


```text
 tool_calls=[{'name': 'get_weatherget_weather', 'args': {'city': 'San Francisco'}, 'id': 'call_4835d51c1d444b0886ade5call_5a2c75ffc0ca4beb9a9eef', 'type': 'tool_call'}]
```

The output should contain two tool calls, but there is only one, and
even that one is malformed: the same field from the two tool calls has
been merged (note the concatenated name and the fused id). This is
clearly a bug.


The cause is that the source code uses the array subscript as the
tool-call index, but with incremental streaming output each chunk
arrives in a one-element list, so this subscript is almost always 0.
In other words, this code does not handle incremental streaming output
correctly.


The BaiLian Qwen documentation already states that the API returns the
corresponding index:
https://help.aliyun.com/zh/model-studio/deep-thinking


![image](https://github.com/user-attachments/assets/172432c8-ef13-450d-a6fb-68e16c7b6dae)


Therefore, the following modifications were made, resulting in this PR.
```python
tool_calls.append(
    {
        "name": value["function"].get("name"),
        "args": value["function"].get("arguments"),
        "id": value.get("id"),
        # Prefer the index reported by Tongyi; fall back to the
        # list subscript when the response does not include one
        "index": value.get("index", index),
    }
)
```
After this correction, the tool calls are parsed successfully.

![image](https://github.com/user-attachments/assets/c35af748-7396-42f4-bb05-f601464ee2f0)

---------

Co-authored-by: Mason Daugherty <[email protected]>
Co-authored-by: Mason Daugherty <[email protected]>
…angchain-ai#145)

This PR enables injecting a
[`WebClient`](https://tools.slack.dev/python-slack-sdk/api-docs/slack_sdk/web/client.html)
instance into all tools within the Slack toolkit. This simplifies usage
for agents that need to work with [user
tokens](https://api.slack.com/concepts/token-types#user).
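The change is plain dependency injection: each tool receives the client in its constructor instead of building one from an environment token. A minimal, self-contained sketch of the pattern (the class names here are hypothetical stand-ins, not the toolkit's real classes):

```python
class FakeWebClient:
    """Stand-in for slack_sdk.WebClient; just records the token it was built with."""

    def __init__(self, token: str):
        self.token = token


class SendMessageTool:
    """One tool from a hypothetical toolkit; uses whatever client it is given."""

    def __init__(self, client: FakeWebClient):
        self.client = client


class SlackToolkitSketch:
    """Builds every tool with one shared, injected client."""

    def __init__(self, client: FakeWebClient):
        self.client = client

    def get_tools(self):
        return [SendMessageTool(self.client)]


# A client authenticated with a user token (xoxp-...) rather than a bot token:
user_client = FakeWebClient(token="xoxp-user-token")
toolkit = SlackToolkitSketch(client=user_client)
tools = toolkit.get_tools()
print(tools[0].client is user_client)  # True -- every tool reuses the injected client
```

The benefit is that agents needing user-token semantics configure the client once, instead of each tool constructing its own from environment variables.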

---------

Co-authored-by: Mason Daugherty <[email protected]>
…langchain-ai#104)

## Description
This PR enhances the `PlaywrightURLLoader` by adding configurable
timeout and page load strategy options, making it more flexible for
handling dynamic web pages. This addresses issue langchain-ai#103.

### Changes
- Added `timeout` parameter (default: 30000ms) to control page
navigation timeout
- Added `wait_until` parameter to control when navigation is considered
complete
- Supported `wait_until` options:
  - `"load"` (default): wait for the "load" event
  - `"domcontentloaded"`: wait for the "DOMContentLoaded" event
  - `"networkidle"`: wait until there are no network connections for at least 500ms
  - `"commit"`: wait for the first network request to be sent

### Why
The current implementation has a hardcoded 30-second timeout, which can
be insufficient for heavy dynamic pages. This change allows users to:
- Set longer timeouts for complex pages
- Choose appropriate page load strategies based on their needs
- Better handle dynamic content loading

### Real-World Examples
This PR solves timeout issues with various types of websites:

1. Weather websites:
```python
loader = PlaywrightURLLoader(
    urls=["https://weather.com/en-IN/weather/tenday/l/Chennai+Tamil+Nadu?canonicalCityId=251b7b4afedf19f747b425e048038eb1"],
    timeout=60000,  # 60 second timeout
    wait_until="domcontentloaded"
)
```

2. Dynamic news sites:
```python
loader = PlaywrightURLLoader(
    urls=["https://www.reuters.com/markets/"],
    timeout=45000,
    wait_until="networkidle"
)
```

3. E-commerce sites:
```python
loader = PlaywrightURLLoader(
    urls=["https://www.amazon.com/dp/B08N5KWB9H"],
    timeout=90000,  # 90 second timeout for complex product pages
    wait_until="load"
)
```

### Testing
- Added new test cases for both sync and async methods
- Maintained backward compatibility
- All existing tests pass
- Tested with various real-world websites

### Related Issues
Closes langchain-ai#103

---------

Co-authored-by: Parth Pathak <[email protected]>
Co-authored-by: Mason Daugherty <[email protected]>
Co-authored-by: Mason Daugherty <[email protected]>
…in-ai#46)

In the current implementation of the method `_get_child_links_recursive`,
the `requests.get` call does not accept `verify` as a parameter, so
users cannot disable SSL certificate verification when needed. This
change exposes the `verify` parameter as a configurable argument to the
method, defaulting to `True` for safety but allowing users to override
it when necessary.
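The change amounts to threading a `verify` flag from the loader down to the HTTP call. A minimal sketch of the pattern, with a stubbed fetch function standing in for `requests.get` (the function names here are illustrative, not the loader's real internals):

```python
def get_child_links(url, fetch, verify=True):
    """Fetch a page, passing `verify` through to the HTTP layer.

    `fetch` stands in for requests.get. Defaulting verify to True keeps
    SSL certificate checking on unless the caller explicitly opts out.
    """
    return fetch(url, verify=verify)


# Stub that records what the HTTP layer would have received:
calls = []

def fake_get(url, verify=True):
    calls.append((url, verify))
    return "<html></html>"


get_child_links("https://example.com", fake_get)                # verify=True by default
get_child_links("https://example.com", fake_get, verify=False)  # explicit opt-out
print(calls)
```

Keeping `True` as the default preserves existing behavior, so only callers who deliberately pass `verify=False` skip certificate checks.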


https://github.com/langchain-ai/langchain-community/blob/bc87773064735e649cfd798185502e156d5e948a/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L376-L377

---------

Co-authored-by: Mason Daugherty <[email protected]>
Fixes inconsistent behaviour across `add_embeddings` and
`aadd_embeddings` for Azure Search.

Fixes langchain-ai#417
…ai#416)

# feat(vectorstores): add routing support for hybrid search

## Summary

This PR adds routing parameter support for hybrid search
(`HYBRID_SEARCH`) in the `similarity_search_with_score` method of the
`OpenSearchVectorSearch` class.

## Changes

### Modified Files
-
`libs/community/langchain_community/vectorstores/opensearch_vector_search.py`

### Change Details

Modified the hybrid search processing within the
`similarity_search_with_score` method to add the `routing` parameter to
the request parameters when `self.routing` is set.

**Before:**
```python
response = self.client.transport.perform_request(
    method="GET", url=path, body=payload
)
```

**After:**
```python
request_args: Dict[str, Any] = {
    "method": "GET",
    "url": path,
    "body": payload,
}
if self.routing:
    request_args["params"] = {"routing": self.routing}

response = self.client.transport.perform_request(**request_args)
```
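The conditional can be read as a small pure function over the request arguments. A dependency-free sketch (the `build_request_args` helper is hypothetical; the PR inlines this logic in `similarity_search_with_score`):

```python
from typing import Any, Dict, Optional


def build_request_args(path: str, payload: dict, routing: Optional[str]) -> Dict[str, Any]:
    """Assemble perform_request kwargs, adding routing only when it is set."""
    request_args: Dict[str, Any] = {
        "method": "GET",
        "url": path,
        "body": payload,
    }
    if routing:
        # Forward the shard-routing hint as a query parameter
        request_args["params"] = {"routing": routing}
    return request_args


# Without routing the request is unchanged, preserving backward compatibility:
print("params" in build_request_args("/idx/_search", {}, None))  # False

# With routing set, the hint is forwarded to OpenSearch:
print(build_request_args("/idx/_search", {}, "tenant-42")["params"])  # {'routing': 'tenant-42'}
```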

## Background

OpenSearch's routing feature is used to route requests to specific
shards, which helps optimize performance and improve data locality.

In the existing code, routing was already supported for
`APPROXIMATE_SEARCH`, `SCRIPT_SCORING_SEARCH`, and
`PAINLESS_SCRIPTING_SEARCH` search types (lines 1320-1322), but it was
not supported for hybrid search (`HYBRID_SEARCH`).

This change ensures that routing functionality is consistently available
across all search types.

## Compatibility

- **Backward Compatibility**: No impact on existing APIs. When the
`routing` parameter is not set, the behavior remains unchanged.
- **Breaking Changes**: None
@markshao
Author

@mdrxy I have fixed the lint problem.
