Conversation

@Short-Hop commented Jan 5, 2026

Pulls the latest LangChainGo changes into our fork.

The only merge conflict that came up was the Go version update to 1.24.4.

@vendasta/dont-ask-lets-experiment

tmc and others added 8 commits October 15, 2025 14:45
* vectorstores/milvus: complete migration to new SDK v2 client (tmc#1397)

Complete the migration from the archived milvus-sdk-go/v2 to the new
github.com/milvus-io/milvus/client/v2 SDK as tracked in issue tmc#1397.

**New Implementation**:
- Add complete vectorstores/milvus/v2/ package with new SDK client
- Implement all core vectorstore operations (Add, Search, Delete, etc.)
- Add comprehensive test suite with unit and integration tests
- Include migration example and documentation

**Key Changes**:
- New milvus.go with updated client initialization and operations
- Updated options.go with v2 SDK configuration patterns
- Added example_migration.go demonstrating upgrade path
- Comprehensive README.md with migration guide
- Updated go.mod/go.sum with new SDK dependencies

**Documentation**:
- Add docs/package-lock.json for documentation build dependencies
- Provide clear migration path from v1 to v2 implementation
- Maintain backward compatibility where possible

This completes the migration work started in earlier commits and provides
a full replacement for the deprecated SDK while maintaining the same
vectorstore interface.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>

* style: apply gofmt formatting to v2 package files

* fix: remove unused async field from v2 Store struct

---------

Co-authored-by: Claude <[email protected]>
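
Since the commit stresses that the v2 package keeps the same vectorstore interface, a minimal usage sketch might look like the following. The `milvus/v2` import path follows the commit description, but the constructor and option names are assumptions; the `AddDocuments`/`SimilaritySearch` calls are the standard `vectorstores` interface.

```go
package main

import (
	"context"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/schema"
	milvus "github.com/tmc/langchaingo/vectorstores/milvus/v2" // path per the commit; assumed
)

func main() {
	ctx := context.Background()

	llm, err := openai.New() // reads OPENAI_API_KEY from the environment
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Constructor and options are assumptions based on the commit description;
	// the point is that the returned store still satisfies vectorstores.VectorStore.
	store, err := milvus.New(ctx, milvus.WithEmbedder(embedder))
	if err != nil {
		log.Fatal(err)
	}

	// Same call sites as the v1 store: Add, then Search.
	if _, err := store.AddDocuments(ctx, []schema.Document{
		{PageContent: "Milvus is a vector database."},
	}); err != nil {
		log.Fatal(err)
	}
	docs, err := store.SimilaritySearch(ctx, "what is Milvus?", 1)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(docs[0].PageContent)
}
```

If the v2 package holds to this shape, migrating callers should only need to swap the constructor, not their Add/Search call sites.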
* llms/anthropic: add streaming thinking/reasoning token support (tmc#1418)

Implement StreamingReasoningFunc support in the Anthropic client to enable
real-time streaming of thinking tokens during extended thinking responses.

Changes:
- Add StreamingReasoningFunc field to messagePayload, MessageRequest structs
- Modify handleThinkingDelta() to call StreamingReasoningFunc when thinking
  chunks arrive during streaming
- Wire up StreamingReasoningFunc from llms.CallOptions through to the
  Anthropic client payload
- Update setMessageDefaults to enable streaming when StreamingReasoningFunc
  is provided

This follows the same pattern as the OpenAI client (chat.go:638-663) and
enables thinking tokens to stream in real time at the beginning of the
response, rather than appearing only after the response completes.

Fixes an issue where thinking_delta events were not invoking the streaming
reasoning callback, which made thinking content available only after the
response completed.
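
A rough caller-side sketch of the feature, assuming a `llms.WithStreamingReasoningFunc` helper that populates the `StreamingReasoningFunc` field on `llms.CallOptions` (the helper name and model string are assumptions; enabling extended thinking may require additional Anthropic options):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/anthropic"
)

func main() {
	ctx := context.Background()

	// Model string is illustrative; extended thinking may need extra options.
	llm, err := anthropic.New(anthropic.WithModel("claude-3-7-sonnet-20250219"))
	if err != nil {
		log.Fatal(err)
	}

	_, err = llm.GenerateContent(ctx,
		[]llms.MessageContent{
			llms.TextParts(llms.ChatMessageTypeHuman, "Why is the sky blue?"),
		},
		// With this change, reasoning chunks arrive via the first argument as
		// the model thinks, while ordinary text chunks arrive via the second.
		llms.WithStreamingReasoningFunc(func(ctx context.Context, reasoningChunk, chunk []byte) error {
			if len(reasoningChunk) > 0 {
				fmt.Printf("[thinking] %s", reasoningChunk)
			}
			fmt.Printf("%s", chunk)
			return nil
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```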
* llms/openai: sanitize HTTP errors to prevent API key exposure (tmc#1393)

Fix a security issue where context deadline errors could expose API keys and
sensitive request details in error messages. Added a sanitizeHTTPError
function that detects context timeouts and network errors and returns generic
error messages instead (a sketch of the technique follows the change list).

Changes:
- Added sanitizeHTTPError() function to sanitize HTTP client errors
- Updated chat.go to use sanitizeHTTPError() for http.Do() errors
- Updated embeddings.go to use sanitizeHTTPError() for http.Do() errors
- Added comprehensive test cases to prevent regression
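
A minimal sketch of the technique, using only the standard library; this illustrates the idea rather than reproducing the actual function in chat.go/embeddings.go:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net"
	"net/http"
	"time"
)

// sanitizeHTTPError maps timeout and network failures to generic errors so
// the underlying *url.Error (which carries the full request URL and other
// request details) never reaches the caller verbatim. Sketch only; the name
// matches the commit message, but the body is illustrative.
func sanitizeHTTPError(err error) error {
	if err == nil {
		return nil
	}
	if errors.Is(err, context.DeadlineExceeded) {
		return errors.New("request timed out")
	}
	var netErr net.Error
	if errors.As(err, &netErr) && netErr.Timeout() {
		return errors.New("request timed out")
	}
	return errors.New("http request failed")
}

func main() {
	client := &http.Client{Timeout: time.Nanosecond} // force a timeout
	req, err := http.NewRequest(http.MethodGet, "https://api.example.com/v1?key=SECRET", nil)
	if err != nil {
		panic(err)
	}
	_, err = client.Do(req)
	fmt.Println(sanitizeHTTPError(err)) // generic message; no URL, no key
}
```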

* agents: fix ChainCallOption silent failure (tmc#1416)

Fix issue where ChainCallOption parameters were silently ignored by Executor.Call() and Agent implementations.

Changes:
- Updated Agent.Plan() interface signature to accept variadic ChainCallOption parameters
- Updated Executor.Call() to accept and propagate options to Agent.Plan()
- Updated Executor.doIteration() to propagate options through the chain
- Updated OneShotZeroAgent.Plan() to accept and pass options to chains.Predict()
- Updated ConversationalAgent.Plan() to accept and pass options to chains.Predict()
- Updated OpenAIFunctionsAgent.Plan() to accept and pass options to LLM.GenerateContent()
- Exported GetLLMCallOptions() function for option conversion (was getLLMCallOptions)
- Updated test mock to match new Agent interface signature

Now users can pass LLM configuration options (temperature, max tokens, etc.) through executors to agents.
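
With the new signatures, call-site options should flow through `Executor.Call()` into `Agent.Plan()`. A hedged sketch of what that enables (constructor signatures have varied across langchaingo versions):

```go
package main

import (
	"context"
	"log"

	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/tools"
)

func main() {
	ctx := context.Background()

	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	agent := agents.NewOneShotAgent(llm, []tools.Tool{})
	executor := agents.NewExecutor(agent)

	// Before this fix, these ChainCallOptions were silently dropped;
	// now they propagate through Executor.Call into Agent.Plan.
	out, err := chains.Run(ctx, executor, "What is 2 + 2?",
		chains.WithTemperature(0.2),
		chains.WithMaxTokens(256),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(out)
}
```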
@Short-Hop Short-Hop marked this pull request as ready for review January 5, 2026 17:53
@Short-Hop Short-Hop merged commit 9b8de4c into main Jan 6, 2026
163 checks passed