Checked other resources
Example Code
/**
 * Dev script for testing Gemini's built-in code execution tool.
 * Invokes the model with a prompt that requires computation and saves the raw
 * AIMessage JSON to a timestamped file in this directory.
 */
import { HumanMessage, type MessageOutputVersion } from "@langchain/core/messages";
import { ChatGoogle } from "@langchain/google/node";

const MODEL = "gemini-2.5-flash";
const PROMPT = "Render the normal distribution of IQ scores for me";

async function runCodeExecution(outputVersion: MessageOutputVersion): Promise<void> {
  const llm = new ChatGoogle({
    model: MODEL,
    outputVersion,
  }).bindTools([
    {
      codeExecution: {},
    },
  ]);

  console.log(`Invoking model (outputVersion="${outputVersion}") with: "${PROMPT}"`);
  const result = await llm.invoke([new HumanMessage(PROMPT)]);

  const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
  const outPath = `${import.meta.dir}/gemini-code-execution-${timestamp}-${outputVersion}.json`;
  await Bun.write(outPath, JSON.stringify(result.toJSON(), null, 2));
  console.log(`Result saved to: ${outPath}`);
}

// await runCodeExecution("v0");
await runCodeExecution("v1");
Error Message and Stack Trace (if applicable)
No response
Description
I'm attaching three example outputs of the script above.
The issue becomes doubly problematic if the model output includes inlined data such as images.
To spell out the problem: with outputVersion: "v1", the .toJSON() method saves the content of the AIMessage / AIMessageChunk under all four of the following paths:
.kwargs > .lc_kwargs > .lc_kwargs > .content
.kwargs > .lc_kwargs > .contentBlocks
.kwargs > .content
.kwargs > .content_blocks
If you serialize messages directly into a DB store, then whether you use .toJSON(), .toDict(), or mapChatMessagesToStoredMessages, the content is inlined four times, even if you filter out inline base64 data.
There is a chance this is a bug specific to @langchain/google, so someone should try to reproduce it with other providers.
gemini-code-execution-2026-03-31T12-40-05-527Z-v1.json
gemini-code-execution-2026-03-31T12-37-09-729Z-v1.json
gemini-code-execution-2026-03-31T12-36-32-889Z-v0.json
System Info
Package: @langchain/google
Version: 0.1.9
Companion: @langchain/core 1.1.37 (peerDependency enforced)
Runtime: bun 1.3.11
Platform: Ubuntu 25.10 (Linux 6.17.0-14-generic)