fix: preserve Anthropic thinking blocks and signatures in LiteLLM round-trip #4811
giulio-leone wants to merge 4 commits into google:main from
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Response from ADK Triaging Agent: Hello @giulio-leone, thank you for your contribution! Before we can merge this PR, we need you to sign our Contributor License Agreement (CLA). You can find more information and sign the CLA at https://cla.developers.google.com/. Thanks!
Hi @giulio-leone, Thank you for your contribution! It appears you haven't yet signed the Contributor License Agreement (CLA). Please visit https://cla.developers.google.com/ to complete the signing process. Once the CLA is signed, we'll be able to proceed with the review of your PR. Thank you!
Force-pushed 9b92452 to 4b10202
@rohityan Thanks for the heads-up! I'll get the Google CLA signed. Will follow up once it's done.

I have signed the Google CLA. Could you please re-run the CLA check? Thank you!
Force-pushed b3c35fc to 59e6e04
Hi @rohityan — the CLA is now signed and passing ✅ (it was a

Hi @giulio-leone, This PR has merge conflicts that require changes from your end. Could you please rebase your branch with the latest main branch to address these? Once this is complete, please let us know so we can proceed with the review.
Force-pushed 1d6844c to f5f7cc8
Rebased this branch onto the latest main. Local validation after the rebase is clean:

Updated head:
Force-pushed f5f7cc8 to 1ed2e51
Rebased onto current main. Local validation on the rebased branch:

Runtime proof beyond unit tests (real source + installed LiteLLM prompt templates, Bedrock Claude path):

This should leave the PR both fresh and with concrete local proof for the Claude/LiteLLM thinking-block path.
Force-pushed 1ed2e51 to 1dc20fe
Small follow-up: I amended the top commit message only to restore a green CLA check after the force-push. No code changes from the validation comment above. Current head:
Force-pushed 1dc20fe to 6193fad
Refreshed onto the latest main. Fresh local validation on the new head:
fix: preserve Anthropic thinking blocks and signatures in LiteLLM round-trip

When using Claude models through LiteLLM, extended thinking blocks (with signatures) were lost after the first turn because:

1. _extract_reasoning_value() only read reasoning_content (flattened string without signatures), ignoring thinking_blocks
2. _content_to_message_param() set reasoning_content on the outgoing message, which LiteLLM's anthropic_messages_pt() template silently drops

This fix:

- Adds _is_anthropic_provider() helper to detect anthropic/bedrock/vertex_ai providers
- Updates _extract_reasoning_value() to prefer thinking_blocks (with per-block signatures) over reasoning_content
- Updates _convert_reasoning_value_to_parts() to handle ChatCompletionThinkingBlock dicts, preserving thought_signature
- Updates _content_to_message_param() to embed thinking blocks directly in the message content list for Anthropic providers, bypassing the broken reasoning_content path

Fixes google#4801
Cover the model-driven LiteLLM Anthropic round-trip with a regression test.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Force-pushed 6193fad to 2015c6f
Refreshed this branch onto current main. Validation:

Direct behavioral proof against current main:

So the refreshed branch is preserving the Anthropic-family signature in the provider-compatible encoded form, while current
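For concreteness, the provider-compatible encoded form referred to above looks roughly like this. The shape is inferred from the PR description's `{"type": "thinking", ...}` format; all values are placeholders.

```python
# Assistant message whose content list carries the thinking block
# directly, so it survives LiteLLM's anthropic_messages_pt() template.
# Shape inferred from the PR description; values are placeholders.
assistant_message = {
    "role": "assistant",
    "content": [
        {
            "type": "thinking",
            "thinking": "Step-by-step reasoning text...",
            "signature": "EqQB...",  # opaque signature from Claude
        },
        {"type": "text", "text": "Final visible answer."},
    ],
}
```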
Summary
Fixes #4801 — Adaptive thinking is broken when using Claude models through LiteLLM.
Root Cause
When Claude produces extended thinking with `thinking_blocks` (each containing a `type`, `thinking` text, and `signature`), the round-trip through ADK's LiteLLM integration silently loses them:

1. `_extract_reasoning_value()` only read `reasoning_content` (a flattened string without signatures), ignoring the richer `thinking_blocks` field
2. `_content_to_message_param()` set `reasoning_content` on the outgoing `ChatCompletionAssistantMessage`, but LiteLLM's `anthropic_messages_pt()` prompt template silently drops the `reasoning_content` field entirely

Fix
Three coordinated changes in `lite_llm.py`:

- New `_is_anthropic_provider()` helper: detects `anthropic`, `bedrock`, and `vertex_ai` providers
- `_extract_reasoning_value()`: prefers `thinking_blocks` (with per-block signatures) over `reasoning_content`
- `_convert_reasoning_value_to_parts()`: handles `ChatCompletionThinkingBlock` dicts, preserving `thought_signature`
- `_content_to_message_param()`: embeds thinking blocks directly in the message `content` list as `{"type": "thinking", ...}` dicts; this format passes through LiteLLM's `anthropic_messages_pt()` correctly

For non-Anthropic providers (OpenAI, etc.), behavior is unchanged: `reasoning_content` is still used.

Verification
`anthropic_messages_pt()` was tested to confirm:

- `reasoning_content` field → DROPPED (existing LiteLLM bug)
- `content` as a list with `{"type": "thinking", ...}` → PRESERVED ✅

Tests
Added 7 targeted tests covering:

- `_is_anthropic_provider()`: provider detection
- `_extract_reasoning_value()`: prefers `thinking_blocks` over `reasoning_content`
- `_convert_reasoning_value_to_parts()`: signature preservation from block dicts
- `_convert_reasoning_value_to_parts()`: plain string fallback (no signature)
- `_content_to_message_param()` (Anthropic): thinking blocks embedded in content list
- `_content_to_message_param()` (OpenAI): `reasoning_content` field used (unchanged)
- `_content_to_message_param()` (Anthropic): thinking + tool calls combined

Full test suite: 4732 passed, 0 failures
_is_anthropic_provider()— provider detection_extract_reasoning_value()— prefersthinking_blocksoverreasoning_content_convert_reasoning_value_to_parts()— signature preservation from block dicts_convert_reasoning_value_to_parts()— plain string fallback (no signature)_content_to_message_param()— Anthropic: thinking blocks embedded in content list_content_to_message_param()— OpenAI: reasoning_content field used (unchanged)_content_to_message_param()— Anthropic thinking + tool calls combinedFull test suite: 4732 passed, 0 failures