
feat(langchain): Record run_name as gen_ai.function_id on Invoke Agent Spans #5926

Open
alexander-alderman-webb wants to merge 9 commits into webb/langchain/tool-pipeline-name from webb/langchain/agent-run-name

Conversation


@alexander-alderman-webb alexander-alderman-webb commented Apr 1, 2026

Description

Set run_name as the gen_ai.function_id attribute instead of gen_ai.agent.name.

Add tests for AgentExecutor.invoke() based on the existing AgentExecutor.stream() test.
Add tests using create_openai_tools_agent().with_config() to cover all branches for extracting the run_name.
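The with_config() branches mentioned above can be sketched as a small fallback chain. This is a hedged illustration only, not the SDK's actual code: extract_function_id and its argument shapes are hypothetical stand-ins for the integration's real extraction logic.

```python
# Hypothetical sketch of the run_name fallback the tests exercise; the real
# extraction lives in the sentry_sdk langchain integration and may differ.
def extract_function_id(run_name=None, config=None):
    """Return the value to record as gen_ai.function_id, or None."""
    if run_name:
        # run_name passed directly to the callback handler
        return run_name
    if config and config.get("run_name"):
        # run_name attached via .with_config({"run_name": ...}) on the agent
        return config["run_name"]
    return None


# An agent built with create_openai_tools_agent(...).with_config(
# {"run_name": "my-agent"}) would hit the second branch:
print(extract_function_id(config={"run_name": "my-agent"}))  # my-agent
```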


@alexander-alderman-webb alexander-alderman-webb changed the title feat(langchain): Record run_name as gen_ai.pipeline.name on Invoke Agent Spans feat(langchain): Record run_name as gen_ai.pipeline.name on Invoke Agent Spans Apr 1, 2026

github-actions bot commented Apr 1, 2026

Semver Impact of This PR

🟡 Minor (new features)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Langchain

  • Record run_name as gen_ai.function_id on Invoke Agent Spans by alexander-alderman-webb in #5926
  • Record run_name in on_chat_model_start by alexander-alderman-webb in #5924
  • Record run_name in on_tool_start by alexander-alderman-webb in #5925

Other

  • (ci) Cancel in-progress PR workflows on new commit push by joshuarli in #5994

Bug Fixes 🐛

  • (langchain) Set agent name as gen_ai.agent.name for chat and tool spans by alexander-alderman-webb in #5877

🤖 This preview updates automatically when you update the PR.


github-actions bot commented Apr 1, 2026

Codecov Results 📊

13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 10.22s

All tests are passing successfully.

❌ Patch coverage is 0.00%. Project has 14945 uncovered lines.

Files with missing lines (1)

| File | Patch % | Missing Lines |
| --- | --- | --- |
| langchain.py | 3.20% ⚠️ | 574 |

Generated by Codecov Action

@alexander-alderman-webb alexander-alderman-webb marked this pull request as ready for review April 1, 2026 08:37
@alexander-alderman-webb alexander-alderman-webb requested a review from a team as a code owner April 1, 2026 08:37
@alexander-alderman-webb alexander-alderman-webb changed the title feat(langchain): Record run_name as gen_ai.pipeline.name on Invoke Agent Spans feat(langchain): Record run_name as gen_ai.function_id on Invoke Agent Spans Apr 14, 2026

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 2 total unresolved issues (including 1 from previous review).


❌ Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

Reviewed by Cursor Bugbot for commit a9d5e15.

Comment on lines +983 to +1044
```python
if send_default_pii and include_prompts:
    assert "5" in chat_spans[0]["data"][SPANDATA.GEN_AI_RESPONSE_TEXT]
    assert "word" in tool_exec_span["data"][SPANDATA.GEN_AI_TOOL_INPUT]
    assert 5 == int(tool_exec_span["data"][SPANDATA.GEN_AI_TOOL_OUTPUT])

    param_id = request.node.callspec.id
    if "string" in param_id:
        assert [
            {
                "type": "text",
                "content": "You are very powerful assistant, but don't know current events",
            }
        ] == json.loads(chat_spans[0]["data"][SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS])
    else:
        assert [
            {
                "type": "text",
                "content": "You are a helpful assistant.",
            },
            {
                "type": "text",
                "content": "Be concise and clear.",
            },
        ] == json.loads(chat_spans[0]["data"][SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS])

    assert "5" in chat_spans[1]["data"][SPANDATA.GEN_AI_RESPONSE_TEXT]

    # Verify tool calls are recorded when PII is enabled
    assert SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS in chat_spans[0].get("data", {}), (
        "Tool calls should be recorded when send_default_pii=True and include_prompts=True"
    )
    tool_calls_data = chat_spans[0]["data"][SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS]
    assert isinstance(tool_calls_data, (list, str))  # Could be serialized
    if isinstance(tool_calls_data, str):
        assert "get_word_length" in tool_calls_data
    elif isinstance(tool_calls_data, list) and len(tool_calls_data) > 0:
        # Check if tool calls contain expected function name
        tool_call_str = str(tool_calls_data)
        assert "get_word_length" in tool_call_str
else:
    assert SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS not in chat_spans[0].get("data", {})
    assert SPANDATA.GEN_AI_REQUEST_MESSAGES not in chat_spans[0].get("data", {})
    assert SPANDATA.GEN_AI_RESPONSE_TEXT not in chat_spans[0].get("data", {})
    assert SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS not in chat_spans[1].get("data", {})
    assert SPANDATA.GEN_AI_REQUEST_MESSAGES not in chat_spans[1].get("data", {})
    assert SPANDATA.GEN_AI_RESPONSE_TEXT not in chat_spans[1].get("data", {})
    assert SPANDATA.GEN_AI_TOOL_INPUT not in tool_exec_span.get("data", {})
    assert SPANDATA.GEN_AI_TOOL_OUTPUT not in tool_exec_span.get("data", {})

    # Verify tool calls are NOT recorded when PII is disabled
    assert SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS not in chat_spans[0].get(
        "data", {}
    ), (
        f"Tool calls should NOT be recorded when send_default_pii={send_default_pii} "
        f"and include_prompts={include_prompts}"
    )
    assert SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS not in chat_spans[1].get(
        "data", {}
    ), (
        f"Tool calls should NOT be recorded when send_default_pii={send_default_pii} "
        f"and include_prompts={include_prompts}"
    )
```
Member


I understand why we have some conditionals in this test, but the size of this conditional and the assertions within the respective blocks look like a code smell to me.

My suggestion would be to break this test into two separate parameterized test cases: one group that asserts the first part, and a second that asserts the contents of the else block.
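One possible shape for that split, sketched with hypothetical stand-ins (make_spans and the GEN_AI_RESPONSE_TEXT constant below substitute for the real fixtures and SPANDATA attributes):

```python
import pytest

# Stand-in for SPANDATA.GEN_AI_RESPONSE_TEXT in the real test suite.
GEN_AI_RESPONSE_TEXT = "gen_ai.response.text"


def make_spans(pii_enabled):
    # Hypothetical stand-in for the spans captured by the real test fixtures.
    data = {GEN_AI_RESPONSE_TEXT: "5"} if pii_enabled else {}
    return [{"data": data}]


@pytest.mark.parametrize("send_default_pii, include_prompts", [(True, True)])
def test_agent_invoke_records_prompts(send_default_pii, include_prompts):
    # First group: assertions from the "PII enabled" branch only.
    chat_spans = make_spans(send_default_pii and include_prompts)
    assert GEN_AI_RESPONSE_TEXT in chat_spans[0]["data"]


@pytest.mark.parametrize(
    "send_default_pii, include_prompts",
    [(True, False), (False, True), (False, False)],
)
def test_agent_invoke_omits_prompts(send_default_pii, include_prompts):
    # Second group: assertions from the else block only.
    chat_spans = make_spans(send_default_pii and include_prompts)
    assert GEN_AI_RESPONSE_TEXT not in chat_spans[0]["data"]
```

Splitting this way keeps each test's assertions unconditional, so a failure points directly at the parameter combination that broke.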



```python
@pytest.fixture
def streaming_chat_completions_model_responses():
```
Member


I don't think this fixture needs to be placed in the global fixtures, because it's very specific to tests within a particular file.

If you can foresee us re-using it across multiple test files within the AI integrations, then we should move it into a conftest.py placed within tests/integrations; better yet, if it would only be used by the langchain tests, place it within the langchain folder.
