feat(langchain): Record run_name as gen_ai.function_id on Invoke Agent Spans #5926
Conversation
Semver Impact of This PR: 🟡 Minor (new features)

📋 Changelog Preview
This is how your changes will appear in the changelog.

New Features ✨
- Langchain

Other

Bug Fixes 🐛

🤖 This preview updates automatically when you update the PR.
Codecov Results 📊
✅ 13 passed | Total: 13 | Pass Rate: 100% | Execution Time: 10.22s
All tests are passing successfully.
❌ Patch coverage is 0.00%. Project has 14945 uncovered lines.
Files with missing lines (1)

Generated by Codecov Action
Title edited: "run_name as gen_ai.pipeline.name on Invoke Agent Spans" → "run_name as gen_ai.function_id on Invoke Agent Spans"
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
❌ Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
Reviewed by Cursor Bugbot for commit a9d5e15. Configure here.
```python
if send_default_pii and include_prompts:
    assert "5" in chat_spans[0]["data"][SPANDATA.GEN_AI_RESPONSE_TEXT]
    assert "word" in tool_exec_span["data"][SPANDATA.GEN_AI_TOOL_INPUT]
    assert 5 == int(tool_exec_span["data"][SPANDATA.GEN_AI_TOOL_OUTPUT])

    param_id = request.node.callspec.id
    if "string" in param_id:
        assert [
            {
                "type": "text",
                "content": "You are very powerful assistant, but don't know current events",
            }
        ] == json.loads(chat_spans[0]["data"][SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS])
    else:
        assert [
            {
                "type": "text",
                "content": "You are a helpful assistant.",
            },
            {
                "type": "text",
                "content": "Be concise and clear.",
            },
        ] == json.loads(chat_spans[0]["data"][SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS])

    assert "5" in chat_spans[1]["data"][SPANDATA.GEN_AI_RESPONSE_TEXT]

    # Verify tool calls are recorded when PII is enabled
    assert SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS in chat_spans[0].get("data", {}), (
        "Tool calls should be recorded when send_default_pii=True and include_prompts=True"
    )
    tool_calls_data = chat_spans[0]["data"][SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS]
    assert isinstance(tool_calls_data, (list, str))  # Could be serialized
    if isinstance(tool_calls_data, str):
        assert "get_word_length" in tool_calls_data
    elif isinstance(tool_calls_data, list) and len(tool_calls_data) > 0:
        # Check if tool calls contain expected function name
        tool_call_str = str(tool_calls_data)
        assert "get_word_length" in tool_call_str
else:
    assert SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS not in chat_spans[0].get("data", {})
    assert SPANDATA.GEN_AI_REQUEST_MESSAGES not in chat_spans[0].get("data", {})
    assert SPANDATA.GEN_AI_RESPONSE_TEXT not in chat_spans[0].get("data", {})
    assert SPANDATA.GEN_AI_SYSTEM_INSTRUCTIONS not in chat_spans[1].get("data", {})
    assert SPANDATA.GEN_AI_REQUEST_MESSAGES not in chat_spans[1].get("data", {})
    assert SPANDATA.GEN_AI_RESPONSE_TEXT not in chat_spans[1].get("data", {})
    assert SPANDATA.GEN_AI_TOOL_INPUT not in tool_exec_span.get("data", {})
    assert SPANDATA.GEN_AI_TOOL_OUTPUT not in tool_exec_span.get("data", {})

    # Verify tool calls are NOT recorded when PII is disabled
    assert SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS not in chat_spans[0].get(
        "data", {}
    ), (
        f"Tool calls should NOT be recorded when send_default_pii={send_default_pii} "
        f"and include_prompts={include_prompts}"
    )
    assert SPANDATA.GEN_AI_RESPONSE_TOOL_CALLS not in chat_spans[1].get(
        "data", {}
    ), (
        f"Tool calls should NOT be recorded when send_default_pii={send_default_pii} "
        f"and include_prompts={include_prompts}"
    )
```
I understand why we have some conditionals in this test, but the size of this conditional and the various assertions within the respective blocks look like a code smell to me.
My suggestion would be to break this test into two separate parameterized test cases: one group that asserts the first branch, and a second that asserts the contents of the else block.
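One way the suggested split could look, as a rough sketch: factor the PII-gated span keys into one helper and drive two focused assertion paths from it. The constant values and helper names below are illustrative stand-ins for the real `SPANDATA` attributes and test plumbing, not the actual suite.

```python
# Hypothetical stand-ins for the real SPANDATA constants.
GEN_AI_SYSTEM_INSTRUCTIONS = "gen_ai.system_instructions"
GEN_AI_REQUEST_MESSAGES = "gen_ai.request.messages"
GEN_AI_RESPONSE_TEXT = "gen_ai.response.text"
GEN_AI_RESPONSE_TOOL_CALLS = "gen_ai.response.tool_calls"

PII_KEYS = (
    GEN_AI_SYSTEM_INSTRUCTIONS,
    GEN_AI_REQUEST_MESSAGES,
    GEN_AI_RESPONSE_TEXT,
    GEN_AI_RESPONSE_TOOL_CALLS,
)


def pii_keys_on(span):
    """Return the set of PII-gated keys present in a span's data dict."""
    data = span.get("data", {})
    return {key for key in PII_KEYS if key in data}


def assert_pii_recorded(chat_spans):
    # First test case: PII enabled, all gated fields must be present.
    for span in chat_spans:
        missing = set(PII_KEYS) - pii_keys_on(span)
        assert not missing, f"expected PII fields missing: {missing}"


def assert_pii_not_recorded(chat_spans):
    # Second test case: PII disabled, no gated field may leak.
    for span in chat_spans:
        leaked = pii_keys_on(span)
        assert not leaked, f"PII fields should not be recorded: {leaked}"
```

Each parameterized test would then call exactly one of the two helpers, so a failure immediately identifies which PII configuration broke.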
```python
@pytest.fixture
def streaming_chat_completions_model_responses():
```
I don't think this fixture needs to be placed in the global fixtures, because it's very specific to tests within a particular file.
If you foresee us re-using this across multiple test files within the AI integrations, then we should move it into a conftest.py file placed within tests/integrations; better yet, if it will only be used by the langchain tests, place it within the langchain folder.

Description

- Set `run_name` as the `gen_ai.function_id` attribute instead of `gen_ai.agent.name`.
- Add tests for `AgentExecutor.invoke()` based on the `AgentExecutor.stream()` test.
- Add tests with `create_openai_tools_agent().with_config()` to test all branches for extracting the `run_name`.

Issues
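A minimal sketch of the extraction the description refers to: look for `run_name` as a direct keyword argument first, then fall back to a `.with_config()`-style config dict, and record the result as `gen_ai.function_id`. The function names and the plain-dict span here are illustrative assumptions, not the integration's actual API.

```python
GEN_AI_FUNCTION_ID = "gen_ai.function_id"  # attribute key per the PR title


def extract_run_name(**kwargs):
    """Mirror the two branches the tests cover: an explicit run_name
    kwarg, then one set via .with_config(run_name=...)."""
    if kwargs.get("run_name"):
        return kwargs["run_name"]
    config = kwargs.get("config") or {}
    return config.get("run_name")


def set_function_id(span_data, **kwargs):
    # Only record the attribute when a run_name was actually found.
    run_name = extract_run_name(**kwargs)
    if run_name:
        span_data[GEN_AI_FUNCTION_ID] = run_name
    return span_data
```

With this shape, a span for an agent configured via `with_config(run_name="my-agent")` would carry `gen_ai.function_id = "my-agent"`, while agents without a run name leave the attribute unset.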
Reminders

- Run `tox -e linters`.
- Prefix the PR title with a conventional commit type (`feat:`, `fix:`, `ref:`, `meta:`).