feat(profiling): Add pipeline workflow to perfetto profiling #5932
markushi wants to merge 8 commits into `feat/markushi/perfetto-profiling-support`
Conversation
Dav1dde
left a comment
Thanks, this addresses my biggest concern from the other PR.
I think there may still be some stricter typing we could do, and maybe some de-duplication of the filtering logic, but that goes into the territory of maybe not being worth it at this time. I tried some things locally and quickly realized it would need more changes.
So I ended up just leaving some nits.
For the order of PRs, I'm happy to merge them separately into master, as each PR is functional standalone.
Should also give other reviewers some time to take a look!
…add missing tests

As ProfileChunkOutput::Expanded is only used in processing mode, there's no need to carry around the headers / platform fields.

- added platform validation for perfetto profiles
- added a test for existing JSON-only profiles, ensuring no change in behavior
- refactored validation / quantities handling to be more reusable across profile formats
```diff
  if ctx.should_filter(Feature::ContinuousProfilingPerfetto) {
-     return Err(Error::FilterFeatureFlag);
+     return Err(err(Error::FilterFeatureFlag));
  }
```
Bug: Rejections for perfetto profile chunks missing a platform are silently dropped because item.quantities() is called before the platform check, resulting in empty quantities for the rejection record.
Severity: MEDIUM
Suggested Fix
In expand_perfetto_profile_chunk, move the call to item.quantities() to be after the platform check. This will ensure that if the platform check fails, the rejection logic can still compute the correct quantities and record the rejection outcome properly.
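A minimal sketch of the suggested ordering, using stand-in types (`Item`, `Error`, and the `quantities()` shape here are illustrative, not Relay's real API): validate the platform first, then compute the quantities for the rejection record at the point where the rejection actually happens.

```rust
// Stand-in error type for the sketch.
#[derive(Debug, PartialEq)]
enum Error {
    PlatformNotSupported,
}

// Stand-in for an envelope item carrying a profile chunk.
struct Item {
    platform: Option<&'static str>,
    payload_len: usize,
}

impl Item {
    // Stand-in for `item.quantities()`: the counts used to record outcomes.
    fn quantities(&self) -> Vec<(&'static str, usize)> {
        vec![("profile_chunk", 1), ("bytes", self.payload_len)]
    }
}

// Check the platform first; only compute quantities when a rejection is
// actually produced, so the rejection record carries real counts.
fn expand(item: &Item) -> Result<(), (Error, Vec<(&'static str, usize)>)> {
    if item.platform.is_none() {
        return Err((Error::PlatformNotSupported, item.quantities()));
    }
    Ok(())
}

fn main() {
    let item = Item { platform: None, payload_len: 42 };
    let (err, quantities) = expand(&item).unwrap_err();
    assert_eq!(err, Error::PlatformNotSupported);
    // The rejection is tracked with non-empty quantities.
    assert!(!quantities.is_empty());
    println!("ok");
}
```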
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's
not valid.
Location: relay-server/src/processing/profile_chunks/process.rs#L73
Potential issue: When a `perfetto` profile chunk is processed without a `platform`
header, the rejection outcome is silently dropped. This occurs in
`expand_perfetto_profile_chunk` because `item.quantities()` is called before the
platform is checked. For an item missing a platform, `item.quantities()` returns an
empty list. When the platform check subsequently fails, the error is passed to
`records.reject_err` with these empty quantities. The function then has nothing to
iterate over, so no rejection outcome is recorded, and the event is lost without being
tracked.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 2858a57.
```rust
#[derive(Debug)]
#[cfg_attr(all(not(feature = "processing"), not(test)), expect(dead_code))]
pub struct RawProfile {
```
Really small nit: I would move these structs below SerializedProfileChunks. That structure is mirrored by the other processors and follows a kind-of logical order: Processor -> Serialized -> Expanded.
```rust
#[expect(
    clippy::large_enum_variant,
    reason = "variants are sized by Managed<T> which wraps different pipeline stages"
)]
```
Urgh, the old problem again where EnvelopeHeaders is 700+ bytes.
Something we need to figure out in general.
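For context, the usual way to silence `clippy::large_enum_variant` for real (rather than `#[expect]`-ing it) is to box the large variant so the enum stays small. A self-contained sketch with a hypothetical stand-in for the large headers struct (the 704-byte size is illustrative, matching the "700+ bytes" figure above):

```rust
use std::mem::size_of;

// Hypothetical stand-in for a large header struct like EnvelopeHeaders.
#[allow(dead_code)]
struct BigHeaders {
    _data: [u8; 704],
}

// Without boxing, the enum is as large as its largest variant.
#[allow(dead_code)]
enum Unboxed {
    Small(u8),
    Large(BigHeaders),
}

// Boxing the large payload keeps the enum around pointer size plus a tag,
// at the cost of a heap allocation and an indirection on access.
#[allow(dead_code)]
enum Boxed {
    Small(u8),
    Large(Box<BigHeaders>),
}

fn main() {
    // The unboxed enum pays for the large variant even when holding Small.
    assert!(size_of::<Unboxed>() >= size_of::<BigHeaders>());
    // The boxed enum stays small regardless of the payload size.
    assert!(size_of::<Boxed>() <= 16);
    println!("unboxed: {} bytes, boxed: {} bytes", size_of::<Unboxed>(), size_of::<Boxed>());
}
```

Whether boxing is worth it here depends on how hot the allocation path is, which is probably part of what needs figuring out in general.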
```rust
item.platform()
    .ok_or_else(|| err(relay_profiling::ProfileError::PlatformNotSupported.into()))?;
```
That's neat, but I think I'd prefer:

```diff
- item.platform()
-     .ok_or_else(|| err(relay_profiling::ProfileError::PlatformNotSupported.into()))?;
+ if item.platform.is_none() {
+     return Err(..);
+ }
```
```rust
    _ => return Err(relay_profiling::ProfileError::PlatformNotSupported.into()),
}
```

```rust
    payload: Bytes,
) -> Result<ExpandedProfileChunk, (Error, Quantities)> {
```
The quantity dance isn't necessary here: the function should return a normal error, and the caller can then deal with the outcomes and quantities (see the comment below for how).
```rust
    chunks: Managed<SerializedProfileChunks>,
    ctx: Context<'_>,
) -> Managed<ExpandedProfileChunks> {
    chunks.map(|serialized, records| {
```
We have two options here and I think either is fine:

- We consider the entire envelope invalid if it contains a single broken profile chunk. If that's the case, we want to use `try_map` here and just `?` the errors.
- We only want to skip broken chunks but process the other chunks in the same envelope. In this case we need to keep `map` here but reject items (with their outcomes) like:

```rust
match result {
    Ok(chunk) => expanded.push(chunk),
    Err(err) => drop(records.reject_err(err, &item)),
}
```

The latter should not require that dance with quantities and track_outcomes.
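The two options above can be sketched in isolation with stand-in types (`Chunk`, `expand`, and the rejection list are illustrative, not Relay's `Managed`/records API): fail the whole batch on the first broken chunk, or skip broken chunks and record a rejection per item.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Chunk(&'static str);

// Stand-in for expanding a single profile chunk; empty payloads fail.
fn expand(raw: &Chunk) -> Result<Chunk, &'static str> {
    if raw.0.is_empty() {
        Err("invalid chunk")
    } else {
        Ok(raw.clone())
    }
}

// Option 1: `try_map`-style. Collecting into Result fails the whole
// envelope on the first broken chunk.
fn expand_all(chunks: &[Chunk]) -> Result<Vec<Chunk>, &'static str> {
    chunks.iter().map(expand).collect()
}

// Option 2: `map`-style. Skip broken chunks but record each rejection
// (the rejections Vec stands in for `records.reject_err`).
fn expand_some(chunks: &[Chunk], rejections: &mut Vec<&'static str>) -> Vec<Chunk> {
    let mut expanded = Vec::new();
    for chunk in chunks {
        match expand(chunk) {
            Ok(c) => expanded.push(c),
            Err(err) => rejections.push(err),
        }
    }
    expanded
}

fn main() {
    let chunks = [Chunk("a"), Chunk(""), Chunk("b")];

    // Option 1: the single broken chunk fails everything.
    assert!(expand_all(&chunks).is_err());

    // Option 2: good chunks survive, the broken one is rejected.
    let mut rejections = Vec::new();
    let expanded = expand_some(&chunks, &mut rejections);
    assert_eq!(expanded.len(), 2);
    assert_eq!(rejections.len(), 1);
    println!("ok");
}
```

Note that in option 2 each rejection is recorded per item, which is why no separate quantities bookkeeping is needed.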

As a follow-up to
https://github.com/getsentry/relay/pull/5659/changes/BASE..db555e68ad45debd66f46d28e84aa6952b7498b7#r3167376271, this introduces a pipeline workflow instead of doing everything in one place.