
feat(profiling): Add pipeline workflow to perfetto profiling #5932

Open
markushi wants to merge 8 commits into feat/markushi/perfetto-profiling-support from feat/markushi/perfetto-profiling-support-pipeline

Conversation

@markushi
Member

@markushi markushi commented May 5, 2026

As a follow-up to https://github.com/getsentry/relay/pull/5659/changes/BASE..db555e68ad45debd66f46d28e84aa6952b7498b7#r3167376271, this PR introduces a pipeline workflow instead of doing everything in one place.

Member

@Dav1dde Dav1dde left a comment


Thanks, this addresses my biggest concern from the other PR.

I think there may still be some stricter typing we can do, and maybe some filtering logic we can de-duplicate, but this goes into the territory of maybe not being worth it at this time. I tried some things locally and quickly realized it would need more changes.

So I ended up just leaving some nits.

Regarding the order of PRs, I'm happy to merge them separately into master, as each of the PRs is functional standalone.

This should also give other reviewers some time to take a look!

Comment thread relay-server/src/processing/profile_chunks/mod.rs
Comment thread relay-server/src/processing/profile_chunks/process.rs Outdated
Comment thread relay-server/src/processing/profile_chunks/process.rs Outdated
Comment thread relay-server/src/processing/profile_chunks/mod.rs Outdated
Comment thread relay-server/src/processing/profile_chunks/mod.rs Outdated
Comment thread relay-server/src/processing/profile_chunks/store.rs Outdated
@markushi markushi marked this pull request as ready for review May 5, 2026 09:18
@markushi markushi requested a review from a team as a code owner May 5, 2026 09:18
Comment thread relay-server/src/processing/profile_chunks/mod.rs
Comment thread relay-server/src/processing/profile_chunks/mod.rs Outdated
Comment thread relay-server/src/envelope/content_type.rs
Comment thread relay-server/src/processing/profile_chunks/mod.rs Outdated
Comment thread relay-server/src/processing/profile_chunks/mod.rs
Comment thread relay-server/src/processing/profile_chunks/mod.rs Outdated
Comment thread relay-server/src/processing/profile_chunks/process.rs Outdated
markushi added 5 commits May 6, 2026 09:44
…add missing tests

As ProfileChunkOutput::Expanded is only used in processing mode, there's
no need to carry around the headers / platform fields.

- added platform validation for perfetto profiles
- added test for existing JSON-only profiles, ensuring no change in
behavior
- refactored validation / quantities handling to be more re-usable
across profile formats
Comment thread relay-server/src/processing/profile_chunks/process.rs
if ctx.should_filter(Feature::ContinuousProfilingPerfetto) {
-    return Err(Error::FilterFeatureFlag);
+    return Err(err(Error::FilterFeatureFlag));
}


Bug: Rejections for perfetto profile chunks missing a platform are silently dropped because item.quantities() is called before the platform check, resulting in empty quantities for the rejection record.
Severity: MEDIUM

Suggested Fix

In expand_perfetto_profile_chunk, move the call to item.quantities() to be after the platform check. This will ensure that if the platform check fails, the rejection logic can still compute the correct quantities and record the rejection outcome properly.
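The suggested reordering can be sketched in isolation. Everything below (`Item`, `Quantities`, the error type, and the field names) is a simplified stand-in invented for illustration, not the actual relay-server implementation:

```rust
#[derive(Debug)]
enum Error {
    PlatformNotSupported,
}

type Quantities = Vec<(&'static str, usize)>;

struct Item {
    platform: Option<String>,
    payload_len: usize,
}

impl Item {
    // In this sketch, quantities are only known once the platform is present,
    // mirroring how an item missing a platform yields an empty list.
    fn quantities(&self) -> Quantities {
        match &self.platform {
            Some(_) => vec![("profile_chunk", 1), ("bytes", self.payload_len)],
            None => Vec::new(),
        }
    }
}

// Buggy order: quantities are snapshotted before the platform check, so a
// missing platform produces an empty rejection record (outcome lost).
fn expand_buggy(item: &Item) -> Result<(), (Error, Quantities)> {
    let quantities = item.quantities();
    if item.platform.is_none() {
        return Err((Error::PlatformNotSupported, quantities)); // empty!
    }
    Ok(())
}

// Fixed order: check the platform first, and build the rejection quantities
// from what is actually known about the item.
fn expand_fixed(item: &Item) -> Result<(), (Error, Quantities)> {
    if item.platform.is_none() {
        return Err((
            Error::PlatformNotSupported,
            vec![("profile_chunk", 1), ("bytes", item.payload_len)],
        ));
    }
    let _quantities = item.quantities();
    Ok(())
}

fn main() {
    let item = Item { platform: None, payload_len: 512 };
    let (_, q_buggy) = expand_buggy(&item).unwrap_err();
    let (_, q_fixed) = expand_fixed(&item).unwrap_err();
    assert!(q_buggy.is_empty()); // nothing to iterate over, no outcome recorded
    assert_eq!(q_fixed, vec![("profile_chunk", 1), ("bytes", 512)]);
    println!("buggy quantities: {q_buggy:?}, fixed: {q_fixed:?}");
}
```

The sketch only demonstrates the ordering hazard; in the real code the empty list comes from how `item.quantities()` behaves for an item without a platform header.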


Location: relay-server/src/processing/profile_chunks/process.rs#L73

Potential issue: When a `perfetto` profile chunk is processed without a `platform`
header, the rejection outcome is silently dropped. This occurs in
`expand_perfetto_profile_chunk` because `item.quantities()` is called before the
platform is checked. For an item missing a platform, `item.quantities()` returns an
empty list. When the platform check subsequently fails, the error is passed to
`records.reject_err` with these empty quantities. The function then has nothing to
iterate over, so no rejection outcome is recorded, and the event is lost without being
tracked.



@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Reviewed by Cursor Bugbot for commit 2858a57.

Comment thread relay-server/src/services/store.rs
@markushi
Member Author

markushi commented May 7, 2026

@Dav1dde @jjbayer thanks for the detailed PR feedback (and bearing with me 😅 ). I've applied all suggestions, so this is ready for review again.


#[derive(Debug)]
#[cfg_attr(all(not(feature = "processing"), not(test)), expect(dead_code))]
pub struct RawProfile {
Member


Really small nit: I would move these structs under SerializedProfileChunks; that structure is mirrored by the other processors and follows a kind-of logical order: Processor -> Serialized -> Expanded.

Comment on lines +164 to +167
#[expect(
clippy::large_enum_variant,
reason = "variants are sized by Managed<T> which wraps different pipeline stages"
)]
Member


Urgh, the old problem again where EnvelopeHeaders is 700+ bytes.

Something we need to figure out in general.
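The `large_enum_variant` problem above can be illustrated in general terms. This is not the relay-server code; `BigHeaders` is a hypothetical stand-in for a ~700-byte type like EnvelopeHeaders, and the usual mitigation (boxing the large variant) is shown for comparison:

```rust
use std::mem::size_of;

// Hypothetical stand-in for a large headers type (~700+ bytes).
#[allow(dead_code)]
struct BigHeaders {
    _bytes: [u8; 704],
}

// Without boxing, the enum is as large as its biggest variant,
// which is what clippy::large_enum_variant warns about.
#[allow(dead_code)]
enum UnboxedStage {
    Serialized(BigHeaders),
    Expanded(u64),
}

// Boxing the large payload keeps every variant pointer-sized,
// at the cost of a heap allocation and an indirection.
#[allow(dead_code)]
enum BoxedStage {
    Serialized(Box<BigHeaders>),
    Expanded(u64),
}

fn main() {
    println!("unboxed: {} bytes", size_of::<UnboxedStage>());
    println!("boxed:   {} bytes", size_of::<BoxedStage>());
    assert!(size_of::<BoxedStage>() < size_of::<UnboxedStage>());
}
```

Whether boxing is acceptable here depends on how hot the pipeline path is; the `expect` with a reason, as done in the PR, is the other reasonable choice.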

Comment on lines +74 to +75
item.platform()
.ok_or_else(|| err(relay_profiling::ProfileError::PlatformNotSupported.into()))?;
Member


That's neat, but I think I'd prefer:

Suggested change
-item.platform()
-    .ok_or_else(|| err(relay_profiling::ProfileError::PlatformNotSupported.into()))?;
+if item.platform.is_none() {
+    return Err(..);
+}

_ => return Err(relay_profiling::ProfileError::PlatformNotSupported.into()),
}
payload: Bytes,
) -> Result<ExpandedProfileChunk, (Error, Quantities)> {
Member

@Dav1dde Dav1dde May 8, 2026


The quantity dance isn't necessary here; the function should return a normal error, and then the caller can deal with the outcomes and quantities (see the comment below for how).

chunks: Managed<SerializedProfileChunks>,
ctx: Context<'_>,
) -> Managed<ExpandedProfileChunks> {
chunks.map(|serialized, records| {
Member


We have two options here and I think either is fine:

  1. We consider the entire envelope invalid if it contains a single broken profile chunk. If that's the case we want to use try_map here and just ? the errors.
  2. We only want to skip broken chunks but process other chunks in the same envelope, in this case we need to keep map here but reject items (with their outcomes) like:
            match result {
                Ok(chunk) => expanded.push(chunk),
                Err(err) => drop(records.reject_err(err, &item)),
            }

The latter should not require that dance with quantities and track_outcomes.
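Option 2 (skip broken chunks, keep the rest) can be sketched without the Managed machinery. The `expand` function, `Rejection` type, and error strings below are simplified placeholders, not the real relay-server API:

```rust
// Simplified stand-in for per-item rejection bookkeeping
// (what records.reject_err would track in the real code).
#[derive(Debug)]
struct Rejection {
    reason: &'static str,
}

// Placeholder expansion step: fails for an "invalid" (empty) chunk.
fn expand(raw: &str) -> Result<String, &'static str> {
    if raw.is_empty() {
        Err("empty chunk")
    } else {
        Ok(raw.to_uppercase())
    }
}

// Mirrors the suggested `match result { Ok => push, Err => reject }`
// shape: one broken chunk does not invalidate the whole envelope.
fn process(chunks: &[&str]) -> (Vec<String>, Vec<Rejection>) {
    let mut expanded = Vec::new();
    let mut rejections = Vec::new();
    for chunk in chunks {
        match expand(chunk) {
            Ok(c) => expanded.push(c),
            Err(reason) => rejections.push(Rejection { reason }),
        }
    }
    (expanded, rejections)
}

fn main() {
    let (expanded, rejections) = process(&["abc", "", "def"]);
    assert_eq!(expanded, vec!["ABC".to_string(), "DEF".to_string()]);
    assert_eq!(rejections.len(), 1);
    assert_eq!(rejections[0].reason, "empty chunk");
    println!("expanded: {expanded:?}, rejections: {rejections:?}");
}
```

Option 1 would instead be the `try_map` + `?` shape, where the first error aborts processing of the entire envelope.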



3 participants