
cloud_topics: convert generator callsites to consume() / for_each_ref()#30430

Open
ballard26 wants to merge 1 commit into redpanda-data:dev from ballard26:CORE-16220-callsites

Conversation

@ballard26
Contributor

Convert the remaining cloud_topics callsites away from the generator API; the rvalue-qualified consume()/for_each_ref() ensure finally() runs on the underlying impl when iteration completes. This avoids a potential use-after-free (UAF) that occurs when finally() is not called for the generator, either because the iterator is not exhausted or because an exception is thrown.

Backports Required

  • none - not a bug fix
  • none - this is a backport
  • none - issue does not exist in previous branches
  • none - papercut/not impactful enough to backport
  • v26.1.x
  • v25.3.x
  • v25.2.x

Release Notes

  • none

Convert the remaining cloud_topics callsites away from the generator
API; rvalue consume()/for_each_ref() ensure finally() runs on the
underlying impl when iteration completes.
Contributor

Copilot AI left a comment


Pull request overview

This PR finishes migrating cloud_topics callsites from the coroutine generator API to record_batch_reader’s rvalue-qualified consume() / for_each_ref() APIs, ensuring impl::finally() is reliably invoked when iteration ends (including early-exit and exception paths). This closes a potential lifetime/UAF hazard when a generator is not fully exhausted.

Changes:

  • Refactor reconciliation object-building to use record_batch_reader::consume() with a stateful consumer.
  • Refactor L0 metadata reading to use consume() and return accumulated batches via end_of_stream().
  • Refactor frontend timequery paths to use for_each_ref() / consume() and early-stop via ss::stop_iteration.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

File Description

  • src/v/cloud_topics/reconciler/reconciliation_consumer.cc: Replace slice_generator() loop with consume() to guarantee finally() and avoid generator lifetime pitfalls.
  • src/v/cloud_topics/level_zero/frontend_reader/level_zero_reader.cc: Replace generator() loop with consume() while accumulating metadata batches in a consumer.
  • src/v/cloud_topics/frontend/frontend.cc: Replace timequery generator loops with for_each_ref()/consume() consumers that can early-stop and still trigger finally().

@vbotbuildovich
Collaborator

CI test results

test results on build#84265
  • test_status: FLAKY(PASS)
  • test_class: ShadowLinkBasicTests
  • test_method: test_link_creation_checks
  • test_arguments: {"source_cluster_spec": {"cluster_type": "redpanda"}}
  • test_kind: integration
  • job_url: https://buildkite.com/redpanda/redpanda/builds/84265#019e0f1a-6732-4d59-89be-70c4e1436119
  • passed: 10/11
  • reason: Test PASSES after retries. No significant increase in flaky rate (baseline=0.0048, p0=1.0000, reject_threshold=0.0100; adj_baseline=0.1000, p1=0.3487, trust_threshold=0.5000)
  • test_history: https://redpanda.metabaseapp.com/dashboard/87-tests?tab=142-dt-individual-test-history&test_class=ShadowLinkBasicTests&test_method=test_link_creation_checks

@WillemKauf
Contributor

WillemKauf commented May 11, 2026

Convert the remaining cloud_topics callsites away

while (auto extent_res_opt = co_await gen()) {

FYI, unless this is specifically an issue with readers that is being addressed?

@ballard26
Contributor Author

FYI, unless this is specifically an issue with readers that is being addressed?

There isn't an issue with seastar's generators in general. Rather, it's the generator for the record batch readers that's problematic. The readers require that an async finally() be called and completed before they are destroyed, and the generator pattern makes it rather easy to accidentally not do that. #30370 aims to remove the generator impl from the record batch reader. This PR is mainly for backport purposes.

@dotnwat dotnwat requested review from Lazin, andrwng and nvartolomei May 11, 2026 22:56
Comment on lines +506 to +519
if (batch.header().max_timestamp < time) {
co_return ss::stop_iteration::no;
}
// NOTE: we can't just return this offset verbatim, since we
// don't record the same timestamp deltas inside batches for
// placeholder batches (this would require unpacking batches
// during produce).
result = coarse_grained_timequery_result{
.time = time,
.start_offset = model::offset_cast(
ot_state->from_log_offset(batch.base_offset())),
.last_offset = model::offset_cast(
ot_state->from_log_offset(batch.last_offset())),
};
Contributor

Nice fix. I was trying to think of other ways to keep the succinct procedural code, but ultimately it does end up adding a good amount of bloat while also introducing room for error. Was thinking something like the following, but I also don't love it:

auto [gen, close] = std::move(reader).generator(timeout);
std::optional<result_t> res;
std::exception_ptr eptr;
try {
    while (auto batch = co_await gen()) {
        if (done) { res = result; break; }
        ...
    }
} catch (...) { eptr = std::current_exception(); }
co_await close();
if (eptr) std::rethrow_exception(eptr);
if (res) co_return std::move(*res);
co_return std::nullopt;

It's unfortunate we don't have schedule_at_exit capabilities in seastar yet. I'm fine with this going in as is.
