
[SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins #54925

Open

sunchao wants to merge 8 commits into apache:master from sunchao:SPARK-56065

Conversation

sunchao (Member) commented Mar 20, 2026

What changes were proposed in this pull request?

This PR adds an opt-in AQE fallback path for broadcast join failures caused by broadcast table size/row limits.

When spark.sql.adaptive.broadcastJoin.fallbackToShuffle.enabled is enabled, AQE catches qualifying broadcast stage failures and retries adaptive replanning with broadcast joins disabled, so the query can proceed with shuffle-based joins (for example, sort-merge join or shuffled hash join) instead of failing immediately.

The change also includes safety checks to avoid repeatedly retrying the same failed broadcast relation, and test coverage in AdaptiveQueryExecSuite.

Why are the changes needed?

Today, a query can fail if a planned broadcast side exceeds runtime broadcast limits, even though a shuffle join strategy could still complete successfully.

This PR provides a controlled fallback path to improve robustness for those cases while keeping existing behavior unchanged by default.

Does this PR introduce any user-facing change?

Yes.

A new SQL config is added:

  • spark.sql.adaptive.broadcastJoin.fallbackToShuffle.enabled (default: false)

When enabled, queries that would otherwise fail due to broadcast table row/size limits may instead continue with shuffle-based joins. When disabled, existing behavior remains unchanged.
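As a usage sketch (the query and table names below are hypothetical), the fallback could be enabled per session before running a join-heavy query:

```sql
-- Opt in to the AQE broadcast-to-shuffle fallback (default: false)
SET spark.sql.adaptive.broadcastJoin.fallbackToShuffle.enabled=true;

-- A join initially planned as BHJ may now complete as SMJ/SHJ if the
-- broadcast side exceeds runtime broadcast limits during execution.
SELECT *
FROM sales s JOIN customers c ON s.customer_id = c.id;
```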

How was this patch tested?

  • Added/updated unit tests in sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala, including:
    • fallback enable/disable behavior
    • preserving required input broadcast exchange behavior
    • rejecting fallback replans that still contain the failed broadcast relation
  • Ran AQE adaptive query test coverage relevant to the new fallback path.
  • Manually validated with real jobs that the plan can transition from initial BHJ to final SMJ under AQE when fallback is enabled.

Was this patch authored or co-authored using generative AI tooling?

Yes. Authored with Codex 5.3 High, with a lot of manual harnessing.

dongjoon-hyun (Member) left a comment:

This looks like a good improvement for 4.2.0, doesn't it?

Seq(exchangeStage.plan.canonicalized, exchangeStage.canonicalized).distinct.foreach { key =>
  context.stageCache.remove(key)
}
val keysToRemove = context.stageCache.collect {
Contributor commented:

Is this full scan remove guarding against a specific scenario, or is it purely defensive?

sunchao (Member, Author) replied:

It is purely defensive, and I agree it is unlikely to be needed: the cache is always keyed by the canonical plan, so a materialized stage exec should never be cached under any extra keys. I can remove it to simplify this code path.
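The invariant discussed here can be illustrated with a self-contained sketch; the types below are simplified stand-ins (not Spark's actual `QueryStageExec` or stage-cache classes), showing why a single canonical-key removal suffices when the cache is keyed only by canonicalized plans:

```scala
import scala.collection.mutable

// Simplified stand-in types for illustration only; the real AQE stage cache
// keys materialized stages by the canonicalized exchange plan.
final case class CanonicalPlan(id: String)
final case class StageExec(canonical: CanonicalPlan)

object StageCacheSketch {
  // Stage cache keyed by canonical plan, mirroring the invariant above.
  val stageCache: mutable.Map[CanonicalPlan, StageExec] = mutable.Map.empty

  // Evict a failed stage by its canonical key only. No full-cache scan is
  // needed because a stage is never cached under any other key.
  def evictFailedStage(stage: StageExec): Unit =
    stageCache.remove(stage.canonical)
}
```

Under this invariant, the defensive `collect`-and-remove scan over the whole cache is redundant, which matches the decision to drop it.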

sunchao (Member, Author) replied:

Updated.

currentPhysicalPlan = newPhysicalPlan
currentLogicalPlan = newLogicalPlan
stagesToReplace = Seq.empty[QueryStageExec]
} else if (forceAdoptBroadcastFallbackPlan) {
Contributor commented:

This seems unreachable.

// customizations) while keeping fallback conf changes isolated from the original session.
val fallbackSession = context.session.cloneSession()
val fallbackConf = fallbackSession.sessionState.conf
fallbackConf.setConfString(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key, "-1")
Contributor commented:

Thanks for adding this very helpful feature!
Instead of globally disabling broadcast joins during fallback, would it be possible to inject NO_BROADCAST_HASH only on the failed join side, while preserving broadcast for other joins that haven’t exceeded the limit?
I understand this could add complexity, just curious whether this approach was considered and intentionally deferred.

sunchao (Member, Author) replied:

This is a good suggestion. Currently this PR only does a coarse full-plan fallback: if a query plan contains multiple BHJs and only one of them fails due to the limits, the other BHJs will also be disabled when the fallback kicks in, even if they read different input tables.

I initially considered this but deferred it because it adds extra complexity to this PR. Let me take another look and see whether we can still fit it into the current PR.

sunchao (Member, Author) replied:

OK I've updated the PR to use NO_BROADCAST_HASH - please take a look!
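The per-join hint approach can be sketched with simplified stand-in types (the real Spark counterparts are the logical `Join`, `JoinHint`, `HintInfo`, and the `NO_BROADCAST_HASH` strategy in `org.apache.spark.sql.catalyst.plans.logical`); this is an illustration of the idea, not the PR's actual implementation:

```scala
// Stand-in plan ADT: a join node records, per side, whether broadcast
// hash join is disallowed (playing the role of a NO_BROADCAST_HASH hint).
sealed trait Plan
final case class Relation(name: String) extends Plan
final case class Join(left: Plan, right: Plan,
    noBroadcastLeft: Boolean, noBroadcastRight: Boolean) extends Plan

object HintInjectionSketch {
  // True if the subtree contains the failed broadcast relation.
  private def contains(plan: Plan, failed: String): Boolean = plan match {
    case Relation(n)      => n == failed
    case Join(l, r, _, _) => contains(l, failed) || contains(r, failed)
  }

  // Disallow broadcast only on join sides whose subtree holds the failed
  // relation, leaving unrelated joins free to broadcast.
  def injectNoBroadcast(plan: Plan, failed: String): Plan = plan match {
    case j @ Join(l, r, _, _) =>
      Join(
        injectNoBroadcast(l, failed),
        injectNoBroadcast(r, failed),
        noBroadcastLeft = j.noBroadcastLeft || contains(l, failed),
        noBroadcastRight = j.noBroadcastRight || contains(r, failed))
    case other => other
  }
}
```

The design point is locality: only joins touching the failed relation lose the broadcast option, so a query with several BHJs keeps broadcasting the ones that never exceeded the limit.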
