[SPARK-56065][SQL] Add AQE fallback from failed broadcast joins to shuffle joins #54925
sunchao wants to merge 8 commits into apache:master
Conversation
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
dongjoon-hyun left a comment:
This looks like a good improvement for 4.2.0, doesn't it?
Seq(exchangeStage.plan.canonicalized, exchangeStage.canonicalized).distinct.foreach { key =>
  context.stageCache.remove(key)
}
val keysToRemove = context.stageCache.collect {
Is this full-scan removal guarding against a specific scenario, or is it purely defensive?
It is purely defensive, and I agree it seems unlikely to happen: the cache is always keyed by the canonical plan, so there shouldn't be a case where a materialized stage exec is cached under extra keys. I can remove it to simplify this code path.
currentPhysicalPlan = newPhysicalPlan
currentLogicalPlan = newLogicalPlan
stagesToReplace = Seq.empty[QueryStageExec]
} else if (forceAdoptBroadcastFallbackPlan) {
This branch seems unreachable.
// customizations) while keeping fallback conf changes isolated from the original session.
val fallbackSession = context.session.cloneSession()
val fallbackConf = fallbackSession.sessionState.conf
fallbackConf.setConfString(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key, "-1")
Thanks for adding this very helpful feature!
Instead of globally disabling broadcast joins during fallback, would it be possible to inject NO_BROADCAST_HASH only on the failed join side, while preserving broadcast for other joins that haven’t exceeded the limit?
I understand this could add complexity, just curious whether this approach was considered and intentionally deferred.
This is a good suggestion. Currently this PR only does a coarse full-plan fallback: if a query plan contains multiple BHJs and only one of them fails due to the limits, the other BHJs will also be disabled once the fallback kicks in, even when they don't read the same input table.
I initially considered this but decided to defer it because it would add extra complexity to this PR. Let me take another look to see whether we can still fit it into the current PR.
OK I've updated the PR to use NO_BROADCAST_HASH - please take a look!
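As a rough illustration of the hint-based approach discussed above (this is not the PR's actual code; the helper name and the surrounding plumbing are hypothetical, though `NO_BROADCAST_HASH`, `HintInfo`, and `ResolvedHint` are existing Catalyst types), marking only the failed join side could look something like:

```scala
import org.apache.spark.sql.catalyst.plans.logical.{HintInfo, LogicalPlan, NO_BROADCAST_HASH, ResolvedHint}

// Hypothetical helper: wrap only the join side whose broadcast failed with a
// NO_BROADCAST_HASH hint, so replanning avoids BHJ for this join alone while
// other broadcast joins in the plan stay eligible.
def disableBroadcastFor(failedSide: LogicalPlan): LogicalPlan =
  ResolvedHint(failedSide, HintInfo(strategy = Some(NO_BROADCAST_HASH)))
```

The upside over flipping `AUTO_BROADCASTJOIN_THRESHOLD` globally is that the fallback stays scoped to the join that actually hit the limit.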
What changes were proposed in this pull request?
This PR adds an opt-in AQE fallback path for broadcast join failures caused by broadcast table size/row limits.
When `spark.sql.adaptive.broadcastJoin.fallbackToShuffle.enabled` is enabled, AQE catches qualifying broadcast stage failures and retries adaptive replanning with broadcast joins disabled, so the query can proceed with shuffle-based joins (for example SMJ/SHJ) instead of failing immediately. The change also includes safety checks to avoid repeatedly retrying the same failed broadcast relation, and test coverage in AdaptiveQueryExecSuite.

Why are the changes needed?
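A minimal, Spark-free sketch of the catch-and-retry shape described above (all names here are illustrative stand-ins, not the PR's actual internals):

```scala
// Illustrative only: models "try the broadcast plan, fall back to a shuffle
// plan exactly once when a broadcast-limit failure is caught".
final case class BroadcastLimitExceeded(msg: String) extends Exception(msg)

// `run(true)` stands for executing with broadcast joins allowed;
// `run(false)` models the replanned attempt with broadcast joins disabled.
def materializeWithFallback[T](run: Boolean => T): T =
  try run(true)
  catch {
    case _: BroadcastLimitExceeded => run(false) // single retry, no loop
  }
```

The single-retry shape matches the PR's safety checks: the same failed broadcast relation is not retried repeatedly.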
Today, a query can fail if a planned broadcast side exceeds runtime broadcast limits, even though a shuffle join strategy could still complete successfully.
This PR provides a controlled fallback path to improve robustness for those cases while keeping existing behavior unchanged by default.
Does this PR introduce any user-facing change?
Yes.
A new SQL config is added: `spark.sql.adaptive.broadcastJoin.fallbackToShuffle.enabled` (default: `false`).
When enabled, queries that would otherwise fail due to broadcast table row/size limits may instead continue with shuffle-based joins. When disabled, existing behavior remains unchanged.
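For example, a user could opt in per session with the config introduced by this PR (shown here as a plain Spark SQL `SET`; the default remains `false`):

```sql
-- Opt in: allow AQE to fall back to shuffle joins on broadcast limit failures.
SET spark.sql.adaptive.broadcastJoin.fallbackToShuffle.enabled=true;
```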
How was this patch tested?
New test coverage in sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala.

Was this patch authored or co-authored using generative AI tooling?
Yes, Codex 5.3 High with a lot of harnessing