
Simplify Interop Spec; Add Supernode #891

Draft

axelKingsley wants to merge 1 commit into main from interop/supernodeUpdate

Conversation

Contributor

@axelKingsley axelKingsley commented Feb 11, 2026

(Sorry, this should have been "Draft")

Tool Use Notice

All edited files were edited by hand with no generative help.

supernode.md was generated, but mostly used as scaffold and was rewritten.

What

Throws out a bunch of simplifications to the spec which I think are available, though they all deserve to be challenged.

Probably more to come, but wanted to start getting this rolling alongside everything else.

Comment on lines +1 to +7
## Network Upgrade Transactions

Could use some advice on how accurate this deployment plan is. I think it is *not*

The interop network upgrade timestamp defines the timestamp at which all functionality in this document is considered
the consensus rules for an OP Stack based network.

Contributor Author

⚠️ Notice ⚠️ . Not sure what contract deployments are looking like right now.

Contributor

@ajsutton ajsutton left a comment

Generally looks good. I'll leave it to someone else from the interop team to actually approve but I don't see anything terrifying here from a proofs perspective.

Comment on lines +26 to +29
Chains implicitly belong to their own Dependency Set.

The chain id of the local chain MUST be considered as part of its own dependency set. This allows a chain
to consume logs that it has produced much more cheaply than providing a block hash proof.
Chains define their own dependency sets, but by convention chains are intended to
depend on one another in interop.
Contributor

I'd probably revert this unless we're actually going to make the change to support incomplete dependency set graphs. Right now the implementation is very much that all chains in the dependency set can accept initiating messages from any other chain in the dependency set.

The activation block is the first block that has a timestamp higher or equal to the
interop network upgrade timestamp.
The activation block timestamp may not exactly match the upgrade timestamp.
For Chains featuring interop, an L2 contract called the "<>" is available.
Contributor

"For Chains featuring interop" feels ambiguous. Is that chains which have activated the interop hard fork or chains with a dependency set > 1. And in either case it would only be true at or after the interop activation block.
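The activation-block rule quoted above ("the first block that has a timestamp higher or equal to the interop network upgrade timestamp") can be sketched as follows; the block list and timestamp values are illustrative, not real chain data:

```python
# Sketch: the activation block is the first block whose timestamp is
# greater than or equal to the interop upgrade timestamp.

def activation_block(blocks, upgrade_ts):
    """blocks: list of (number, timestamp) tuples in ascending order.
    Returns the activation block number, or None if the fork is not yet active."""
    for number, ts in blocks:
        if ts >= upgrade_ts:
            return number
    return None

# Illustrative 2-second-block chain: no block lands exactly on the upgrade
# timestamp, so the next block after it is the activation block.
blocks = [(100, 1_700_000_000), (101, 1_700_000_002), (102, 1_700_000_004)]
assert activation_block(blocks, 1_700_000_003) == 102
```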

interop network upgrade timestamp.
The activation block timestamp may not exactly match the upgrade timestamp.
For Chains featuring interop, an L2 contract called the "<>" is available.
A transaction which invokes the contract emits an Executing Message, which make reference to Log Events which have
Contributor

Suggested change
A transaction which invokes the contract emits an Executing Message, which make reference to Log Events which have
A transaction which invokes the contract emits an Executing Message, which makes reference to Log Events which have

Comment on lines +73 to +76
Interop Activation is represented as a shared L2 timestamp, representing
the first timestamp to be scrutinized by Interop.

```plaintext
MaxResourceLimit + SystemTxMaxGas + UpgradeGasUsage ≤ L2GasLimit
```
For chains which create a block with timestamp _equal_ to the Activation Time, that block is the first to be subject to Interop.
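The gas-budget constraint quoted in the hunk above is a simple inequality; a minimal sketch, with illustrative values rather than real chain configuration:

```python
# Sketch of the quoted constraint:
#   MaxResourceLimit + SystemTxMaxGas + UpgradeGasUsage <= L2GasLimit
# All numbers below are illustrative, not actual OP Stack parameters.

def upgrade_block_fits(max_resource_limit, system_tx_max_gas,
                       upgrade_gas_usage, l2_gas_limit):
    """True if the activation block has room for the upgrade transactions."""
    return max_resource_limit + system_tx_max_gas + upgrade_gas_usage <= l2_gas_limit

assert upgrade_block_fits(20_000_000, 1_000_000, 500_000, 30_000_000)
```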
Contributor

I'm not sure if it was intentional but the wording here makes me start to think of the interop hard fork and activation of interop as two potentially different things. That's actually not a bad concept given there are a number of chains that will be in a dependency set of 1 - they have no CrossL2Inbox deployed and so there can be no interop messages (and thus no need to do interop validation). But they have activated the interop hard fork. Later those chains might be added into a dependency set with other chains and at that point they'd actually activate interop, deploy the CrossL2Inbox contract and start performing interop validation.

Feel free to ignore this, but I hadn't thought of it that way before and it could be a useful mental model especially going forward when additional chains are added into the dependency set.

Contributor Author

Yes, I'm not sure how much flexibility we have to make that happen, but I was thinking along similar lines when I wrote it.

Verification Activities (ie Interop) are to consider verification as something which occurs at a shared timestamp level. For that reason, their participation in the protocol can be suspended until that shared timestamp occurs, which allows for any Verification Activity to seamlessly start operating on the activation timestamp. This has the benefit that the activation block itself doesn't have to be special, it's just the first block with the protocol running.

This is also probably a needed way of looking at things if we ever build more complex verifiers:
Consider a case where Chains A and B are interoperating, and Chain B wants to adopt an additional protocol verification, VerifiedFoo. The new verification materially affects Chain A's finalization because until fully verified, VerifiedFoo may invalidate the chain for some failed constraint. The first payload of Chain A's which may be affected by this new Verifier is whatever the state was at that timestamp (meaning the prior block if no block was built at exactly that timestamp). The activation of this VerifiedFoo protocol on Chain B is happening at a Superchain level.

What I think this means is that when we talk about this type of protocol extension, it must be done from the shared L2 timestamp perspective.

We can talk about it in either direction (as block heights and timestamps are trivially converted), but the timestamp being the shared value gives a basis for coordination.
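The "trivially converted" claim above holds for fixed-block-time chains; a minimal sketch, assuming a hypothetical genesis timestamp and a 2-second block time:

```python
# Sketch of timestamp <-> block-height conversion for a fixed-block-time
# chain. GENESIS_TS and BLOCK_TIME are illustrative assumptions.

GENESIS_TS = 1_700_000_000  # hypothetical genesis timestamp
BLOCK_TIME = 2              # seconds per block

def block_to_timestamp(number):
    return GENESIS_TS + number * BLOCK_TIME

def timestamp_to_block(ts):
    # First block with timestamp >= ts (ceiling division, clamped at genesis).
    return max(0, -(-(ts - GENESIS_TS) // BLOCK_TIME))

assert block_to_timestamp(10) == GENESIS_TS + 20
assert timestamp_to_block(GENESIS_TS + 3) == 2  # block 2 is the first at/after ts+3
```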

Comment on lines +82 to +84
// I just deleted a bunch of constraints... I don't think we need them if
we have a superchain-timestamp to define the activation.
Need to know more about this.
Contributor

We have generally had restrictions on hard fork activation blocks not containing any non-deposit transactions. I'm not entirely sure why but I think the implementation in op-node has been generalised now. If we haven't got a general statement on that anywhere else in the specs we probably do want to keep that part (or at least find out why we've been enforcing it for other hard forks).

I'd be ok with removing the restriction:

Any contract log events MUST NOT count as valid initiating messages. Verifiers may use the activation block as an anchor point, without indexing the block-contents.

but we should make sure that kona-proofs isn't still enforcing that rule as well. Probably worth logging an issue to check that if we do remove the restriction.

Contributor

The constraint on activation blocks not containing user transactions was promoted from a sequencer policy to a consensus rule with Jovian, and so would likely apply to future hardforks because of the "copy paste the specs from the last fork" development model. cc @sebastianst

Contributor Author

@geoknee can you explain what you mean here? Which case is it:

  • We just always list all constraints from prior hardforks but most of those come for free in prior spec definition.
  • We always list all constraints from prior hardforks and if they weren't listed spec would break.

(sorry, drafted this comment this morning, but lost it.)

Contributor

What I mean is that we put a new, explicit constraint in the definition of the Jovian hardfork (it was the first such fork to get this constraint). So there is now a precedent for including this constraint in future forks.

So future forks (Karst, or Interop say) should also list that explicit constraint in their part of the spec (unless we somehow decide to drop the constraint in future hard forks which seems unlikely).

Comment on lines +88 to +91
When the [cross chain dependency resolution](./messaging.md#resolving-cross-chain-safety) determines
that a block contains an [invalid message](./messaging.md#invalid-messages), the block is replaced
using Holocene Replacement Rules, which is a block containing
only Deposit Transactions, replacing the original block.
Contributor

Hmm, this lost the optimistic block info deposit transaction. I know we talked about it before. I think we landed on tracking the original invalid output root out of protocol so that it was available to op-challenger right?
Definitely want to migrate the action tests that test block replacement, including that op-challenger can get all the data it needs even if replaced blocks are pruned from op-geth.

Contributor Author

Precisely yes. We had discussed it and decided:

  • Optimistic Block Info Deposit Tx serves as a derivation record so that challenges and stuff can work correctly.
  • The prior art for such a thing is SafeDB, which is a record kept by the CL in order to explain derivation to the challenger.
  • And actually, Chain Containers already maintain a database of denylisted blocks.
  • Therefore, the application can serve replacement information. The invalid original root can be served out of protocol.

So yes the test will have to change, good callout.

BUT! I did just realize that what we are currently recording for Denylist is not sufficient to build an output root. We are just holding the block hash.
This is fine, I can persist the output root (or its components) alongside the block hash. Expect that to get written only after we land the PR containing the denylist itself.
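The deposit-only replacement described in this thread can be sketched as a simple filter; the transaction representation below is illustrative, though 0x7E is the actual OP Stack deposit transaction type:

```python
# Sketch: the replacement block keeps only deposit transactions (type 0x7E)
# from the invalid original block. The dict-based tx shape is illustrative,
# not the real block format.

DEPOSIT_TX_TYPE = 0x7E

def replacement_block(invalid_block_txs):
    """Return the transactions of the replacement block: deposits only."""
    return [tx for tx in invalid_block_txs if tx["type"] == DEPOSIT_TX_TYPE]

txs = [
    {"type": 0x7E, "data": "l1-attributes"},  # deposit tx, kept
    {"type": 0x02, "data": "user-transfer"},  # user tx, dropped
    {"type": 0x7E, "data": "user-deposit"},   # deposit tx, kept
]
assert replacement_block(txs) == [txs[0], txs[2]]
```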

@@ -0,0 +1,123 @@
# Supernode
Contributor

Do we actually want this in specs? It seems like good content for a super node README or similar but it's an implementation detail, not part of the specs themselves. The supervisor doc was pretty much all RPCs because they were meant to be standard so we could interop between op-supervisor and op-node or kona-node.

That said, our specs already include so much implementation specific detail that maybe this makes sense.

Contributor

I agree that we should move implementation specific stuff into the readme and just specify the essential APIs and architecture here:

  • RPCs
  • direct connection to the ELs and reset behaviour
  • in-memory management of virtual nodes (at a high level)
  • superAuthority interface

I think we should be considering how the spec can help define how we plug supernode into various ELs, and also what an alternative (Rust) implementation must do (and leave out bits it does not need to do). For example, the "activity" abstraction is a very nice thing but not actually part of the specification of interop. An alt-supernode need not implement activities in order to satisfy the interop spec.

Contributor Author

I can agree to drop it, sure. I had included it as a foil to the previous supervisor.md

I actually think that the only thing we really need to define as an essential interface on the supernode are:

  • the fact that it advances verification (safety) by some timestampy rules. This is described in the "New Rules" section, but can be spelled out with more clarity here
  • the super-root API definition, particularly superroot_atTimestamp, as well as the SLAs each piece of data has
  • new signals which go to the EL (if any)

I'll whittle it down to just these 3 focuses. Everything below it is implementation detail.
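To make the second bullet concrete, here is a hypothetical shape for a `superroot_atTimestamp` JSON-RPC call; the method name comes from the discussion above, but the parameter encoding and envelope details are illustrative assumptions, not a finalized API:

```python
# Hypothetical superroot_atTimestamp request. Only the method name is taken
# from the discussion; the params encoding is an assumed convention.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "superroot_atTimestamp",
    "params": ["0x65a0f200"],  # hex-encoded shared L2 timestamp (illustrative)
}
payload = json.dumps(request)
assert json.loads(payload)["method"] == "superroot_atTimestamp"
```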

but redundant access is eliminated.


### RPC Router
Contributor

Ok, specifying the RPC patterns makes sense. Should also include the superroot_atTimestamp RPC method in here somewhere.

@@ -0,0 +1,257 @@
## Network Upgrade Transactions

Could use some advice on how accurate this deployment plan is. I think it is *not*
Contributor

✂️

On the [activation block](#activation-block), a set of deposit transaction
based upgrade transactions are deterministically generated by the derivation pipeline in the following order:

<!-- To regenerate the deploy/upgrade tx docs below, run `./scripts/upgrades/gen_interop_upgrade_tx_specs.sh | pbcopy` and then paste here -->
Contributor

Is this up to date?

Contributor Author

No, it likely isn't. This entire file was ripped out of the "derivation.md" section.

Old activation plans had the contracts being deployed on a particular block, but I don't think that's the plan anymore, so I moved it into contract.md and will need to figure out what to do with this whole file.

Contributor

We should keep an eye on the L2 Contract Manager project which could impact this section e.g. ethereum-optimism/optimism#19165

Contributor

@maurelian maurelian Feb 12, 2026

Yes, I have noticed that there are not-yet executed NUTs in interop_upgrade_transactions.go, we'll need to make a decision about how to proceed there.

Comment on lines +224 to +230
## Expiry Window

The expiry window is the time period after which an initiating message is no longer considered valid.

| Constant | Value |
| --------------- | ---------------------- |
| `EXPIRY_WINDOW` | `604800 secs` (7 days) |
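The expiry rule quoted above can be sketched as a timestamp comparison; the boundary semantics (inclusive at exactly the window) are an assumption here, and the timestamps are illustrative:

```python
# Sketch of the quoted expiry rule: an initiating message is only valid for
# an executing message whose timestamp falls within EXPIRY_WINDOW of it.
# Inclusive-boundary behavior is assumed, not taken from the spec.

EXPIRY_WINDOW = 604_800  # 7 days, in seconds

def message_valid(init_ts, exec_ts):
    """True if the executing message falls inside the expiry window."""
    return init_ts <= exec_ts <= init_ts + EXPIRY_WINDOW

assert message_valid(1_700_000_000, 1_700_000_000 + 604_800)
assert not message_valid(1_700_000_000, 1_700_000_000 + 604_801)
```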
Contributor

Not sure this belongs in this section? Ideally we describe where this validity check is enforced.

Contributor

I don't think it is in the smart contract layer?

Contributor

I suppose this section used to be in "derivation", where this made a bit more sense.

Contributor Author

ah yes this needs to be moved back

The source-hash is thus computed as:
`keccak256(bytes32(uint256(4)), outputRoot)`.

## Network Upgrade Transactions
Contributor

I see you moved this to a separate file -- that's fine but it does imply a departure from the pattern we have used in the protocol/forkname/ directories. Also, I don't see a reference to the new file in the top level overview.md file, which is like the table of contents for this entire section.

Contributor

You could consider pinching off this change to clean up the diff of the semantically significant changes you are making.

Comment on lines +78 to +80
For chains which do not have a block _equal_ to the Activation Time,
their state _at_ the Activation Time is the prior block. Meaning that the
block immediately preceding the timestamp is the first block subject to Interop.
Contributor

Is this the safest rule to have? I would have thought choosing the first block which has a timestamp equal to or greater than the activation timestamp would lead to a lower chance of footguns?

Contributor Author

In appearance I agree, but I think functionally these are comparable, assuming this behavior is understood by operators. I'm going to be pedantic for the sake of exploring this idea (which is really the point of this spec PR anyway).

If not all chains produce a block on the Activation Timestamp:

  • Some Chains will have produced a block on the Interop Activation
  • Some Chains won't have produced a block on the Interop Activation

The ways to handle this are essentially whether we "Left Align" or "Right Align" or "No Align" activation.

No Alignment (Current Strategy)
With No Alignment, the Superchain activates interop at a timestamp, but each chain is only included in interop once it produces a block at or above the timestamp.
This leads to a gradient where across timestamps chains will be added to the set.

I find this strategy hard to reason about. By some definitions you'd count the chain "in" and by others you'd count it "out".

Right Aligned (My Suggestion)
My argument is that "Chains have state, even at timestamps where blocks aren't produced". And by that argument, chains which enter into a Superchain protocol are agreeing to modify behavior at a timestamp, which is the minimum shared granularity amongst chains.

In practice, this means that a block can be evaluated with both old and new protocol rules, if the block straddles activation.
And since protocol rules take effect at a per-block level, this effectively means that each chain is exposed to interop at the block preceding activation.

Even though these blocks are processed by both sets of rules, the block wouldn't have any Executing Messages in it because any valid message for the block would have a timestamp prior to interop anyway.

Left Aligned (For Completeness)
Taking a different argument: "The Superchain is consistent at a Timestamp".
In this model, we positively say that "Yes, you must provide a block At-Or-Beyond Activation". But we ALSO say that "A timestamp can only be verified if all chains are present to verify it".

In this case, Interop would functionally not activate until the last block to be posted lands. This delays the effective activation timestamp by the most delayed chain, and also gives the last chain a bit of an advantage since they can create Executing Messages beyond the Activation Timestamp.
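The "Right Aligned" rule argued for above ("chains have state, even at timestamps where blocks aren't produced") can be sketched as follows; the block data is illustrative:

```python
# Sketch of the Right Aligned rule: a chain's state at the activation
# timestamp is the block at that exact timestamp if one exists, otherwise
# the last block before it (the block that straddles activation).

def state_at(blocks, activation_ts):
    """blocks: ascending list of (number, timestamp). Returns the block
    number representing the chain's state at activation_ts, or None if no
    block exists at or before it."""
    candidate = None
    for number, ts in blocks:
        if ts <= activation_ts:
            candidate = number
        else:
            break
    return candidate

blocks = [(7, 100), (8, 102), (9, 106)]
assert state_at(blocks, 104) == 8  # no block at 104; block 8 straddles activation
assert state_at(blocks, 106) == 9  # exact match wins
```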


###### -321501 `DATA_CORRUPTION`

Happens when the underlying DB has some I/O issue.
*yoink*
Contributor

✂️ we can just delete the file

@axelKingsley axelKingsley marked this pull request as draft February 12, 2026 15:30