Draft
Conversation
- Add an isBatchV2Upgraded hook to BatchCache so the V2 header is always generated once the upgrade is activated, regardless of blob count. Previously the code fell back to V1 for single-blob batches, which is incompatible with the V2 public_input_hash (keccak(hash[0]) ≠ hash[0]).
- Remove the MAX_BLOB_PER_BLOCK = 6 constant from Rollup.sol and rely solely on blobhash(i) == bytes32(0) to terminate the blob-count loop. Per the spec §9 design decision, blob count limits should be controlled by the tx-submitter MaxBlobCount config, not a hardcoded contract constant, so Ethereum protocol upgrades (e.g. EIP-7691) require no contract change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
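The zero-sentinel loop termination above can be sketched in Rust. This is a hypothetical `count_blobs` helper for illustration only; the real loop lives in Rollup.sol and reads `blobhash(i)` directly:

```rust
/// Count blobs by scanning versioned hashes until the first all-zero entry,
/// mirroring the contract's `blobhash(i) == bytes32(0)` termination check.
/// Hypothetical helper; the real check is Solidity in Rollup.sol.
fn count_blobs(hashes: &[[u8; 32]]) -> usize {
    hashes
        .iter()
        .take_while(|h| h.iter().any(|b| *b != 0))
        .count()
}
```

Because the count is derived from the blob hashes themselves, no contract-side maximum is needed; the submitter's MaxBlobCount config bounds it off-chain.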
… challenge handler

- Change ExecutorInput.blob_info to blob_infos (Vec&lt;BlobInfo&gt; is the Rust type: a vector of BlobInfo) with a batch_version field
- Add BlobVerifier::verify_blobs for multi-blob KZG verification
- Add BatchInfo::public_input_hash_v2 using keccak256(hash[0] || ... || hash[N-1])
- Add multi-blob encoding (encode_multi_blob, encode_blob_from_bytes) in host blob.rs
- Route verify() on batch_version: V2 uses aggregated blob hashes; V0/V1 unchanged
- Update shadow_rollup calc_batch_pi to parse the V2 header with blob_count at offset 257
- Add blob_count and extra_blob_hashes to the challenge handler BatchInfo and encode

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
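The public_input_hash_v2 construction above concatenates the per-blob hashes before hashing. A minimal sketch of the preimage step, with keccak256 itself elided since it comes from a hashing crate in the real code:

```rust
/// Build the keccak256 preimage hash[0] || ... || hash[N-1] for the V2
/// public input hash. The keccak step is omitted in this sketch; the real
/// code applies keccak256 to the returned buffer.
fn v2_pi_preimage(blob_hashes: &[[u8; 32]]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(blob_hashes.len() * 32);
    for h in blob_hashes {
        buf.extend_from_slice(h);
    }
    buf
}
```

Note that even for a single blob the preimage is hashed again, so the V2 result never equals the raw blob hash, which is why falling back to V1 for single-blob batches was incompatible.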
… entrypoints

- Add batch_version to ProveRequest and thread it through gen_client_input/execute_batch
- Use verify_blobs (all blobs) instead of verify (first blob only) in the server queue
- Compute blobHashesHash for V2 in the server batch_header_ex; pass individual hashes for fill_ext
- fill_ext parses the V2 blob_count + per-blob hashes from the extended batch_header_ex
- Add a batch_version param to try_execute_batch; callers extract the version from batch_header[0]
- Add a --batch-version CLI arg to the host binary
- Add a blob_count param to execute_batch for correct PI hash routing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
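Extracting the version from batch_header[0] and routing on it, as described above, might look like the following sketch (the names here are hypothetical, not the actual prover API):

```rust
#[derive(Debug, PartialEq)]
enum PiRoute {
    /// V0/V1: public input hash built from the single blob hash.
    Legacy,
    /// V2: public input hash built from the aggregated blob hashes.
    AggregatedV2,
}

/// The batch version is the first byte of the batch header.
fn batch_version(header: &[u8]) -> Option<u8> {
    header.first().copied()
}

/// Route PI-hash computation on the version byte.
fn route(version: u8) -> PiRoute {
    if version >= 2 {
        PiRoute::AggregatedV2
    } else {
        PiRoute::Legacy
    }
}
```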
…V0/V1 compat fields

Replace blob_versioned_hash + blob_count + extra_blob_hashes with a single blob_hashes: Vec<[u8; 32]>. fill_ext parses all hashes from batch_header_ex; encode writes blob_hashes[0] at offset 57 and appends the count + remaining hashes for V2. No backward-compatibility shims are needed since the prover components upgrade together.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
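A round-trip sketch of that layout, with the first hash in the fixed header region at offset 57 and the count plus remaining hashes appended as the V2 tail. The 89-byte fixed-header length is an assumption for this sketch, as is every function name:

```rust
const FIRST_HASH_OFFSET: usize = 57; // from the commit message

/// Write blob_hashes[0] into the fixed header region at offset 57, then
/// append a count byte and the remaining hashes (the V2 extension).
/// `header` is assumed to already hold the other encoded fields.
fn encode_blob_hashes(header: &mut Vec<u8>, blob_hashes: &[[u8; 32]]) {
    header[FIRST_HASH_OFFSET..FIRST_HASH_OFFSET + 32].copy_from_slice(&blob_hashes[0]);
    header.push(blob_hashes.len() as u8);
    for h in &blob_hashes[1..] {
        header.extend_from_slice(h);
    }
}

/// Inverse of the encoding above: read the first hash from offset 57 and
/// the extras from the appended tail. `base_len` (the fixed-header length)
/// is a parameter because its real value is an assumption here.
fn parse_blob_hashes(header: &[u8], base_len: usize) -> Vec<[u8; 32]> {
    let mut hashes: Vec<[u8; 32]> =
        vec![header[FIRST_HASH_OFFSET..FIRST_HASH_OFFSET + 32].try_into().unwrap()];
    let count = header[base_len] as usize;
    for i in 0..count.saturating_sub(1) {
        let start = base_len + 1 + i * 32;
        hashes.push(header[start..start + 32].try_into().unwrap());
    }
    hashes
}
```

Keeping a single Vec instead of first-hash + count + extras removes the special-casing of blob 0 everywhere downstream.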
…mpress once

Split get_origin_batch into unpack_blob (field-element unpack) and decompress_batch (zstd decompress). verify_blobs now KZG-verifies each blob independently, unpacks all compressed chunks, concatenates them, then calls decompress_batch once. Previously each blob was decompressed independently, which fails for N > 1 since the zstd frame spans all chunks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
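The corrected flow (unpack every blob, concatenate, then decompress the single spanning frame once) can be sketched with the KZG and zstd steps abstracted as closures. Both are stand-ins; the real code calls the KZG library and zstd:

```rust
/// Sketch of the fixed verify_blobs control flow. `unpack` stands in for
/// the field-element unpacking and `decompress` for the single zstd call;
/// both are parameters because the real implementations are external.
fn verify_blobs_sketch(
    blobs: &[Vec<u8>],
    unpack: impl Fn(&[u8]) -> Vec<u8>,
    decompress: impl Fn(&[u8]) -> Vec<u8>,
) -> Vec<u8> {
    // 1. KZG verification of each blob would happen here, per blob.
    // 2. Unpack every blob's compressed chunk and concatenate them...
    let mut compressed = Vec::new();
    for blob in blobs {
        compressed.extend(unpack(blob));
    }
    // 3. ...then decompress the whole frame exactly once. Decompressing
    //    each chunk separately fails for N > 1 because the zstd frame
    //    spans all chunks.
    decompress(&compressed)
}
```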
…fset 57

V2 headers now use the same 257-byte format as V1, with the aggregated blob hash (keccak256 of all blob hashes) at offset 57. This eliminates BatchHeaderCodecV2, simplifies the contracts/prover/submitter, and fixes the multi-blob decompression bug in blob_verifier.

- Delete BatchHeaderCodecV2.sol; V2 commitBatch computes the aggregated hash inline
- Unify _verifyProof and _loadBatchHeader for all versions
- Remove the BatchHeaderV2 struct in Go; V2 uses the V1 format + a version override
- Simplify the Rust challenge handler, queue, and shadow_rollup (uniform 96-byte batch_header_ex)
- Fix verify_blobs: decode BLS scalars per blob, concatenate, decompress once

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
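Under the unified layout described above, reading the version byte and the blob hash is the same slice access for V1 and V2. A sketch using the offsets from the commit message, not the actual codec:

```rust
const HEADER_LEN: usize = 257; // unified V1/V2 header length per the commit
const HASH_OFFSET: usize = 57; // aggregated blob hash (single hash for V1)

/// The version byte is at offset 0 in every header version.
fn header_version(header: &[u8; HEADER_LEN]) -> u8 {
    header[0]
}

/// The 32-byte blob hash field at offset 57: the single versioned hash for
/// V0/V1, the keccak256 aggregate of all blob hashes for V2.
fn header_blob_hash(header: &[u8; HEADER_LEN]) -> [u8; 32] {
    header[HASH_OFFSET..HASH_OFFSET + 32].try_into().unwrap()
}
```

Because both versions share one layout, a single load path can serve `_loadBatchHeader`-style logic for all versions.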
V2 should store the aggregated blob hash (keccak256 of all blob hashes) in batchBlobVersionedHashes, consistent with the value at header offset 57, instead of blobhash(0). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…struction

Move the blobVersionedHash computation out of _commitBatchWithBatchData into callers via a new _computeBlobVersionedHash(version) helper:

- V0/V1: blobhash(0) or ZERO_VERSIONED_HASH
- V2: keccak256(blobhash(0) || ... || blobhash(N-1)); requires >= 1 blob

_commitBatchWithBatchData now has a single unified header-construction path for all versions, with no V2/V0V1 branch split.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
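The per-version dispatch of the new helper can be mirrored as a Rust sketch. The keccak256 implementation is injected as a closure since it comes from a library, and the names simply echo the Solidity helper rather than real Rust code:

```rust
const ZERO_VERSIONED_HASH: [u8; 32] = [0u8; 32];

/// Rust sketch of the Solidity _computeBlobVersionedHash(version) dispatch.
/// `keccak256` is a stand-in parameter for the real hash function.
fn compute_blob_versioned_hash(
    version: u8,
    blob_hashes: &[[u8; 32]],
    keccak256: impl Fn(&[u8]) -> [u8; 32],
) -> [u8; 32] {
    if version >= 2 {
        // V2: aggregated hash over all blob hashes; at least one blob required.
        assert!(!blob_hashes.is_empty(), "V2 requires >= 1 blob");
        let mut preimage = Vec::with_capacity(blob_hashes.len() * 32);
        for h in blob_hashes {
            preimage.extend_from_slice(h);
        }
        keccak256(&preimage)
    } else {
        // V0/V1: the first blob hash, or the zero sentinel when no blob exists.
        blob_hashes.first().copied().unwrap_or(ZERO_VERSIONED_HASH)
    }
}
```

Pulling this out of _commitBatchWithBatchData is what lets the header construction itself stay version-agnostic.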
Remove the V2 restriction in commitState — with the simplified V2 header format (aggregated hash at offset 57), the stored batchBlobVersionedHashes value is sufficient to recommit without a blob, same as V0/V1. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>