
feat: multi blob #935

Draft
chengwenxi wants to merge 12 commits into main from feat/multi_batch

Conversation

@chengwenxi (Collaborator)

No description provided.

Kukoomomo and others added 6 commits April 15, 2026 15:04
- Add isBatchV2Upgraded hook to BatchCache so V2 header is always
  generated once the upgrade is activated, regardless of blob count.
  Previously the code fell back to V1 for single-blob batches, which
  is incompatible with the V2 public_input_hash (keccak(hash[0]) ≠ hash[0]).

- Remove the MAX_BLOB_PER_BLOCK = 6 constant from Rollup.sol and rely
  solely on blobhash(i) == bytes32(0) to terminate the blob-count loop.
  Per spec §9 design decision, blob count limits should be controlled
  by tx-submitter MaxBlobCount config, not a hardcoded contract constant,
  so Ethereum protocol upgrades (e.g. EIP-7691) require no contract change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
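The zero-hash termination rule above can be sketched as follows. This is a minimal Rust translation of the Solidity logic, not the actual Rollup.sol code: `blob_hashes` stands in for successive `blobhash(i)` results, and the loop stops at the first `bytes32(0)` instead of checking a hardcoded `MAX_BLOB_PER_BLOCK`.

```rust
/// Count leading non-zero blob hashes, stopping at the first
/// zero hash -- the loop-termination rule described above.
/// Rust sketch of Solidity logic; `blob_hashes` stands in for
/// successive `blobhash(i)` results.
fn count_blobs(blob_hashes: &[[u8; 32]]) -> usize {
    blob_hashes
        .iter()
        .take_while(|h| **h != [0u8; 32])
        .count()
}
```

With no upper bound in the contract, the effective blob limit is whatever the tx-submitter's MaxBlobCount config allows, as the commit message notes.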
… challenge handler

- Change ExecutorInput.blob_info to blob_infos (Vec<BlobInfo>) with batch_version field
- Add BlobVerifier::verify_blobs for multi-blob KZG verification
- Add BatchInfo::public_input_hash_v2 using keccak256(hash[0]||...||hash[N-1])
- Add multi-blob encoding (encode_multi_blob, encode_blob_from_bytes) in host blob.rs
- Route verify() on batch_version: V2 uses aggregated blob hashes, V0/V1 unchanged
- Update shadow_rollup calc_batch_pi to parse V2 header with blob_count at offset 257
- Add blob_count and extra_blob_hashes to challenge handler BatchInfo and encode

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
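The V2 header parse mentioned above can be sketched as follows, using only the two offsets the commit message states (version byte at offset 0, blob_count at offset 257, in this intermediate header format). Treating blob_count as a single byte is an assumption.

```rust
/// Sketch of parsing the intermediate V2 header format described
/// above: version byte at offset 0, blob_count at offset 257.
/// The one-byte width of blob_count is an assumption.
fn parse_v2_counts(header: &[u8]) -> (u8, u8) {
    assert!(header.len() > 257, "V2 header too short");
    (header[0], header[257])
}
```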
… entrypoints

- Add batch_version to ProveRequest and thread through gen_client_input/execute_batch
- Use verify_blobs (all blobs) instead of verify (first blob only) in server queue
- Compute blobHashesHash for V2 in server batch_header_ex; pass individual hashes for fill_ext
- fill_ext parses V2 blob_count + per-blob hashes from extended batch_header_ex
- Add batch_version param to try_execute_batch; callers extract version from batch_header[0]
- Add --batch-version CLI arg to host binary
- Add blob_count param to execute_batch for correct PI hash routing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
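The routing described above (callers extract the version from `batch_header[0]` and pick a verification path) might look like this sketch. The enum and the 0/1/2 version numbering are assumptions based on the V0/V1/V2 naming, not the actual prover types.

```rust
/// Sketch of version routing on the first header byte.
#[derive(Debug, PartialEq)]
enum VerifyPath {
    FirstBlobOnly, // V0/V1: verify the first blob only
    AllBlobs,      // V2: verify_blobs over every blob
}

fn route(batch_header: &[u8]) -> VerifyPath {
    match batch_header[0] {
        0 | 1 => VerifyPath::FirstBlobOnly,
        2 => VerifyPath::AllBlobs,
        v => panic!("unknown batch version {v}"),
    }
}
```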
…V0/V1 compat fields

Replace blob_versioned_hash + blob_count + extra_blob_hashes with a single
blob_hashes: Vec<[u8; 32]>. fill_ext parses all hashes from batch_header_ex,
encode writes blob_hashes[0] at offset 57 and appends count + remaining hashes
for V2. No backward-compatibility shims needed since prover components upgrade together.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
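The encode path described above can be sketched as follows: write `blob_hashes[0]` into the fixed header at offset 57, then for V2 append the blob count and the remaining hashes. The 257-byte base-header length (taken from later commits in this PR) and the one-byte count are assumptions.

```rust
/// Sketch of the header encode described above. Assumptions:
/// 257-byte V1-style base header, one-byte appended blob count.
fn encode_header(blob_hashes: &[[u8; 32]], v2: bool) -> Vec<u8> {
    let mut header = vec![0u8; 257];
    // blob_hashes[0] lands at offset 57 for all versions.
    header[57..89].copy_from_slice(&blob_hashes[0]);
    if v2 {
        // V2 appends the count and the remaining hashes.
        header.push(blob_hashes.len() as u8);
        for h in &blob_hashes[1..] {
            header.extend_from_slice(h);
        }
    }
    header
}
```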
@coderabbitai (Contributor)

coderabbitai bot commented Apr 17, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


chengwenxi and others added 6 commits April 17, 2026 10:35
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…mpress once

Split get_origin_batch into unpack_blob (field-element unpack) and decompress_batch
(zstd decompress). verify_blobs now KZG-verifies each blob independently, unpacks
all compressed chunks, concatenates them, then calls decompress_batch once.
Previously each blob was independently decompressed which fails for N>1 since the
zstd frame spans all chunks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
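The split described above can be sketched as: unpack each blob's field elements, concatenate the compressed chunks, then decompress the single zstd frame exactly once. `unpack_blob` and `decompress_batch` are passed in as closures here because the real implementations (BLS scalar unpacking, zstd) are out of scope for a sketch.

```rust
/// Sketch of the unpack-then-decompress-once flow described above.
/// The real `unpack_blob`/`decompress_batch` are stand-ins here.
fn decode_batch(
    blobs: &[Vec<u8>],
    unpack_blob: impl Fn(&[u8]) -> Vec<u8>,
    decompress_batch: impl Fn(&[u8]) -> Vec<u8>,
) -> Vec<u8> {
    // Concatenate the unpacked chunks from every blob...
    let compressed: Vec<u8> = blobs
        .iter()
        .flat_map(|b| unpack_blob(b))
        .collect();
    // ...because one zstd frame spans all chunks, decompress once.
    decompress_batch(&compressed)
}
```

Decompressing per blob, as before this commit, would hand zstd a truncated frame for every blob but the last whenever N > 1.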
…fset 57

V2 headers now use the same 257-byte format as V1 with the aggregated
blob hash (keccak256 of all blob hashes) at offset 57. This eliminates
BatchHeaderCodecV2, simplifies contracts/prover/submitter, and fixes the
multi-blob decompression bug in blob_verifier.

- Delete BatchHeaderCodecV2.sol; V2 commitBatch computes aggregated hash inline
- Unify _verifyProof and _loadBatchHeader for all versions
- Remove BatchHeaderV2 struct in Go; V2 uses V1 format + version override
- Simplify Rust challenge handler, queue, shadow_rollup (uniform 96-byte batch_header_ex)
- Fix verify_blobs: decode BLS scalars per blob, concatenate, decompress once

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
V2 should store the aggregated blob hash (keccak256 of all blob hashes)
in batchBlobVersionedHashes, consistent with the header offset 57 value,
instead of blobhash(0).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…struction

Move blobVersionedHash computation out of _commitBatchWithBatchData into
callers via a new _computeBlobVersionedHash(version) helper:
- V0/V1: blobhash(0) or ZERO_VERSIONED_HASH
- V2: keccak256(blobhash(0)||...||blobhash(N-1)), requires >=1 blob

_commitBatchWithBatchData now has a single unified header construction
path for all versions — no more V2/V0V1 branch split.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
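The branching in the new `_computeBlobVersionedHash(version)` helper can be sketched as follows. The original is Solidity; this Rust sketch passes keccak256 in as a closure and models ZERO_VERSIONED_HASH as `[0u8; 32]`, both of which are simplifying assumptions.

```rust
/// Rust sketch of the _computeBlobVersionedHash(version) branching
/// described above. keccak256 is supplied by the caller;
/// ZERO_VERSIONED_HASH is modeled as [0u8; 32].
fn compute_blob_versioned_hash(
    version: u8,
    blob_hashes: &[[u8; 32]],
    keccak256: impl Fn(&[u8]) -> [u8; 32],
) -> [u8; 32] {
    if version < 2 {
        // V0/V1: blobhash(0), or ZERO_VERSIONED_HASH when no blob
        blob_hashes.first().copied().unwrap_or([0u8; 32])
    } else {
        // V2: keccak256(blobhash(0) || ... || blobhash(N-1))
        assert!(!blob_hashes.is_empty(), "V2 requires at least one blob");
        let preimage: Vec<u8> =
            blob_hashes.iter().flatten().copied().collect();
        keccak256(&preimage)
    }
}
```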
Remove the V2 restriction in commitState — with the simplified V2 header
format (aggregated hash at offset 57), the stored batchBlobVersionedHashes
value is sufficient to recommit without a blob, same as V0/V1.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
