fix(hang/consumer, watch/decoder): handle non-sequential groups and AVC description fallback#1413
Force-pushed: ebdb841 → 0f9fa2b (Compare)
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (4)
🚧 Files skipped from review as they are similar to previous changes (3)

Walkthrough: The Consumer now advances its active group and merges buffered ranges using array position and the previous element’s done/latest state instead of assuming contiguous sequence numbers; next() and …

🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🧹 Nitpick comments (1)
js/hang/src/container/consumer.ts (1)
256-266: ⚡ Quick win: Keep `#updateBuffered()` linear here.

`#updateBuffered()` runs on every frame append and pop, so doing `this.#groups.indexOf(group)` inside the loop makes this hot path O(n²). An indexed loop keeps the same adjacency check without the repeated lookup and is easier to follow.

Proposed refactor:

```diff
-		let prev: Group | undefined;
-
-		for (const group of this.#groups) {
+		let previousBufferedGroup: Group | undefined;
+
+		for (let i = 0; i < this.#groups.length; i++) {
+			const group = this.#groups[i];
 			const first = group.frames.at(0);
 			if (!first || group.latest === undefined) continue;

 			const start = Moq.Time.Milli.fromMicro(first.timestamp);
 			const end = Moq.Time.Milli.fromMicro(group.latest);

 			const last = ranges.at(-1);
-			const contiguous = prev?.done && prev === this.#groups[this.#groups.indexOf(group) - 1];
+			const previousGroup = this.#groups[i - 1];
+			const contiguous = previousGroup?.done === true && previousBufferedGroup === previousGroup;
 			if (last && (last.end >= start || contiguous)) {
 				last.end = Moq.Time.Milli.max(last.end, end);
 			} else {
 				ranges.push({ start, end });
 			}
-			prev = group;
+			previousBufferedGroup = group;
 		}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@js/hang/src/container/consumer.ts` around lines 256-266: the adjacency check inside `#updateBuffered()` is making the hot path O(n²) by calling this.#groups.indexOf(group) for every group; rewrite the for-loop over this.#groups to an indexed loop (for (let i = 0; i < this.#groups.length; i++)) so you can compute prev = this.#groups[i-1] (or set prev after iteration) and replace the contiguous calculation that uses this.#groups.indexOf(group) with a direct index comparison (e.g., prev?.done && prev === this.#groups[i-1]); ensure you preserve the existing uses of group, prev, first, start, end, last and contiguous semantics.
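In isolation, the indexed merge the comment suggests can be sketched as a pure function. This is a simplified illustration with hypothetical types; the real consumer derives start/end from frame timestamps and Moq.Time units:

```typescript
interface Range { start: number; end: number }

// Simplified stand-in for the consumer's Group: a group contributes a
// buffered [start, end] span (ms) and a done flag once fully received.
interface Grp { start?: number; end?: number; done: boolean }

// Merge per-group spans into buffered ranges with a single indexed pass.
// A span extends the previous range when they overlap, or when the
// immediately preceding group in array order is done and itself buffered
// media (the same adjacency check, without any indexOf lookup).
function mergeBuffered(groups: Grp[]): Range[] {
	const ranges: Range[] = [];
	let prevBuffered: Grp | undefined;
	for (let i = 0; i < groups.length; i++) {
		const g = groups[i];
		if (g.start === undefined || g.end === undefined) continue;
		const last = ranges[ranges.length - 1];
		const prev = groups[i - 1];
		const contiguous = prev?.done === true && prevBuffered === prev;
		if (last && (last.end >= g.start || contiguous)) {
			last.end = Math.max(last.end, g.end);
		} else {
			ranges.push({ start: g.start, end: g.end });
		}
		prevBuffered = g;
	}
	return ranges;
}
```

Each iteration does O(1) work, so the whole pass is O(n) over the group list.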
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@js/hang/src/container/consumer.ts`:
- Around line 131-135: The fallback that sets this.#active = group.consumer.sequence + 1 can deadlock non-sequential streams; change both places (the block around group.consumer.sequence === this.#active and the similar logic at lines ~221-224) so that when there is no contiguous next group (next is undefined) you promote using the first queued group's real sequence (e.g. this.#groups[0].consumer.sequence) if any queued group exists, otherwise fall back to group.consumer.sequence + 1; ensure the same promotion logic is used so next() and `#checkLatency()` can advance when a later non-sequential epoch is buffered.
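The promotion this comment asks for can be sketched as follows. The names here are hypothetical stand-ins; the real code reads group.consumer.sequence and iterates this.#groups:

```typescript
// Hypothetical minimal shape of a queued group; the real Consumer tracks
// frames, done/latest state, and a Moq track consumer per group.
interface QueuedGroup {
	sequence: number;
}

// After finishing `current`, pick the next active sequence: a contiguous
// successor wins; otherwise promote the first queued group's real
// sequence so epoch-based (non-sequential) streams can still advance;
// with nothing queued, fall back to current.sequence + 1.
function nextActive(current: QueuedGroup, queued: QueuedGroup[]): number {
	const next = queued.find((g) => g.sequence === current.sequence + 1);
	if (next) return next.sequence;
	const first = queued[0];
	return first !== undefined ? first.sequence : current.sequence + 1;
}
```

This mirrors the reviewer's suggestion only; as the maintainer argues later in the thread, eagerly promoting past a gap gives up on a group that might still arrive in time.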
📒 Files selected for processing (1)
js/hang/src/container/consumer.ts
Force-pushed: a59d142 → acd351a (Compare)
Actionable comments posted: 1
🧹 Nitpick comments (2)
js/hang/src/container/consumer.test.ts (2)
490-513: ⚡ Quick win: Add explicit group number assertions for thoroughness.

The test correctly verifies that all 6 frames are delivered, but for consistency with other multi-group tests in this file (e.g., lines 314-316, 338-339), explicitly asserting the group numbers would make the test more thorough and self-documenting.

📊 Suggested assertions to add after line 511:

```diff
 expect(frames[0].timestamp).toBe(0 as Time.Micro);
 expect(frames[2].timestamp).toBe(40_000 as Time.Micro);
 expect(frames[4].timestamp).toBe(80_000 as Time.Micro);
+expect(frames[0].group).toBe(EPOCH_BASE);
+expect(frames[2].group).toBe(EPOCH_BASE + GAP);
+expect(frames[4].group).toBe(EPOCH_BASE + GAP * 2);

 consumer.close();
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@js/hang/src/container/consumer.test.ts` around lines 490-513: the test "Consumer handles epoch-based group sequences without skipping" currently checks timestamps but not explicit group indices; add assertions after draining frames to verify each frame's group number (e.g., frames[0], frames[1] belong to group EPOCH_BASE, frames[2], frames[3] to group EPOCH_BASE + GAP, frames[4], frames[5] to group EPOCH_BASE + GAP * 2) so the Consumer/Track handling of epoch-based groups (created via writeGroupWithLegacyFrames and consumed via Consumer) is explicitly validated.
503-505: ⚡ Quick win: Consider adding a delay before draining frames for consistency.

Other multi-group tests in this file (e.g., lines 310, 410) add a small delay between closing the track and draining frames to ensure groups are fully processed. Adding a similar wait would make this test more consistent with the established pattern and reduce potential flakiness.

⏱️ Suggested wait pattern:

```diff
 track.close();
+await new Promise((resolve) => setTimeout(resolve, 100));
+
 const frames = await drainFrames(consumer, 500);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@js/hang/src/container/consumer.test.ts` around lines 503 - 505, The test closes the track with track.close() and immediately calls drainFrames(consumer, 500), which can be flaky; insert a short await delay (e.g., 10–50ms) after track.close() and before calling drainFrames to match other multi-group tests and ensure groups finish processing; apply this change in the same test that references track.close() and drainFrames(consumer, 500) so the consumer has time to settle before draining.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@js/watch/src/video/decoder.ts`:
- Around line 365-366: Replace the em dash in the comment that starts "MSF/CMSF
catalogs don't carry a separate description field — the codec description (avcC,
hvcC, etc.) lives inside the init segment's moov box." by splitting into two
sentences or using a comma; e.g., end the first clause with a period and start a
new sentence ("MSF/CMSF catalogs don't carry a separate description field. The
codec description...") so the comment conforms to the no-em-dash rule and
preserves the original meaning.
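The fallback the fix describes can be sketched as a small helper. Field names here are hypothetical; the real decoder parses the avcC/hvcC/esds box out of the init segment's moov:

```typescript
// Hypothetical shapes: a catalog entry that may carry a codec description,
// and a parsed CMAF init segment that may expose one extracted from moov.
interface CatalogConfig {
	codec: string;
	description?: Uint8Array; // e.g. avcC contents, when the catalog has it
}

interface ParsedInit {
	description?: Uint8Array; // avcC/hvcC/esds pulled from the moov box
}

// Prefer the catalog's description, falling back to the init segment's.
// For AVC with length-prefixed NALUs, WebCodecs needs this description
// in the VideoDecoderConfig or it rejects every keyframe.
function decoderDescription(
	catalog: CatalogConfig,
	init?: ParsedInit,
): Uint8Array | undefined {
	return catalog.description ?? init?.description;
}
```

The result would be set as `description` on the config passed to `VideoDecoder.configure()` (or `AudioDecoder.configure()` for esds).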
📒 Files selected for processing (4)
js/hang/src/container/consumer.test.tsjs/hang/src/container/consumer.tsjs/watch/src/audio/decoder.tsjs/watch/src/video/decoder.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- js/hang/src/container/consumer.ts
Force-pushed: eea1358 → 8028f70 (Compare)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@js/hang/src/container/consumer.ts`:
- Around line 275-277: The current merge condition treats any prev?.done as
contiguous even when that group contributed no buffered media; change the
contiguous guard so it only treats a done group as contiguous if it actually
contains buffered data (e.g. replace contiguous = prev?.done with contiguous =
prev?.done && (prev.end > prev.start) or, if a helper exists, prev?.done &&
prev.hasBufferedData()), then keep the existing if check using last.end >= start
|| contiguous so empty/fully-drained done groups no longer bridge gaps in
this.#groups.
- Around line 202-210: The promotion of `#active` must happen immediately when the first frame for a later group is buffered so playback isn't stalled; inside `#runGroup()`, when you append the very first frame to a group's buffer (detect buffer length going from 0 to 1), check if this group's consumer.sequence is less than the current `#active` (or if `#active` is undefined or greater than the group's sequence) and if so set this.#active to that sequence and call this.#notify() to wake any pending next() calls; keep the existing fallback in the other branch but move this eager promotion+notify into the frame delivery path in `#runGroup()` so `#checkLatency()` no-op cases don't introduce a full-group stall.
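The stricter contiguity guard from the first comment can be sketched as follows. The shape is illustrative; the real code derives start/end from buffered frame timestamps:

```typescript
// Illustrative buffered-group shape: done marks a fully received group,
// start/end bound its buffered media in milliseconds.
interface BufferedGroup {
	done: boolean;
	start: number;
	end: number;
}

// A finished group may bridge adjacent buffered ranges only if it actually
// contributed media; an empty or fully drained done group must not hide
// a real gap between ranges.
function bridgesGap(prev: BufferedGroup | undefined): boolean {
	return prev !== undefined && prev.done && prev.end > prev.start;
}
```

The existing merge condition (`last.end >= start || contiguous`) would then use this predicate for its `contiguous` term.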
📒 Files selected for processing (4)
js/hang/src/container/consumer.test.tsjs/hang/src/container/consumer.tsjs/watch/src/audio/decoder.tsjs/watch/src/video/decoder.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- js/hang/src/container/consumer.test.ts
- js/watch/src/audio/decoder.ts
…VC description fallback

Two player fixes:

1. Consumer: handle non-sequential group sequences

The consumer assumed group sequences increment by 1 (0, 1, 2, ...) but CMSF/EML uses epoch-based sequences with large gaps. After consuming a group, #active += 1 would set #active to a value far below the next group's sequence, causing next() to block until #checkLatency fired a skip. Every group went through the skip path, causing ~1s audio underflows.

Fix: advance #active to the next group's actual sequence in the buffer.

2. Decoder: fall back to init segment description for AVC/HEVC

When the catalog doesn't provide a description field (MSF/CMSF path), fall back to the codec description (avcC, hvcC, esds) extracted from the CMAF init segment's moov box. Without this, WebCodecs rejects AVC frames with "A key frame is required... fill out the description field."
Force-pushed: 8028f70 → 0530807 (Compare)
No sorry, this won't work. MSF/CMSF doesn't require using the unix timestamp as the group ID. There's a recommendation (SHOULD) in the text regarding resumption, but it's not required.

The problem with using non-sequential IDs is that we have no idea if a gap is present. Like if we receive group 6 and then group 8... maybe we should wait for group 7? Will group 7 ever exist, or did the publisher arbitrarily skip it? The current jitter buffer will only stop waiting for group 7 if we can prove that the entire group 7 would fall outside the max latency configured with the jitter buffer. And really the only way that's possible to know is if group 8+ is at least max_latency duration.

The next += 1 is actually just an optimization. We know definitively that there's no gap, so we can flush immediately instead of waiting for the jitter buffer to be full. But any gaps will require waiting for the max latency.

I didn't look at your fix completely, but advancing next to what we have in memory would mean almost immediately skipping group 7, even if it could have arrived in time. It's a downgrade, not a fix. Sequence numbers should be += 1 just like HLS/DASH. I strongly disagree with the IETF drafts that allow, and even encourage, sequence number gaps.
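The waiting rule described above can be reduced to a tiny predicate. This is illustrative only; the real jitter buffer works from per-frame timestamps and group durations:

```typescript
// We may stop waiting for a missing group (say group 7, with 6 and 8+
// buffered) only once the media buffered after the gap already spans the
// configured max latency: even if group 7 arrived right now, it could no
// longer be played back within the latency budget.
function canSkipGap(bufferedAfterGapMs: number, maxLatencyMs: number): boolean {
	return bufferedAfterGapMs >= maxLatencyMs;
}
```

With contiguous += 1 sequences this check is never needed for a known-complete successor, which is the optimization the comment describes: no gap is possible, so the buffer can flush immediately.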
Problems

1. Consumer blocks on non-sequential group sequences

The consumer assumed group sequences increment by 1 (0, 1, 2, ...) but CMSF/EML uses epoch-based sequences with large gaps (e.g. 85386781784064, 85386781832192). After consuming a group, `#active += 1` would set `#active` to a value far below the next group's sequence, causing `next()` to block until `#checkLatency` fired a skip. Every group transition went through the skip path, causing ~1s audio underflows and choppy playback.

2. WebCodecs rejects AVC frames without description

When the MSF/CMSF catalog doesn't carry a separate `description` field, the video and audio decoders passed `undefined` to WebCodecs. For AVC (length-prefixed NALUs), WebCodecs requires the avcC box as the `description` in the `VideoDecoderConfig`. Without it, every keyframe is rejected.

Fixes

Consumer: Advance `#active` to the next group's actual sequence in the buffer rather than incrementing by 1. Also fixes the contiguity check in `#updateBuffered`.

Decoder: Fall back to `init.description` (extracted from the CMAF init segment's moov box) when the catalog doesn't provide a description field.

Testing