
Conversation

@sebastianpiq
Contributor

This PR introduces HLS import functionality. The import module now supports a manifest-based import type for HLS streams (with potential DASH support in the future).

Key Changes:

HLS Import Implementation

  • Added manifest type support to the import module for HLS streams
  • Added media type support for the existing imports (aac, fmp4, and annex-b)
  • Created a generic interface for manifest imports, mirroring the existing generic media interface
  • Implemented a basic HLS importer that parses manifest data and pushes it using the existing fmp4 import implementation
  • The importer continuously pulls segments from the manifest in a loop for a specified duration

CLI Support

Added CLI command for HLS import: just ingest-hls

e.g. nix develop -c just ingest-hls http://example.com/manifest.m3u8 example

The HLS importer provides a first basic implementation that parses manifest data and uses the existing fmp4 import. The manifest polling loop uses the segment target duration to determine the refresh interval, though this timing mechanism could be further refined.
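The refresh-interval idea can be sketched roughly like this. This is a minimal, hypothetical sketch: the `refresh_delay` helper, the half-target fallback, and the minimum delay are illustrative choices, not the exact code in this PR.

```rust
use std::time::Duration;

// Hypothetical sketch of the polling-interval logic: `target_duration`
// comes from the playlist's EXT-X-TARGETDURATION tag.
fn refresh_delay(target_duration: Duration, wrote_segments: bool) -> Duration {
    const MIN_DELAY: Duration = Duration::from_millis(500);
    if wrote_segments {
        // New segments arrived: wait roughly one target duration.
        target_duration.max(MIN_DELAY)
    } else {
        // Nothing new yet: poll again sooner, but not too aggressively.
        (target_duration / 2).max(MIN_DELAY)
    }
}

fn main() {
    let td = Duration::from_secs(6);
    assert_eq!(refresh_delay(td, true), Duration::from_secs(6));
    assert_eq!(refresh_delay(td, false), Duration::from_secs(3));
}
```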

@coderabbitai
Contributor

coderabbitai bot commented Dec 10, 2025

Walkthrough

The PR refactors the publish path to a new Publish workflow (formats: Avc3, Fmp4, Hls) and removes the prior Import-based AsyncRead ingest. It adds an HLS ingest module that parses master/media playlists, selects variants, fetches init/segment data, and feeds per-variant FMP4 importers into a shared broadcast. Decoder APIs were changed to optional timestamp handling and a DecoderFormat enum; error types were extended for HTTP and URL errors; multiple Cargo dependencies and CLI signatures were updated.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title 'feat: hls import' directly and concisely describes the main change: adding HLS import functionality to the project.
Description check ✅ Passed The description is well-detailed and related to the changeset, explaining HLS import implementation, manifest support, CLI changes, and the key technical approach.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (6)
rs/justfile (1)

110-112: Consider adding variadic *args passthrough to ingest-hls

The wiring to hang publish looks consistent with other recipes, but unlike pub / pub-h264 / clock, this recipe doesn’t allow forwarding extra CLI flags. That makes it harder to experiment with additional publish options (e.g. auth, logging tweaks) without editing the Justfile.

You could keep behavior identical while allowing future flexibility:

-# Ingest a live HLS media playlist and publish it via hang (full ladder).
-ingest-hls url name="demo" relay="http://localhost:4443/anon":
-	cargo run --bin hang -- publish --url "{{relay}}" --name "{{name}}" --format hls --hls-url "{{url}}"
+# Ingest a live HLS media playlist and publish it via hang (full ladder).
+ingest-hls url name="demo" relay="http://localhost:4443/anon" *args:
+	cargo run --bin hang -- publish \
+		--url "{{relay}}" \
+		--name "{{name}}" \
+		--format hls \
+		--hls-url "{{url}}" \
+		{{args}}
rs/hang/src/error.rs (1)

69-79: New HLS/HTTP/URL error variants integrate cleanly

The additional Hls, Http, and Url variants plus From<reqwest::Error> follow the existing error design (Arc-wrapped inner errors, Clone-able enum) and should compose well with callers.

If you later need richer diagnostics for HLS failures, consider introducing a small HlsError enum/newtype instead of a bare String, but that’s purely optional at this stage.

Also applies to: 95-99
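
For illustration, such an error newtype could look like the following sketch. The variant names here are hypothetical, chosen only to show the shape; the real set would mirror whatever failure modes the HLS path actually hits.

```rust
// Hypothetical richer HLS error type, following the Clone-able enum
// pattern the existing error design already uses.
#[derive(Debug, Clone)]
enum HlsError {
    MissingPlaylist,
    NoUsableVariants,
    UnexpectedTag(String),
}

impl std::fmt::Display for HlsError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            HlsError::MissingPlaylist => write!(f, "playlist could not be fetched"),
            HlsError::NoUsableVariants => write!(f, "no usable variants in master playlist"),
            HlsError::UnexpectedTag(tag) => write!(f, "unexpected tag: {tag}"),
        }
    }
}

fn main() {
    let err = HlsError::UnexpectedTag("EXT-X-KEY".into());
    assert_eq!(err.to_string(), "unexpected tag: EXT-X-KEY");
}
```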

rs/hang-cli/src/main.rs (1)

67-69: Have clap enforce --hls-url when --format hls

The new hls_url option is correctly plumbed into client, and the runtime check there ensures it’s present for ImportType::Hls. You can tighten the UX by having clap enforce this at parse time:

-        /// URL of an HLS playlist to ingest (for --format hls).
-        #[arg(long)]
-        hls_url: Option<Url>,
+        /// URL of an HLS playlist to ingest (for --format hls).
+        #[arg(long, required_if_eq("format", "hls"))]
+        hls_url: Option<Url>,

Please double-check the exact required_if_eq attribute spelling/usage for your clap version before applying.

Also applies to: 90-91

rs/hang/src/import/media/fmp4.rs (1)

96-103: Consider accepting Bytes directly to avoid an extra allocation.

The HLS caller already has Bytes from the HTTP response. Using Bytes::copy_from_slice here creates an unnecessary copy. Consider adding an overload or changing the signature:

-	pub fn parse(&mut self, bytes: &[u8]) -> anyhow::Result<()> {
-		use std::io::Cursor;
-		let mut cursor = Cursor::new(Bytes::copy_from_slice(bytes));
-		self.decode(&mut cursor)
+	pub fn parse(&mut self, bytes: impl Into<Bytes>) -> anyhow::Result<()> {
+		use std::io::Cursor;
+		let mut cursor = Cursor::new(bytes.into());
+		self.decode(&mut cursor)
	}

This allows callers with Bytes to pass ownership directly, while &[u8] still works via Into<Bytes>.

rs/hang/src/import/manifest/hls.rs (2)

130-144: Consider adding a maximum iteration limit or health check.

The service() loop runs indefinitely until externally cancelled. If the HLS source ends (e.g., VOD playlist with EXT-X-ENDLIST), this will keep polling forever. Consider checking for playlist end conditions.

 	pub async fn service(&mut self) -> Result<()> {
 		loop {
 			let outcome = self.step().await?;
+			
+			// TODO: Check for EXT-X-ENDLIST and break if playlist is complete
+			
 			let delay = self.refresh_delay(outcome.target_duration, outcome.wrote_segments);

 			debug!(
 				wrote = outcome.wrote_segments,
 				delay = ?delay,
 				"HLS ingest step complete"
 			);

 			tokio::time::sleep(delay).await;
 		}
 	}

289-318: Consider adding retry logic for transient HTTP failures.

HLS ingest over the network is prone to transient failures. A single failed segment fetch currently propagates the error and terminates the entire ingest. Consider adding retry with backoff for resilience.
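
A minimal sketch of such a retry helper, synchronous here for brevity; the real ingest would replace `std::thread::sleep` with `tokio::time::sleep` and wrap the async segment fetch. The attempt count and base delay are illustrative.

```rust
use std::time::Duration;

// Hypothetical retry helper with exponential backoff. Returns the first
// success, or the last error once all attempts are exhausted.
fn retry_with_backoff<T, E>(
    mut attempts: u32,
    mut delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) if attempts <= 1 => return Err(err),
            Err(_) => {
                attempts -= 1;
                std::thread::sleep(delay);
                delay = delay.saturating_mul(2); // exponential backoff
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<u32, &str> = retry_with_backoff(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(42) }
    });
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
}
```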

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0788fe9 and 8891ee3.

⛔ Files ignored due to path filters (1)
  • rs/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
  • justfile (1 hunks)
  • rs/hang-cli/Cargo.toml (1 hunks)
  • rs/hang-cli/src/client.rs (3 hunks)
  • rs/hang-cli/src/import.rs (2 hunks)
  • rs/hang-cli/src/main.rs (2 hunks)
  • rs/hang-cli/src/server.rs (2 hunks)
  • rs/hang/Cargo.toml (1 hunks)
  • rs/hang/src/error.rs (2 hunks)
  • rs/hang/src/import/manifest/hls.rs (1 hunks)
  • rs/hang/src/import/manifest/mod.rs (1 hunks)
  • rs/hang/src/import/media/fmp4.rs (1 hunks)
  • rs/hang/src/import/media/mod.rs (3 hunks)
  • rs/hang/src/import/mod.rs (1 hunks)
  • rs/hang/src/lib.rs (1 hunks)
  • rs/justfile (1 hunks)
  • rs/libmoq/src/state.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

In Rust crates, tests should be integrated within source files using inline test modules

Files:

  • rs/hang-cli/src/server.rs
  • rs/hang-cli/src/main.rs
  • rs/hang/src/error.rs
  • rs/hang/src/import/manifest/mod.rs
  • rs/hang/src/import/media/fmp4.rs
  • rs/hang/src/import/mod.rs
  • rs/hang-cli/src/client.rs
  • rs/hang/src/lib.rs
  • rs/libmoq/src/state.rs
  • rs/hang/src/import/manifest/hls.rs
  • rs/hang-cli/src/import.rs
  • rs/hang/src/import/media/mod.rs
rs/**/Cargo.toml

📄 CodeRabbit inference engine (CLAUDE.md)

For Rust development, use the workspace configuration in rs/Cargo.toml

Files:

  • rs/hang-cli/Cargo.toml
  • rs/hang/Cargo.toml
🧬 Code graph analysis (5)
rs/hang-cli/src/server.rs (2)
rs/hang-cli/src/import.rs (3)
  • new (30-33)
  • new (54-64)
  • read_from (44-46)
rs/hang/src/import/media/mod.rs (1)
  • new (36-49)
rs/hang-cli/src/main.rs (1)
rs/hang-cli/src/client.rs (1)
  • client (10-46)
rs/hang/src/import/manifest/mod.rs (3)
rs/hang-cli/src/import.rs (4)
  • new (30-33)
  • new (54-64)
  • init (66-68)
  • service (70-72)
rs/hang/src/import/manifest/hls.rs (5)
  • new (32-34)
  • new (92-99)
  • new (104-115)
  • init (120-128)
  • service (131-144)
rs/hang/src/import/media/mod.rs (1)
  • new (36-49)
rs/libmoq/src/state.rs (1)
rs/hang/src/import/media/mod.rs (1)
  • new (36-49)
rs/hang-cli/src/import.rs (3)
rs/hang/src/import/manifest/hls.rs (5)
  • new (32-34)
  • new (92-99)
  • new (104-115)
  • init (120-128)
  • service (131-144)
rs/hang/src/import/manifest/mod.rs (3)
  • new (18-28)
  • init (30-34)
  • service (36-40)
rs/hang/src/import/media/mod.rs (1)
  • new (36-49)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Check
🔇 Additional comments (19)
rs/hang/src/lib.rs (1)

14-15: Updated import docs match new import modules

The new fMP4 and HLS import bullets accurately reflect the split into media/manifest modules and the new hls path. No changes needed.

rs/hang/src/import/mod.rs (1)

1-5: Import module re-export reorganization looks good

Re-exporting manifest and media at the top level keeps the public hang::import::* surface intact while clarifying internal structure. No issues from this change alone.

rs/hang-cli/src/server.rs (1)

13-14: Server-side switch to ImportMedia keeps ingest flow intact

Using crate::import::ImportMedia here is consistent with the client path and the new import API: construct from ImportType, init_from(input).await?, then drive ingestion via read_from(input) inside select!. This preserves the previous behavior under the new naming.

Also applies to: 37-40, 46-47

rs/hang-cli/Cargo.toml (1)

22-22: New CLI dependencies align with HLS/manifest ingest

Adding bytes, m3u8-rs, and reqwest here fits the new HLS/HTTP ingest path. Just make sure versions/features stay consistent with the workspace (and with the hang crate) so you don’t accidentally pull in multiple major versions or conflicting TLS backends.

Also applies to: 26-26, 28-28

rs/libmoq/src/state.rs (1)

27-28: Track storage migration to ImportMedia is consistent

Using NonZeroSlab<hang::import::ImportMedia> for tracks and constructing via ImportMedia::new(broadcast.clone(), format) cleanly follows the new import API. The existing UnknownFormat and ShortDecode error paths are preserved, so external behavior should remain the same.

Also applies to: 156-167

rs/hang/Cargo.toml (1)

31-33: Core crate dependency additions match new HLS/HTTP functionality

Bringing url, m3u8-rs, and reqwest into the main hang crate lines up with the new manifest/HLS import code and the added Url/Http error variants. Ensure these versions and feature flags stay in sync with the CLI crate and any workspace-level definitions to avoid duplication or TLS backend mismatches.

rs/hang/src/import/media/fmp4.rs (2)

74-89: LGTM!

The with_catalog constructor correctly initializes all fields and enables sharing a catalog across multiple importers, which is essential for the HLS multi-rendition use case.


91-94: LGTM!

Simple accessor that returns a clone of the catalog producer for sharing.

rs/hang-cli/src/client.rs (2)

30-45: LGTM!

Clean branching logic that properly validates the hls_url requirement for HLS format and delegates to the appropriate import path.


48-61: LGTM!

Well-structured helper that handles the common event loop pattern with graceful shutdown via CTRL-C.

rs/hang/src/import/manifest/mod.rs (2)

8-10: LGTM!

The Protocol enum provides a clean extension point for future manifest formats (e.g., DASH as mentioned in PR objectives).


17-41: LGTM!

Clean factory pattern with Option return for unsupported formats and proper delegation to the inner protocol implementation.

rs/hang/src/import/media/mod.rs (2)

6-12: LGTM!

Clean module organization with submodules for each codec format and glob re-exports for convenience.


106-126: Verify partial buffer handling in the decode helpers.

The mem::take pattern swaps out the buffer, but if decode doesn't fully consume it, the remaining bytes are put back. However, the current decode method on line 79 has a TODO and ensure! that requires the buffer to be empty. This means partial data will cause an error rather than being accumulated.

This is fine for fMP4 where atoms are self-contained, but worth verifying this matches the expected behavior for streaming inputs.
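
For reference, the put-back behavior described above can be sketched in isolation. The 4-byte "atom" and the `Importer` type below are toy stand-ins, not the real fMP4 code; the point is only the `mem::take` swap plus restoring the unconsumed tail.

```rust
// Toy decoder: pretend each "atom" is exactly 4 bytes and decode only
// whole atoms, returning (atoms decoded, bytes consumed).
fn decode_some(buf: &[u8]) -> (usize, usize) {
    let consumed = (buf.len() / 4) * 4;
    (consumed / 4, consumed)
}

struct Importer {
    buffer: Vec<u8>,
    atoms: usize,
}

impl Importer {
    fn push(&mut self, bytes: &[u8]) {
        self.buffer.extend_from_slice(bytes);
        // Swap the buffer out to satisfy the borrow checker...
        let buffer = std::mem::take(&mut self.buffer);
        let (atoms, consumed) = decode_some(&buffer);
        self.atoms += atoms;
        // ...then put back any partial remainder instead of erroring.
        self.buffer = buffer[consumed..].to_vec();
    }
}

fn main() {
    let mut imp = Importer { buffer: Vec::new(), atoms: 0 };
    imp.push(&[0u8; 6]); // one full atom, 2 bytes left over
    assert_eq!(imp.atoms, 1);
    assert_eq!(imp.buffer.len(), 2);
    imp.push(&[0u8; 2]); // leftover completes a second atom
    assert_eq!(imp.atoms, 2);
    assert!(imp.buffer.is_empty());
}
```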

rs/hang-cli/src/import.rs (2)

8-23: LGTM!

The ImportType enum cleanly separates the CLI-level format selection from the underlying implementation details. The as_str() method provides the mapping to internal format identifiers.


53-64: LGTM!

Good HTTP client setup with a proper user-agent header for identification, and appropriate error handling with context.

rs/hang/src/import/manifest/hls.rs (3)

24-35: LGTM!

Clean configuration struct with a simple constructor.


461-533: LGTM!

The variant selection logic is well-thought-out: it filters to known codec families (currently H.264), deduplicates by resolution keeping the lowest bandwidth variant, and has clear comments explaining the rationale.
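
The dedupe-by-resolution step can be illustrated in isolation. The `Variant` struct here is a stand-in for the playlist variant type, not the actual code under review:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Variant {
    resolution: (u32, u32),
    bandwidth: u64,
}

// For each resolution, keep only the lowest-bandwidth variant.
fn dedupe_by_resolution(variants: Vec<Variant>) -> Vec<Variant> {
    let mut best: HashMap<(u32, u32), Variant> = HashMap::new();
    for v in variants {
        match best.get(&v.resolution) {
            Some(cur) if cur.bandwidth <= v.bandwidth => {}
            _ => {
                best.insert(v.resolution, v);
            }
        }
    }
    let mut out: Vec<Variant> = best.into_values().collect();
    out.sort_by_key(|v| v.bandwidth); // deterministic order for callers
    out
}

fn main() {
    let variants = vec![
        Variant { resolution: (1920, 1080), bandwidth: 6_000_000 },
        Variant { resolution: (1920, 1080), bandwidth: 4_500_000 },
        Variant { resolution: (1280, 720), bandwidth: 2_800_000 },
    ];
    let kept = dedupe_by_resolution(variants);
    assert_eq!(kept.len(), 2);
    assert_eq!(kept[0].bandwidth, 2_800_000);
    assert_eq!(kept[1].bandwidth, 4_500_000);
}
```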


543-565: LGTM!

Good to see inline unit tests following the Rust convention. The tests cover basic construction and initial state verification.

Collaborator

@kixelated kixelated left a comment


Initial review, haven't looked closely at import/manifest/hls.rs yet.

Lemme know if you want me to edit the PR for any nit pick Rust stuff. Looks pretty good in general.

Comment on lines 99 to 103
pub fn parse(&mut self, bytes: &[u8]) -> anyhow::Result<()> {
use std::io::Cursor;
let mut cursor = Cursor::new(Bytes::copy_from_slice(bytes));
self.decode(&mut cursor)
}
Collaborator


And yeah, I read more: fetch_bytes returns Bytes, so this parse function is indeed making an extra allocation/copy. Just call decode on the returned Bytes and it'll work like magic.

@kixelated
Collaborator

And yeah, I really love the reuse of the fMP4 import. It feels almost like magic.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
justfile (1)

74-78: Quote arguments to prevent shell breakage (already flagged in previous review).

The delegation on line 77 passes arguments without quotes, which will cause failures when the HLS URL contains shell-special characters like &, ?, or spaces. This issue has already been flagged in a previous review with a suggested fix.

rs/hang-cli/src/client.rs (1)

34-34: sd_notify is still called before initialization completes.

Per the previous review feedback, moving sd_notify after the init() / init_from() calls would allow systemctl start to block until the media is fully initialized. Currently, readiness is signaled before the HLS manifest or media stream is parsed.

Consider moving the notification into the branched logic after initialization succeeds:

-	// Notify systemd that we're ready.
-	let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
-
 	if format == ImportType::Hls {
 		let hls_url = hls_url.ok_or_else(|| anyhow::anyhow!("--hls-url is required when --format hls is specified"))?;
 
 		let mut manifest = ImportManifest::new(broadcast.producer.into(), hls_url)?;
 		manifest.init().await.context("failed to initialize manifest import")?;
+		let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
 
 		run_loop(manifest.service(), session).await
 	} else {
 		let mut media = ImportMedia::new(broadcast.producer.into(), format);
 		media
 			.init_from(input)
 			.await
 			.context("failed to initialize from media stream")?;
+		let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
 
 		run_loop(media.read_from(input), session).await
 	}
rs/hang/src/import/manifest/hls.rs (2)

294-305: Consider the suggested optimization for segment iteration.

The current loop iterates all segments checking sequence numbers. Per previous feedback, using .skip() could be more efficient for long playlists:

let skip = track.next_sequence.unwrap_or(0).saturating_sub(playlist.media_sequence) as usize;
for segment in playlist.segments.iter().skip(skip) {
    self.push_segment(kind, track, segment, sequence).await?;
    consumed += 1;
    sequence += 1;
}

This avoids redundant iteration over already-processed segments.


375-380: HTTP requests lack timeout protection.

As previously noted, fetch_bytes can block indefinitely if the remote server becomes unresponsive. Add a request timeout to prevent stalling the entire ingest:

 async fn fetch_bytes(&self, url: &Url) -> Result<Bytes> {
-    let response = self.client.get(url.clone()).send().await?;
+    let response = self.client
+        .get(url.clone())
+        .timeout(std::time::Duration::from_secs(30))
+        .send()
+        .await?;
     let response = response.error_for_status()?;
     let bytes = response.bytes().await?;
     Ok(bytes)
 }

Alternatively, configure a default timeout when constructing the Client.

🧹 Nitpick comments (2)
rs/justfile (1)

110-112: LGTM with optional consistency improvement.

The recipe correctly quotes all arguments and follows the expected pattern for invoking the hang binary with HLS-specific flags.

Optional: Consider adding a pre-build step for consistency.

The existing pub and pub-h264 recipes include a pre-build step (cargo build --bin hang) to avoid compilation delays during media ingestion. While less critical for polling-based HLS ingestion than for stdin-based streaming, adding this step would improve consistency and user experience:

 # Ingest a live HLS media playlist and publish it via hang (full ladder).
 pub-hls url name="demo" relay="http://localhost:4443/anon":
+	cargo build --bin hang
 	cargo run --bin hang -- publish --url "{{relay}}" --name "{{name}}" --format hls --hls-url "{{url}}"
rs/hang/src/import/manifest/hls.rs (1)

1-6: Documentation mentions abstraction that doesn't exist.

The module doc states it's "independent of any particular HTTP client" and mentions a Fetcher trait, but the implementation directly uses reqwest::Client. Consider updating the documentation to reflect the actual design, or remove the reference to Fetcher.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8891ee3 and a9b87d1.

📒 Files selected for processing (4)
  • justfile (1 hunks)
  • rs/hang-cli/src/client.rs (3 hunks)
  • rs/hang/src/import/manifest/hls.rs (1 hunks)
  • rs/justfile (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

In Rust crates, tests should be integrated within source files using inline test modules

Files:

  • rs/hang-cli/src/client.rs
  • rs/hang/src/import/manifest/hls.rs
🔇 Additional comments (11)
rs/hang-cli/src/client.rs (2)

36-51: LGTM!

The branching logic cleanly separates HLS manifest-based import from media-based import. Error handling with .context() provides clear failure messages, and the symmetric structure is easy to follow.


54-67: LGTM!

The run_loop helper cleanly abstracts the multiplexing of the import task, session lifecycle, and graceful shutdown handling. The 100ms delay after close() allows pending writes to flush before termination.

rs/hang/src/import/manifest/hls.rs (9)

23-34: LGTM!

Simple, focused configuration struct with an appropriate constructor.


44-95: LGTM!

The struct hierarchy cleanly separates configuration, state tracking, and runtime data. TrackState properly encapsulates per-track state including sequence tracking and initialization status.


97-122: LGTM!

Constructor properly initializes empty state, and init() provides appropriate logging for both success and warning cases.


124-138: LGTM!

The continuous ingest loop with adaptive delay based on target duration is appropriate for HLS polling.


140-214: LGTM!

The std::mem::take pattern to work around borrow checker limitations is a reasonable approach. Both methods consistently handle video variants and the optional audio track.


216-226: LGTM!

Adaptive delay logic appropriately polls more frequently when no segments are found, while respecting a minimum interval to avoid overwhelming the server.


235-281: LGTM!

The dual-path handling for master vs. media playlists with appropriate fallback is well structured. Logging provides good visibility into the selected renditions.


314-369: LGTM!

Init segment and media segment handling is well implemented with proper state tracking and error handling.


530-552: LGTM!

Tests are appropriately placed in an inline test module per the coding guidelines. The tests validate basic construction and initial state.

Comment on lines +456 to +462
fn codec_family(codec: &str) -> Option<&'static str> {
if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
Some("h264")
} else {
None
}
}
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find FAMILY_PREFERENCE definition
rg "FAMILY_PREFERENCE" rs/hang/src/import/manifest/hls.rs -A 5 -B 2

Repository: moq-dev/moq

Length of output: 423


🏁 Script executed:

# Check select_variants implementation for fallback behavior
rg "fn select_variants" rs/hang/src/import/manifest/hls.rs -A 20

Repository: moq-dev/moq

Length of output: 844


🏁 Script executed:

# Check if there's any error handling for empty codec results
rg "select_variants|codec_family" rs/hang/src/import/manifest/hls.rs -B 3 -A 3

Repository: moq-dev/moq

Length of output: 1085


🌐 Web query:

fMP4 container HEVC AV1 VP9 codec support

💡 Result:

Short answer: fragmented MP4 (fMP4 / ISO‑BMFF/CMAF) can carry HEVC, AV1 and VP9 — the container supports the codec sample entries (hvc1/hev1 for HEVC, av01 for AV1, vp09 for VP9) and commercial toolchains (FFmpeg, Bento4/MP4 libraries, AWS MediaConvert/CMAF workflows, HLS fMP4) support muxing them. Client playback/decoder availability is a separate compatibility issue (platform/browser/OS must have a decoder). [1][2][3][4]

Sources

  • AWS MediaConvert — CMAF / MP4 outputs list AV1, HEVC, VP9 support. [1]
  • AndroidX / MP4 muxing docs — sample entry names: hev1/hvc1, av01, vp09 (vpcC). [2]
  • HLS / fMP4 notes — fMP4 is used for HEVC and LL‑HLS workflows. [3]
  • FFmpeg docs — encoder/decoder and muxer support for AV1, HEVC, VP9 and MP4 tools. [4]

HLS streams with only HEVC, VP9, or AV1 variants will fail to initialize.

The codec_family function only recognizes H.264 (avc1/avc3). During variant selection, streams with HEVC (hvc1/hev1), VP9, or AV1 codecs are filtered out entirely by the filter_map at line 460–461. If a master playlist contains no H.264 variants, select_variants returns an empty list and initialization fails with "no usable variants found in master playlist".

Since fMP4 containers support HEVC, AV1, and VP9, extend codec support:

 fn codec_family(codec: &str) -> Option<&'static str> {
     if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
         Some("h264")
+    } else if codec.starts_with("hvc1.") || codec.starts_with("hev1.") {
+        Some("hevc")
+    } else if codec.starts_with("vp09.") {
+        Some("vp9")
+    } else if codec.starts_with("av01.") {
+        Some("av1")
     } else {
         None
     }
 }

Also update FAMILY_PREFERENCE (currently ["h264"]) to include fallback families:

-const FAMILY_PREFERENCE: &[&str] = &["h264"];
+const FAMILY_PREFERENCE: &[&str] = &["h264", "hevc", "vp9", "av1"];

Committable suggestion skipped: line range outside the PR's diff.

@kixelated
Collaborator

I just caused a bunch of merge conflicts, so lemme resolve them.

@kixelated
Collaborator

I caused a bunch of merge conflicts so I fixed them and incorporated a lot of Rust nit-picks: qualabs#5

@kixelated
Collaborator

(I would push straight to the PR branch but I don't have permission, it's an organization setting)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
rs/hang/src/import/hls.rs (2)

304-315: Performance optimization available.

This loop iterates through all segments even when skipping already-seen ones. The optimization suggested in a prior review using .skip() would be more efficient for long-running streams with large playlists.


449-455: Limited codec support: only H.264 streams are accepted.

As noted in a prior review, codec_family only recognizes avc1/avc3. Streams with only HEVC, VP9, or AV1 variants will fail with "no usable variants found in master playlist." Since fMP4 supports these codecs, consider extending support.

🧹 Nitpick comments (8)
rs/hang/src/import/decoder.rs (1)

82-83: Consider making AAC stream decoding a compile-time constraint.

The TODO comment correctly identifies that the runtime bail! for AAC stream decoding could be made more type-safe. One option would be to split Decoder into separate types for stream-capable vs frame-only decoders, or use a trait-based approach. This can be deferred to a follow-up.

Would you like me to open an issue to track making this more type-safe?
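
One possible shape for that split, as a hedged sketch: the trait and type names below are hypothetical, and the bodies are stubs, but it shows how the capability becomes a compile-time constraint rather than a runtime bail!.

```rust
// Every decoder can handle discrete frames...
trait FrameDecoder {
    fn decode_frame(&mut self, frame: &[u8]) -> Result<usize, String>;
}

// ...but only stream-capable decoders get `decode_stream`.
trait StreamDecoder: FrameDecoder {
    fn decode_stream(&mut self, bytes: &[u8]) -> Result<usize, String>;
}

struct Aac { frames: usize }
struct Fmp4 { bytes: usize }

impl FrameDecoder for Aac {
    fn decode_frame(&mut self, frame: &[u8]) -> Result<usize, String> {
        self.frames += 1;
        Ok(frame.len())
    }
}
// `Aac` deliberately does NOT implement `StreamDecoder`, so calling
// decode_stream on it is a compile error instead of a runtime bail.

impl FrameDecoder for Fmp4 {
    fn decode_frame(&mut self, frame: &[u8]) -> Result<usize, String> {
        self.bytes += frame.len();
        Ok(frame.len())
    }
}

impl StreamDecoder for Fmp4 {
    fn decode_stream(&mut self, bytes: &[u8]) -> Result<usize, String> {
        self.bytes += bytes.len();
        Ok(bytes.len())
    }
}

fn main() {
    let mut fmp4 = Fmp4 { bytes: 0 };
    assert_eq!(fmp4.decode_stream(&[0u8; 8]), Ok(8));
    let mut aac = Aac { frames: 0 };
    assert_eq!(aac.decode_frame(&[0u8; 4]), Ok(4));
    // aac.decode_stream(..) would not compile: Aac is not a StreamDecoder.
}
```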

rs/hang-cli/src/publish.rs (2)

38-48: Consider reducing duplication in Avc3/Fmp4 branches.

The Avc3 and Fmp4 branches are nearly identical, differing only in the format variant. You could consolidate them:

PublishFormat::Avc3 | PublishFormat::Fmp4 => {
    let decoder_format = match format {
        PublishFormat::Avc3 => DecoderFormat::Avc3,
        PublishFormat::Fmp4 => DecoderFormat::Fmp4,
        _ => unreachable!(),
    };
    PublishDecoder::Decoder(Decoder::new(broadcast.clone(), decoder_format))
}

This is a minor nit; the current code is clear and readable.


74-102: Both init and run create separate stdin handles.

Both methods independently call tokio::io::stdin(). While this works (tokio's stdin returns a handle to the same underlying resource), it could be slightly cleaner to store the stdin handle in Publish if you want to avoid the repeated instantiation. However, the current approach is functionally correct.

rs/hang/src/import/hls.rs (5)

109-114: Consider using expect() instead of unwrap() for better panic messages.

While Client::builder().build() rarely fails, using expect() provides a clearer error message if it does.

 			Client::builder()
 				.user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
 				.build()
-				.unwrap()
+				.expect("failed to build default HTTP client")

160-168: Track state may be lost on mid-iteration errors.

If consume_segments fails for a video track, the current track is dropped and subsequent tracks in video_tracks are never processed or restored to self.video. While errors propagate up (making the Hls instance effectively unusable), this could complicate error recovery if retry logic is added later.

Consider restoring all tracks before returning on error, or documenting that errors leave the instance in an undefined state.


373-374: Inconsistent byte consumption check with ensure_init_segment.

ensure_init_segment verifies bytes.is_empty() after decoding, but push_segment does not. If trailing bytes after media segment decoding indicates a problem, consider adding the same check here for consistency.


383-388: HTTP requests lack timeout configuration.

If a server hangs or is slow, fetch_bytes will block indefinitely. Consider configuring a request or client-level timeout to prevent the ingest loop from stalling.

 async fn fetch_bytes(&self, url: Url) -> anyhow::Result<Bytes> {
-	let response = self.client.get(url).send().await?;
+	let response = self.client.get(url)
+		.timeout(std::time::Duration::from_secs(30))
+		.send()
+		.await?;
 	let response = response.error_for_status()?;
 	let bytes = response.bytes().await.context("failed to read response body")?;
 	Ok(bytes)
 }

523-544: Tests are minimal but correctly structured.

The inline test module follows coding guidelines. Current tests only cover initialization; consider adding tests for resolve_uri, select_variants, and select_audio which are pure functions that can be tested without HTTP mocking.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a9b87d1 and bf9e958.

⛔ Files ignored due to path filters (1)
  • rs/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
  • rs/hang-cli/Cargo.toml (1 hunks)
  • rs/hang-cli/src/client.rs (1 hunks)
  • rs/hang-cli/src/import.rs (0 hunks)
  • rs/hang-cli/src/main.rs (4 hunks)
  • rs/hang-cli/src/publish.rs (1 hunks)
  • rs/hang-cli/src/server.rs (2 hunks)
  • rs/hang/Cargo.toml (1 hunks)
  • rs/hang/src/error.rs (2 hunks)
  • rs/hang/src/import/aac.rs (3 hunks)
  • rs/hang/src/import/avc3.rs (6 hunks)
  • rs/hang/src/import/decoder.rs (5 hunks)
  • rs/hang/src/import/hls.rs (1 hunks)
  • rs/hang/src/import/mod.rs (1 hunks)
  • rs/hang/src/model/broadcast.rs (1 hunks)
  • rs/justfile (2 hunks)
  • rs/libmoq/src/state.rs (2 hunks)
💤 Files with no reviewable changes (1)
  • rs/hang-cli/src/import.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • rs/hang-cli/Cargo.toml
  • rs/hang/src/import/mod.rs
  • rs/hang/src/error.rs
  • rs/hang/Cargo.toml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

In Rust crates, tests should be integrated within source files using inline test modules

Files:

  • rs/hang/src/model/broadcast.rs
  • rs/hang/src/import/aac.rs
  • rs/hang-cli/src/client.rs
  • rs/hang/src/import/decoder.rs
  • rs/hang-cli/src/publish.rs
  • rs/libmoq/src/state.rs
  • rs/hang-cli/src/server.rs
  • rs/hang-cli/src/main.rs
  • rs/hang/src/import/hls.rs
  • rs/hang/src/import/avc3.rs
🧬 Code graph analysis (9)
rs/hang/src/model/broadcast.rs (7)
rs/hang/src/catalog/video/vp9.rs (1)
  • default (97-108)
rs/hang-cli/src/publish.rs (1)
  • new (35-66)
rs/hang/src/import/avc3.rs (2)
  • new (31-39)
  • new (278-280)
rs/libmoq/src/state.rs (1)
  • new (79-85)
rs/hang/src/model/track.rs (2)
  • new (30-36)
  • new (181-189)
rs/hang/src/catalog/root.rs (2)
  • new (151-156)
  • new (226-228)
rs/hang/src/model/group.rs (1)
  • new (36-43)
rs/hang/src/import/aac.rs (2)
rs/hang/src/import/avc3.rs (2)
  • new (31-39)
  • new (278-280)
rs/hang/src/model/timestamp.rs (1)
  • from_micros (40-46)
rs/hang-cli/src/client.rs (3)
rs/hang/src/catalog/root.rs (1)
  • produce (86-93)
rs/moq-native/src/client.rs (1)
  • connect (141-205)
rs/hang-cli/src/publish.rs (1)
  • run (89-102)
rs/hang-cli/src/publish.rs (5)
rs/hang/src/import/aac.rs (1)
  • new (14-20)
rs/hang/src/import/avc3.rs (2)
  • new (31-39)
  • new (278-280)
rs/hang/src/import/decoder.rs (1)
  • new (37-45)
rs/hang/src/import/hls.rs (3)
  • new (36-38)
  • new (92-99)
  • new (104-119)
rs/hang/src/model/broadcast.rs (2)
  • new (17-26)
  • default (36-38)
rs/libmoq/src/state.rs (1)
rs/hang/src/import/decoder.rs (1)
  • new (37-45)
rs/hang-cli/src/server.rs (1)
rs/hang-cli/src/publish.rs (2)
  • consume (68-70)
  • run (89-102)
rs/hang-cli/src/main.rs (3)
rs/hang-cli/src/server.rs (1)
  • server (14-41)
rs/hang-cli/src/client.rs (1)
  • client (6-31)
rs/hang-cli/src/publish.rs (1)
  • new (35-66)
rs/hang/src/import/hls.rs (4)
rs/hang/src/import/aac.rs (2)
  • new (14-20)
  • is_initialized (140-142)
rs/hang/src/import/avc3.rs (4)
  • new (31-39)
  • new (278-280)
  • init (41-99)
  • is_initialized (227-229)
rs/hang/src/import/decoder.rs (2)
  • new (37-45)
  • is_initialized (114-120)
rs/hang/src/model/broadcast.rs (2)
  • new (17-26)
  • default (36-38)
rs/hang/src/import/avc3.rs (1)
rs/hang/src/model/timestamp.rs (1)
  • from_micros (40-46)
🔇 Additional comments (28)
rs/hang/src/model/broadcast.rs (1)

35-39: LGTM!

The Default implementation correctly delegates to Self::new(), maintaining consistency with the existing From trait implementation and enabling cleaner instantiation in the new Publish workflow.

rs/justfile (3)

108-108: LGTM!

The explicit fmp4 format flag aligns with the new PublishFormat enum structure.


110-112: LGTM!

The new pub-hls command follows consistent patterns with other publish commands and correctly passes the playlist URL to the HLS importer.


129-129: LGTM!

The avc3 format flag correctly replaces the previous --format annex-b flag.

rs/hang/src/import/avc3.rs (6)

25-27: LGTM!

The addition of zero for wall-clock-based timestamp generation provides a sensible fallback when explicit timestamps aren't provided, enabling the decoder to work in streaming scenarios without external timing.


101-110: LGTM!

The initialize method correctly processes non-slice NALs (SPS/PPS/AUD/SEI) without requiring a timestamp, enabling pre-initialization of the decoder before streaming begins.


116-131: LGTM!

The optional timestamp parameter with early resolution via self.pts() ensures consistent timestamp handling across all NALs in the stream.


212-225: Verify timestamp requirement for frame emission.

The pts.context("missing timestamp")? at line 219 enforces that a timestamp must be present when flushing frames. This is correct for frame emission but means maybe_start_frame can fail if called with None from paths that contain slices.

Tracing the call sites:

  • decode_stream and decode_frame resolve timestamps via self.pts() before calling decode_nal, so Some(pts) is always passed.
  • initialize passes None, but only processes non-slice NALs, so contains_slice remains false and maybe_start_frame returns early.

This is safe as implemented.


231-238: LGTM!

The pts helper provides clean timestamp resolution with wall-clock fallback. The as u64 cast from u128 is safe for practical stream durations (overflow would require ~584,000 years of continuous streaming).


401-634: Excellent test coverage.

The comprehensive unit tests for after_start_code, find_start_code, and NalIterator cover edge cases including invalid start codes, consecutive zeros, embedded zeros in NAL data, and realistic H.264 streams.

rs/hang-cli/src/server.rs (2)

14-19: LGTM!

The refactored signature cleanly encapsulates the publish workflow into the Publish struct, simplifying the function signature and separating concerns.


36-40: LGTM!

The select! block correctly orchestrates the concurrent tasks:

  • accept handles incoming connections using the broadcast consumer
  • publish.run() drives the input source (stdin or HLS fetching)
  • web serves HTTP requests

The publish.consume() call returns a cloneable BroadcastConsumer before publish is moved into run().

rs/hang-cli/src/main.rs (2)

37-38: Good refactor to subcommand-based format selection.

The change from #[arg] to #[command(subcommand)] for PublishFormat addresses the previous review feedback about supporting nested CLI commands (e.g., hang import hls --playlist ...). This approach scales better for format-specific options.

Also applies to: 63-64


73-79: Initialization before command dispatch looks correct.

The pattern of extracting the format, creating the Publish instance, and calling init() before dispatching to server/client ensures the broadcast is ready before connections are established.

rs/hang-cli/src/client.rs (1)

6-30: Clean refactor to Publish-driven flow.

The function now accepts a Publish handle and correctly uses publish.consume() for the broadcast and publish.run() in the select loop. The systemd notification now occurs after initialization (which happens in main.rs before this function is called), addressing the previous review feedback.

rs/hang/src/import/aac.rs (2)

144-151: Timestamp helper looks correct.

The pts() method correctly falls back to wall-clock time when no hint is provided. The as u64 cast from as_micros() (which returns u128) is practically safe since u64 microseconds covers ~584,542 years, and Timestamp::from_micros already handles overflow via its Result return type.
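A minimal sketch of the fallback pattern described above, with illustrative names (the real helper lives in rs/hang/src/import/aac.rs and returns a Timestamp rather than a raw integer):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Resolve a presentation timestamp: use the caller's hint if present,
// otherwise fall back to wall-clock microseconds since the Unix epoch.
fn pts(hint: Option<u64>) -> u64 {
    hint.unwrap_or_else(|| {
        // u128 microseconds fit in u64 for ~584,000 years past the epoch.
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock before Unix epoch")
            .as_micros() as u64
    })
}

fn main() {
    // An explicit hint is used verbatim.
    assert_eq!(pts(Some(42)), 42);
    // Without a hint, the wall-clock fallback yields a non-zero timestamp.
    assert!(pts(None) > 0);
}
```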


125-138: Optional timestamp parameter aligns with the broader refactor.

Making pts optional in decode() enables timestamp inference when the caller doesn't have timing information, which supports the HLS ingest path where timestamps may need to be derived from wall-clock time.

rs/hang/src/import/decoder.rs (1)

7-16: Good introduction of strongly-typed format enum.

The DecoderFormat enum with strum::EnumString provides type-safe format selection and eliminates the previous string-based parsing with deprecated aliases. This is a cleaner API.
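For readers unfamiliar with the derive, a hand-written FromStr equivalent to what strum::EnumString generates looks roughly like this (variant and string names are illustrative, mirroring the formats mentioned in this PR):

```rust
use std::str::FromStr;

// Illustrative mirror of a format enum; the real DecoderFormat lives in
// rs/hang/src/import/decoder.rs and derives strum::EnumString instead.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum DecoderFormat {
    Aac,
    Avc3,
    Fmp4,
}

impl FromStr for DecoderFormat {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "aac" => Ok(Self::Aac),
            "avc3" => Ok(Self::Avc3),
            "fmp4" => Ok(Self::Fmp4),
            other => Err(format!("unknown format: {other}")),
        }
    }
}

fn main() {
    // Parsing is type-safe: invalid strings fail up front rather than
    // being matched against string aliases deep inside the decoder.
    assert_eq!("avc3".parse::<DecoderFormat>(), Ok(DecoderFormat::Avc3));
    assert!("annex-b".parse::<DecoderFormat>().is_err());
}
```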

rs/hang-cli/src/publish.rs (2)

11-21: Good use of clap Subcommand for format-specific arguments.

Using #[derive(Subcommand)] for PublishFormat allows the Hls variant to carry its own --playlist argument, addressing the previous review feedback about avoiding "hundreds of --hls prefixed flags." This is a clean design.


49-58: No API mismatch found.

The call site at lines 50–56 correctly passes (broadcast.clone(), HlsConfig { ... }) matching the actual Hls::new signature: pub fn new(broadcast: BroadcastProducer, cfg: HlsConfig) -> Self. The code is correct as written.

Likely an incorrect or invalid review comment.

rs/hang/src/import/hls.rs (8)

24-39: LGTM!

Clean configuration struct with sensible defaults and optional client injection for testing/customization.


134-148: LGTM!

The infinite ingest loop with error propagation is appropriate. External cancellation (via task abort or select!) is the expected shutdown mechanism.


224-234: LGTM!

The adaptive polling logic is well-designed—halving the delay when no segments were consumed helps reduce latency for catching new segments.
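The halving behavior can be sketched as a pure function (names and the exact halving policy are assumptions based on the description above, not the actual code):

```rust
use std::time::Duration;

// Pick the next playlist poll delay: wait the segment target duration
// normally, but halve the wait when the previous poll yielded no fresh
// segments so new segments are picked up with lower latency.
fn next_delay(target: Duration, consumed: usize) -> Duration {
    if consumed == 0 {
        target / 2
    } else {
        target
    }
}

fn main() {
    let target = Duration::from_secs(6);
    // Fresh segments consumed: poll at the full target duration.
    assert_eq!(next_delay(target, 3), Duration::from_secs(6));
    // Nothing consumed: poll sooner.
    assert_eq!(next_delay(target, 0), Duration::from_secs(3));
}
```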


246-291: LGTM!

Good fallback behavior: gracefully handles both master and media playlists. The warning logs for missing audio are appropriate since audio is optional.


324-354: LGTM!

Thorough validation of init segment processing: verifying complete consumption and successful initialization is good defensive coding.


390-401: LGTM!

The lazy-initialization pattern ensures importers are created on-demand while maintaining index correspondence with video tracks.


515-521: LGTM!

Standard URI resolution handling both absolute and relative paths correctly.
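As a string-level sketch of the absolute-vs-relative handling (the real code presumably uses a proper URL type; this hypothetical helper only replaces the last path component of the base):

```rust
// Resolve a playlist-relative URI against the playlist's own URL.
// Absolute URIs pass through; relative ones replace the last path segment.
fn resolve(base: &str, uri: &str) -> String {
    if uri.starts_with("http://") || uri.starts_with("https://") {
        uri.to_string()
    } else {
        let dir = base.rsplit_once('/').map(|(d, _)| d).unwrap_or(base);
        format!("{dir}/{uri}")
    }
}

fn main() {
    assert_eq!(
        resolve("https://cdn.example/live/playlist.m3u8", "seg1.m4s"),
        "https://cdn.example/live/seg1.m4s"
    );
    assert_eq!(
        resolve("https://cdn.example/live/playlist.m3u8", "https://other.example/seg.m4s"),
        "https://other.example/seg.m4s"
    );
}
```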


492-510: Variants without resolution are deduplicated together, affecting only video variants.

Using Option<Resolution> as the key means multiple video variants lacking resolution metadata collapse into a single entry (the lowest-bandwidth one). Audio is selected separately via select_audio() and does not participate in this deduplication. The function filters for video codecs before this logic applies, so audio-only variants do not reach the HashMap.

Likely an incorrect or invalid review comment.

Comment on lines +5 to +6
//! independent of any particular HTTP client; callers provide an implementation
//! of [`Fetcher`] to perform the actual network I/O.

⚠️ Potential issue | 🟡 Minor

Documentation refers to non-existent Fetcher trait.

The module doc mentions callers should "provide an implementation of Fetcher" but the actual implementation uses reqwest::Client directly via HlsConfig.client. Update the doc to reflect the actual design.

-//! independent of any particular HTTP client; callers provide an implementation
-//! of [`Fetcher`] to perform the actual network I/O.
+//! independent of any particular HTTP client; callers can optionally provide a
+//! custom `reqwest::Client` via [`HlsConfig`] to perform network I/O.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
//! independent of any particular HTTP client; callers provide an implementation
//! of [`Fetcher`] to perform the actual network I/O.
//! independent of any particular HTTP client; callers can optionally provide a
//! custom `reqwest::Client` via [`HlsConfig`] to perform network I/O.
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 5 to 6, the module doc incorrectly
references a non-existent `Fetcher` trait; update the documentation to state
that network I/O is performed using a reqwest::Client provided via
HlsConfig.client (or similar field) instead of a `Fetcher` implementation, and
adjust the phrasing to accurately describe that callers supply an HlsConfig with
a configured HTTP client for network requests.

Comment on lines 154 to 166
pub fn create_track(&mut self, broadcast: Id, format: &str, mut init: &[u8]) -> Result<Id, Error> {
let broadcast = self.broadcasts.get_mut(broadcast).ok_or(Error::NotFound)?;
let mut decoder = hang::import::Decoder::new(broadcast.clone(), format)
.ok_or_else(|| Error::UnknownFormat(format.to_string()))?;
// TODO add support for stream decoders too.
let format =
hang::import::DecoderFormat::from_str(format).map_err(|err| Error::UnknownFormat(err.to_string()))?;
let mut decoder = hang::import::Decoder::new(broadcast.clone(), format);

decoder
.initialize(&mut init)
.map_err(|err| Error::InitFailed(Arc::new(err)))?;
assert!(init.is_empty(), "buffer was not fully consumed");

let id = self.tracks.insert(decoder);
Ok(id)
Ok(self.tracks.insert(decoder))

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, find and examine DecoderFormat definition and FromStr implementation
rg -n "DecoderFormat" rs/hang/src/import/decoder.rs -A 10 -B 2

Repository: moq-dev/moq

Length of output: 1064


🏁 Script executed:

# Search for FromStr trait implementation for DecoderFormat
rg -n "impl.*FromStr.*DecoderFormat|#\[derive.*FromStr\]" rs/hang/src/import/ -B 3 -A 5

Repository: moq-dev/moq

Length of output: 37


🏁 Script executed:

# Check if there are existing tests in state.rs
rg -n "#\[cfg\(test\)\]|#\[test\]|mod tests" rs/libmoq/src/state.rs

Repository: moq-dev/moq

Length of output: 37


🏁 Script executed:

# View the complete create_track function to understand context
sed -n '154,166p' rs/libmoq/src/state.rs

Repository: moq-dev/moq

Length of output: 661


🏁 Script executed:

# Check the entire state.rs file for any test content or test patterns
wc -l rs/libmoq/src/state.rs

Repository: moq-dev/moq

Length of output: 81


🏁 Script executed:

# Search more broadly for test content in state.rs
cat rs/libmoq/src/state.rs | tail -100 | head -50

Repository: moq-dev/moq

Length of output: 1521


🏁 Script executed:

# Check if there are tests for create_track elsewhere in the codebase
rg -l "create_track" rs/libmoq/

Repository: moq-dev/moq

Length of output: 98


🏁 Script executed:

# Check the end of state.rs (where tests would typically be)
tail -30 rs/libmoq/src/state.rs

Repository: moq-dev/moq

Length of output: 1004


🏁 Script executed:

# Look for any test files related to state.rs
fd -t f "test|spec" rs/libmoq/ | grep -i state

Repository: moq-dev/moq

Length of output: 37


🏁 Script executed:

# Check lib.rs to see if tests are centralized there
rg -n "#\[cfg\(test\)\]|#\[test\]|mod tests" rs/libmoq/src/lib.rs -A 3

Repository: moq-dev/moq

Length of output: 37


Add inline test module to verify create_track behavior per coding guidelines.

The refactor to use DecoderFormat::from_str is correct—the strum::EnumString derive macro provides the FromStr implementation. However, rs/libmoq/src/state.rs should include an inline test module (#[cfg(test)]) with tests for create_track and related public methods per the Rust coding guidelines.

🤖 Prompt for AI Agents
In rs/libmoq/src/state.rs around lines 154 to 166, add an inline test module
(#[cfg(test)] mod tests) that verifies create_track behavior: write tests that
construct a State, insert a broadcast, call create_track with a valid format and
a buffer that decodes successfully and assert it returns Ok(Id) and consumes the
buffer; test invalid format returns Error::UnknownFormat; test initialization
failure returns Error::InitFailed; use the crate's public API/setup helpers to
create broadcasts and assert results with pattern matching (avoid panics),
keeping tests self-contained and using #[test] functions inside the test module.
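The shape the guideline asks for looks like this (parse_format is a placeholder standing in for the real create_track setup, which needs a State and broadcast):

```rust
// Placeholder function; real tests would exercise State::create_track
// through the crate's public API.
pub fn parse_format(s: &str) -> Result<&'static str, String> {
    match s {
        "aac" => Ok("aac"),
        "avc3" => Ok("avc3"),
        other => Err(format!("unknown format: {other}")),
    }
}

fn main() {
    assert_eq!(parse_format("aac").unwrap(), "aac");
    assert!(parse_format("webm").is_err());
}

// Inline test module, co-located with the code it tests per CLAUDE.md.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn known_format() {
        assert_eq!(parse_format("avc3").unwrap(), "avc3");
    }

    #[test]
    fn unknown_format_is_rejected() {
        assert!(parse_format("webm").is_err());
    }
}
```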

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
rs/hang/src/import/hls.rs (1)

1-6: Module docs still reference a non-existent Fetcher abstraction

The docs say callers provide a Fetcher, but the code now uses an optional reqwest::Client in HlsConfig. This can confuse users of the API.

Consider updating the wording along these lines:

-//! feed their fMP4 segments into a `hang` broadcast. It is designed to be
-//! independent of any particular HTTP client; callers provide an implementation
-//! of [`Fetcher`] to perform the actual network I/O.
+//! feed their fMP4 segments into a `hang` broadcast. Network I/O is performed
+//! using a `reqwest::Client`; callers can optionally provide a custom client
+//! via [`HlsConfig::client`].
🧹 Nitpick comments (1)
rs/hang/src/import/hls.rs (1)

102-119: Avoid panicking on HTTP client construction in Hls::new

Client::builder().build().unwrap() can panic (e.g. bad TLS configuration, missing roots). Since this is part of ingest setup, a fallible constructor would make failures explicit instead of crashing the process.

One option:

-pub fn new(broadcast: BroadcastProducer, cfg: HlsConfig) -> Self {
-    Self {
+pub fn new(broadcast: BroadcastProducer, cfg: HlsConfig) -> anyhow::Result<Self> {
+    let client = cfg.client.clone().unwrap_or_else(|| {
+        Client::builder()
+            .user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
+            .build()
+            .unwrap()
+    });
+
+    Ok(Self {
         broadcast,
         video_importers: Vec::new(),
         audio_importer: None,
-        client: cfg.client.clone().unwrap_or_else(|| {
-            Client::builder()
-                .user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
-                .build()
-                .unwrap()
-        }),
+        client,
         cfg,
         video: Vec::new(),
         audio: None,
-    }
+    })

and propagate the Result at call sites (or, if you want to keep new infallible, at least use expect with a clear message).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bf9e958 and 37ae6d0.

📒 Files selected for processing (1)
  • rs/hang/src/import/hls.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

In Rust crates, tests should be integrated within source files using inline test modules

Files:

  • rs/hang/src/import/hls.rs

Comment on lines +121 to +124
/// Fetch the latest playlist, download the init segment, and prime the importer with a buffer of segments.
///
/// Returns the number of segments buffered during initialization.
pub async fn init(&mut self) -> anyhow::Result<()> {

⚠️ Potential issue | 🟡 Minor

init doc comment does not match its return type

The doc says init "Returns the number of segments buffered during initialization.", but the function returns anyhow::Result<()> and only logs the count.

Either return the count or adjust the docs. For example:

-/// Returns the number of segments buffered during initialization.
+/// Primes the ingest by buffering initial segments, logging how many were
+/// buffered. Returns an error if initialization fails.
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 121-124 the doc comment for init
wrongly claims it "Returns the number of segments buffered during
initialization" while the function signature is pub async fn init(&mut self) ->
anyhow::Result<()> and it only logs the count; fix by either (A) updating the
function to return anyhow::Result<usize> and actually return the buffered
segment count (update the signature, propagate the usize return to callers and
adjust any call sites/tests), or (B) change the doc comment to remove the
"Returns..." sentence so it accurately reflects that init returns Result<()> and
only logs the count; pick one option and update related docs/call sites
accordingly.

Comment on lines 293 to 314
async fn consume_segments(
&mut self,
kind: TrackKind,
track: &mut TrackState,
playlist: &MediaPlaylist,
) -> anyhow::Result<usize> {
self.ensure_init_segment(kind, track, playlist).await?;

// Skip segments we've already seen
let skip = track.next_sequence.unwrap_or(0).saturating_sub(playlist.media_sequence) as usize;
let base_seq = playlist.media_sequence + skip as u64;
for (i, segment) in playlist.segments[skip..].iter().enumerate() {
self.push_segment(kind, track, segment, base_seq + i as u64).await?;
}
let consumed = playlist.segments.len() - skip;

if consumed == 0 {
debug!(media_type = ?track.media_type, "no fresh HLS segments available");
}

Ok(consumed)
}

⚠️ Potential issue | 🔴 Critical

Potential panic when skip exceeds playlist.segments.len()

skip is derived from next_sequence and media_sequence and then used both for slicing (playlist.segments[skip..]) and len() - skip. If next_sequence jumps ahead of the current playlist window (e.g. server reset, malformed playlist), skip can be greater than the number of segments, causing an out-of-bounds panic.

Safer approach: clamp skip to the segment count and use iter().skip:

-        // Skip segments we've already seen
-        let skip = track.next_sequence.unwrap_or(0).saturating_sub(playlist.media_sequence) as usize;
-        let base_seq = playlist.media_sequence + skip as u64;
-        for (i, segment) in playlist.segments[skip..].iter().enumerate() {
+        // Skip segments we've already seen.
+        let desired = track.next_sequence.unwrap_or(playlist.media_sequence);
+        let mut skip = desired
+            .saturating_sub(playlist.media_sequence) as usize;
+        let total = playlist.segments.len();
+        if skip > total {
+            skip = total;
+        }
+
+        let base_seq = playlist.media_sequence + skip as u64;
+        for (i, segment) in playlist.segments.iter().skip(skip).enumerate() {
             self.push_segment(kind, track, segment, base_seq + i as u64).await?;
         }
-        let consumed = playlist.segments.len() - skip;
+        let consumed = total - skip;

This preserves the intended behavior while avoiding panics on malformed or reset playlists.

🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 293 to 314, the computed skip can
exceed playlist.segments.len(), which would panic when slicing and computing
len() - skip; clamp skip to at most playlist.segments.len() (e.g. let skip =
skip.min(playlist.segments.len());) and iterate safely (either use
playlist.segments.iter().skip(skip) or slice with the clamped value) while
computing base_seq and consumed from the clamped skip so consumed =
playlist.segments.len() - skip; keep existing logic for pushing segments but use
the clamped skip to avoid out-of-bounds panics.
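The suggested clamping can be demonstrated with plain integers standing in for HLS segments; slicing with an unclamped skip would panic, while min plus iter().skip cannot:

```rust
// Count the fresh segments, skipping those already consumed.
// `next_sequence` is the next media sequence number we expect;
// `media_sequence` is the playlist's first sequence number.
fn consume(segments: &[u32], media_sequence: u64, next_sequence: u64) -> usize {
    let skip = (next_sequence.saturating_sub(media_sequence) as usize).min(segments.len());
    // iter().skip never panics, even when skip exceeds the slice length.
    segments.iter().skip(skip).count()
}

fn main() {
    let segments = [10, 11, 12];
    // Normal case: two segments already seen, one fresh.
    assert_eq!(consume(&segments, 100, 102), 1);
    // next_sequence jumped past the playlist window (e.g. server reset):
    // no panic, zero segments consumed.
    assert_eq!(consume(&segments, 100, 999), 0);
}
```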

Comment on lines +433 to +505
fn select_variants(master: &MasterPlaylist) -> Vec<&VariantStream> {
// Helper to extract the first video codec token from the CODECS attribute.
fn first_video_codec(variant: &VariantStream) -> Option<&str> {
let codecs = variant.codecs.as_deref()?;
codecs.split(',').map(|s| s.trim()).find(|s| !s.is_empty())
}

// Map codec strings into a coarse "family" so we can prefer H.264 over others.
fn codec_family(codec: &str) -> Option<&'static str> {
if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
Some("h264")
} else {
None
}
}

// Consider only non-i-frame variants with a URI and a known codec family.
let candidates: Vec<(&VariantStream, &str, &str)> = master
.variants
.iter()
.filter(|variant| !variant.is_i_frame && !variant.uri.is_empty())
.filter_map(|variant| {
let codec = first_video_codec(variant)?;
let family = codec_family(codec)?;
Some((variant, codec, family))
})
.collect();

if candidates.is_empty() {
return Vec::new();
}

// Prefer families in this order, falling back to the first available.
const FAMILY_PREFERENCE: &[&str] = &["h264"];

let families_present: Vec<&str> = candidates.iter().map(|(_, _, fam)| *fam).collect();

let target_family = FAMILY_PREFERENCE
.iter()
.find(|fav| families_present.iter().any(|fam| fam == *fav))
.copied()
.unwrap_or(families_present[0]);

// Keep only variants in the chosen family.
let family_variants: Vec<&VariantStream> = candidates
.into_iter()
.filter(|(_, _, fam)| *fam == target_family)
.map(|(variant, _, _)| variant)
.collect();

// Deduplicate by resolution, keeping the lowest-bandwidth variant for each size.
let mut by_resolution: HashMap<Option<Resolution>, &VariantStream> = HashMap::new();

for variant in family_variants {
let key = variant.resolution;
let bandwidth = variant.average_bandwidth.unwrap_or(variant.bandwidth);

match by_resolution.entry(key) {
Entry::Vacant(entry) => {
entry.insert(variant);
}
Entry::Occupied(mut entry) => {
let existing = entry.get();
let existing_bw = existing.average_bandwidth.unwrap_or(existing.bandwidth);
if bandwidth < existing_bw {
entry.insert(variant);
}
}
}
}

by_resolution.values().cloned().collect()
}

⚠️ Potential issue | 🟠 Major

Variant selection may drop valid H.264 renditions when audio codec appears first in CODECS

first_video_codec currently just takes the first non-empty token from variant.codecs. For manifests where the CODECS attribute lists audio first (e.g. "mp4a.40.2,avc1.640029"), this returns the audio codec, codec_family returns None, and the whole variant is discarded. That can make select_variants return an empty list even though valid H.264 video is present.

You can fix this by making first_video_codec explicitly search for an H.264 video codec:

-    // Helper to extract the first video codec token from the CODECS attribute.
-    fn first_video_codec(variant: &VariantStream) -> Option<&str> {
-        let codecs = variant.codecs.as_deref()?;
-        codecs.split(',').map(|s| s.trim()).find(|s| !s.is_empty())
-    }
+    // Helper to extract a video codec token from the CODECS attribute.
+    fn first_video_codec(variant: &VariantStream) -> Option<&str> {
+        let codecs = variant.codecs.as_deref()?;
+        codecs
+            .split(',')
+            .map(|s| s.trim())
+            // Currently we only support H.264 video.
+            .find(|s| s.starts_with("avc1.") || s.starts_with("avc3."))
+    }

The rest of select_variants can stay the same, but this will keep valid H.264 variants even when audio is listed first in CODECS.

If you later add support for other video codec families, you can generalize this to search for any codec where codec_family returns Some(_).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn select_variants(master: &MasterPlaylist) -> Vec<&VariantStream> {
// Helper to extract the first video codec token from the CODECS attribute.
fn first_video_codec(variant: &VariantStream) -> Option<&str> {
let codecs = variant.codecs.as_deref()?;
codecs.split(',').map(|s| s.trim()).find(|s| !s.is_empty())
}
// Map codec strings into a coarse "family" so we can prefer H.264 over others.
fn codec_family(codec: &str) -> Option<&'static str> {
if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
Some("h264")
} else {
None
}
}
// Consider only non-i-frame variants with a URI and a known codec family.
let candidates: Vec<(&VariantStream, &str, &str)> = master
.variants
.iter()
.filter(|variant| !variant.is_i_frame && !variant.uri.is_empty())
.filter_map(|variant| {
let codec = first_video_codec(variant)?;
let family = codec_family(codec)?;
Some((variant, codec, family))
})
.collect();
if candidates.is_empty() {
return Vec::new();
}
// Prefer families in this order, falling back to the first available.
const FAMILY_PREFERENCE: &[&str] = &["h264"];
let families_present: Vec<&str> = candidates.iter().map(|(_, _, fam)| *fam).collect();
let target_family = FAMILY_PREFERENCE
.iter()
.find(|fav| families_present.iter().any(|fam| fam == *fav))
.copied()
.unwrap_or(families_present[0]);
// Keep only variants in the chosen family.
let family_variants: Vec<&VariantStream> = candidates
.into_iter()
.filter(|(_, _, fam)| *fam == target_family)
.map(|(variant, _, _)| variant)
.collect();
// Deduplicate by resolution, keeping the lowest-bandwidth variant for each size.
let mut by_resolution: HashMap<Option<Resolution>, &VariantStream> = HashMap::new();
for variant in family_variants {
let key = variant.resolution;
let bandwidth = variant.average_bandwidth.unwrap_or(variant.bandwidth);
match by_resolution.entry(key) {
Entry::Vacant(entry) => {
entry.insert(variant);
}
Entry::Occupied(mut entry) => {
let existing = entry.get();
let existing_bw = existing.average_bandwidth.unwrap_or(existing.bandwidth);
if bandwidth < existing_bw {
entry.insert(variant);
}
}
}
}
by_resolution.values().cloned().collect()
}
fn select_variants(master: &MasterPlaylist) -> Vec<&VariantStream> {
// Helper to extract a video codec token from the CODECS attribute.
fn first_video_codec(variant: &VariantStream) -> Option<&str> {
let codecs = variant.codecs.as_deref()?;
codecs
.split(',')
.map(|s| s.trim())
// Currently we only support H.264 video.
.find(|s| s.starts_with("avc1.") || s.starts_with("avc3."))
}
// Map codec strings into a coarse "family" so we can prefer H.264 over others.
fn codec_family(codec: &str) -> Option<&'static str> {
if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
Some("h264")
} else {
None
}
}
// Consider only non-i-frame variants with a URI and a known codec family.
let candidates: Vec<(&VariantStream, &str, &str)> = master
.variants
.iter()
.filter(|variant| !variant.is_i_frame && !variant.uri.is_empty())
.filter_map(|variant| {
let codec = first_video_codec(variant)?;
let family = codec_family(codec)?;
Some((variant, codec, family))
})
.collect();
if candidates.is_empty() {
return Vec::new();
}
// Prefer families in this order, falling back to the first available.
const FAMILY_PREFERENCE: &[&str] = &["h264"];
let families_present: Vec<&str> = candidates.iter().map(|(_, _, fam)| *fam).collect();
let target_family = FAMILY_PREFERENCE
.iter()
.find(|fav| families_present.iter().any(|fam| fam == *fav))
.copied()
.unwrap_or(families_present[0]);
// Keep only variants in the chosen family.
let family_variants: Vec<&VariantStream> = candidates
.into_iter()
.filter(|(_, _, fam)| *fam == target_family)
.map(|(variant, _, _)| variant)
.collect();
// Deduplicate by resolution, keeping the lowest-bandwidth variant for each size.
let mut by_resolution: HashMap<Option<Resolution>, &VariantStream> = HashMap::new();
for variant in family_variants {
let key = variant.resolution;
let bandwidth = variant.average_bandwidth.unwrap_or(variant.bandwidth);
match by_resolution.entry(key) {
Entry::Vacant(entry) => {
entry.insert(variant);
}
Entry::Occupied(mut entry) => {
let existing = entry.get();
let existing_bw = existing.average_bandwidth.unwrap_or(existing.bandwidth);
if bandwidth < existing_bw {
entry.insert(variant);
}
}
}
}
by_resolution.values().cloned().collect()
}
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 433 to 505, first_video_codec
currently returns the first token from variant.codecs which can be an audio
codec and causes H.264 video variants to be dropped; change first_video_codec so
it scans all comma-separated CODECS tokens and returns the first token that
matches an H.264 video codec (i.e. starts_with "avc1." or "avc3.") — keep the
existing as_deref handling and trimming logic, returning None only if no video
codec token is found; you may alternatively call codec_family from inside the
search and return the token when codec_family(token).is_some().

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (5)
rs/hang/src/import/hls.rs (5)

5-6: Documentation refers to non-existent Fetcher trait.

The module doc mentions "callers provide an implementation of [Fetcher]" but the implementation uses reqwest::Client directly via HlsConfig.client. Update the docs to reflect the actual design.


113-124: Doc comment does not match return type.

The doc says "Returns the number of segments buffered during initialization" but the function returns anyhow::Result<()> and only logs the count.


284-305: Potential panic when skip exceeds segment count.

If skip exceeds playlist.segments.len() (e.g., due to playlist reset or malformed data), the slice playlist.segments[skip..] will panic. Clamp skip to the segment count and use iterator methods to avoid out-of-bounds access.


432-438: HLS streams with only HEVC, VP9, or AV1 variants will fail.

The codec_family function only recognizes H.264 (avc1/avc3). Streams with HEVC, VP9, or AV1 codecs are filtered out entirely, causing initialization to fail if no H.264 variants exist.

Since fMP4 supports these codecs (per rs/hang/src/import/fmp4.rs lines 204-276), extend codec recognition to include them.


426-429: Variant selection may drop valid H.264 renditions when audio codec is listed first.

first_video_codec returns the first non-empty token from variant.codecs. If the CODECS attribute lists audio first (e.g., "mp4a.40.2,avc1.640029"), an audio codec is returned, codec_family returns None, and the variant is discarded.

Fix by searching for an H.264 video codec explicitly (tokens starting with avc1. or avc3.) instead of blindly taking the first token.

🧹 Nitpick comments (1)
rs/hang/src/import/hls.rs (1)

506-527: Consider expanding test coverage beyond basic construction.

The existing tests verify basic initialization, but the module would benefit from tests covering core functionality such as:

  • Playlist parsing and variant selection
  • URI resolution edge cases
  • Segment sequencing logic
  • Error handling for malformed playlists

These tests would increase confidence in the implementation and catch regressions.

As per coding guidelines, consider adding inline test modules within the source file for better integration testing of the core logic.
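
As one concrete shape for such tests, an inline module exercising a hypothetical sequencing helper (the helper is illustrative, not the module's actual function):

```rust
/// Hypothetical helper: how many segments in the current playlist are
/// still unprocessed, given the playlist's media-sequence number and
/// the next sequence number we have not yet consumed.
fn new_segment_count(media_sequence: u64, next_unprocessed: u64, len: usize) -> usize {
    let consumed = next_unprocessed.saturating_sub(media_sequence) as usize;
    len.saturating_sub(consumed)
}

#[cfg(test)]
mod tests {
    use super::new_segment_count;

    #[test]
    fn counts_only_unseen_segments() {
        // Playlist starts at sequence 10 with 5 segments; 10 and 11 are done.
        assert_eq!(new_segment_count(10, 12, 5), 3);
        // Playlist reset (sequence jumped ahead): nothing consumed yet.
        assert_eq!(new_segment_count(20, 12, 5), 5);
    }
}
```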

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 37ae6d0 and 0bab310.

📒 Files selected for processing (2)
  • rs/hang/src/import/fmp4.rs (4 hunks)
  • rs/hang/src/import/hls.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

In Rust crates, tests should be integrated within source files using inline test modules

Files:

  • rs/hang/src/import/fmp4.rs
  • rs/hang/src/import/hls.rs
🧬 Code graph analysis (2)
rs/hang/src/import/fmp4.rs (4)
rs/hang/src/import/aac.rs (1)
  • new (14-20)
rs/libmoq/src/state.rs (1)
  • new (79-85)
rs/hang/src/import/avc3.rs (2)
  • new (31-39)
  • new (278-280)
rs/hang/src/catalog/root.rs (2)
  • new (151-156)
  • new (226-228)
rs/hang/src/import/hls.rs (4)
rs/hang/src/import/decoder.rs (2)
  • new (37-45)
  • is_initialized (114-120)
rs/hang/src/model/broadcast.rs (2)
  • new (17-26)
  • default (36-38)
rs/hang/src/import/aac.rs (2)
  • new (14-20)
  • is_initialized (140-142)
rs/hang/src/import/avc3.rs (4)
  • new (31-39)
  • new (278-280)
  • init (41-99)
  • is_initialized (227-229)
🔇 Additional comments (6)
rs/hang/src/import/fmp4.rs (3)

31-32: LGTM: Clear explanation of the catalog cloning pattern.

The comment accurately describes that this is the same underlying catalog accessed via a separate binding. The Arc-based sharing is well-documented.


54-58: LGTM: Simplified initialization using shared catalog.

The refactored constructor correctly uses the broadcast's existing catalog instead of creating a new one. Removing mut from the parameter is appropriate since the broadcast is no longer mutated here. This aligns with the pattern used by other importers (Avc3, Aac).


498-498: LGTM: Drop correctly updated to use broadcast's catalog.

The Drop implementation correctly accesses the catalog through self.broadcast.catalog to maintain consistency with the new shared catalog design.

rs/hang/src/import/hls.rs (3)

24-39: LGTM: Well-designed configuration struct.

The HlsConfig structure provides a clean interface with sensible defaults (optional client) and follows Rust conventions.


94-111: LGTM: Robust constructor with sensible defaults.

The constructor properly initializes all fields and provides a fallback reqwest::Client with an appropriate user agent when none is provided.


307-337: LGTM: Robust init segment handling with validation.

The initialization logic properly validates that the init segment was fully consumed and successfully initialized the importer. The guard for init_ready prevents redundant fetches.

Comment on lines +362 to +364
fn find_map<'a>(&self, playlist: &'a MediaPlaylist) -> Option<&'a Map> {
playlist.segments.iter().find_map(|segment| segment.map.as_ref())
}
⚠️ Potential issue | 🟡 Minor

Edge case: Assumes uniform init segment across all segments.

The find_map helper returns the first EXT-X-MAP found and assumes all segments in the playlist share the same init segment. While this is typical, the HLS spec permits per-segment maps with different URIs. If a playlist contains segments with varying maps, only the first will be used, potentially causing decode failures for later segments.

For robustness in handling non-standard streams, consider tracking the map on a per-segment basis or validating that all segments reference the same init segment URI.

🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 362-364, find_map currently returns
the first EXT-X-MAP it encounters which assumes a uniform init segment across
the playlist; this can break playlists that use per-segment maps. Fix by either
(A) switching to per-segment map handling — change callers to request the Map
for each segment (e.g., change find_map to accept a segment or lookup map by
segment and use that when processing each segment), or (B) validate that every
segment that has a Map uses the same URI before proceeding and return an
error/log and abort if differing URIs are found. Implement one of these
approaches and adjust call sites accordingly so decoding uses the correct init
segment for each segment.
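
Option (B) can be sketched as a small standalone check over the map URIs collected from the playlist (names here are hypothetical):

```rust
// Verify every EXT-X-MAP in the playlist references the same URI;
// return it (or None when no segment carries a map), erroring on mismatch.
fn uniform_map_uri<'a, I>(map_uris: I) -> Result<Option<&'a str>, String>
where
    I: IntoIterator<Item = &'a str>,
{
    let mut first: Option<&str> = None;
    for uri in map_uris {
        match first {
            None => first = Some(uri),
            Some(f) if f != uri => {
                return Err(format!("mismatched EXT-X-MAP URIs: {f} vs {uri}"))
            }
            _ => {}
        }
    }
    Ok(first)
}
```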

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
rs/hang/src/import/avc3.rs (1)

243-250: Consider using checked conversion for the timestamp cast.

Line 249 casts u128 to u64, which could theoretically truncate after ~584,542 years of elapsed time. While not a practical concern, using .try_into() or documenting the safety assumption would be more defensive.

Note: This pattern matches the implementation in rs/hang/src/import/aac.rs lines 150-157, so if changed here, consider updating both for consistency.

Example with checked conversion:

-		Ok(hang::Timestamp::from_micros(zero.elapsed().as_micros() as u64)?)
+		let micros = zero.elapsed().as_micros().try_into()
+			.context("timestamp overflow (stream ran for >584k years)")?;
+		Ok(hang::Timestamp::from_micros(micros)?)
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0bab310 and 8307f3c.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • justfile (1 hunks)
  • rs/hang/Cargo.toml (1 hunks)
  • rs/hang/src/import/aac.rs (3 hunks)
  • rs/hang/src/import/avc3.rs (6 hunks)
  • rs/hang/src/import/fmp4.rs (4 hunks)
  • rs/libmoq/src/state.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • rs/hang/src/import/fmp4.rs
  • rs/hang/Cargo.toml
  • rs/hang/src/import/aac.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

In Rust crates, tests should be integrated within source files using inline test modules

Files:

  • rs/libmoq/src/state.rs
  • rs/hang/src/import/avc3.rs
🧠 Learnings (1)
📚 Learning: 2025-12-10T04:00:14.871Z
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-10T04:00:14.871Z
Learning: Applies to **/*.rs : In Rust crates, tests should be integrated within source files using inline test modules

Applied to files:

  • rs/libmoq/src/state.rs
🧬 Code graph analysis (2)
rs/libmoq/src/state.rs (1)
rs/hang/src/import/decoder.rs (1)
  • new (37-45)
rs/hang/src/import/avc3.rs (1)
rs/hang/src/import/aac.rs (1)
  • pts (151-158)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Check
🔇 Additional comments (9)
rs/libmoq/src/state.rs (1)

2-2: LGTM! Clean refactor to enum-based format handling.

The changes correctly migrate from string-based format handling to the typed DecoderFormat enum:

  • The FromStr import supports the from_str parsing on line 158
  • Error mapping to UnknownFormat preserves the semantic error handling
  • The Decoder::new call aligns with the updated signature that accepts DecoderFormat

This refactor integrates well with the broader PR direction of moving to typed format enums across the import/publish pipeline.

Note: The past review comment requesting inline tests for create_track per the Rust coding guidelines remains open.

Also applies to: 156-159, 167-167

justfile (2)

147-147: LGTM: Explicit format specification improves clarity.

The addition of fmp4 as an explicit positional argument aligns with the refactored publish architecture. The {{args}} placement after fmp4 still allows passing additional flags.


149-151: LGTM: HLS ingestion recipe properly quotes all parameters.

The new pub-hls recipe correctly quotes all interpolated variables ("{{relay}}", "{{name}}", "{{url}}"), which protects against shell interpretation of special characters like & in URLs with query parameters. The past review concern about quoting is no longer applicable, as the implementation changed from delegating to rs/justfile to directly calling cargo run.

rs/hang/src/import/avc3.rs (6)

26-28: LGTM: Wall-clock timestamp support added.

The zero field is properly initialized and enables optional wall-clock timestamp fallback for the HLS ingestion workflow.

Also applies to: 38-38


102-111: LGTM: Initialize method enables pre-configuration.

The new initialize method cleanly handles SPS/PPS setup before stream decoding begins. Passing None for timestamps is safe because maybe_start_frame returns early when no slices have been encountered.


117-132: LGTM: Optional timestamp support in decode_stream.

The signature change maintains correct behavior by deriving a concrete timestamp upfront, ensuring downstream calls to decode_nal always receive Some(pts) when processing stream data.


141-163: LGTM: Optional timestamp support in decode_frame.

The timestamp handling is consistent with decode_stream, deriving a concrete timestamp early and propagating it correctly through the frame decoding pipeline.


165-213: LGTM: Optional timestamp handling in decode_nal.

The optional timestamp parameter is correctly propagated to maybe_start_frame. The design is safe because:

  • During initialization (SPS/PPS/AUD/SEI), maybe_start_frame is called before any slices are encountered, returning early.
  • During streaming, public methods ensure Some(pts) is always provided.

215-237: LGTM: Well-designed guard logic in maybe_start_frame.

The optional timestamp parameter is correctly guarded—the method only requires a concrete timestamp (line 222) when actually flushing a frame (contains_slice == true). This allows initialization paths to pass None while ensuring streaming paths always have timestamps when needed.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
justfile (1)

326-331: Consider trapping EXIT to clean up orphaned processes.

If the Python server crashes or exits unexpectedly, the backgrounded ffmpeg processes will continue running. Adding EXIT to the trap ensures cleanup on any termination.

-	trap cleanup SIGINT SIGTERM
+	trap cleanup EXIT

Note: With EXIT, you can remove SIGINT SIGTERM as EXIT is triggered on those signals too.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8307f3c and 401e840.

📒 Files selected for processing (2)
  • .gitignore (1 hunks)
  • justfile (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .gitignore
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Check
🔇 Additional comments (3)
justfile (3)

147-147: LGTM!

Making the fmp4 format explicit aligns with the new CLI structure.


149-151: LGTM!

The URL and other parameters are properly quoted to handle special characters.


294-363: Well-structured test recipe for HLS streaming.

The recipe properly sets up a multi-variant HLS stream with synchronized segment numbering across video and audio tracks. Using fmp4 segment type aligns with the HLS importer's expectations. Good addition for testing the HLS publishing workflow.

video_144p/video.m3u8
EOF

sed -i 's/^\t//' "$OUT_DIR/master.m3u8"

⚠️ Potential issue | 🟡 Minor

sed -i is not portable to macOS.

On macOS/BSD, sed -i requires a backup extension argument (even if empty). This will fail on macOS with "invalid command code".

Consider using a portable approach:

-	sed -i 's/^\t//' "$OUT_DIR/master.m3u8"
+	sed -i.bak 's/^\t//' "$OUT_DIR/master.m3u8" && rm -f "$OUT_DIR/master.m3u8.bak"

(An attached backup suffix like -i.bak is accepted by both GNU and BSD sed, whereas a bare -i'' still fails on BSD because the script is consumed as the suffix.)

Alternatively, use <<- heredoc syntax which natively strips leading tabs:

cat > "$OUT_DIR/master.m3u8" <<-'EOF'
	#EXTM3U
	...
	EOF
🤖 Prompt for AI Agents
In justfile around line 321, the use of "sed -i 's/^\t//'" is not portable to
macOS/BSD; replace it with a portable solution such as either (A) use the
macOS-compatible inline syntax "sed -i '' -e 's/^\t//'" when running on macOS,
or (B) perform the edit without in-place sed by writing the transformed output
to a temporary file using a POSIX-safe tool (awk/perl) and then atomically move
it into place, or (C) avoid post-processing entirely by generating the file with
a heredoc that strips leading tabs (<<-), and update the recipe to use one of
these portable options.

@sebastianpiq
Contributor Author

Hey @kixelated, I've been addressing the comments you left on the PR.

What's still pending is the CLI refactor so the import can be invoked in this form: hang import hls --url ... - I think that's better left for another PR.

I also added a new justfile recipe, just hls-stream, that serves an HLS stream so it can be used as an ingest source.

PS: Some conflicts appeared when I tried to sync with main. Could you take a look, since I'm not very familiar with resolving them?

@kixelated
Collaborator


Sure, and maybe give me contributor access on your fork? It seems like qualabs has turned off the "allow maintainers to make edits" option, so it's more difficult for me to fix things.

@kixelated kixelated mentioned this pull request Dec 19, 2025
@kixelated
Collaborator

Merged as #789

@kixelated kixelated closed this Dec 19, 2025