feat: hls import #761
Conversation
Walkthrough

The PR refactors the publish path to a new Publish workflow (formats: Avc3, Fmp4, Hls) and removes the prior Import-based AsyncRead ingest. It adds an HLS ingest module that parses master/media playlists, selects variants, fetches init/segment data, and feeds per-variant FMP4 importers into a shared broadcast. Decoder APIs were changed to optional timestamp handling and a DecoderFormat enum; error types were extended for HTTP and URL errors; multiple Cargo dependencies and CLI signatures were updated.

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🧹 Nitpick comments (6)
rs/justfile (1)
110-112: Consider adding variadic `*args` passthrough to `ingest-hls`

The wiring to `hang publish` looks consistent with other recipes, but unlike `pub`/`pub-h264`/`clock`, this recipe doesn't allow forwarding extra CLI flags. That makes it harder to experiment with additional publish options (e.g. auth, logging tweaks) without editing the Justfile. You could keep behavior identical while allowing future flexibility:

```diff
-# Ingest a live HLS media playlist and publish it via hang (full ladder).
-ingest-hls url name="demo" relay="http://localhost:4443/anon":
-    cargo run --bin hang -- publish --url "{{relay}}" --name "{{name}}" --format hls --hls-url "{{url}}"
+# Ingest a live HLS media playlist and publish it via hang (full ladder).
+ingest-hls url name="demo" relay="http://localhost:4443/anon" *args:
+    cargo run --bin hang -- publish \
+        --url "{{relay}}" \
+        --name "{{name}}" \
+        --format hls \
+        --hls-url "{{url}}" \
+        {{args}}
```

rs/hang/src/error.rs (1)
69-79: New HLS/HTTP/URL error variants integrate cleanly

The additional `Hls`, `Http`, and `Url` variants plus `From<reqwest::Error>` follow the existing error design (Arc-wrapped inner errors, Clone-able enum) and should compose well with callers.

If you later need richer diagnostics for HLS failures, consider introducing a small `HlsError` enum/newtype instead of a bare `String`, but that's purely optional at this stage.

Also applies to: 95-99
rs/hang-cli/src/main.rs (1)
67-69: Have clap enforce `--hls-url` when `--format hls`

The new `hls_url` option is correctly plumbed into `client`, and the runtime check there ensures it's present for `ImportType::Hls`. You can tighten the UX by having clap enforce this at parse time:

```diff
-    /// URL of an HLS playlist to ingest (for --format hls).
-    #[arg(long)]
-    hls_url: Option<Url>,
+    /// URL of an HLS playlist to ingest (for --format hls).
+    #[arg(long, required_if_eq("format", "hls"))]
+    hls_url: Option<Url>,
```

Please double-check the exact `required_if_eq` attribute spelling/usage for your clap version before applying.

Also applies to: 90-91
rs/hang/src/import/media/fmp4.rs (1)
96-103: Consider accepting `Bytes` directly to avoid an extra allocation.

The HLS caller already has `Bytes` from the HTTP response. Using `Bytes::copy_from_slice` here creates an unnecessary copy. Consider adding an overload or changing the signature:

```diff
-    pub fn parse(&mut self, bytes: &[u8]) -> anyhow::Result<()> {
-        use std::io::Cursor;
-        let mut cursor = Cursor::new(Bytes::copy_from_slice(bytes));
-        self.decode(&mut cursor)
+    pub fn parse(&mut self, bytes: impl Into<Bytes>) -> anyhow::Result<()> {
+        use std::io::Cursor;
+        let mut cursor = Cursor::new(bytes.into());
+        self.decode(&mut cursor)
     }
```

This allows callers with `Bytes` to pass ownership directly, while `&[u8]` still works via `Into<Bytes>`.

rs/hang/src/import/manifest/hls.rs (2)
130-144: Consider adding a maximum iteration limit or health check.

The `service()` loop runs indefinitely until externally cancelled. If the HLS source ends (e.g., VOD playlist with `EXT-X-ENDLIST`), this will keep polling forever. Consider checking for playlist end conditions.

```diff
 pub async fn service(&mut self) -> Result<()> {
     loop {
         let outcome = self.step().await?;
+
+        // TODO: Check for EXT-X-ENDLIST and break if playlist is complete
+
         let delay = self.refresh_delay(outcome.target_duration, outcome.wrote_segments);
         debug!(
             wrote = outcome.wrote_segments,
             delay = ?delay,
             "HLS ingest step complete"
         );
         tokio::time::sleep(delay).await;
     }
 }
```
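To make the end-of-stream decision concrete, here is a minimal std-only sketch of the loop-termination rule. `StepOutcome`, its fields, and `should_continue` are hypothetical names; the real code would populate `end_list` from the parsed playlist (m3u8-rs exposes an `end_list` flag on media playlists):

```rust
/// Hypothetical mirror of one polling step's outcome in the reviewed code.
struct StepOutcome {
    wrote_segments: bool,
    /// Set when the playlist carried EXT-X-ENDLIST.
    end_list: bool,
}

/// A VOD playlist is complete once EXT-X-ENDLIST has been seen,
/// so the service loop can stop polling instead of sleeping forever.
fn should_continue(outcome: &StepOutcome) -> bool {
    !outcome.end_list
}

fn main() {
    let live = StepOutcome { wrote_segments: true, end_list: false };
    let vod_done = StepOutcome { wrote_segments: false, end_list: true };
    assert!(should_continue(&live));
    assert!(!should_continue(&vod_done));
    println!("ok");
}
```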
289-318: Consider adding retry logic for transient HTTP failures.

HLS ingest over the network is prone to transient failures. A single failed segment fetch currently propagates the error and terminates the entire ingest. Consider adding retry with backoff for resilience.
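As a sketch of what such a retry wrapper could look like. This is a synchronous, std-only illustration with hypothetical names; the real `fetch_bytes` is async, so it would use `tokio::time::sleep` and wrap the reqwest call instead:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry `op` up to `max_attempts` times, doubling the delay between tries.
/// Returns the first success, or the last error once attempts are exhausted.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) if attempt >= max_attempts => return Err(err),
            Err(_) => {
                sleep(delay);
                delay *= 2; // exponential backoff
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Simulate a fetch that fails twice before succeeding.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
    println!("succeeded after {} calls", calls);
}
```

A production version would likely distinguish retryable errors (timeouts, 5xx) from fatal ones (4xx) before retrying.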
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`rs/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- `justfile` (1 hunks)
- `rs/hang-cli/Cargo.toml` (1 hunks)
- `rs/hang-cli/src/client.rs` (3 hunks)
- `rs/hang-cli/src/import.rs` (2 hunks)
- `rs/hang-cli/src/main.rs` (2 hunks)
- `rs/hang-cli/src/server.rs` (2 hunks)
- `rs/hang/Cargo.toml` (1 hunks)
- `rs/hang/src/error.rs` (2 hunks)
- `rs/hang/src/import/manifest/hls.rs` (1 hunks)
- `rs/hang/src/import/manifest/mod.rs` (1 hunks)
- `rs/hang/src/import/media/fmp4.rs` (1 hunks)
- `rs/hang/src/import/media/mod.rs` (3 hunks)
- `rs/hang/src/import/mod.rs` (1 hunks)
- `rs/hang/src/lib.rs` (1 hunks)
- `rs/justfile` (1 hunks)
- `rs/libmoq/src/state.rs` (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
In Rust crates, tests should be integrated within source files using inline test modules
Files:
`rs/hang-cli/src/server.rs`, `rs/hang-cli/src/main.rs`, `rs/hang/src/error.rs`, `rs/hang/src/import/manifest/mod.rs`, `rs/hang/src/import/media/fmp4.rs`, `rs/hang/src/import/mod.rs`, `rs/hang-cli/src/client.rs`, `rs/hang/src/lib.rs`, `rs/libmoq/src/state.rs`, `rs/hang/src/import/manifest/hls.rs`, `rs/hang-cli/src/import.rs`, `rs/hang/src/import/media/mod.rs`
rs/**/Cargo.toml
📄 CodeRabbit inference engine (CLAUDE.md)
For Rust development, use the workspace configuration in `rs/Cargo.toml`
Files:
`rs/hang-cli/Cargo.toml`, `rs/hang/Cargo.toml`
🧬 Code graph analysis (5)
rs/hang-cli/src/server.rs (2)
- rs/hang-cli/src/import.rs (3): `new` (30-33), `new` (54-64), `read_from` (44-46)
- rs/hang/src/import/media/mod.rs (1): `new` (36-49)

rs/hang-cli/src/main.rs (1)
- rs/hang-cli/src/client.rs (1): `client` (10-46)

rs/hang/src/import/manifest/mod.rs (3)
- rs/hang-cli/src/import.rs (4): `new` (30-33), `new` (54-64), `init` (66-68), `service` (70-72)
- rs/hang/src/import/manifest/hls.rs (5): `new` (32-34), `new` (92-99), `new` (104-115), `init` (120-128), `service` (131-144)
- rs/hang/src/import/media/mod.rs (1): `new` (36-49)

rs/libmoq/src/state.rs (1)
- rs/hang/src/import/media/mod.rs (1): `new` (36-49)

rs/hang-cli/src/import.rs (3)
- rs/hang/src/import/manifest/hls.rs (5): `new` (32-34), `new` (92-99), `new` (104-115), `init` (120-128), `service` (131-144)
- rs/hang/src/import/manifest/mod.rs (3): `new` (18-28), `init` (30-34), `service` (36-40)
- rs/hang/src/import/media/mod.rs (1): `new` (36-49)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Check
🔇 Additional comments (19)
rs/hang/src/lib.rs (1)
14-15: Updated import docs match new import modules

The new fMP4 and HLS import bullets accurately reflect the split into media/manifest modules and the new `hls` path. No changes needed.

rs/hang/src/import/mod.rs (1)

1-5: Import module re-export reorganization looks good

Re-exporting `manifest` and `media` at the top level keeps the public `hang::import::*` surface intact while clarifying internal structure. No issues from this change alone.

rs/hang-cli/src/server.rs (1)

13-14: Server-side switch to `ImportMedia` keeps ingest flow intact

Using `crate::import::ImportMedia` here is consistent with the client path and the new import API: construct from `ImportType`, `init_from(input).await?`, then drive ingestion via `read_from(input)` inside `select!`. This preserves the previous behavior under the new naming.

Also applies to: 37-40, 46-47
rs/hang-cli/Cargo.toml (1)
22-22: New CLI dependencies align with HLS/manifest ingest

Adding `bytes`, `m3u8-rs`, and `reqwest` here fits the new HLS/HTTP ingest path. Just make sure versions/features stay consistent with the workspace (and with the `hang` crate) so you don't accidentally pull in multiple major versions or conflicting TLS backends.

Also applies to: 26-26, 28-28
rs/libmoq/src/state.rs (1)
27-28: Track storage migration to `ImportMedia` is consistent

Using `NonZeroSlab<hang::import::ImportMedia>` for `tracks` and constructing via `ImportMedia::new(broadcast.clone(), format)` cleanly follows the new import API. The existing `UnknownFormat` and `ShortDecode` error paths are preserved, so external behavior should remain the same.

Also applies to: 156-167
rs/hang/Cargo.toml (1)
31-33: Core crate dependency additions match new HLS/HTTP functionality

Bringing `url`, `m3u8-rs`, and `reqwest` into the main `hang` crate lines up with the new manifest/HLS import code and the added `Url`/`Http` error variants. Ensure these versions and feature flags stay in sync with the CLI crate and any workspace-level definitions to avoid duplication or TLS backend mismatches.

rs/hang/src/import/media/fmp4.rs (2)

74-89: LGTM!

The `with_catalog` constructor correctly initializes all fields and enables sharing a catalog across multiple importers, which is essential for the HLS multi-rendition use case.

91-94: LGTM!

Simple accessor that returns a clone of the catalog producer for sharing.
rs/hang-cli/src/client.rs (2)
30-45: LGTM!

Clean branching logic that properly validates the `hls_url` requirement for HLS format and delegates to the appropriate import path.

48-61: LGTM!

Well-structured helper that handles the common event loop pattern with graceful shutdown via CTRL-C.

rs/hang/src/import/manifest/mod.rs (2)
rs/hang/src/import/manifest/mod.rs (2)
8-10: LGTM!

The `Protocol` enum provides a clean extension point for future manifest formats (e.g., DASH as mentioned in PR objectives).

17-41: LGTM!

Clean factory pattern with `Option` return for unsupported formats and proper delegation to the inner protocol implementation.

rs/hang/src/import/media/mod.rs (2)

6-12: LGTM!

Clean module organization with submodules for each codec format and glob re-exports for convenience.

106-126: Verify partial buffer handling in the decode helpers.

The `mem::take` pattern swaps out the buffer, but if `decode` doesn't fully consume it, the remaining bytes are put back. However, the current `decode` method on line 79 has a TODO and an `ensure!` that requires the buffer to be empty. This means partial data will cause an error rather than being accumulated.

This is fine for fMP4 where atoms are self-contained, but worth verifying this matches the expected behavior for streaming inputs.
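To illustrate the take/put-back pattern under discussion, here is a self-contained sketch. The `Importer` type and its fixed 4-byte "records" are hypothetical stand-ins for the fMP4 atom parser:

```rust
use std::mem;

struct Importer {
    buffer: Vec<u8>,
}

impl Importer {
    /// Decode as many complete 4-byte records as possible; return how many.
    /// Leftover bytes are not an error here; they carry over to the next call.
    fn decode(&mut self, input: &mut Vec<u8>) -> usize {
        let records = input.len() / 4;
        input.drain(..records * 4).for_each(drop); // consume whole records only
        records
    }

    /// The take/put-back pattern: swap the buffer out to satisfy the borrow
    /// checker, decode, then restore whatever was not consumed.
    fn push(&mut self, chunk: &[u8]) -> usize {
        let mut pending = mem::take(&mut self.buffer);
        pending.extend_from_slice(chunk);
        let decoded = self.decode(&mut pending);
        self.buffer = pending; // put the unconsumed remainder back
        decoded
    }
}

fn main() {
    let mut imp = Importer { buffer: Vec::new() };
    assert_eq!(imp.push(&[1, 2, 3, 4, 5]), 1); // one record, one byte left over
    assert_eq!(imp.buffer.len(), 1);
    assert_eq!(imp.push(&[6, 7, 8]), 1); // leftover + chunk = one more record
    assert_eq!(imp.buffer.len(), 0);
}
```

Unlike the reviewed code, this sketch accumulates partial data instead of erroring; that is the behavioral difference the comment asks to verify.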
rs/hang-cli/src/import.rs (2)
8-23: LGTM!

The `ImportType` enum cleanly separates the CLI-level format selection from the underlying implementation details. The `as_str()` method provides the mapping to internal format identifiers.

53-64: LGTM!

Good HTTP client setup with a proper user-agent header for identification, and appropriate error handling with context.
rs/hang/src/import/manifest/hls.rs (3)
24-35: LGTM!

Clean configuration struct with a simple constructor.

461-533: LGTM!

The variant selection logic is well-thought-out: it filters to known codec families (currently H.264), deduplicates by resolution keeping the lowest bandwidth variant, and has clear comments explaining the rationale.
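The deduplication rule described here can be sketched in isolation. The tuple-based `Variant` and `dedupe_by_resolution` are hypothetical simplifications of the actual variant type:

```rust
use std::collections::HashMap;

/// Hypothetical variant: (width, height, bandwidth in bits per second).
type Variant = (u32, u32, u64);

/// Keep one variant per resolution, preferring the lowest bandwidth,
/// then sort ascending by bandwidth for a stable ladder.
fn dedupe_by_resolution(variants: &[Variant]) -> Vec<Variant> {
    let mut best: HashMap<(u32, u32), Variant> = HashMap::new();
    for &v in variants {
        best.entry((v.0, v.1))
            .and_modify(|b| {
                if v.2 < b.2 {
                    *b = v; // same resolution, lower bandwidth wins
                }
            })
            .or_insert(v);
    }
    let mut ladder: Vec<Variant> = best.into_values().collect();
    ladder.sort_by_key(|v| v.2);
    ladder
}

fn main() {
    let variants = [
        (1920, 1080, 6_000_000),
        (1920, 1080, 5_000_000), // duplicate resolution, lower bandwidth kept
        (1280, 720, 3_000_000),
    ];
    let ladder = dedupe_by_resolution(&variants);
    assert_eq!(ladder, vec![(1280, 720, 3_000_000), (1920, 1080, 5_000_000)]);
}
```

The real implementation additionally filters by codec family before this step; the sketch shows only the dedup rule.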
543-565: LGTM!

Good to see inline unit tests following the Rust convention. The tests cover basic construction and initial state verification.
kixelated left a comment
Initial review, haven't looked closely at import/manifest/hls.rs yet.
Lemme know if you want me to edit the PR for any nit pick Rust stuff. Looks pretty good in general.
rs/hang/src/import/media/fmp4.rs (Outdated)

```rust
pub fn parse(&mut self, bytes: &[u8]) -> anyhow::Result<()> {
    use std::io::Cursor;
    let mut cursor = Cursor::new(Bytes::copy_from_slice(bytes));
    self.decode(&mut cursor)
}
```
And yeah I read more, `fetch_bytes` returns `Bytes` so this `parse` function is indeed making an extra allocation/copy. Just call `decode` on the returned `Bytes` and it'll work like magic.

And yeah, I really love the reuse of the
Actionable comments posted: 1
♻️ Duplicate comments (4)
justfile (1)
74-78: Quote arguments to prevent shell breakage (already flagged in previous review).

The delegation on line 77 passes arguments without quotes, which will cause failures when the HLS URL contains shell-special characters like `&`, `?`, or spaces. This issue has already been flagged in a previous review with a suggested fix.

rs/hang-cli/src/client.rs (1)
34-34: `sd_notify` is still called before initialization completes.

Per the previous review feedback, moving `sd_notify` after the `init()`/`init_from()` calls would allow `systemctl start` to block until the media is fully initialized. Currently, readiness is signaled before the HLS manifest or media stream is parsed.

Consider moving the notification into the branched logic after initialization succeeds:

```diff
-    // Notify systemd that we're ready.
-    let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
-
     if format == ImportType::Hls {
         let hls_url = hls_url.ok_or_else(|| anyhow::anyhow!("--hls-url is required when --format hls is specified"))?;
         let mut manifest = ImportManifest::new(broadcast.producer.into(), hls_url)?;
         manifest.init().await.context("failed to initialize manifest import")?;
+        let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
         run_loop(manifest.service(), session).await
     } else {
         let mut media = ImportMedia::new(broadcast.producer.into(), format);
         media
             .init_from(input)
             .await
             .context("failed to initialize from media stream")?;
+        let _ = sd_notify::notify(true, &[sd_notify::NotifyState::Ready]);
         run_loop(media.read_from(input), session).await
     }
```

rs/hang/src/import/manifest/hls.rs (2)
294-305: Consider the suggested optimization for segment iteration.

The current loop iterates all segments checking sequence numbers. Per previous feedback, using `.skip()` could be more efficient for long playlists:

```rust
let skip = track.next_sequence.unwrap_or(0).saturating_sub(playlist.media_sequence) as usize;
for segment in playlist.segments.iter().skip(skip) {
    self.push_segment(kind, track, segment, sequence).await?;
    consumed += 1;
    sequence += 1;
}
```

This avoids redundant iteration over already-processed segments.
375-380: HTTP requests lack timeout protection.

As previously noted, `fetch_bytes` can block indefinitely if the remote server becomes unresponsive. Add a request timeout to prevent stalling the entire ingest:

```diff
 async fn fetch_bytes(&self, url: &Url) -> Result<Bytes> {
-    let response = self.client.get(url.clone()).send().await?;
+    let response = self.client
+        .get(url.clone())
+        .timeout(std::time::Duration::from_secs(30))
+        .send()
+        .await?;
     let response = response.error_for_status()?;
     let bytes = response.bytes().await?;
     Ok(bytes)
 }
```

Alternatively, configure a default timeout when constructing the `Client`.
🧹 Nitpick comments (2)
rs/justfile (1)
110-112: LGTM with optional consistency improvement.

The recipe correctly quotes all arguments and follows the expected pattern for invoking the hang binary with HLS-specific flags.

Optional: Consider adding a pre-build step for consistency.

The existing `pub` and `pub-h264` recipes include a pre-build step (`cargo build --bin hang`) to avoid compilation delays during media ingestion. While less critical for polling-based HLS ingestion than for stdin-based streaming, adding this step would improve consistency and user experience:

```diff
 # Ingest a live HLS media playlist and publish it via hang (full ladder).
 pub-hls url name="demo" relay="http://localhost:4443/anon":
+    cargo build --bin hang
     cargo run --bin hang -- publish --url "{{relay}}" --name "{{name}}" --format hls --hls-url "{{url}}"
```

rs/hang/src/import/manifest/hls.rs (1)
1-6: Documentation mentions abstraction that doesn't exist.

The module doc states it's "independent of any particular HTTP client" and mentions a `Fetcher` trait, but the implementation directly uses `reqwest::Client`. Consider updating the documentation to reflect the actual design, or remove the reference to `Fetcher`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `justfile` (1 hunks)
- `rs/hang-cli/src/client.rs` (3 hunks)
- `rs/hang/src/import/manifest/hls.rs` (1 hunks)
- `rs/justfile` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
In Rust crates, tests should be integrated within source files using inline test modules
Files:
`rs/hang-cli/src/client.rs`, `rs/hang/src/import/manifest/hls.rs`
🔇 Additional comments (11)
rs/hang-cli/src/client.rs (2)
36-51: LGTM!

The branching logic cleanly separates HLS manifest-based import from media-based import. Error handling with `.context()` provides clear failure messages, and the symmetric structure is easy to follow.

54-67: LGTM!

The `run_loop` helper cleanly abstracts the multiplexing of the import task, session lifecycle, and graceful shutdown handling. The 100ms delay after `close()` allows pending writes to flush before termination.

rs/hang/src/import/manifest/hls.rs (9)
23-34: LGTM!

Simple, focused configuration struct with an appropriate constructor.

44-95: LGTM!

The struct hierarchy cleanly separates configuration, state tracking, and runtime data. `TrackState` properly encapsulates per-track state including sequence tracking and initialization status.

97-122: LGTM!

Constructor properly initializes empty state, and `init()` provides appropriate logging for both success and warning cases.

124-138: LGTM!

The continuous ingest loop with adaptive delay based on target duration is appropriate for HLS polling.

140-214: LGTM!

The `std::mem::take` pattern to work around borrow checker limitations is a reasonable approach. Both methods consistently handle video variants and the optional audio track.

216-226: LGTM!

Adaptive delay logic appropriately polls more frequently when no segments are found, while respecting a minimum interval to avoid overwhelming the server.
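As a rough, std-only sketch of that adaptive rule. The function name, the halving factor, and the 500ms floor are assumptions for illustration, not the crate's actual constants:

```rust
use std::time::Duration;

/// Assumed minimum polling interval, to avoid hammering the origin server.
const MIN_DELAY: Duration = Duration::from_millis(500);

/// Wait roughly one target duration after writing segments; poll at half
/// that (but never below the floor) when the playlist had nothing new.
fn refresh_delay(target_duration: Duration, wrote_segments: bool) -> Duration {
    if wrote_segments {
        target_duration.max(MIN_DELAY)
    } else {
        (target_duration / 2).max(MIN_DELAY)
    }
}

fn main() {
    let target = Duration::from_secs(6);
    assert_eq!(refresh_delay(target, true), Duration::from_secs(6));
    assert_eq!(refresh_delay(target, false), Duration::from_secs(3));
    // Tiny target durations are clamped to the floor.
    assert_eq!(refresh_delay(Duration::from_millis(200), false), MIN_DELAY);
    println!("ok");
}
```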
235-281: LGTM!

The dual-path handling for master vs. media playlists with appropriate fallback is well structured. Logging provides good visibility into the selected renditions.

314-369: LGTM!

Init segment and media segment handling is well implemented with proper state tracking and error handling.

530-552: LGTM!

Tests are appropriately placed in an inline test module per the coding guidelines. The tests validate basic construction and initial state.
```rust
fn codec_family(codec: &str) -> Option<&'static str> {
    if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
        Some("h264")
    } else {
        None
    }
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Find FAMILY_PREFERENCE definition
rg "FAMILY_PREFERENCE" rs/hang/src/import/manifest/hls.rs -A 5 -B 2
```

Repository: moq-dev/moq
Length of output: 423
🏁 Script executed:

```shell
# Check select_variants implementation for fallback behavior
rg "fn select_variants" rs/hang/src/import/manifest/hls.rs -A 20
```

Repository: moq-dev/moq
Length of output: 844
🏁 Script executed:

```shell
# Check if there's any error handling for empty codec results
rg "select_variants|codec_family" rs/hang/src/import/manifest/hls.rs -B 3 -A 3
```

Repository: moq-dev/moq
Length of output: 1085
🌐 Web query: `fMP4 container HEVC AV1 VP9 codec support`
💡 Result:
Short answer: fragmented MP4 (fMP4 / ISO‑BMFF/CMAF) can carry HEVC, AV1 and VP9 — the container supports the codec sample entries (hvc1/hev1 for HEVC, av01 for AV1, vp09 for VP9) and commercial toolchains (FFmpeg, Bento4/MP4 libraries, AWS MediaConvert/CMAF workflows, HLS fMP4) support muxing them. Client playback/decoder availability is a separate compatibility issue (platform/browser/OS must have a decoder). [1][2][3][4]
Sources
- AWS MediaConvert — CMAF / MP4 outputs list AV1, HEVC, VP9 support. [1]
- AndroidX / MP4 muxing docs — sample entry names: hev1/hvc1, av01, vp09 (vpcC). [2]
- HLS / fMP4 notes — fMP4 is used for HEVC and LL‑HLS workflows. [3]
- FFmpeg docs — encoder/decoder and muxer support for AV1, HEVC, VP9 and MP4 tools. [4]
HLS streams with only HEVC, VP9, or AV1 variants will fail to initialize.
The codec_family function only recognizes H.264 (avc1/avc3). During variant selection, streams with HEVC (hvc1/hev1), VP9, or AV1 codecs are filtered out entirely by the filter_map at line 460–461. If a master playlist contains no H.264 variants, select_variants returns an empty list and initialization fails with "no usable variants found in master playlist".
Since fMP4 containers support HEVC, AV1, and VP9, extend codec support:
```diff
 fn codec_family(codec: &str) -> Option<&'static str> {
     if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
         Some("h264")
+    } else if codec.starts_with("hvc1.") || codec.starts_with("hev1.") {
+        Some("hevc")
+    } else if codec.starts_with("vp09.") {
+        Some("vp9")
+    } else if codec.starts_with("av01.") {
+        Some("av1")
     } else {
         None
     }
 }
```

Also update `FAMILY_PREFERENCE` (currently `["h264"]`) to include fallback families:

```diff
-const FAMILY_PREFERENCE: &[&str] = &["h264"];
+const FAMILY_PREFERENCE: &[&str] = &["h264", "hevc", "vp9", "av1"];
```
+const FAMILY_PREFERENCE: &[&str] = &["h264", "hevc", "vp9", "av1"];Committable suggestion skipped: line range outside the PR's diff.
I just caused a bunch of merge conflicts, so lemme resolve them.

I caused a bunch of merge conflicts so I fixed them and incorporated a lot of Rust nit-picks: qualabs#5

(I would push straight to the PR branch but I don't have permission, it's an organization setting)
Actionable comments posted: 2
♻️ Duplicate comments (2)
rs/hang/src/import/hls.rs (2)
304-315: Performance optimization available.

This loop iterates through all segments even when skipping already-seen ones. The optimization suggested in a prior review using `.skip()` would be more efficient for long-running streams with large playlists.

449-455: Limited codec support: only H.264 streams are accepted.

As noted in a prior review, `codec_family` only recognizes `avc1`/`avc3`. Streams with only HEVC, VP9, or AV1 variants will fail with "no usable variants found in master playlist." Since fMP4 supports these codecs, consider extending support.
🧹 Nitpick comments (8)
rs/hang/src/import/decoder.rs (1)
82-83: Consider making AAC stream decoding a compile-time constraint.

The TODO comment correctly identifies that the runtime `bail!` for AAC stream decoding could be made more type-safe. One option would be to split `Decoder` into separate types for stream-capable vs frame-only decoders, or use a trait-based approach. This can be deferred to a follow-up.

Would you like me to open an issue to track making this more type-safe?
rs/hang-cli/src/publish.rs (2)
38-48: Consider reducing duplication in Avc3/Fmp4 branches.

The `Avc3` and `Fmp4` branches are nearly identical, differing only in the format variant. You could consolidate them:

```rust
PublishFormat::Avc3 | PublishFormat::Fmp4 => {
    let decoder_format = match format {
        PublishFormat::Avc3 => DecoderFormat::Avc3,
        PublishFormat::Fmp4 => DecoderFormat::Fmp4,
        _ => unreachable!(),
    };
    PublishDecoder::Decoder(Decoder::new(broadcast.clone(), decoder_format))
}
```

This is a minor nit; the current code is clear and readable.

74-102: Both `init` and `run` create separate stdin handles.

Both methods independently call `tokio::io::stdin()`. While this works (tokio's stdin returns a handle to the same underlying resource), it could be slightly cleaner to store the stdin handle in `Publish` if you want to avoid the repeated instantiation. However, the current approach is functionally correct.

rs/hang/src/import/hls.rs (5)
109-114: Consider using `expect()` instead of `unwrap()` for better panic messages.

While `Client::builder().build()` rarely fails, using `expect()` provides a clearer error message if it does.

```diff
 Client::builder()
     .user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
     .build()
-    .unwrap()
+    .expect("failed to build default HTTP client")
```
160-168: Track state may be lost on mid-iteration errors.

If `consume_segments` fails for a video track, the current `track` is dropped and subsequent tracks in `video_tracks` are never processed or restored to `self.video`. While errors propagate up (making the `Hls` instance effectively unusable), this could complicate error recovery if retry logic is added later.

Consider restoring all tracks before returning on error, or documenting that errors leave the instance in an undefined state.

373-374: Inconsistent byte consumption check with `ensure_init_segment`.

`ensure_init_segment` verifies `bytes.is_empty()` after decoding, but `push_segment` does not. If trailing bytes after media segment decoding indicates a problem, consider adding the same check here for consistency.
383-388: HTTP requests lack timeout configuration.

If a server hangs or is slow, `fetch_bytes` will block indefinitely. Consider configuring a request or client-level timeout to prevent the ingest loop from stalling.

```diff
 async fn fetch_bytes(&self, url: Url) -> anyhow::Result<Bytes> {
-    let response = self.client.get(url).send().await?;
+    let response = self.client.get(url)
+        .timeout(std::time::Duration::from_secs(30))
+        .send()
+        .await?;
     let response = response.error_for_status()?;
     let bytes = response.bytes().await.context("failed to read response body")?;
     Ok(bytes)
 }
```
523-544: Tests are minimal but correctly structured.

The inline test module follows coding guidelines. Current tests only cover initialization; consider adding tests for `resolve_uri`, `select_variants`, and `select_audio`, which are pure functions that can be tested without HTTP mocking.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`rs/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- `rs/hang-cli/Cargo.toml` (1 hunks)
- `rs/hang-cli/src/client.rs` (1 hunks)
- `rs/hang-cli/src/import.rs` (0 hunks)
- `rs/hang-cli/src/main.rs` (4 hunks)
- `rs/hang-cli/src/publish.rs` (1 hunks)
- `rs/hang-cli/src/server.rs` (2 hunks)
- `rs/hang/Cargo.toml` (1 hunks)
- `rs/hang/src/error.rs` (2 hunks)
- `rs/hang/src/import/aac.rs` (3 hunks)
- `rs/hang/src/import/avc3.rs` (6 hunks)
- `rs/hang/src/import/decoder.rs` (5 hunks)
- `rs/hang/src/import/hls.rs` (1 hunks)
- `rs/hang/src/import/mod.rs` (1 hunks)
- `rs/hang/src/model/broadcast.rs` (1 hunks)
- `rs/justfile` (2 hunks)
- `rs/libmoq/src/state.rs` (2 hunks)
💤 Files with no reviewable changes (1)
- rs/hang-cli/src/import.rs
🚧 Files skipped from review as they are similar to previous changes (4)
- rs/hang-cli/Cargo.toml
- rs/hang/src/import/mod.rs
- rs/hang/src/error.rs
- rs/hang/Cargo.toml
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
In Rust crates, tests should be integrated within source files using inline test modules
Files:
`rs/hang/src/model/broadcast.rs`, `rs/hang/src/import/aac.rs`, `rs/hang-cli/src/client.rs`, `rs/hang/src/import/decoder.rs`, `rs/hang-cli/src/publish.rs`, `rs/libmoq/src/state.rs`, `rs/hang-cli/src/server.rs`, `rs/hang-cli/src/main.rs`, `rs/hang/src/import/hls.rs`, `rs/hang/src/import/avc3.rs`
🧬 Code graph analysis (9)
rs/hang/src/model/broadcast.rs (7)
- rs/hang/src/catalog/video/vp9.rs (1): `default` (97-108)
- rs/hang-cli/src/publish.rs (1): `new` (35-66)
- rs/hang/src/import/avc3.rs (2): `new` (31-39), `new` (278-280)
- rs/libmoq/src/state.rs (1): `new` (79-85)
- rs/hang/src/model/track.rs (2): `new` (30-36), `new` (181-189)
- rs/hang/src/catalog/root.rs (2): `new` (151-156), `new` (226-228)
- rs/hang/src/model/group.rs (1): `new` (36-43)

rs/hang/src/import/aac.rs (2)
- rs/hang/src/import/avc3.rs (2): `new` (31-39), `new` (278-280)
- rs/hang/src/model/timestamp.rs (1): `from_micros` (40-46)

rs/hang-cli/src/client.rs (3)
- rs/hang/src/catalog/root.rs (1): `produce` (86-93)
- rs/moq-native/src/client.rs (1): `connect` (141-205)
- rs/hang-cli/src/publish.rs (1): `run` (89-102)

rs/hang-cli/src/publish.rs (5)
- rs/hang/src/import/aac.rs (1): `new` (14-20)
- rs/hang/src/import/avc3.rs (2): `new` (31-39), `new` (278-280)
- rs/hang/src/import/decoder.rs (1): `new` (37-45)
- rs/hang/src/import/hls.rs (3): `new` (36-38), `new` (92-99), `new` (104-119)
- rs/hang/src/model/broadcast.rs (2): `new` (17-26), `default` (36-38)

rs/libmoq/src/state.rs (1)
- rs/hang/src/import/decoder.rs (1): `new` (37-45)

rs/hang-cli/src/server.rs (1)
- rs/hang-cli/src/publish.rs (2): `consume` (68-70), `run` (89-102)

rs/hang-cli/src/main.rs (3)
- rs/hang-cli/src/server.rs (1): `server` (14-41)
- rs/hang-cli/src/client.rs (1): `client` (6-31)
- rs/hang-cli/src/publish.rs (1): `new` (35-66)

rs/hang/src/import/hls.rs (4)
- rs/hang/src/import/aac.rs (2): `new` (14-20), `is_initialized` (140-142)
- rs/hang/src/import/avc3.rs (4): `new` (31-39), `new` (278-280), `init` (41-99), `is_initialized` (227-229)
- rs/hang/src/import/decoder.rs (2): `new` (37-45), `is_initialized` (114-120)
- rs/hang/src/model/broadcast.rs (2): `new` (17-26), `default` (36-38)

rs/hang/src/import/avc3.rs (1)
- rs/hang/src/model/timestamp.rs (1): `from_micros` (40-46)
🔇 Additional comments (28)
rs/hang/src/model/broadcast.rs (1)
35-39: LGTM!

The `Default` implementation correctly delegates to `Self::new()`, maintaining consistency with the existing `From` trait implementation and enabling cleaner instantiation in the new `Publish` workflow.

rs/justfile (3)
108-108: LGTM!

The explicit `fmp4` format flag aligns with the new `PublishFormat` enum structure.

110-112: LGTM!

The new `pub-hls` command follows consistent patterns with other publish commands and correctly passes the playlist URL to the HLS importer.

129-129: LGTM!

The `avc3` format flag correctly replaces the previous `--format annex-b` flag.

rs/hang/src/import/avc3.rs (6)
25-27: LGTM!

The addition of `zero` for wall-clock-based timestamp generation provides a sensible fallback when explicit timestamps aren't provided, enabling the decoder to work in streaming scenarios without external timing.

101-110: LGTM!

The `initialize` method correctly processes non-slice NALs (SPS/PPS/AUD/SEI) without requiring a timestamp, enabling pre-initialization of the decoder before streaming begins.

116-131: LGTM!

The optional timestamp parameter with early resolution via `self.pts()` ensures consistent timestamp handling across all NALs in the stream.

212-225: Verify timestamp requirement for frame emission.

The `pts.context("missing timestamp")?` at line 219 enforces that a timestamp must be present when flushing frames. This is correct for frame emission but means `maybe_start_frame` can fail if called with `None` from paths that contain slices.

Tracing the call sites:
- `decode_stream` and `decode_frame` resolve timestamps via `self.pts()` before calling `decode_nal`, so `Some(pts)` is always passed.
- `initialize` passes `None`, but only processes non-slice NALs, so `contains_slice` remains false and `maybe_start_frame` returns early.

This is safe as implemented.

231-238: LGTM!

The `pts` helper provides clean timestamp resolution with wall-clock fallback. The `as u64` cast from `u128` is safe for practical stream durations (overflow would require ~500,000 years of continuous streaming).

401-634: Excellent test coverage.

The comprehensive unit tests for `after_start_code`, `find_start_code`, and `NalIterator` cover edge cases including invalid start codes, consecutive zeros, embedded zeros in NAL data, and realistic H.264 streams.

rs/hang-cli/src/server.rs (2)
14-19: LGTM!

The refactored signature cleanly encapsulates the publish workflow into the `Publish` struct, simplifying the function signature and separating concerns.

36-40: LGTM!

The `select!` block correctly orchestrates the concurrent tasks:

- `accept` handles incoming connections using the broadcast consumer
- `publish.run()` drives the input source (stdin or HLS fetching)
- `web` serves HTTP requests

The `publish.consume()` call returns a cloneable `BroadcastConsumer` before `publish` is moved into `run()`.

rs/hang-cli/src/main.rs (2)
37-38: Good refactor to subcommand-based format selection. The change from `#[arg]` to `#[command(subcommand)]` for `PublishFormat` addresses the previous review feedback about supporting nested CLI commands (e.g., `hang import hls --playlist ...`). This approach scales better for format-specific options.

Also applies to: 63-64

73-79: Initialization before command dispatch looks correct. The pattern of extracting the format, creating the `Publish` instance, and calling `init()` before dispatching to server/client ensures the broadcast is ready before connections are established.
6-30: Clean refactor to Publish-driven flow. The function now accepts a `Publish` handle and correctly uses `publish.consume()` for the broadcast and `publish.run()` in the select loop. The systemd notification now occurs after initialization (which happens in `main.rs` before this function is called), addressing the previous review feedback.
144-151: Timestamp helper looks correct. The `pts()` method correctly falls back to wall-clock time when no hint is provided. The `as u64` cast from `as_micros()` (which returns `u128`) is practically safe since `u64` microseconds covers ~584,542 years, and `Timestamp::from_micros` already handles overflow via its `Result` return type.
125-138: Optional timestamp parameter aligns with the broader refactor. Making `pts` optional in `decode()` enables timestamp inference when the caller doesn't have timing information, which supports the HLS ingest path where timestamps may need to be derived from wall-clock time.
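The fallback pattern described above can be sketched in isolation. This is an illustrative stand-in for the `pts()` helper, not the crate's actual code; the real implementation returns the crate's `Timestamp` type rather than a raw `u64`.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Use the caller's timestamp hint when present; otherwise derive a timestamp
// in microseconds from the wall clock.
fn pts(hint: Option<u64>) -> u64 {
    hint.unwrap_or_else(|| {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap_or_default()
            // Cast from u128; safe in practice (u64 micros spans ~584,542 years).
            .as_micros() as u64
    })
}

fn main() {
    assert_eq!(pts(Some(42)), 42); // explicit hint wins
    assert!(pts(None) > 0); // wall-clock fallback
    println!("ok");
}
```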
7-16: Good introduction of strongly-typed format enum. The `DecoderFormat` enum with `strum::EnumString` provides type-safe format selection and eliminates the previous string-based parsing with deprecated aliases. This is a cleaner API.
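What the `strum::EnumString` derive provides is essentially a generated `FromStr` implementation. The sketch below writes that impl out by hand so it has no external dependencies; the variant names follow the PR summary (Avc3, Fmp4, Hls), and the exact string spellings and error type in the real crate may differ.

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum DecoderFormat {
    Avc3,
    Fmp4,
    Hls,
}

// Hand-written equivalent of what `#[derive(strum::EnumString)]` would generate.
impl FromStr for DecoderFormat {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "avc3" => Ok(Self::Avc3),
            "fmp4" => Ok(Self::Fmp4),
            "hls" => Ok(Self::Hls),
            other => Err(format!("unknown format: {other}")),
        }
    }
}

fn main() {
    assert_eq!(DecoderFormat::from_str("fmp4"), Ok(DecoderFormat::Fmp4));
    // Deprecated aliases are no longer accepted.
    assert!(DecoderFormat::from_str("annex-b").is_err());
    println!("ok");
}
```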
11-21: Good use of clap Subcommand for format-specific arguments. Using `#[derive(Subcommand)]` for `PublishFormat` allows the `Hls` variant to carry its own `--playlist` argument, addressing the previous review feedback about avoiding "hundreds of `--hls`-prefixed flags." This is a clean design.
49-58: No API mismatch found. The call site at lines 50–56 correctly passes `(broadcast.clone(), HlsConfig { ... })`, matching the actual `Hls::new` signature: `pub fn new(broadcast: BroadcastProducer, cfg: HlsConfig) -> Self`. The code is correct as written. Likely an incorrect or invalid review comment.
rs/hang/src/import/hls.rs (8)
24-39: LGTM! Clean configuration struct with sensible defaults and optional client injection for testing/customization.
134-148: LGTM! The infinite ingest loop with error propagation is appropriate. External cancellation (via task abort or `select!`) is the expected shutdown mechanism.
224-234: LGTM! The adaptive polling logic is well-designed: halving the delay when no segments were consumed helps reduce latency for catching new segments.
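The adaptive polling idea can be sketched as a pure function. This is a hedged illustration of the policy described above, not the crate's actual code; in the real module the target delay would come from the playlist's target duration.

```rust
use std::time::Duration;

// Poll again sooner (half the target delay) when the last refresh yielded no
// new segments; otherwise poll at the normal cadence.
fn next_poll_delay(target: Duration, consumed: usize) -> Duration {
    if consumed == 0 {
        target / 2 // nothing new yet: check again sooner to cut latency
    } else {
        target // segments arrived on schedule
    }
}

fn main() {
    let target = Duration::from_secs(6);
    assert_eq!(next_poll_delay(target, 0), Duration::from_secs(3));
    assert_eq!(next_poll_delay(target, 2), Duration::from_secs(6));
    println!("ok");
}
```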
246-291: LGTM! Good fallback behavior: gracefully handles both master and media playlists. The warning logs for missing audio are appropriate since audio is optional.
324-354: LGTM! Thorough validation of init segment processing: verifying complete consumption and successful initialization is good defensive coding.
390-401: LGTM! The lazy-initialization pattern ensures importers are created on-demand while maintaining index correspondence with video tracks.
515-521: LGTM! Standard URI resolution handling both absolute and relative paths correctly.
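The absolute-vs-relative distinction mentioned above can be illustrated with a minimal helper. The real code presumably resolves against the playlist URL with a proper URL library (e.g. the `url` crate's `Url::join`); this stdlib-only stand-in handles only the simple same-directory relative case.

```rust
// Resolve a segment URI against the playlist's base URL.
fn resolve_uri(base: &str, uri: &str) -> String {
    if uri.starts_with("http://") || uri.starts_with("https://") {
        uri.to_string() // already absolute: use as-is
    } else {
        // Replace everything after the last '/' of the base with the relative path.
        match base.rfind('/') {
            Some(idx) => format!("{}/{}", &base[..idx], uri),
            None => uri.to_string(),
        }
    }
}

fn main() {
    let base = "https://cdn.example.com/live/master.m3u8";
    assert_eq!(
        resolve_uri(base, "seg_001.m4s"),
        "https://cdn.example.com/live/seg_001.m4s"
    );
    assert_eq!(
        resolve_uri(base, "https://other.example.com/a.m4s"),
        "https://other.example.com/a.m4s"
    );
    println!("ok");
}
```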
492-510: Variants without resolution are deduplicated together, affecting only video variants. Using `Option<Resolution>` as the key means multiple video variants lacking resolution metadata collapse into a single entry (the lowest-bandwidth one). Audio is selected separately via `select_audio()` and does not participate in this deduplication. The function filters for video codecs before this logic applies, so audio-only variants do not reach the HashMap.

Likely an incorrect or invalid review comment.
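The keep-the-cheapest-per-resolution behavior discussed above can be demonstrated in miniature. The `Variant` struct here is a simplified stand-in for the hls-playlist `VariantStream` type, with just the two fields the dedup logic needs.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Variant {
    resolution: Option<(u32, u32)>,
    bandwidth: u64,
}

// Keep the lowest-bandwidth variant per (optional) resolution key. Variants
// without a resolution all share the `None` key and collapse into one entry.
fn dedup_by_resolution(variants: Vec<Variant>) -> Vec<Variant> {
    let mut best: HashMap<Option<(u32, u32)>, Variant> = HashMap::new();
    for v in variants {
        let replace = best
            .get(&v.resolution)
            .map_or(true, |cur| v.bandwidth < cur.bandwidth);
        if replace {
            best.insert(v.resolution, v);
        }
    }
    best.into_values().collect()
}

fn main() {
    let out = dedup_by_resolution(vec![
        Variant { resolution: Some((1280, 720)), bandwidth: 3_000_000 },
        Variant { resolution: Some((1280, 720)), bandwidth: 2_000_000 },
        Variant { resolution: None, bandwidth: 500_000 },
    ]);
    assert_eq!(out.len(), 2); // one 720p entry plus the resolution-less bucket
    assert!(out.iter().any(|v| v.bandwidth == 2_000_000));
    println!("ok");
}
```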
//! independent of any particular HTTP client; callers provide an implementation
//! of [`Fetcher`] to perform the actual network I/O.
Documentation refers to non-existent Fetcher trait.
The module doc mentions callers should "provide an implementation of Fetcher" but the actual implementation uses reqwest::Client directly via HlsConfig.client. Update the doc to reflect the actual design.
-//! independent of any particular HTTP client; callers provide an implementation
-//! of [`Fetcher`] to perform the actual network I/O.
+//! independent of any particular HTTP client; callers can optionally provide a
+//! custom `reqwest::Client` via [`HlsConfig`] to perform network I/O.
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 5 to 6, the module doc incorrectly
references a non-existent `Fetcher` trait; update the documentation to state
that network I/O is performed using a reqwest::Client provided via
HlsConfig.client (or similar field) instead of a `Fetcher` implementation, and
adjust the phrasing to accurately describe that callers supply an HlsConfig with
a configured HTTP client for network requests.
rs/libmoq/src/state.rs
Outdated
 pub fn create_track(&mut self, broadcast: Id, format: &str, mut init: &[u8]) -> Result<Id, Error> {
 	let broadcast = self.broadcasts.get_mut(broadcast).ok_or(Error::NotFound)?;
-	let mut decoder = hang::import::Decoder::new(broadcast.clone(), format)
-		.ok_or_else(|| Error::UnknownFormat(format.to_string()))?;
+	// TODO add support for stream decoders too.
+	let format =
+		hang::import::DecoderFormat::from_str(format).map_err(|err| Error::UnknownFormat(err.to_string()))?;
+	let mut decoder = hang::import::Decoder::new(broadcast.clone(), format);

 	decoder
 		.initialize(&mut init)
 		.map_err(|err| Error::InitFailed(Arc::new(err)))?;
 	assert!(init.is_empty(), "buffer was not fully consumed");

-	let id = self.tracks.insert(decoder);
-	Ok(id)
+	Ok(self.tracks.insert(decoder))
🧩 Analysis chain
🏁 Script executed:
# First, find and examine DecoderFormat definition and FromStr implementation
rg -n "DecoderFormat" rs/hang/src/import/decoder.rs -A 10 -B 2

Repository: moq-dev/moq
Length of output: 1064
🏁 Script executed:
# Search for FromStr trait implementation for DecoderFormat
rg -n "impl.*FromStr.*DecoderFormat|#\[derive.*FromStr\]" rs/hang/src/import/ -B 3 -A 5

Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check if there are existing tests in state.rs
rg -n "#\[cfg\(test\)\]|#\[test\]|mod tests" rs/libmoq/src/state.rs

Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# View the complete create_track function to understand context
sed -n '154,166p' rs/libmoq/src/state.rs

Repository: moq-dev/moq
Length of output: 661
🏁 Script executed:
# Check the entire state.rs file for any test content or test patterns
wc -l rs/libmoq/src/state.rs

Repository: moq-dev/moq
Length of output: 81
🏁 Script executed:
# Search more broadly for test content in state.rs
cat rs/libmoq/src/state.rs | tail -100 | head -50

Repository: moq-dev/moq
Length of output: 1521
🏁 Script executed:
# Check if there are tests for create_track elsewhere in the codebase
rg -l "create_track" rs/libmoq/

Repository: moq-dev/moq
Length of output: 98
🏁 Script executed:
# Check the end of state.rs (where tests would typically be)
tail -30 rs/libmoq/src/state.rs

Repository: moq-dev/moq
Length of output: 1004
🏁 Script executed:
# Look for any test files related to state.rs
fd -t f "test|spec" rs/libmoq/ | grep -i state

Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check lib.rs to see if tests are centralized there
rg -n "#\[cfg\(test\)\]|#\[test\]|mod tests" rs/libmoq/src/lib.rs -A 3

Repository: moq-dev/moq
Length of output: 37
Add inline test module to verify create_track behavior per coding guidelines.
The refactor to use `DecoderFormat::from_str` is correct—the `strum::EnumString` derive macro provides the `FromStr` implementation. However, `rs/libmoq/src/state.rs` should include an inline test module (`#[cfg(test)]`) with tests for `create_track` and related public methods per the Rust coding guidelines.
🤖 Prompt for AI Agents
In rs/libmoq/src/state.rs around lines 154 to 166, add an inline test module
(#[cfg(test)] mod tests) that verifies create_track behavior: write tests that
construct a State, insert a broadcast, call create_track with a valid format and
a buffer that decodes successfully and assert it returns Ok(Id) and consumes the
buffer; test invalid format returns Error::UnknownFormat; test initialization
failure returns Error::InitFailed; use the crate's public API/setup helpers to
create broadcasts and assert results with pattern matching (avoid panics),
keeping tests self-contained and using #[test] functions inside the test module.
…t's catalog and remove the `parse` method.
Actionable comments posted: 3
♻️ Duplicate comments (1)
rs/hang/src/import/hls.rs (1)
1-6: Module docs still reference a non-existent `Fetcher` abstraction. The docs say callers provide a `Fetcher`, but the code now uses an optional `reqwest::Client` in `HlsConfig`. This can confuse users of the API. Consider updating the wording along these lines:

-//! feed their fMP4 segments into a `hang` broadcast. It is designed to be
-//! independent of any particular HTTP client; callers provide an implementation
-//! of [`Fetcher`] to perform the actual network I/O.
+//! feed their fMP4 segments into a `hang` broadcast. Network I/O is performed
+//! using a `reqwest::Client`; callers can optionally provide a custom client
+//! via [`HlsConfig::client`].
🧹 Nitpick comments (1)
rs/hang/src/import/hls.rs (1)
102-119: Avoid panicking on HTTP client construction in `Hls::new`. `Client::builder().build().unwrap()` can panic (e.g. bad TLS configuration, missing roots). Since this is part of ingest setup, a fallible constructor would make failures explicit instead of crashing the process. One option:

-pub fn new(broadcast: BroadcastProducer, cfg: HlsConfig) -> Self {
-	Self {
+pub fn new(broadcast: BroadcastProducer, cfg: HlsConfig) -> anyhow::Result<Self> {
+	let client = cfg.client.clone().unwrap_or_else(|| {
+		Client::builder()
+			.user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
+			.build()
+			.unwrap()
+	});
+
+	Ok(Self {
 		broadcast,
 		video_importers: Vec::new(),
 		audio_importer: None,
-		client: cfg.client.clone().unwrap_or_else(|| {
-			Client::builder()
-				.user_agent(concat!(env!("CARGO_PKG_NAME"), "/", env!("CARGO_PKG_VERSION")))
-				.build()
-				.unwrap()
-		}),
+		client,
 		cfg,
 		video: Vec::new(),
 		audio: None,
-	}
+	})

and propagate the `Result` at call sites (or, if you want to keep `new` infallible, at least use `expect` with a clear message).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
rs/hang/src/import/hls.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
In Rust crates, tests should be integrated within source files using inline test modules
Files:
rs/hang/src/import/hls.rs
/// Fetch the latest playlist, download the init segment, and prime the importer with a buffer of segments.
///
/// Returns the number of segments buffered during initialization.
pub async fn init(&mut self) -> anyhow::Result<()> {
init doc comment does not match its return type
The doc says init "Returns the number of segments buffered during initialization.", but the function returns anyhow::Result<()> and only logs the count.
Either return the count or adjust the docs. For example:
-/// Returns the number of segments buffered during initialization.
+/// Primes the ingest by buffering initial segments, logging how many were
+/// buffered. Returns an error if initialization fails.

🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 121-124 the doc comment for init
wrongly claims it "Returns the number of segments buffered during
initialization" while the function signature is pub async fn init(&mut self) ->
anyhow::Result<()> and it only logs the count; fix by either (A) updating the
function to return anyhow::Result<usize> and actually return the buffered
segment count (update the signature, propagate the usize return to callers and
adjust any call sites/tests), or (B) change the doc comment to remove the
"Returns..." sentence so it accurately reflects that init returns Result<()> and
only logs the count; pick one option and update related docs/call sites
accordingly.
async fn consume_segments(
	&mut self,
	kind: TrackKind,
	track: &mut TrackState,
	playlist: &MediaPlaylist,
) -> anyhow::Result<usize> {
	self.ensure_init_segment(kind, track, playlist).await?;

	// Skip segments we've already seen
	let skip = track.next_sequence.unwrap_or(0).saturating_sub(playlist.media_sequence) as usize;
	let base_seq = playlist.media_sequence + skip as u64;
	for (i, segment) in playlist.segments[skip..].iter().enumerate() {
		self.push_segment(kind, track, segment, base_seq + i as u64).await?;
	}
	let consumed = playlist.segments.len() - skip;

	if consumed == 0 {
		debug!(media_type = ?track.media_type, "no fresh HLS segments available");
	}

	Ok(consumed)
}
Potential panic when skip exceeds playlist.segments.len()
skip is derived from next_sequence and media_sequence and then used both for slicing (playlist.segments[skip..]) and len() - skip. If next_sequence jumps ahead of the current playlist window (e.g. server reset, malformed playlist), skip can be greater than the number of segments, causing an out-of-bounds panic.
Safer approach: clamp skip to the segment count and use iter().skip:
- // Skip segments we've already seen
- let skip = track.next_sequence.unwrap_or(0).saturating_sub(playlist.media_sequence) as usize;
- let base_seq = playlist.media_sequence + skip as u64;
- for (i, segment) in playlist.segments[skip..].iter().enumerate() {
+ // Skip segments we've already seen.
+ let desired = track.next_sequence.unwrap_or(playlist.media_sequence);
+ let mut skip = desired
+ .saturating_sub(playlist.media_sequence) as usize;
+ let total = playlist.segments.len();
+ if skip > total {
+ skip = total;
+ }
+
+ let base_seq = playlist.media_sequence + skip as u64;
+ for (i, segment) in playlist.segments.iter().skip(skip).enumerate() {
self.push_segment(kind, track, segment, base_seq + i as u64).await?;
}
- let consumed = playlist.segments.len() - skip;
+ let consumed = total - skip;This preserves the intended behavior while avoiding panics on malformed or reset playlists.
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 293 to 314, the computed skip can
exceed playlist.segments.len(), which would panic when slicing and computing
len() - skip; clamp skip to at most playlist.segments.len() (e.g. let skip =
skip.min(playlist.segments.len());) and iterate safely (either use
playlist.segments.iter().skip(skip) or slice with the clamped value) while
computing base_seq and consumed from the clamped skip so consumed =
playlist.segments.len() - skip; keep existing logic for pushing segments but use
the clamped skip to avoid out-of-bounds panics.
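The clamping arithmetic can be checked in isolation. The sketch below uses plain integers in place of the playlist structs; `fresh_range` is a hypothetical helper name, and `media_sequence` denotes the sequence number of the first segment in the current window.

```rust
// Compute (segments to skip, segments to consume) without ever indexing out
// of bounds, even if next_sequence jumped past the current playlist window.
fn fresh_range(next_sequence: Option<u64>, media_sequence: u64, segment_count: usize) -> (usize, usize) {
    let desired = next_sequence.unwrap_or(media_sequence);
    // Clamp so that slicing/iterating with `skip` can never panic.
    let skip = (desired.saturating_sub(media_sequence) as usize).min(segment_count);
    (skip, segment_count - skip)
}

fn main() {
    // Normal case: we've already seen 3 of 5 segments.
    assert_eq!(fresh_range(Some(103), 100, 5), (3, 2));
    // Pathological case: next_sequence jumped past the window; no panic, consume 0.
    assert_eq!(fresh_range(Some(200), 100, 5), (5, 0));
    // First poll: consume everything.
    assert_eq!(fresh_range(None, 100, 5), (0, 5));
    println!("ok");
}
```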
fn select_variants(master: &MasterPlaylist) -> Vec<&VariantStream> {
	// Helper to extract the first video codec token from the CODECS attribute.
	fn first_video_codec(variant: &VariantStream) -> Option<&str> {
		let codecs = variant.codecs.as_deref()?;
		codecs.split(',').map(|s| s.trim()).find(|s| !s.is_empty())
	}

	// Map codec strings into a coarse "family" so we can prefer H.264 over others.
	fn codec_family(codec: &str) -> Option<&'static str> {
		if codec.starts_with("avc1.") || codec.starts_with("avc3.") {
			Some("h264")
		} else {
			None
		}
	}

	// Consider only non-i-frame variants with a URI and a known codec family.
	let candidates: Vec<(&VariantStream, &str, &str)> = master
		.variants
		.iter()
		.filter(|variant| !variant.is_i_frame && !variant.uri.is_empty())
		.filter_map(|variant| {
			let codec = first_video_codec(variant)?;
			let family = codec_family(codec)?;
			Some((variant, codec, family))
		})
		.collect();

	if candidates.is_empty() {
		return Vec::new();
	}

	// Prefer families in this order, falling back to the first available.
	const FAMILY_PREFERENCE: &[&str] = &["h264"];

	let families_present: Vec<&str> = candidates.iter().map(|(_, _, fam)| *fam).collect();

	let target_family = FAMILY_PREFERENCE
		.iter()
		.find(|fav| families_present.iter().any(|fam| fam == *fav))
		.copied()
		.unwrap_or(families_present[0]);

	// Keep only variants in the chosen family.
	let family_variants: Vec<&VariantStream> = candidates
		.into_iter()
		.filter(|(_, _, fam)| *fam == target_family)
		.map(|(variant, _, _)| variant)
		.collect();

	// Deduplicate by resolution, keeping the lowest-bandwidth variant for each size.
	let mut by_resolution: HashMap<Option<Resolution>, &VariantStream> = HashMap::new();

	for variant in family_variants {
		let key = variant.resolution;
		let bandwidth = variant.average_bandwidth.unwrap_or(variant.bandwidth);

		match by_resolution.entry(key) {
			Entry::Vacant(entry) => {
				entry.insert(variant);
			}
			Entry::Occupied(mut entry) => {
				let existing = entry.get();
				let existing_bw = existing.average_bandwidth.unwrap_or(existing.bandwidth);
				if bandwidth < existing_bw {
					entry.insert(variant);
				}
			}
		}
	}

	by_resolution.values().cloned().collect()
}
Variant selection may drop valid H.264 renditions when audio codec appears first in CODECS
first_video_codec currently just takes the first non-empty token from variant.codecs. For manifests where the CODECS attribute lists audio first (e.g. "mp4a.40.2,avc1.640029"), this returns the audio codec, codec_family returns None, and the whole variant is discarded. That can make select_variants return an empty list even though valid H.264 video is present.
You can fix this by making first_video_codec explicitly search for an H.264 video codec:
- // Helper to extract the first video codec token from the CODECS attribute.
- fn first_video_codec(variant: &VariantStream) -> Option<&str> {
- let codecs = variant.codecs.as_deref()?;
- codecs.split(',').map(|s| s.trim()).find(|s| !s.is_empty())
- }
+ // Helper to extract a video codec token from the CODECS attribute.
+ fn first_video_codec(variant: &VariantStream) -> Option<&str> {
+ let codecs = variant.codecs.as_deref()?;
+ codecs
+ .split(',')
+ .map(|s| s.trim())
+ // Currently we only support H.264 video.
+ .find(|s| s.starts_with("avc1.") || s.starts_with("avc3."))
+ }The rest of select_variants can stay the same, but this will keep valid H.264 variants even when audio is listed first in CODECS.
If you later add support for other video codec families, you can generalize this to search for any codec where codec_family returns Some(_).
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 433 to 505, first_video_codec
currently returns the first token from variant.codecs which can be an audio
codec and causes H.264 video variants to be dropped; change first_video_codec so
it scans all comma-separated CODECS tokens and returns the first token that
matches an H.264 video codec (i.e. starts_with "avc1." or "avc3.") — keep the
existing as_deref handling and trimming logic, returning None only if no video
codec token is found; you may alternatively call codec_family from inside the
search and return the token when codec_family(token).is_some().
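The fixed helper suggested above can be exercised on the raw CODECS string, without the hls-playlist `VariantStream` type. This is a self-contained restatement of the proposed fix, not the merged code.

```rust
// Scan all comma-separated CODECS tokens and return the first H.264 video
// codec, instead of blindly taking the first non-empty token (which may be
// the audio codec when audio is listed first).
fn first_video_codec(codecs: Option<&str>) -> Option<&str> {
    codecs?
        .split(',')
        .map(|s| s.trim())
        // Currently only H.264 video is supported.
        .find(|s| s.starts_with("avc1.") || s.starts_with("avc3."))
}

fn main() {
    // Audio listed first no longer hides the video codec.
    assert_eq!(first_video_codec(Some("mp4a.40.2,avc1.640029")), Some("avc1.640029"));
    assert_eq!(first_video_codec(Some("avc3.64001f")), Some("avc3.64001f"));
    // Audio-only or missing CODECS yields None.
    assert_eq!(first_video_codec(Some("mp4a.40.2")), None);
    assert_eq!(first_video_codec(None), None);
    println!("ok");
}
```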
…roadcast's catalog in FMP4 importer.
…tead of creating its own.
Actionable comments posted: 1
♻️ Duplicate comments (5)
rs/hang/src/import/hls.rs (5)
5-6: Documentation refers to non-existent `Fetcher` trait. The module doc mentions "callers provide an implementation of [`Fetcher`]" but the implementation uses `reqwest::Client` directly via `HlsConfig.client`. Update the docs to reflect the actual design.
113-124: Doc comment does not match return type. The doc says "Returns the number of segments buffered during initialization" but the function returns `anyhow::Result<()>` and only logs the count.
284-305: Potential panic when `skip` exceeds segment count. If `skip` exceeds `playlist.segments.len()` (e.g., due to playlist reset or malformed data), the slice `playlist.segments[skip..]` will panic. Clamp `skip` to the segment count and use iterator methods to avoid out-of-bounds access.
432-438: HLS streams with only HEVC, VP9, or AV1 variants will fail. The `codec_family` function only recognizes H.264 (avc1/avc3). Streams with HEVC, VP9, or AV1 codecs are filtered out entirely, causing initialization to fail if no H.264 variants exist. Since fMP4 supports these codecs (per rs/hang/src/import/fmp4.rs lines 204-276), extend codec recognition to include them.
426-429: Variant selection may drop valid H.264 renditions when audio codec is listed first. `first_video_codec` returns the first non-empty token from `variant.codecs`. If the `CODECS` attribute lists audio first (e.g., `"mp4a.40.2,avc1.640029"`), an audio codec is returned, `codec_family` returns `None`, and the variant is discarded. Fix by searching for an H.264 video codec explicitly (tokens starting with `avc1.` or `avc3.`) instead of blindly taking the first token.
🧹 Nitpick comments (1)
rs/hang/src/import/hls.rs (1)
506-527: Consider expanding test coverage beyond basic construction. The existing tests verify basic initialization, but the module would benefit from tests covering core functionality such as:
- Playlist parsing and variant selection
- URI resolution edge cases
- Segment sequencing logic
- Error handling for malformed playlists
These tests would increase confidence in the implementation and catch regressions.
As per coding guidelines, consider adding inline test modules within the source file for better integration testing of the core logic.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
rs/hang/src/import/fmp4.rs (4 hunks)
rs/hang/src/import/hls.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
In Rust crates, tests should be integrated within source files using inline test modules
Files:
rs/hang/src/import/fmp4.rs
rs/hang/src/import/hls.rs
🧬 Code graph analysis (2)
rs/hang/src/import/fmp4.rs (4)
- rs/hang/src/import/aac.rs (1): `new` (14-20)
- rs/libmoq/src/state.rs (1): `new` (79-85)
- rs/hang/src/import/avc3.rs (2): `new` (31-39), `new` (278-280)
- rs/hang/src/catalog/root.rs (2): `new` (151-156), `new` (226-228)

rs/hang/src/import/hls.rs (4)
- rs/hang/src/import/decoder.rs (2): `new` (37-45), `is_initialized` (114-120)
- rs/hang/src/model/broadcast.rs (2): `new` (17-26), `default` (36-38)
- rs/hang/src/import/aac.rs (2): `new` (14-20), `is_initialized` (140-142)
- rs/hang/src/import/avc3.rs (4): `new` (31-39), `new` (278-280), `init` (41-99), `is_initialized` (227-229)
🔇 Additional comments (6)
rs/hang/src/import/fmp4.rs (3)
31-32: LGTM: Clear explanation of the catalog cloning pattern. The comment accurately describes that this is the same underlying catalog accessed via a separate binding. The Arc-based sharing is well-documented.
54-58: LGTM: Simplified initialization using shared catalog. The refactored constructor correctly uses the broadcast's existing catalog instead of creating a new one. Removing `mut` from the parameter is appropriate since the broadcast is no longer mutated here. This aligns with the pattern used by other importers (Avc3, Aac).
498-498: LGTM: Drop correctly updated to use broadcast's catalog. The Drop implementation correctly accesses the catalog through `self.broadcast.catalog` to maintain consistency with the new shared catalog design.
rs/hang/src/import/hls.rs (3)
24-39: LGTM: Well-designed configuration struct. The `HlsConfig` structure provides a clean interface with sensible defaults (optional client) and follows Rust conventions.
94-111: LGTM: Robust constructor with sensible defaults. The constructor properly initializes all fields and provides a fallback `reqwest::Client` with an appropriate user agent when none is provided.
307-337: LGTM: Robust init segment handling with validation. The initialization logic properly validates that the init segment was fully consumed and successfully initialized the importer. The guard for `init_ready` prevents redundant fetches.
```rust
fn find_map<'a>(&self, playlist: &'a MediaPlaylist) -> Option<&'a Map> {
    playlist.segments.iter().find_map(|segment| segment.map.as_ref())
}
```
Edge case: Assumes uniform init segment across all segments.
The `find_map` helper returns the first `EXT-X-MAP` found and assumes all segments in the playlist share the same init segment. While this is typical, the HLS spec permits per-segment maps with different URIs. If a playlist contains segments with varying maps, only the first will be used, potentially causing decode failures for later segments.
For robustness in handling non-standard streams, consider tracking the map on a per-segment basis or validating that all segments reference the same init segment URI.
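A sketch of the validation option, using hypothetical minimal stand-ins for the playlist types (the real `MediaPlaylist` and `Map` come from the m3u8 parsing crate, and `uniform_map_uri` is an illustrative helper, not part of the PR):

```rust
// Hypothetical minimal stand-ins for the parsed playlist types.
struct Map {
    uri: String,
}

struct Segment {
    map: Option<Map>,
}

struct MediaPlaylist {
    segments: Vec<Segment>,
}

/// Returns the shared init-segment URI (if any segment declares one),
/// or an error if two segments reference different init segments.
fn uniform_map_uri(playlist: &MediaPlaylist) -> Result<Option<&str>, String> {
    let mut found: Option<&str> = None;
    for segment in &playlist.segments {
        if let Some(map) = &segment.map {
            match found {
                None => found = Some(map.uri.as_str()),
                Some(uri) if uri != map.uri => {
                    return Err(format!("conflicting init segments: {uri} vs {}", map.uri));
                }
                _ => {}
            }
        }
    }
    Ok(found)
}
```

Callers could run this once per reload and abort (or log) when the playlist mixes init segments, keeping the common uniform case on the fast path.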
🤖 Prompt for AI Agents
In rs/hang/src/import/hls.rs around lines 362-364, find_map currently returns
the first EXT-X-MAP it encounters which assumes a uniform init segment across
the playlist; this can break playlists that use per-segment maps. Fix by either
(A) switching to per-segment map handling — change callers to request the Map
for each segment (e.g., change find_map to accept a segment or lookup map by
segment and use that when processing each segment), or (B) validate that every
segment that has a Map uses the same URI before proceeding and return an
error/log and abort if differing URIs are found. Implement one of these
approaches and adjust call sites accordingly so decoding uses the correct init
segment for each segment.
Actionable comments posted: 0
🧹 Nitpick comments (1)
rs/hang/src/import/avc3.rs (1)
243-250: Consider using checked conversion for the timestamp cast. Line 249 casts `u128` to `u64`, which could theoretically truncate after ~584,542 years of elapsed time. While not a practical concern, using `.try_into()` or documenting the safety assumption would be more defensive.

Note: This pattern matches the implementation in `rs/hang/src/import/aac.rs` lines 150-157, so if changed here, consider updating both for consistency.

Example with checked conversion:

```diff
- Ok(hang::Timestamp::from_micros(zero.elapsed().as_micros() as u64)?)
+ let micros = zero.elapsed().as_micros().try_into()
+     .context("timestamp overflow (stream ran for >584k years)")?;
+ Ok(hang::Timestamp::from_micros(micros)?)
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- `justfile` (1 hunks)
- `rs/hang/Cargo.toml` (1 hunks)
- `rs/hang/src/import/aac.rs` (3 hunks)
- `rs/hang/src/import/avc3.rs` (6 hunks)
- `rs/hang/src/import/fmp4.rs` (4 hunks)
- `rs/libmoq/src/state.rs` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- rs/hang/src/import/fmp4.rs
- rs/hang/Cargo.toml
- rs/hang/src/import/aac.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
In Rust crates, tests should be integrated within source files using inline test modules
Files:
`rs/libmoq/src/state.rs`
`rs/hang/src/import/avc3.rs`
🧠 Learnings (1)
📚 Learning: 2025-12-10T04:00:14.871Z
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-10T04:00:14.871Z
Learning: Applies to **/*.rs : In Rust crates, tests should be integrated within source files using inline test modules
Applied to files:
rs/libmoq/src/state.rs
🧬 Code graph analysis (2)
rs/libmoq/src/state.rs (1)
rs/hang/src/import/decoder.rs (1)
new(37-45)
rs/hang/src/import/avc3.rs (1)
rs/hang/src/import/aac.rs (1)
pts(151-158)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Check
🔇 Additional comments (9)
rs/libmoq/src/state.rs (1)
2-2: LGTM! Clean refactor to enum-based format handling. The changes correctly migrate from string-based format handling to the typed `DecoderFormat` enum:

- The `FromStr` import supports the `from_str` parsing on line 158
- Error mapping to `UnknownFormat` preserves the semantic error handling
- The `Decoder::new` call aligns with the updated signature that accepts `DecoderFormat`

This refactor integrates well with the broader PR direction of moving to typed format enums across the import/publish pipeline.

Note: The past review comment requesting inline tests for `create_track` per the Rust coding guidelines remains open.

Also applies to: 156-159, 167-167
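As an illustration of the pattern this comment describes, here is a minimal sketch of an enum with `FromStr`-based parsing. The variant names and the `UnknownFormat` error are stand-ins; the real definitions live in `rs/hang/src/import/decoder.rs` and the crate's error module:

```rust
use std::str::FromStr;

// Hypothetical stand-ins for the real enum and error type.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum DecoderFormat {
    Avc3,
    Fmp4,
    Hls,
}

#[derive(Debug)]
struct UnknownFormat(String);

impl FromStr for DecoderFormat {
    type Err = UnknownFormat;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Case-insensitive match so CLI input like "FMP4" still parses.
        match s.to_ascii_lowercase().as_str() {
            "avc3" => Ok(Self::Avc3),
            "fmp4" => Ok(Self::Fmp4),
            "hls" => Ok(Self::Hls),
            other => Err(UnknownFormat(other.to_string())),
        }
    }
}
```

The typed enum lets call sites like `Decoder::new` reject unknown formats at the parse boundary instead of threading raw strings through the pipeline.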
justfile (2)
147-147: LGTM: Explicit format specification improves clarity. The addition of `fmp4` as an explicit positional argument aligns with the refactored publish architecture. The `{{args}}` placement after `fmp4` still allows passing additional flags.

149-151: LGTM: HLS ingestion recipe properly quotes all parameters. The new `pub-hls` recipe correctly quotes all interpolated variables (`"{{relay}}"`, `"{{name}}"`, `"{{url}}"`), which protects against shell interpretation of special characters like `&` in URLs with query parameters. The past review concern about quoting is no longer applicable, as the implementation changed from delegating to `rs/justfile` to directly calling `cargo run`.

rs/hang/src/import/avc3.rs (6)
26-28: LGTM: Wall-clock timestamp support added. The `zero` field is properly initialized and enables optional wall-clock timestamp fallback for the HLS ingestion workflow.

Also applies to: 38-38
102-111: LGTM: Initialize method enables pre-configuration. The new `initialize` method cleanly handles SPS/PPS setup before stream decoding begins. Passing `None` for timestamps is safe because `maybe_start_frame` returns early when no slices have been encountered.
117-132: LGTM: Optional timestamp support in decode_stream. The signature change maintains correct behavior by deriving a concrete timestamp upfront, ensuring downstream calls to `decode_nal` always receive `Some(pts)` when processing stream data.
141-163: LGTM: Optional timestamp support in decode_frame. The timestamp handling is consistent with `decode_stream`, deriving a concrete timestamp early and propagating it correctly through the frame decoding pipeline.
165-213: LGTM: Optional timestamp handling in decode_nal. The optional timestamp parameter is correctly propagated to `maybe_start_frame`. The design is safe because:

- During initialization (SPS/PPS/AUD/SEI), `maybe_start_frame` is called before any slices are encountered, returning early.
- During streaming, public methods ensure `Some(pts)` is always provided.
215-237: LGTM: Well-designed guard logic in maybe_start_frame. The optional timestamp parameter is correctly guarded: the method only requires a concrete timestamp (line 222) when actually flushing a frame (`contains_slice == true`). This allows initialization paths to pass `None` while ensuring streaming paths always have timestamps when needed.
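The guard described above can be sketched in isolation (simplified, hypothetical signature; the real `maybe_start_frame` also flushes the buffered NAL units and lives on the importer struct):

```rust
// Stand-in for hang::Timestamp, for illustration only.
type Timestamp = u64;

/// Sketch of the Option<Timestamp> guard: a timestamp is only required
/// when a frame is actually being flushed (contains_slice == true).
fn maybe_start_frame(
    contains_slice: bool,
    pts: Option<Timestamp>,
) -> Result<Option<Timestamp>, &'static str> {
    if !contains_slice {
        // Initialization path (SPS/PPS/AUD/SEI): nothing to flush yet,
        // so pts may safely be None.
        return Ok(None);
    }
    // Streaming path: a concrete timestamp is mandatory here.
    let pts = pts.ok_or("timestamp required when flushing a frame")?;
    Ok(Some(pts))
}
```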
Actionable comments posted: 1
🧹 Nitpick comments (1)
justfile (1)
326-331: Consider trapping EXIT to clean up orphaned processes. If the Python server crashes or exits unexpectedly, the backgrounded ffmpeg processes will continue running. Adding `EXIT` to the trap ensures cleanup on any termination.

```diff
- trap cleanup SIGINT SIGTERM
+ trap cleanup EXIT
```

Note: With `EXIT`, you can remove `SIGINT SIGTERM` as EXIT is triggered on those signals too.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `.gitignore` (1 hunks)
- `justfile` (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- .gitignore
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Check
🔇 Additional comments (3)
justfile (3)
147-147: LGTM! Making the `fmp4` format explicit aligns with the new CLI structure.
149-151: LGTM! The URL and other parameters are properly quoted to handle special characters.
294-363: Well-structured test recipe for HLS streaming. The recipe properly sets up a multi-variant HLS stream with synchronized segment numbering across video and audio tracks. Using fmp4 segment type aligns with the HLS importer's expectations. Good addition for testing the HLS publishing workflow.
```sh
video_144p/video.m3u8
EOF

sed -i 's/^\t//' "$OUT_DIR/master.m3u8"
```
`sed -i` is not portable to macOS.

On macOS/BSD, `sed -i` requires a backup extension argument (even if empty). This will fail on macOS with "invalid command code".
Consider using a portable approach:
```diff
- sed -i 's/^\t//' "$OUT_DIR/master.m3u8"
+ sed -i'' 's/^\t//' "$OUT_DIR/master.m3u8"
```

Alternatively, use `<<-` heredoc syntax which natively strips leading tabs:

```sh
cat > "$OUT_DIR/master.m3u8" <<-'EOF'
	#EXTM3U
	...
	EOF
```

🤖 Prompt for AI Agents
In justfile around line 321, the use of "sed -i 's/^\t//'" is not portable to
macOS/BSD; replace it with a portable solution such as either (A) use the
macOS-compatible inline syntax "sed -i '' -e 's/^\t//'" when running on macOS,
or (B) perform the edit without in-place sed by writing the transformed output
to a temporary file using a POSIX-safe tool (awk/perl) and then atomically move
it into place, or (C) avoid post-processing entirely by generating the file with
a heredoc that strips leading tabs (<<-), and update the recipe to use one of
these portable options.
Hey @kixelated, I've been fixing the comments you left on the PR. What's still pending is the CLI refactor so it can be used with the import in this form: `hang import hls --url ...` - I think that's better left for another PR. I also left a new justfile script.

PS: There are some conflicts that appeared when trying to sync with main. Could you take a look, since I'm not very familiar with them?
Sure, and maybe give me contributor on your fork? It seems like qualabs has turned off the "allow maintainers to make edits" option so it's more difficult for me to fix stuff. |
Merged as #789 |
This PR introduces HLS import functionality. The import module now supports a manifest-based import type for HLS streams (with potential DASH support in the future).
Key Changes:
HLS Import Implementation
CLI Support
Added CLI command for HLS import:
`just ingest-hls`, e.g.

```sh
nix develop -c just ingest-hls http://example.com/manifest.m3u8 example
```

The HLS importer provides a first basic implementation that parses manifest data and uses the existing fmp4 import. The manifest polling loop uses the segment target duration to determine the refresh interval, though this timing mechanism could be further refined.
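As a sketch of how such a refresh interval could be derived (illustrative only; the importer's actual constants may differ), RFC 8216 §6.3.4 suggests waiting at least the target duration before reloading after a changed playlist, and half of it when the playlist was unchanged:

```rust
use std::time::Duration;

/// Compute how long to wait before re-fetching a live media playlist.
/// Sketch based on RFC 8216 §6.3.4: full target duration after a changed
/// playlist, half after an unchanged one. Clamps to at least 1 second to
/// avoid hammering the origin on malformed playlists.
fn refresh_interval(target_duration_secs: u64, playlist_changed: bool) -> Duration {
    let base = Duration::from_secs(target_duration_secs.max(1));
    if playlist_changed {
        base
    } else {
        base / 2
    }
}
```

Wiring this into the polling loop would let the importer back off naturally when the origin is slow to publish new segments.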