
Conversation

@webmonch
Contributor

This PR adds an AI video template.

It comes with a small CLI that generates the timeline and content for the video; the video is then rendered from that timeline.

End result looks like this (the images could be improved using a better model):

AIVideo.mp4

@vercel
Contributor

vercel bot commented Oct 25, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| remotion | Ready | Preview | Comment | Oct 26, 2025 10:56am |

@vercel
Contributor

vercel bot commented Oct 25, 2025

@webmonch is attempting to deploy a commit to the Remotion Team on Vercel.

A member of the Team first needs to authorize it.

Comment on lines +78 to +86
      for (let i = 0; i < word.length; i++) {
        currentEndMs =
          character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
        currentCharIndex++;
      }

      currentEndMs =
        character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
      currentCharIndex++;

The array index can go out of bounds when processing character timings, accessing undefined values if the audio timestamp count doesn't match the text's character count.

📝 Patch Details
diff --git a/packages/template-ai-video/cli/timeline.ts b/packages/template-ai-video/cli/timeline.ts
index b6fe21f465..2cdedd39b0 100644
--- a/packages/template-ai-video/cli/timeline.ts
+++ b/packages/template-ai-video/cli/timeline.ts
@@ -76,14 +76,31 @@ export const createTimeLineFromStoryWithDetails = (
 
       currentText += `${word} `;
       for (let i = 0; i < word.length; i++) {
+        if (currentCharIndex < character_end_times_seconds.length) {
+          currentEndMs =
+            character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
+          currentCharIndex++;
+        } else {
+          // Fallback: use last available timing if array is shorter than expected
+          currentEndMs =
+            character_end_times_seconds[character_end_times_seconds.length - 1] *
+              1000 +
+            durationMs;
+        }
+      }
+
+      // Handle space after word with bounds checking
+      if (currentCharIndex < character_end_times_seconds.length) {
         currentEndMs =
           character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
         currentCharIndex++;
+      } else {
+        // Fallback: use last available timing if array is shorter than expected
+        currentEndMs =
+          character_end_times_seconds[character_end_times_seconds.length - 1] *
+            1000 +
+          durationMs;
       }
-
-      currentEndMs =
-        character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
-      currentCharIndex++;
     }
 
     if (currentText.trim().length > 0) {

Analysis

Array index out of bounds in createTimeLineFromStoryWithDetails() when character timing array is shorter than expected

What fails: Function createTimeLineFromStoryWithDetails() in packages/template-ai-video/cli/timeline.ts accesses character_end_times_seconds[currentCharIndex] without bounds checking, causing undefined access when ElevenLabs API returns fewer character timing entries than text characters plus spaces

How to reproduce:

// Create story with mismatched timing data
const testStory = {
  shortTitle: "Test",
  content: [{
    text: "Hello world",  // 11 characters (10 letters + 1 space)
    audioTimestamps: {
      characterEndTimesSeconds: [0.1, 0.2, 0.3, 0.4, 0.5]  // Only 5 entries
    }
  }]
};
createTimeLineFromStoryWithDetails(testStory);

Result: Code accesses character_end_times_seconds[5] when array only has 5 elements (indices 0-4), returning undefined. Calculation undefined * 1000 produces NaN, assigned to currentEndMs and propagated to timeline text elements

Expected: Should handle timing-array mismatches gracefully by validating array bounds before access; the ElevenLabs API docs don't guarantee that the timing array length matches the text's character count
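One way to express the bounds check is as a small helper; `safeEndMs` is a hypothetical name, not part of the template, but it mirrors the clamping fallback the patch applies when the timing array is shorter than the text:

```typescript
// Hypothetical helper illustrating the bounds check described above.
// Clamps the index so a short timing array falls back to its last entry,
// and an empty array falls back to the running offset alone.
const safeEndMs = (
  endTimesSeconds: number[],
  index: number,
  offsetMs: number,
): number => {
  if (endTimesSeconds.length === 0) {
    return offsetMs;
  }
  const clamped = Math.min(index, endTimesSeconds.length - 1);
  return endTimesSeconds[clamped] * 1000 + offsetMs;
};
```

With this shape, both the per-character loop and the trailing-space access go through one guarded path instead of duplicating the fallback branch.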

Comment on lines +25 to +29
    const lenMs = Math.ceil(
      content.audioTimestamps.characterEndTimesSeconds[
        content.audioTimestamps.characterEndTimesSeconds.length - 1
      ] * 1000,
    );

Missing validation for empty character timing array, which will result in NaN when calculating video duration.

📝 Patch Details
diff --git a/packages/template-ai-video/cli/timeline.ts b/packages/template-ai-video/cli/timeline.ts
index b6fe21f465..ac771d71b0 100644
--- a/packages/template-ai-video/cli/timeline.ts
+++ b/packages/template-ai-video/cli/timeline.ts
@@ -23,9 +23,11 @@ export const createTimeLineFromStoryWithDetails = (
     const content = storyWithDetails.content[i];
 
     const lenMs = Math.ceil(
-      content.audioTimestamps.characterEndTimesSeconds[
-        content.audioTimestamps.characterEndTimesSeconds.length - 1
-      ] * 1000,
+      content.audioTimestamps.characterEndTimesSeconds.length > 0
+        ? content.audioTimestamps.characterEndTimesSeconds[
+            content.audioTimestamps.characterEndTimesSeconds.length - 1
+          ] * 1000
+        : 0,
     );
 
     const bgElem: BackgroundElement = {
@@ -54,7 +56,7 @@ export const createTimeLineFromStoryWithDetails = (
     const MaxSentenseSizeChars = 14;
 
     let currentText = "";
-    let currentStartMs = character_start_times_seconds[0] * 1000 + durationMs;
+    let currentStartMs = character_start_times_seconds.length > 0 ? character_start_times_seconds[0] * 1000 + durationMs : durationMs;
     let currentEndMs = durationMs;
     let currentCharIndex = 0;
 
@@ -77,12 +79,16 @@ export const createTimeLineFromStoryWithDetails = (
       currentText += `${word} `;
       for (let i = 0; i < word.length; i++) {
         currentEndMs =
-          character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
+          currentCharIndex < character_end_times_seconds.length
+            ? character_end_times_seconds[currentCharIndex] * 1000 + durationMs
+            : durationMs;
         currentCharIndex++;
       }
 
       currentEndMs =
-        character_end_times_seconds[currentCharIndex] * 1000 + durationMs;
+        currentCharIndex < character_end_times_seconds.length
+          ? character_end_times_seconds[currentCharIndex] * 1000 + durationMs
+          : durationMs;
       currentCharIndex++;
     }
 
@@ -90,9 +96,11 @@ export const createTimeLineFromStoryWithDetails = (
       const textElem: TextElement = {
         startMs: currentStartMs,
         endMs:
-          character_end_times_seconds[character_end_times_seconds.length - 1] *
-            1000 +
-          durationMs,
+          character_end_times_seconds.length > 0
+            ? character_end_times_seconds[character_end_times_seconds.length - 1] *
+                1000 +
+              durationMs
+            : durationMs,
         text: currentText.trim(),
         position: "center",
         animations: getTextAnimations(),

Analysis

Empty character timing array causes NaN in video timeline duration calculation

What fails: createTimeLineFromStoryWithDetails() in packages/template-ai-video/cli/timeline.ts accesses characterEndTimesSeconds[array.length - 1] without checking if array is empty, resulting in undefined * 1000 = NaN for timeline durations

How to reproduce:

const testStory = {
  shortTitle: "Test",
  content: [{
    text: "Hello",
    imageDescription: "Test image", 
    uid: "test-uid",
    audioTimestamps: {
      characters: [],
      characterStartTimesSeconds: [],
      characterEndTimesSeconds: [] // Empty array from malformed API response
    }
  }]
};
createTimeLineFromStoryWithDetails(testStory);

Result: Timeline contains NaN values in startMs/endMs fields, breaking video generation

Expected: Should handle empty arrays gracefully with fallback duration values; the ElevenLabs API docs indicate that alignment data can be incomplete
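The last-element access can be wrapped in a guard like the following sketch; `clipLengthMs` is an illustrative name, not the template's API, but it captures the fallback the patch introduces:

```typescript
// Hypothetical sketch of the guard described above: derive a clip length
// in ms from the final character end time, with a fallback when the
// timing array is empty (e.g. a malformed API response).
const clipLengthMs = (endTimesSeconds: number[], fallbackMs = 0): number => {
  if (endTimesSeconds.length === 0) {
    return fallbackMs;
  }
  return Math.ceil(endTimesSeconds[endTimesSeconds.length - 1] * 1000);
};
```

Any downstream startMs/endMs arithmetic then receives a finite number rather than NaN.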

Comment on lines +33 to +49
  useEffect(() => {
    const handle = delayRender("Loading timeline...");

    const fetchConfig = async () => {
      const { timeline } = await loadTimelineFromFile(
        getTimelinePath(projectName),
      );
      setTimeline(timeline);
      continueRender(handle);
    };

    fetchConfig();

    return () => {
      continueRender(handle);
    };
  }, [projectName]);

Missing error handling in the async effect means that if the timeline fetch fails, continueRender() will never be called, causing the Remotion render to hang indefinitely.

📝 Patch Details
diff --git a/packages/template-ai-video/src/Root.tsx b/packages/template-ai-video/src/Root.tsx
index af7768c794..a42e1ba7d5 100644
--- a/packages/template-ai-video/src/Root.tsx
+++ b/packages/template-ai-video/src/Root.tsx
@@ -26,11 +26,16 @@ export const RemotionRoot: React.FC = () => {
     const handle = delayRender("Calculating FPS duration...");
 
     const fetchConfig = async () => {
-      const { lengthFrames } = await loadTimelineFromFile(
-        getTimelinePath(resolvedProjectName),
-      );
-      setFrameLength(lengthFrames);
-      continueRender(handle);
+      try {
+        const { lengthFrames } = await loadTimelineFromFile(
+          getTimelinePath(resolvedProjectName),
+        );
+        setFrameLength(lengthFrames);
+        continueRender(handle);
+      } catch (error) {
+        console.error("Failed to load timeline:", error);
+        continueRender(handle);
+      }
     };
 
     fetchConfig();
diff --git a/packages/template-ai-video/src/components/AIVideo.tsx b/packages/template-ai-video/src/components/AIVideo.tsx
index 4181ebab71..1f2e269066 100644
--- a/packages/template-ai-video/src/components/AIVideo.tsx
+++ b/packages/template-ai-video/src/components/AIVideo.tsx
@@ -34,11 +34,16 @@ export const AIVideo: React.FC<z.infer<typeof aiVideoSchema>> = ({
     const handle = delayRender("Loading timeline...");
 
     const fetchConfig = async () => {
-      const { timeline } = await loadTimelineFromFile(
-        getTimelinePath(projectName),
-      );
-      setTimeline(timeline);
-      continueRender(handle);
+      try {
+        const { timeline } = await loadTimelineFromFile(
+          getTimelinePath(projectName),
+        );
+        setTimeline(timeline);
+        continueRender(handle);
+      } catch (error) {
+        console.error("Failed to load timeline:", error);
+        continueRender(handle);
+      }
     };
 
     fetchConfig();

Analysis

Missing error handling in async effects causes Remotion renders to hang indefinitely

What fails: AIVideo.tsx:fetchConfig() and Root.tsx:fetchConfig() call loadTimelineFromFile() without try/catch, so if fetch() or res.json() throws an error, continueRender() is never called

How to reproduce:

# Create invalid timeline file and attempt render
mkdir -p packages/template-ai-video/public/content/test-project
echo "{ invalid json }" > packages/template-ai-video/public/content/test-project/timeline.json
cd packages/template-ai-video && bun run remotion render AIVideo --props='{"projectName":"test-project"}'

Result: Render hangs for 30 seconds, then fails with "A delayRender() was called but not cleared after 28000ms"

Expected: Should handle the JSON parsing error gracefully per Remotion documentation: "Always use cancelRender() to handle any error that prevents you from calling continueRender()"

@JonnyBurger
Member

Huge! 🥳

Thanks a lot for sending this in.
As it's large, needing a few days to process this. This is a great addition to our list of templates!

@webmonch
Contributor Author

> Huge! 🥳
>
> Thanks a lot for sending this in. As it's large, I'll need a few days to process it. This is a great addition to our list of templates!

Glad you like it!
I may push a few minor commits later today to fix failing tests.

Comment on lines +14 to +68
export const Background: React.FC<{
  item: BackgroundElement;
  project: string;
}> = ({ item, project }) => {
  const frame = useCurrentFrame();
  const localMs = (frame / FPS) * 1000;

  const viewSize = { width: WindowWidth, height: WindowHeight };
  const imageRatio = ImageHeight / ImageWidth;

  const imgWidth = viewSize.height;
  const imgHeight = imgWidth * imageRatio;
  let animScale = 1 + EXTRA_SCALE;

  const currentScaleAnim = item.animations?.find(
    (anim) =>
      anim.type === "scale" && anim.startMs <= localMs && anim.endMs >= localMs,
  );

  if (currentScaleAnim) {
    const progress =
      (localMs - currentScaleAnim.startMs) /
      (currentScaleAnim.endMs - currentScaleAnim.startMs);
    animScale =
      EXTRA_SCALE +
      progress * (currentScaleAnim.to - currentScaleAnim.from) +
      currentScaleAnim.from;
  }

  const imgScale = animScale;
  const top = -(imgHeight * imgScale - viewSize.height) / 2;
  const left = -(imgWidth * imgScale - viewSize.width) / 2;

  const blur = calculateBlur({ item, localMs });
  const maxBlur = 25;

  const currentBlur = maxBlur * blur;

  return (
    <AbsoluteFill style={{ backgroundColor: "blue" }}>
      <Img
        src={staticFile(getImagePath(project, item.imageUrl))}
        style={{
          width: imgWidth * imgScale,
          height: imgHeight * imgScale,
          position: "absolute",
          top,
          left,
          filter: `blur(${currentBlur}px)`,
          WebkitFilter: `blur(${currentBlur}px)`,
        }}
      />
    </AbsoluteFill>
  );
};

Rotation animations defined in the timeline are completely ignored during rendering. Only scale animations are applied to background images.

📝 Patch Details
diff --git a/packages/template-ai-video/src/components/Background.tsx b/packages/template-ai-video/src/components/Background.tsx
index 4c18879575..2a21e75a99 100644
--- a/packages/template-ai-video/src/components/Background.tsx
+++ b/packages/template-ai-video/src/components/Background.tsx
@@ -40,6 +40,21 @@ export const Background: React.FC<{
       currentScaleAnim.from;
   }
 
+  let rotationDeg = 0;
+  const currentRotateAnim = item.animations?.find(
+    (anim) =>
+      anim.type === "rotate" && anim.startMs <= localMs && anim.endMs >= localMs,
+  );
+
+  if (currentRotateAnim) {
+    const progress =
+      (localMs - currentRotateAnim.startMs) /
+      (currentRotateAnim.endMs - currentRotateAnim.startMs);
+    rotationDeg =
+      progress * (currentRotateAnim.to - currentRotateAnim.from) +
+      currentRotateAnim.from;
+  }
+
   const imgScale = animScale;
   const top = -(imgHeight * imgScale - viewSize.height) / 2;
   const left = -(imgWidth * imgScale - viewSize.width) / 2;
@@ -61,6 +76,7 @@ export const Background: React.FC<{
           left,
           filter: `blur(${currentBlur}px)`,
           WebkitFilter: `blur(${currentBlur}px)`,
+          transform: `rotate(${rotationDeg}deg)`,
         }}
       />
     </AbsoluteFill>

Analysis

Background component ignores rotation animations during rendering

What fails: Background.tsx component only applies scale animations, completely ignoring rotation animations from timeline data. The currentRotateAnim search and rotation transform application is missing.

How to reproduce:

  1. Generate timeline with rotation animations: bun cli/cli.ts creates both scale and rotate animations via getBgAnimations()
  2. Render video with background elements containing rotation data
  3. Observe that images only scale but never rotate despite rotation animations in timeline.json

Result: All rotation animations are silently ignored; videos lack the intended rotation effects

Expected: Rotation animations should be applied as CSS transform: rotate() to background images

Evidence: Timeline contains rotation data (e.g. {"type": "rotate", "from": 0, "to": -7.1, "startMs": 0, "endMs": 7616}) but Background.tsx only searches for "scale" type animations (lines 28-41), never "rotate" type.
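The linear interpolation the patch applies for "rotate" animations reduces to a small helper; `Anim` here is a simplified shape for illustration, not the template's actual animation type:

```typescript
// Simplified animation shape (illustrative, not the template's type).
type Anim = { from: number; to: number; startMs: number; endMs: number };

// Standard linear interpolation between `from` and `to` over the
// animation's active window, as used for the rotation patch above.
const animValue = (anim: Anim, localMs: number): number => {
  const progress = (localMs - anim.startMs) / (anim.endMs - anim.startMs);
  return anim.from + progress * (anim.to - anim.from);
};
```

For the timeline example above (`from: 0, to: -7.1` over 0–7616 ms), this yields 0° at the start and −7.1° at the end, which the patch feeds into `transform: rotate(...)`.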

Comment on lines +38 to +40
        EXTRA_SCALE +
        progress * (currentScaleAnim.to - currentScaleAnim.from) +
        currentScaleAnim.from;

Suggested change:
-        EXTRA_SCALE +
-        progress * (currentScaleAnim.to - currentScaleAnim.from) +
-        currentScaleAnim.from;
+        currentScaleAnim.from + progress * (currentScaleAnim.to - currentScaleAnim.from);

The scale animation formula incorrectly adds EXTRA_SCALE to the interpolated value, causing scale animations to be offset from their intended values.


Analysis

Scale animation formula adds incorrect offset causing oversized backgrounds

What fails: Background.tsx scale animation formula incorrectly adds EXTRA_SCALE (0.2) to the interpolated animation value, causing all scale animations to render 20% larger than specified

How to reproduce:

  1. Create a scale animation from 1.5 to 1.0 in a BackgroundElement
  2. At 50% progress, the scale should be 1.25 but renders as 1.45
  3. Mathematical proof: EXTRA_SCALE + progress * (to - from) + from vs correct from + progress * (to - from)

Result: All background scale animations are offset by +0.2, making images appear 16-20% larger than intended. This causes inconsistent behavior compared to Word.tsx which uses standard Remotion interpolation.

Expected: Scale animations should follow standard linear interpolation formula matching other Remotion components
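The arithmetic in the analysis can be checked directly; `buggy` and `lerp` are illustrative names, with `EXTRA_SCALE = 0.2` as in Background.tsx:

```typescript
const EXTRA_SCALE = 0.2;

// The formula currently in Background.tsx: offset by EXTRA_SCALE.
const buggy = (from: number, to: number, p: number): number =>
  EXTRA_SCALE + p * (to - from) + from;

// Standard linear interpolation, as in the suggested change.
const lerp = (from: number, to: number, p: number): number =>
  from + p * (to - from);
```

For the example above (scale from 1.5 to 1.0 at 50% progress), `lerp` gives 1.25 while `buggy` gives 1.45; the two differ by exactly `EXTRA_SCALE` at every progress value.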
