Create, preview, and render HTML video compositions from the command line.
The hyperframes CLI is the primary way to work with Hyperframes. It handles project creation, live preview, rendering, linting, and diagnostics — all from your terminal.
```shell
npm install -g hyperframes

# or use directly with npx
npx hyperframes <command>
```
- Preview compositions with live hot reload during development
- Render compositions to MP4 (locally or in Docker)
- Lint compositions for structural issues
- Check your environment for missing dependencies
Use a different package if you want to:
- Render programmatically from Node.js code — use the producer
- Build a custom frame capture pipeline — use the engine
- Embed a composition editor in your own web app — use the studio
- Parse or generate composition HTML in code — use core
The CLI is the recommended starting point for all Hyperframes users. It wraps the producer, engine, and studio packages so you do not need to install them separately.
The CLI is non-interactive by default — designed so AI agents (Claude Code, Gemini CLI, Codex, Cursor) can drive every command without prompts or interactive UI.
- All inputs are passed via flags (e.g., --example, --video, --output)
- Missing required flags fail fast with a clear error and usage example
- Output is plain text suitable for parsing
- No interactive prompts, spinners, or selection menus
Add --human-friendly to any command to enable the interactive terminal UI with prompts, spinners, and selection menus.
The CLI checks npm for newer versions in the background (cached for 24 hours). If an update is available, a notice appears on stderr after command completion. The same version data is included in --json output, so agents can detect outdated versions from any command's output without running a separate upgrade check; because it comes from the 24-hour cache, no network request is made during --json output.
| Flag | Description |
| --- | --- |
| --example | Example to scaffold (required in default mode, interactive in --human-friendly) |
| --video, -V | Path to a video file (MP4, WebM, MOV) |
| --audio, -a | Path to an audio file (MP3, WAV, M4A) |
| --skip-skills | Skip AI coding skills installation |
| --skip-transcribe | Skip automatic Whisper transcription |
| --model | Whisper model for transcription (e.g. small.en, medium.en, large-v3) |
| --language | Language code for transcription (e.g. en, es, ja). Filters non-target speech. |
| --human-friendly | Enable interactive terminal UI with prompts |
| Example | Description |
| --- | --- |
| blank | Empty composition — just the scaffolding |
| warm-grain | Cream aesthetic with grain texture |
| play-mode | Playful elastic animations |
| swiss-grid | Structured grid layout |
| vignelli | Bold typography with red accents |
In default (agent) mode, --example is required — the CLI errors with a usage example if it is missing. In --human-friendly mode, you choose interactively.

When --video or --audio is provided, the CLI automatically transcribes the audio with Whisper and patches captions into the composition (use --skip-transcribe to disable).

After scaffolding, the CLI installs AI coding skills for Claude Code, Gemini CLI, and Codex CLI (use --skip-skills to disable). See the skills command.

See Examples for full details.
Install a block or component from the registry into an existing project. Examples (full projects) are scaffolded with init; blocks and components are smaller units you add to a composition you already have.
```shell
# Add a block (sub-composition scene)
npx hyperframes add claude-code-window

# Add a component (effect / snippet)
npx hyperframes add shader-wipe

# Target a different project dir
npx hyperframes add shader-wipe --dir ./my-video

# Headless / CI (skip clipboard; also: --json for a machine-readable result)
npx hyperframes add shader-wipe --no-clipboard --json
```
| Flag | Description |
| --- | --- |
| `<name>` (positional) | Registry item name (e.g. claude-code-window, shader-wipe) |
| --dir | Project directory (defaults to the current working directory) |
| --no-clipboard | Skip copying the include snippet to the clipboard |
| --json | Print a machine-readable summary (written files + snippet) to stdout |
add reads hyperframes.json at the project root to know which registry to pull from and where to drop files. If the file is missing but the directory looks like a Hyperframes project (has index.html), a default hyperframes.json is written the first time you run add.

Output for a block or component is a set of files plus a paste snippet — the `<iframe>` tag (for blocks) or the fragment path (for components) to include in your host composition. The snippet is copied to the clipboard by default; add --no-clipboard for CI or headless environments.

Trying add with an example's name (e.g. hyperframes add warm-grain) emits a clear error pointing you at init --example.
Browse the registry — list available blocks and components with optional filters:
```shell
# List everything (default: table output)
npx hyperframes catalog

# Filter by type or tag
npx hyperframes catalog --type block
npx hyperframes catalog --type block --tag social

# Machine-readable JSON
npx hyperframes catalog --json

# Interactive picker — select to install
npx hyperframes catalog --human-friendly
```
| Flag | Description |
| --- | --- |
| --type | Filter by block or component |
| --tag | Filter by tag (e.g. social, transition, text) |
| --json | Print matching items as JSON (non-interactive) |
| --human-friendly | Interactive picker — select an item to install it |
Default output is a table listing name, type, description, and tags — designed for agents to parse. --json produces structured output. --human-friendly opens an interactive picker that runs add on selection.
Transcribe audio/video to word-level timestamps, or import an existing transcript:
```shell
# Transcribe audio/video with local whisper.cpp
npx hyperframes transcribe audio.mp3
npx hyperframes transcribe video.mp4 --model medium.en --language en

# Import existing transcripts from other tools
npx hyperframes transcribe subtitles.srt
npx hyperframes transcribe captions.vtt
npx hyperframes transcribe openai-response.json
```
| Flag | Description |
| --- | --- |
| --dir, -d | Project directory (default: current directory) |
| --model, -m | Whisper model (default: small.en). Options: tiny.en, base.en, small.en, medium.en, large-v3 |
| --language, -l | Language code (e.g. en, es, ja). Filters out non-target language speech. |
| --json | Output result as JSON |
The command auto-detects the input type. Audio/video files are transcribed with whisper.cpp. Transcript files (.json, .srt, .vtt) are normalized and imported.

Supported transcript formats:
| Format | Source |
| --- | --- |
| whisper.cpp JSON | hyperframes init --video, hyperframes transcribe |
| OpenAI Whisper API JSON | openai.audio.transcriptions.create() with word timestamps |
| SRT subtitles | Video editors, YouTube, subtitle tools |
| VTT subtitles | Web players, YouTube, transcription services |
All formats are normalized to a standard [{text, start, end}] word array and saved as transcript.json. If the project has caption HTML files, they are automatically patched with the transcript data.
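Concretely, the normalized word array saved to transcript.json has this shape (the timing values below are illustrative):

```json
[
  { "text": "Welcome", "start": 0.0, "end": 0.42 },
  { "text": "to", "start": 0.42, "end": 0.55 },
  { "text": "Hyperframes", "start": 0.55, "end": 1.1 }
]
```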
For music or noisy audio, use --model medium.en for better accuracy. For the best results with production content, transcribe via the OpenAI or Groq Whisper API and import the JSON.
Generate speech audio from text using a local AI model (Kokoro-82M). No API key required — runs entirely on-device.
```shell
# Generate speech from text
npx hyperframes tts "Welcome to HyperFrames"

# Choose a voice
npx hyperframes tts "Hello world" --voice am_adam

# Save to a specific file
npx hyperframes tts "Intro" --voice bf_emma --output narration.wav

# Adjust speech speed
npx hyperframes tts "Slow and clear" --speed 0.8

# Read text from a file
npx hyperframes tts script.txt

# List available voices
npx hyperframes tts --list
```
| Flag | Description |
| --- | --- |
| --output, -o | Output file path (default: speech.wav in current directory) |
| --voice, -v | Voice ID (run --list to see options) |
| --speed, -s | Speech speed multiplier (default: 1.0) |
| --list | List available voices and exit |
| --json | Output result as JSON |
Combine tts with transcribe for a single narration-and-captions workflow: generate the audio with tts, then run transcribe on the output to get word-level timing for captions.
Opens your composition in the Hyperframes Studio with live preview. Edits to index.html and any referenced sub-compositions are reflected automatically. The preview uses the same Hyperframes runtime as production rendering, so what you see is what you get.

The preview server runs in three modes, auto-detected:
1. Embedded mode (default for npx) — runs a standalone server with the studio bundled in the CLI. Zero extra dependencies.
2. Local studio mode — if @hyperframes/studio is installed in your project's node_modules, spawns Vite with full HMR for faster iteration.
3. Monorepo mode — if running from the Hyperframes source repo, spawns the studio dev server directly.
```
◆ Linting my-project/index.html

✗ missing_gsap_script: Composition uses GSAP but no GSAP script is loaded.
⚠ unmuted-video [clip-1]: Video should have the 'muted' attribute for reliable autoplay.

◇ 1 error(s), 1 warning(s)
```
By default only errors and warnings are printed. Info-level findings (e.g., external script dependency notices) are hidden to keep output clean for agents and CI. Use --verbose to include them.
| Flag | Description |
| --- | --- |
| --json | Output findings as JSON (includes errorCount, warningCount, infoCount, and a findings array) |
| --verbose | Include info-level findings in output (hidden by default) |
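Based on those fields, a --json report might look like the following. The top-level counts and findings array are documented; the shape of each finding entry is an assumption, illustrated here with the rule names from the sample output above:

```json
{
  "errorCount": 1,
  "warningCount": 1,
  "infoCount": 0,
  "findings": [
    {
      "severity": "error",
      "rule": "missing_gsap_script",
      "message": "Composition uses GSAP but no GSAP script is loaded."
    },
    {
      "severity": "warning",
      "rule": "unmuted-video",
      "message": "Video should have the 'muted' attribute for reliable autoplay."
    }
  ]
}
```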
Severity levels:
- Error (✗) — must fix before rendering (e.g., missing adapter library, invalid attributes)
- Warning (⚠) — likely issues that may cause unexpected behavior
- Info (ℹ) — informational notices, shown only with --verbose
The linter detects missing attributes, missing adapter libraries (GSAP, Lottie, Three.js), structural problems, and more. See Common Mistakes for details on each rule.
Use --format webm to render compositions with a transparent background. This produces VP9 video with alpha channel in a WebM container — the standard format for overlayable video.
```shell
# Render a caption overlay with transparent background
npx hyperframes render --format webm --output captions.webm

# Overlay on another video with FFmpeg
ffmpeg -c:v libvpx-vp9 -i captions.webm -i background.mp4 \
  -filter_complex "[1:v][0:v]overlay=0:0" -y composited.mp4
```
For transparency to work, your composition’s HTML should use background: transparent on the root elements. WebM renders use PNG frame capture (instead of JPEG) to preserve the alpha channel.
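A minimal sketch of what that looks like in the composition's HTML (the selectors are illustrative; apply the rule to whatever your root elements are):

```html
<style>
  /* Transparent page background so the alpha channel survives the WebM render */
  html, body {
    background: transparent;
  }
</style>
```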
```
hyperframes doctor

✓ Version 0.1.4 (latest)
✓ Node.js v22.x (linux x64)
✓ FFmpeg 7.x
✓ FFprobe 7.x
✓ Chrome (system or cached)
✓ Docker 24.x
✓ Docker running

◇ All checks passed
```
Verifies CLI version, Node.js, FFmpeg, FFprobe, Chrome, and Docker availability. If a newer CLI version is available, the version row shows an upgrade hint.
```shell
npx hyperframes telemetry enable
npx hyperframes telemetry disable
npx hyperframes telemetry status
```
Telemetry collects command names, render performance, example choices, and system info. It does not collect file paths, project names, video content, or personally identifiable information. Disable with HYPERFRAMES_NO_TELEMETRY=1 or the command above.
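For CI jobs or containers, the environment variable route avoids running a per-machine command; a sketch:

```shell
# Opt out of telemetry for everything run in this shell / CI job
export HYPERFRAMES_NO_TELEMETRY=1
```

Set it in your CI provider's environment configuration to apply it to every step.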
Install HyperFrames and GSAP skills for AI coding tools:
```shell
# Install to all default targets (Claude Code, Gemini CLI, Codex CLI)
npx hyperframes skills

# Install to specific tools
npx hyperframes skills --claude
npx hyperframes skills --cursor
npx hyperframes skills --claude --gemini
```
| Flag | Description |
| --- | --- |
| --claude | Install to Claude Code (~/.claude/skills/) |
| --gemini | Install to Gemini CLI (~/.gemini/skills/) |
| --codex | Install to Codex CLI (~/.codex/skills/) |
| --cursor | Install to Cursor (.cursor/skills/ in current project) |
Skills are fetched from GitHub and include composition authoring, GSAP animation patterns, registry block/component wiring, and other domain-specific knowledge. The init command also offers to install skills automatically after scaffolding a project.
hyperframes init writes a hyperframes.json file at the root of every new project. hyperframes add reads it to know which registry to pull items from and where to drop them. Edit the file (or delete it to fall back to defaults) to reshape your project layout or point at a custom registry.
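The schema is not spelled out here, but a purely illustrative sketch covering the two things add needs (which registry to pull from, and where dropped files land) might look like the following. Every field name below is an assumption; check the file that init actually generates for the real schema:

```json
{
  "registry": "https://example.com/registry",
  "paths": {
    "blocks": "blocks",
    "components": "components"
  }
}
```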