No Manual Runtime Setup • No Separate Python Install • No Separate Node.js Install
Just download the latest .exe file, double-click, and start generating videos locally!
```
claude mcp add automated-video-generator -- npx automated-video-generator
```

Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "automated-video-generator": {
      "command": "npx",
      "args": ["automated-video-generator"]
    }
  }
}
```

Free and open-source AI video generator for turning scripts into MP4 videos with Remotion, Edge-TTS, stock footage APIs, and a local web portal.
🚀 Available on NPM: automated-video-generator
🦀 Listed on ClawHub: automated-video-generator
Automated Video Generator is a self-hosted text-to-video pipeline for developers, creators, and AI agents. Give it a script and it can fetch visuals, generate voiceovers, render scenes with Remotion, and export a ready-to-share video.
If you are searching for a free video generator, open-source AI video generator, Remotion video generator, YouTube Shorts generator, TikTok video generator, or self-hosted text-to-video tool, this repo is built for that workflow.
This is not a fake "free trial" generator. The project itself is completely free and MIT-licensed. There is no paid plan, no subscription, and no watermark added by this codebase. Optional third-party services such as Pexels or Pixabay may still have their own quotas or terms.
Inspired by the Vibe Coding movement, this project shifts you from a "manual editor" to a Creative Director.
- High-Level Intent: Describe your story and the "vibe" you want.
- Automated Performance: The AI handles media fetching, voice synthesis, and frame-perfect audio-visual synchronization.
- No Syntax, Just Story: Stop worrying about keyframes and timelines. If you can describe it, you can generate it.
The Automated Video Generator project is officially available on ClawHub. You can discover and use our native skills:
- Video Generator CLI: High-performance command line tools for mass video production.
- Video Script Generator: Agentic skill to turn storytelling prompts into video-ready JSON scripts.
- Worldwide support for 400+ voices across all major languages with a searchable interface
- Text-to-video pipeline with Remotion and React
- Multi-language support including Tamil, Hindi, Spanish, French, and German
- Edge-TTS voiceovers with Windows desktop fallback support for fresh installs
- Stock media fetching from Pexels and Pixabay
- Local asset support for your own images and videos
- Configurable background music with volume control
- Batch generation for multiple videos in one run
- Local web portal for generating, previewing, and sharing videos
- Built-in MCP server for Claude Desktop, Claude Code, and other MCP clients
- YouTube Shorts automation
- TikTok and Reels content pipelines
- Faceless YouTube channels
- Marketing videos and product promos
- Explainer videos and tutorials
- Programmatic content generation for AI agents
- Script-driven video generation from plain text or JSON
- Director mode with `[Visual: ...]` tags for exact visual control
- Automatic scene parsing and timeline generation
- Neural voice generation with `edge-tts`, Windows offline speech fallback, and recovery-friendly desktop setup
- Portrait and landscape video output
- Resumable segmented rendering with Remotion
- Cancel, retry, and restart-aware job recovery
- Render thumbnails for completed videos
- Browser portal for generation, status tracking, playback, and downloads
- Windows desktop installer with setup wizard, bundled runtime checks, and release verification
- MCP tool interface for agentic workflows
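As an illustration, a Director-mode script might interleave narration with `[Visual: ...]` tags. The filenames below are placeholders (local files would live in `input/input-assests/`); exact tag parsing rules may vary by version:

```
Three habits changed my mornings.
[Visual: sunrise-over-city.mp4]
First, no phone for the first hour.
[Visual: phone-face-down.jpg]
Second, write tomorrow's plan the night before.
```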
For non-technical users, we provide a completely standalone Windows desktop app. No terminal, no Node.js, and no Python installation required.
👉 Click here to download the latest Windows .exe Installer
- No Manual Runtime Setup: Most users do not need to install Node.js or Python themselves.
- Natively Bundled: The desktop app ships with its own runtime and bundled voice engine resources.
- Fallback Friendly: If bundled `Edge-TTS` is unavailable, Windows builds can fall back to offline Windows speech.
- Auto-Open: The video generator portal launches automatically on startup.
- Repair Friendly: The setup wizard can repair missing runtime pieces and launch the app directly.
If you are shipping or testing the Windows app, read `docs/WINDOWS_INSTALLER.md` and `docs/PRODUCTION_HARDENING.md`.
For non-technical users on Windows, the easiest option is:
```
Start-Automated-Video-Generator.bat
```

If you are already inside PowerShell, use:

```
.\Start-Automated-Video-Generator.bat
```

There is also a native PowerShell launcher:

```
.\Start-Automated-Video-Generator.ps1
```

It can:
- install Node.js and Python with `winget` if missing
- create `.env` from `.env.example`
- install Node dependencies
- install Python voice dependencies if needed
- start the local portal
- open the browser automatically
After the browser opens:
- Save your `PEXELS_API_KEY`
- Paste or edit your script
- Click `Generate Video`
- Wait on the live status page
- Watch or download the final MP4
- Clone the repository:

  ```
  git clone https://github.com/itsPremkumar/Automated-Video-Generator.git
  cd Automated-Video-Generator
  ```

- Double-click:

  ```
  Start-Automated-Video-Generator.bat
  ```

  If you are launching from PowerShell instead of File Explorer, use:

  ```
  .\Start-Automated-Video-Generator.bat
  ```

- The launcher handles the first-time setup and opens the browser portal.
- In the portal:
  - save your API key
  - paste the script
  - choose voice, orientation, and music if needed
  - start the render
  - watch or download the result
- Clone the repository:

  ```
  git clone https://github.com/itsPremkumar/Automated-Video-Generator.git
  cd Automated-Video-Generator
  ```

- Install Node dependencies:

  ```
  npm install
  ```

- Install Python voice dependencies:

  Windows:

  ```
  py -m pip install -r requirements.txt
  ```

  If `py` does not work:

  ```
  python -m pip install -r requirements.txt
  ```

  macOS or Linux:

  ```
  python3 -m pip install -r requirements.txt
  ```

- Copy `.env.example` to `.env`

  Windows PowerShell:

  ```
  Copy-Item .env.example .env
  ```

  macOS or Linux:

  ```
  cp .env.example .env
  ```

- Add `PEXELS_API_KEY` to `.env`
- Start the browser portal:

  ```
  npm run dev
  ```

- Open http://localhost:3001/
Check Node.js:

```
node -v
```

Check npm:

```
npm -v
```

Check Python (Windows):

```
py --version
```

or:

```
python --version
```

macOS or Linux:

```
python3 --version
```

Check Edge-TTS (Windows):

```
py -m edge_tts --help
```

or:

```
python -m edge_tts --help
```

macOS or Linux:

```
python3 -m edge_tts --help
```

Portal health check:

```
npm run dev
```

Then open http://localhost:3001/health. You should see JSON similar to:

```json
{"status":"ok","service":"video-generator"}
```

- Node.js 18+
- npm
- Python 3.8+
- FFmpeg available on your `PATH`
Note: the renderer tries to use bundled ffmpeg-static and ffprobe-static first, so many users will not need a separate FFmpeg install. A global FFmpeg install is still useful as a fallback on some machines.
You can run the MCP server directly without cloning:
```
npx automated-video-generator
```

Or install it globally:

```
npm install -g automated-video-generator
```

To run from source instead:

```
git clone https://github.com/itsPremkumar/Automated-Video-Generator.git
cd Automated-Video-Generator
npm install
pip install -r requirements.txt
```

Copy `.env.example` to `.env` and add your keys.
```
# Free stock media API keys
PEXELS_API_KEY=your_key_here
PIXABAY_API_KEY=your_key_here

# Optional but recommended for public deployments
PUBLIC_BASE_URL=https://your-domain.example

# Optional defaults
PORT=3001
VIDEO_ORIENTATION=portrait
VIDEO_VOICE=en-US-GuyNeural
```

`PEXELS_API_KEY` is the main one to start with, and Pexels offers a free API key.
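These variables might be consumed roughly like this in Node (a sketch only; `loadConfig` is a hypothetical helper, not this repo's actual config code):

```javascript
// Read portal settings from the environment, falling back to the
// documented defaults. Key names mirror .env.example above.
function loadConfig(env = process.env) {
  return {
    port: Number(env.PORT ?? 3001),
    orientation: env.VIDEO_ORIENTATION ?? "portrait",
    voice: env.VIDEO_VOICE ?? "en-US-GuyNeural",
    pexelsApiKey: env.PEXELS_API_KEY ?? null, // required for stock footage
  };
}
```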
Create `input/input-scripts.json` with one or more jobs:
```json
[
  {
    "id": "youtube-shorts-demo",
    "title": "3 Productivity Habits That Actually Work",
    "orientation": "portrait",
    "language": "tamil",
    "script": "வணக்கம்! செயற்கை நுண்ணறிவு தொழில்நுட்பம் உலகையே மாற்றிக்கொண்டிருக்கிறது."
  }
]
```
Run the pipeline:
```
npm run generate
```

The final video will be written to `output/<id>/`.
Run the local portal:
```
npm run dev
```

Then open http://localhost:3001/
The portal lets you:
- Start a render from the browser
- Save API keys from the browser setup form
- Fill a sample script without touching `input/input-scripts.json`
- Track progress on a job page
- Watch completed videos
- Download the final MP4
- Expose SEO-ready pages if you deploy it publicly
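If you are scripting against the portal, a readiness check against the `/health` endpoint might look like this (a sketch assuming Node 18+ with global `fetch`; `isHealthy` and `waitForPortal` are hypothetical helper names, not part of this repo):

```javascript
// Returns true when a /health response body reports status "ok".
function isHealthy(body) {
  try {
    return JSON.parse(body).status === "ok";
  } catch {
    return false; // not JSON yet, or malformed
  }
}

// Poll the portal until it answers healthily or attempts run out.
async function waitForPortal(url = "http://localhost:3001/health", attempts = 10) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (isHealthy(await res.text())) return true;
    } catch {
      // server not up yet; retry after a short delay
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  return false;
}
```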
- Clone the repo
- Run the launcher or complete the manual install
- Open http://localhost:3001/
- Save `PEXELS_API_KEY`
- Paste the script
- Click `Generate Video`
- Wait for the render page to finish
- Watch or download the MP4
Preview templates and compositions locally:
```
npm run remotion:studio
```

This project ships with an MCP server, so AI tools can create and manage videos through chat-driven workflows.
Start the MCP server:
```
npm run mcp
```

Useful for:
- Claude Desktop
- Claude Code
- Other Model Context Protocol clients
- CI runs on pushes and pull requests
- Dependabot keeps npm and GitHub Actions dependencies fresh
- Issue templates make bug reports and feature requests easier to review
- A pull request template helps contributors ship cleaner changes
- Parse a script into scenes and durations.
- Fetch stock visuals or use local assets from `input/input-assests/`.
- Generate voiceover audio with Edge-TTS and supported fallbacks when needed.
- Save scene data into `output/<job-id>/scene-data.json`.
- Render scene segments with Remotion.
- Stitch the final MP4 and thumbnail for playback and sharing.
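The first step could be sketched like this (illustrative only: `parseScript` and the 2.5 words-per-second pacing are assumptions, not this repo's actual implementation, which also handles `[Visual: ...]` tags):

```javascript
// Split a plain-text script into sentence-level scenes and estimate
// how long each scene's narration will take. Numbers are illustrative.
const WORDS_PER_SECOND = 2.5;
const MIN_SCENE_SECONDS = 2;

function parseScript(script) {
  return script
    .split(/(?<=[.!?])\s+/) // break after sentence-ending punctuation
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((text, index) => ({
      index,
      text,
      durationSeconds: Math.max(
        MIN_SCENE_SECONDS,
        text.split(/\s+/).length / WORDS_PER_SECOND
      ),
    }));
}
```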
```
npm run generate                 # Generate videos from input/input-scripts.json
npm run resume                   # Resume an interrupted generation run
npm run segment                  # Rebuild from existing scene data
npm run remotion:studio          # Open Remotion studio
npm run remotion:render          # Render using the render pipeline
npm run dev                      # Start the local web portal
npm run mcp                      # Start the MCP server
npm run typecheck                # Validate TypeScript before opening a PR
npm run electron:verify-bundle   # Check desktop bundle inputs before building
npm run electron:verify-release  # Check the unpacked Windows release
```

```
src/
  adapters/
    http/                  Express controllers, routes, and server bootstrap
    cli/                   CLI adapter and batch runner
    mcp/                   MCP tool registrars and MCP-specific stores/helpers
  application/             Shared application services and orchestration
  infrastructure/
    persistence/           Persistent job tracking
  shared/
    contracts/             Shared runtime-safe request/status contracts
    runtime/               Path and runtime helpers
    logging/               Runtime-aware logging helpers
  app.ts                   Express app composition
  server.ts                Thin HTTP executable entrypoint
  cli.ts                   Thin CLI executable entrypoint
  mcp-server.ts            Thin MCP executable entrypoint
  video-generator.ts       Pipeline generation implementation
  render.ts                Segmented Remotion renderer
electron/
  electron-main.ts         Desktop composition root
  dependency-service.ts    Desktop dependency checks and repair
  server-manager.ts        Desktop backend process manager
  window-manager.ts        Desktop window and tray manager
  ipc.ts                   Electron IPC wiring
remotion/
  MainVideo.tsx
  SingleSceneVideo.tsx
  Root.tsx
input/
  input-scripts.json
  input-assests/           Local images and videos
output/                    Generated videos
public/                    Job assets served by the portal
```
For the full architecture reference, see docs/ARCHITECTURE.md.
Yes. The project itself is completely free and open source under the MIT license. There is no paid plan attached to this repo. Optional external services may have their own rules or usage limits.
Yes. It is an open-source text-to-video pipeline that uses AI voice generation plus deterministic media selection and Remotion rendering.
Yes. Use portrait for 9:16 output and landscape for 16:9 videos.
No watermark is added by this project.
Yes. Put files in `input/input-assests/` and reference them with `[Visual: filename.mp4]` or `[Visual: filename.jpg]`.
Yes. You can run it locally, in Docker, or behind your own deployment setup.
Not always. The project tries bundled ffmpeg-static first. A global FFmpeg install is mainly a fallback for machines where the bundled binary cannot be used.
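That lookup order can be sketched as follows (a guess at the shape, not the repo's actual code; when installed, the `ffmpeg-static` package exports the absolute path of its bundled binary):

```javascript
// Prefer the bundled ffmpeg-static binary; fall back to whatever
// `ffmpeg` resolves to on PATH. Illustrative only.
function resolveFfmpeg() {
  try {
    const bundled = require("ffmpeg-static"); // absolute path when installed
    if (bundled) return bundled;
  } catch {
    // package not installed in this environment
  }
  return "ffmpeg"; // rely on a global install on PATH
}
```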
The desktop app now tries multiple voice paths instead of failing immediately.
It prefers bundled Edge-TTS, can repair the bundled runtime from the setup wizard, and can fall back to Windows offline speech if needed.
Clone the repo, run `Start-Automated-Video-Generator.bat`, save the `PEXELS_API_KEY` in the browser portal, and generate from the UI.
Try:
```
python -m pip install -r requirements.txt
```

If Python itself is broken, reinstall Python 3 or use:

```
winget install --id Python.Python.3.12 --exact --accept-package-agreements --accept-source-agreements --silent
```

PowerShell does not run files from the current folder by name alone.
Use:
```
.\Start-Automated-Video-Generator.bat
```

or:

```
.\Start-Automated-Video-Generator.ps1
```

These files make the project easier for AI tools and answer engines to understand:

- `llms.txt`
- `llms-full.txt`
- `QUICKSTART.md`
- `WINDOWS_INSTALLER.md`
- `SETUP.md`
- `PRODUCTION_HARDENING.md`
- `CLAUDE_MCP_SETUP.md`
If you want more GitHub discovery, set repo topics like:
free-video-generator, open-source-video-generator, text-to-video, ai-video-generator, remotion, edge-tts, youtube-shorts, tiktok-video-generator, self-hosted, mcp-server
Issues, feature requests, docs improvements, and pull requests are welcome.
If this repo helps you, please star it on GitHub:
