Local-first FastAPI + Vue 3 WebUI for running modern diffusion pipelines with managed installers and safe updates.
Quick links: Install | Quick Start | Docker Install | Model Hub Notes | Custom PyTorch FA2 | Support
| Launcher (Services tab) | WebUI (SDXL tab) |
|---|---|
| ![]() | ![]() |
| Manage API/UI services and runtime startup state from one place. | Run SDXL workflows with generation controls and model/runtime options. |
> [!IMPORTANT]
> This project optionally provides Windows PyTorch builds with FlashAttention2 enabled for RTX architecture targets.
>
> - Repository: https://github.com/sangoi-exe/pytorch
> - Targeted RTX build variants: `SM80`, `SM86`, `SM89`, `SM90`
>
> For this workflow, these custom builds are the recommended path when you want FA2-enabled runtime behavior.
## Features

- Multi-engine workflows: SD15, SDXL, FLUX.1, Z-Image, WAN 2.2, and related adapters.
- Utility views and tools: PNG Info, XYZ plot, GGUF tools, and workflow snapshots.
- Local-first backend/UI stack with managed Python and Node environments.
## Install

Prerequisites:

- Git
- Internet access for the first install (`uv`, managed Python, repo-local Node.js via `nodeenv`, and Python wheels)

For detailed install options, backend selection, and troubleshooting, see INSTALL.md.
## Quick Start

### Windows

Run the installer:

```cmd
.\install-webui.cmd
```

Then start the WebUI:

```cmd
run-webui.bat
```

An online one-liner and a full manual path (no installer scripts) are documented in INSTALL.md.

Optional safe update:

```cmd
update-webui.bat
```

### Linux / WSL

Run the installer:

```bash
bash install-webui.sh
```

Then start the WebUI:

```bash
./run-webui.sh
```

Optional safe update:

```bash
bash update-webui.sh

# Optional: ignore untracked-path preflight checks (tracked changes still abort)
bash update-webui.sh --force
```

On Linux/WSL, startup prints the local URLs:

```text
[webui] API: http://localhost:7850
[webui] UI:  http://localhost:7860
```

Open the UI URL in your browser. Stop with Ctrl+C.

On Windows, use the launcher output/UI to confirm the active API and UI endpoints.
## Docker Install

Use this path when you want a containerized setup with persisted models/output state.

Build the image:

```bash
docker build -t codex-webui:latest .
```

Optional build overrides:

```bash
docker build -t codex-webui:latest . \
  --build-arg CODEX_TORCH_MODE=cpu \
  --build-arg CODEX_TORCH_BACKEND=cu126
```

Run with GPU access and persisted models/output:

```bash
docker run --rm -it --gpus all \
  -p 7850:7850 -p 7860:7860 \
  -v "$(pwd)/models:/opt/stable-diffusion-webui-codex/models" \
  -v "$(pwd)/output:/opt/stable-diffusion-webui-codex/output" \
  codex-webui:latest
```

First-time interactive profile setup:

```bash
docker run --rm -it \
  codex-webui:latest --tui --configure-only
```

Or build and run with compose:

```bash
docker compose up --build
```

First-time interactive profile setup with compose:

```bash
docker compose run --rm webui --tui --configure-only
```

Notes:

- The entrypoint is `run-webui-docker.sh` (it delegates to `apps/docker_tui_launcher.py` and then `run-webui.sh`).
- Disable the interactive TUI with `-e CODEX_DOCKER_TUI=0` or the runtime args `--no-tui --non-interactive`.
- If you override `API_PORT_OVERRIDE` or `WEB_PORT` in the TUI/profile, host `-p` mappings must use the same ports.
- Docker defaults are preseeded for this project profile (CUDA runtime path, SDPA flash, LoRA online, WAN22 ram+hd) and can be overridden with `-e KEY=VALUE` or a compose `.env`.
- The default allocator env is `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync`.
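For the compose path, the overridable settings above can be collected into a `.env` sketch. The variable names are the ones this README mentions; the values are illustrative, so verify both against INSTALL.md before relying on them:

```
# Illustrative compose .env (names from the notes above; values are examples)

# Skip the interactive TUI on container start
CODEX_DOCKER_TUI=0

# Default allocator setting, shown explicitly
PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync

# If overridden, host -p mappings must use the same ports
API_PORT_OVERRIDE=7850
WEB_PORT=7860
```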
## Safe Update

`update-webui.(bat|sh)` is fail-closed and non-destructive:

- Dirty-worktree checks are strict: tracked changes always abort; untracked changes abort unless `--force` is set.
- `--force` only relaxes untracked-path preflight checks; `git pull --ff-only` safety checks still apply.
- Ignored paths (`.gitignore`) are excluded from dirty-tree abort checks.
- The update path remains non-destructive: `git fetch --prune` + `git pull --ff-only`.

The full contract and edge cases are documented in INSTALL.md.
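The contract above can be sketched as a shell preflight. This is an illustrative reimplementation inferred from the documented behavior, not the actual `update-webui` script:

```shell
#!/bin/sh
# Illustrative sketch of the fail-closed update preflight described above.
# Not the real update-webui script; logic inferred from the stated contract.
preflight_update() {
  force="$1"

  # Tracked changes (modified or staged) always abort.
  if ! git diff --quiet || ! git diff --cached --quiet; then
    echo "abort: tracked changes present" >&2
    return 1
  fi

  # Untracked, non-ignored files abort unless --force is given
  # (.gitignore'd paths are excluded by --exclude-standard).
  untracked=$(git ls-files --others --exclude-standard)
  if [ -n "$untracked" ] && [ "$force" != "--force" ]; then
    echo "abort: untracked files present (use --force)" >&2
    return 1
  fi

  # Non-destructive update path; --ff-only still refuses divergent history.
  git fetch --prune && git pull --ff-only
}
```

Note how `--force` only skips the untracked-file branch: the tracked-change abort and the fast-forward-only pull remain in effect either way.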
## Model Hub Notes

Official Hugging Face repo: https://huggingface.co/sangoi-exe/sd-webui-codex

Highlights from the Hub (snapshot from `hf download --dry-run` on 2026-01-29):

- FLUX.1
  - `flux/FLUX.1-dev-Q5_K_M-Codex.gguf`
- Z-Image
  - `zimage/Z-Image-Turbo-Q5_K_M-Codex.gguf`
  - `zimage/Z-Image-Turbo-Q8_0-Codex.gguf`
- WAN 2.2
  - `wan22/wan22_i2v_14b_HN_lx2v_4step-Q4_K_M-Codex.gguf`
  - `wan22/wan22_i2v_14b_LN_lx2v_4step-Q4_K_M-Codex.gguf`
  - `wan22-loras/wan22_i2v_14b_HN_lx2v_4step_lora_rank64_1022.safetensors`
  - `wan22-loras/wan22_i2v_14b_LN_lx2v_4step_lora_rank64_1022.safetensors`
  - `wan22-loras/wan22_t2v_14b_HN_lx2v_4step_lora_rank64_1017.safetensors`
  - `wan22-loras/wan22_t2v_14b_LN_lx2v_4step_lora_rank64_1017.safetensors`

Default model roots are under `models/`:

- `models/sd15/`, `models/sdxl/`, `models/flux/`, `models/zimage/`, `models/wan22/`
- plus `*-vae`, `*-tenc`, `*-loras` variants

More model hub and folder layout details: README_HF_MODELS.md. If you customize model roots, edit `apps/paths.json`.
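As a small convenience sketch (assuming you run it from the repository root with the default `models/` roots above), a shell helper can report which of the Hub files are already present locally:

```shell
# Sketch: report which Hub-relative files already exist under models/.
# Assumes the default model roots; adjust paths if you customized apps/paths.json.
check_models() {
  for f in "$@"; do
    if [ -f "models/$f" ]; then
      echo "present $f"
    else
      echo "missing $f"
    fi
  done
}

# Example: check the GGUF highlights listed above.
check_models \
  flux/FLUX.1-dev-Q5_K_M-Codex.gguf \
  zimage/Z-Image-Turbo-Q5_K_M-Codex.gguf \
  zimage/Z-Image-Turbo-Q8_0-Codex.gguf \
  wan22/wan22_i2v_14b_HN_lx2v_4step-Q4_K_M-Codex.gguf \
  wan22/wan22_i2v_14b_LN_lx2v_4step-Q4_K_M-Codex.gguf
```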
## Support

- Bug reports: https://github.com/sangoi-exe/stable-diffusion-webui-codex/issues
- Questions and discussion: https://github.com/sangoi-exe/stable-diffusion-webui-codex/discussions
## License

- Code license: PolyForm Noncommercial License 1.0.0 (see LICENSE).
- The required notice must be preserved: see NOTICE.
- Commercial use is not permitted: see COMMERCIAL.md.
- Trademarks and branding: see TRADEMARKS.md.
## Acknowledgements

- Diffusion ecosystem baseline: Hugging Face Diffusers.
- UX and workflow inspiration: AUTOMATIC1111 and Forge.


