Add GPU compute capability compatibility checking #367
base: main
```diff
@@ -182,6 +182,7 @@ class HealthResponse(BaseModel):
     vram_used_mb: Optional[float] = None
     backend_type: Optional[str] = None  # Backend type (mlx or pytorch)
     backend_variant: Optional[str] = None  # Binary variant (cpu or cuda)
+    gpu_compatibility_warning: Optional[str] = None  # Warning if GPU arch unsupported
```
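The new field is a free-form warning string. As a rough illustration of how a backend might populate it (a hedged sketch — the helper name `gpu_compatibility_warning_for` and its inputs are assumptions, not this PR's actual code), the check reduces to comparing the device's compute capability against the architecture list the PyTorch binary was compiled with, i.e. the values returned by `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()`:

```python
from typing import List, Optional, Tuple

def gpu_compatibility_warning_for(
    capability: Tuple[int, int], arch_list: List[str]
) -> Optional[str]:
    """Return a warning if `capability` (e.g. (12, 0), as returned by
    torch.cuda.get_device_capability()) is absent from `arch_list`
    (e.g. torch.cuda.get_arch_list(), like ['sm_80', 'sm_90']).
    Returns None when the architecture is supported."""
    major, minor = capability
    sm = f"sm_{major}{minor}"  # (12, 0) -> "sm_120", (8, 6) -> "sm_86"
    if sm in arch_list:
        return None
    return (
        f"GPU compute capability {major}.{minor} ({sm}) is not in this "
        f"PyTorch build's supported architectures: {arch_list}"
    )

# Example: an RTX 50-series GPU (sm_120) against a build without Blackwell kernels
print(gpu_compatibility_warning_for((12, 0), ["sm_80", "sm_86", "sm_90"]))
```

Keeping the helper pure (capability and arch list passed in, rather than queried inside) makes the warning logic testable without a GPU present.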
Contributor

Repository: jamiepine/voicebox

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check if frontend types include gpu_compatibility_warning
echo "=== Checking auto-generated types ==="
rg -n 'gpu_compatibility_warning' app/src/lib/api/ || echo "Field NOT found in auto-generated types"
echo ""
echo "=== Checking hand-written types.ts ==="
rg -n 'gpu_compatibility_warning' app/src/lib/api/types.ts || echo "Field NOT found in types.ts"
```

🏁 Script executed:

```bash
# Check justfile for generation commands
if [ -f "justfile" ]; then
  echo "=== Justfile generation commands ==="
  rg -A 3 'generate.*api|api.*generate' justfile
else
  echo "No justfile found"
fi
```

🏁 Script executed:

```bash
# Check headers of auto-generated files to see if they indicate regeneration is needed
echo "=== Checking auto-generated file headers ==="
head -20 app/src/lib/api/models/HealthResponse.ts 2>/dev/null || echo "File not found"
echo ""
head -20 app/src/lib/api/schemas/\$HealthResponse.ts 2>/dev/null || echo "File not found"
```

🏁 Script executed:

```bash
# Check if frontend code tries to access gpu_compatibility_warning
echo "=== Checking frontend code that uses HealthResponse ==="
# note: rg has no built-in 'tsx' file type (its 'ts' type already covers
# *.ts and *.tsx), so this invocation was re-run below with --type ts only
rg -n 'HealthResponse' app/src --type ts --type tsx -B 2 -A 5 | head -50
```

🏁 Script executed:

```bash
# Search for HealthResponse usage in frontend code
echo "=== HealthResponse usage in frontend ==="
rg -n 'HealthResponse' app/src --type ts -B 2 -A 5
```

🏁 Script executed:

```bash
# Check what the generate-api.sh script does
echo "=== Checking generate-api.sh script ==="
cat scripts/generate-api.sh
```

🏁 Script executed:

```bash
# Verify the backend model structure to understand the complete change
echo "=== Checking backend HealthResponse model ==="
rg -B 5 -A 10 'class HealthResponse|gpu_compatibility_warning' backend/models.py
```

Run the API code generator to sync frontend types with backend changes. The new `gpu_compatibility_warning` field exists on the backend `HealthResponse` model, but the auto-generated frontend types under `app/src/lib/api/` do not include it. Regenerate them (e.g. via `scripts/generate-api.sh`, inspected above). Without this, the frontend cannot access the GPU compatibility warning to display it to users.
```diff
 class DirectoryCheck(BaseModel):
```
🧩 Analysis chain

🌐 Web query: When will PyTorch stable release include sm_120 Blackwell GPU support?

💡 Result: PyTorch stable releases have supported sm_120 (Blackwell, compute capability 12.0) since version 2.7.0, released on April 23, 2025. Support ships in pre-built wheels for CUDA 12.8 and newer, which include the Blackwell architectures (sm_120). Install via `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128`. The official PyTorch 2.7 release notes confirm: "PyTorch 2.7 introduces support for NVIDIA's new Blackwell GPU architecture and ships pre-built wheels for CUDA 12.8." The RELEASE.md CUDA support matrix lists Blackwell (10.0, 12.0) for CUDA 12.8+ builds in 2.7 and later (e.g. 2.11). Ongoing GitHub issues (e.g. #164342, updated March 2026) report platform-specific problems such as Windows WDDM mode or certain workloads, but core kernel support has existed in stable builds since 2.7. PyTorch forum posts from maintainers (ptrblck) affirm: "all of our stable and nightly releases starting with PyTorch 2.7.0 and built with CUDA 12.8 or newer already support the Blackwell family including sm_120". As of March 28, 2026 (after the 2.11 release on March 23), use CUDA 12.8+ wheels from stable PyTorch for sm_120 support; nightlies or source builds may only be needed for edge cases.
Switch from PyTorch nightly to stable; sm_120 support has been available since PyTorch 2.7.

PyTorch has included native support for sm_120 (Blackwell) in stable releases since version 2.7.0 (April 2025). The stable CUDA 12.8 wheels already contain the necessary Blackwell kernels, so nightly builds are no longer needed.
Suggested change:

```diff
      - name: Install PyTorch with CUDA 12.8 (RTX 50-series / sm_120 support)
        run: |
-          pip install torch --index-url https://download.pytorch.org/whl/nightly/cu128 --force-reinstall --no-deps
-          pip install torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128 --force-reinstall --no-deps
+          pip install torch --index-url https://download.pytorch.org/whl/cu128 --force-reinstall --no-deps
+          pip install torchaudio --index-url https://download.pytorch.org/whl/cu128 --force-reinstall --no-deps
```

Using stable releases ensures reproducible builds and avoids unexpected breakage from nightly changes.
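Since the fix hinges on the installed torch being at least 2.7, a startup check could also guard on the version string. This is a hedged sketch under stated assumptions — the helper name is hypothetical, and real code would pass in `torch.__version__`:

```python
def supports_blackwell_stable(torch_version: str) -> bool:
    """True if this torch version line has stable sm_120 (Blackwell)
    support, which landed in 2.7.0 per the release notes above.
    Accepts local-version suffixes like '2.7.1+cu128'."""
    numeric = torch_version.split("+")[0]        # drop e.g. '+cu128'
    parts = numeric.split(".")
    major, minor = int(parts[0]), int(parts[1])  # compare major.minor only
    return (major, minor) >= (2, 7)

print(supports_blackwell_stable("2.7.1+cu128"))  # True
print(supports_blackwell_stable("2.5.0"))        # False
```

Note that this checks only the version number: a 2.7+ build compiled without CUDA 12.8 still lacks sm_120 kernels, so comparing against `torch.cuda.get_arch_list()` remains the authoritative test.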