Support XPU prefix #75

Open · Nuitari wants to merge 1 commit into main

Conversation

@Nuitari commented Jul 20, 2025

Intel GPUs are detected with the xpu: prefix instead of cuda:.
This PR allows the xpu: entries to show up in the device selector.

Summary by Sourcery

Enable detection and selection of Intel XPU devices by adding torch.xpu availability checks, listing XPU devices, and updating allocation routines to consider XPU alongside CUDA.

New Features:

  • Add support for torch.xpu device availability alongside CUDA for main and text encoder device selection
  • Include xpu:{i} entries in the global device list
  • Extend VRAM allocation logic to consider XPU devices when building available device selections
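For context, a minimal standalone sketch of the torch.xpu API this PR relies on. It mirrors torch.cuda, but only PyTorch builds with Intel XPU support ship the torch.xpu module, hence the guard:

```python
import torch

# torch.xpu mirrors the torch.cuda API on PyTorch builds with Intel
# XPU support; other builds may not expose torch.xpu at all, which is
# why hasattr is checked before any torch.xpu call.
if hasattr(torch, "xpu") and torch.xpu.is_available():
    xpu_devices = [f"xpu:{i}" for i in range(torch.xpu.device_count())]
    print(xpu_devices)  # e.g. ['xpu:0']
else:
    print("no XPU devices detected")
```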

sourcery-ai bot commented Jul 20, 2025

Reviewer's Guide

This PR integrates support for the Intel GPU prefix “xpu:” by expanding device availability checks, updating the device list, and extending allocation logic to include xpu alongside cuda.

Class diagram for device selection and allocation changes

```mermaid
classDiagram
    class DeviceSelectorMultiGPU {
        +override(*args, device=None, expert_mode_allocations=None, use_other_vram=None)
    }
    class Torch {
        +cuda.is_available()
        +cuda.device_count()
        +xpu.is_available()
        +xpu.device_count()
    }
    class mm {
        CPUState
        cpu_state
    }
    class get_torch_device_patched {
        +get_torch_device_patched()
    }
    class text_encoder_device_patched {
        +text_encoder_device_patched()
    }
    class get_device_list {
        +get_device_list()
    }
    DeviceSelectorMultiGPU --|> get_device_list
    get_torch_device_patched --|> Torch
    text_encoder_device_patched --|> Torch
    get_device_list --|> Torch
```

File-Level Changes

Include xpu availability in device selection functions (__init__.py)
  • Add torch.xpu.is_available() to the get_torch_device_patched condition
  • Add torch.xpu.is_available() to the text_encoder_device_patched condition

Add xpu devices to the global device list (__init__.py)
  • Extend the get_device_list return value with f"xpu:{i}" entries based on torch.xpu.device_count()

Extend device filtering in VRAM allocation to include xpu (__init__.py)
  • Include xpu-prefixed devices in the available_devices filter in the first override block
  • Include xpu-prefixed devices in the available_devices filter in the second override block

sourcery-ai bot left a comment
Hey @Nuitari - I've reviewed your changes - here's some feedback:

  • Wrap direct torch.xpu calls in hasattr or try/except blocks to avoid runtime errors on PyTorch builds without XPU support.
  • Extract the repeated (cuda or xpu) availability check and device-list construction into a shared helper to reduce duplication.
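A minimal sketch of what the two suggestions could look like together. The helper name accelerator_available is hypothetical, not part of the PR:

```python
import torch

def accelerator_available() -> bool:
    # Hypothetical shared helper: one place for the (cuda or xpu)
    # availability check duplicated across both patched device functions.
    # hasattr guards against PyTorch builds without the torch.xpu module.
    if torch.cuda.is_available():
        return True
    return hasattr(torch, "xpu") and torch.xpu.is_available()
```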

```diff
@@ -37,15 +37,15 @@
 def get_torch_device_patched():
     device = None
-    if (not torch.cuda.is_available() or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_device).lower()):
+    if (not (torch.cuda.is_available() or torch.xpu.is_available()) or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_device).lower()):
```

issue (bug_risk): Potential AttributeError if torch.xpu is not available in all environments.

To avoid errors, use hasattr(torch, 'xpu') before accessing torch.xpu methods.
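One way to apply the suggested guard here, as a sketch rather than the PR's code; the same guard would apply to text_encoder_device_patched below:

```python
# Sketch only: guard torch.xpu behind hasattr so PyTorch builds without
# the XPU module fall through to the CUDA/CPU logic instead of raising
# AttributeError.
xpu_ok = hasattr(torch, "xpu") and torch.xpu.is_available()
if (not (torch.cuda.is_available() or xpu_ok)
        or mm.cpu_state == mm.CPUState.CPU
        or "cpu" in str(current_device).lower()):
    device = torch.device("cpu")
```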

```diff
         device = torch.device("cpu")
     else:
         device = torch.device(current_device)
     return device

 def text_encoder_device_patched():
     device = None
-    if (not torch.cuda.is_available() or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_text_encoder_device).lower()):
+    if (not (torch.cuda.is_available() or torch.xpu.is_available()) or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_text_encoder_device).lower()):
```

issue (bug_risk): torch.xpu usage may not be safe on all platforms.

Add a check like hasattr(torch, 'xpu') before using torch.xpu to prevent errors on systems where it is unavailable.

```diff
@@ -325,7 +325,7 @@ def calculate_vvram_allocation_string(model, virtual_vram_str):

 def get_device_list():
     import torch
-    return ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())]
+    return ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())] + [f"xpu:{i}" for i in range(torch.xpu.device_count())]
```

issue (bug_risk): Unconditional torch.xpu.device_count() may cause errors if xpu is not present.

Check for 'xpu' in torch with hasattr(torch, 'xpu') before calling torch.xpu.device_count() to avoid AttributeError on systems without xpu support.
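A guarded variant along the lines the review suggests (a sketch, not the PR's code):

```python
def get_device_list():
    import torch
    devices = ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())]
    # Only query XPU devices on builds that actually ship torch.xpu.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        devices += [f"xpu:{i}" for i in range(torch.xpu.device_count())]
    return devices
```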
