Support XPU prefix #75
Conversation
Reviewer's Guide
This PR integrates support for the Intel GPU prefix "xpu:" by expanding device availability checks, updating the device list, and extending allocation logic to include xpu alongside cuda.

Class diagram for device selection and allocation changes
classDiagram
    class DeviceSelectorMultiGPU {
        +override(*args, device=None, expert_mode_allocations=None, use_other_vram=None)
    }
    class Torch {
        +cuda.is_available()
        +cuda.device_count()
        +xpu.is_available()
        +xpu.device_count()
    }
    class mm {
        CPUState
        cpu_state
    }
    class get_torch_device_patched {
        +get_torch_device_patched()
    }
    class text_encoder_device_patched {
        +text_encoder_device_patched()
    }
    class get_device_list {
        +get_device_list()
    }
    DeviceSelectorMultiGPU --|> get_device_list
    get_torch_device_patched --|> Torch
    text_encoder_device_patched --|> Torch
    get_device_list --|> Torch
Hey @Nuitari - I've reviewed your changes - here's some feedback:
- Wrap direct torch.xpu calls in hasattr or try/except blocks to avoid runtime errors on PyTorch builds without XPU support.
- Extract the repeated (cuda or xpu) availability check and device-list construction into a shared helper to reduce duplication (a combined sketch follows below).
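Both points could be addressed together with a small guarded helper, roughly like the sketch below; the names accelerator_available and list_accelerator_devices are hypothetical and not part of this PR:

```python
import torch

def accelerator_available():
    # True if either CUDA or an Intel XPU backend is usable.
    # hasattr() protects PyTorch builds that do not ship torch.xpu at all.
    if torch.cuda.is_available():
        return True
    return hasattr(torch, "xpu") and torch.xpu.is_available()

def list_accelerator_devices():
    # One entry per visible CUDA/XPU device, reusable by get_device_list().
    devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())]
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        devices += [f"xpu:{i}" for i in range(torch.xpu.device_count())]
    return devices
```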
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Wrap direct torch.xpu calls in hasattr or try/except blocks to avoid runtime errors on PyTorch builds without XPU support.
- Extract the repeated (cuda or xpu) availability check and device-list construction into a shared helper to reduce duplication.
## Individual Comments
### Comment 1
<location> `__init__.py:40` </location>
<code_context>
 def get_torch_device_patched():
     device = None
-    if (not torch.cuda.is_available() or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_device).lower()):
+    if (not (torch.cuda.is_available() or torch.xpu.is_available()) or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_device).lower()):
         device = torch.device("cpu")
     else:
</code_context>
<issue_to_address>
Potential AttributeError if torch.xpu is not available in all environments.
To avoid errors, use hasattr(torch, 'xpu') before accessing torch.xpu methods.
</issue_to_address>
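For instance, the patched condition could compute the XPU probe behind an attribute guard first. This is a sketch only, assuming the module-level names `current_device` and `mm` already used in the file:

```python
def get_torch_device_patched():
    # Guard the XPU probe so builds without torch.xpu fall through to False.
    xpu_ok = hasattr(torch, "xpu") and torch.xpu.is_available()
    if (not (torch.cuda.is_available() or xpu_ok)
            or mm.cpu_state == mm.CPUState.CPU
            or "cpu" in str(current_device).lower()):
        return torch.device("cpu")
    return torch.device(current_device)
```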
### Comment 2
<location> `__init__.py:48` </location>
<code_context>
 def text_encoder_device_patched():
     device = None
-    if (not torch.cuda.is_available() or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_text_encoder_device).lower()):
+    if (not (torch.cuda.is_available() or torch.xpu.is_available()) or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_text_encoder_device).lower()):
         device = torch.device("cpu")
     else:
</code_context>
<issue_to_address>
torch.xpu usage may not be safe on all platforms.
Add a check like hasattr(torch, 'xpu') before using torch.xpu to prevent errors on systems where it is unavailable.
</issue_to_address>
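If a try/except guard is preferred over hasattr (the overall feedback mentions both), a small hypothetical helper like this could be shared by both patched functions:

```python
def xpu_is_available():
    # Hypothetical helper: an AttributeError means this build has no torch.xpu.
    try:
        return torch.xpu.is_available()
    except AttributeError:
        return False
```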
### Comment 3
<location> `__init__.py:328` </location>
<code_context>
 def get_device_list():
     import torch
-    return ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())]
+    return ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())] + [f"xpu:{i}" for i in range(torch.xpu.device_count())]
 class DeviceSelectorMultiGPU:
</code_context>
<issue_to_address>
Unconditional torch.xpu.device_count() may cause errors if xpu is not present.
Check for 'xpu' in torch with hasattr(torch, 'xpu') before calling torch.xpu.device_count() to avoid AttributeError on systems without xpu support.
</issue_to_address>
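Applied to the device list, the reviewer's suggestion might look like this sketch (not the code as submitted in the PR):

```python
def get_device_list():
    import torch
    devices = ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())]
    # Only enumerate XPU devices when this PyTorch build exposes torch.xpu.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        devices += [f"xpu:{i}" for i in range(torch.xpu.device_count())]
    return devices
```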
@@ -37,15 +37,15 @@

 def get_torch_device_patched():
     device = None
-    if (not torch.cuda.is_available() or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_device).lower()):
+    if (not (torch.cuda.is_available() or torch.xpu.is_available()) or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_device).lower()):
issue (bug_risk): Potential AttributeError if torch.xpu is not available in all environments.
To avoid errors, use hasattr(torch, 'xpu') before accessing torch.xpu methods.
device = torch.device("cpu") | ||
else: | ||
device = torch.device(current_device) | ||
return device | ||
|
||
 def text_encoder_device_patched():
     device = None
-    if (not torch.cuda.is_available() or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_text_encoder_device).lower()):
+    if (not (torch.cuda.is_available() or torch.xpu.is_available()) or mm.cpu_state == mm.CPUState.CPU or "cpu" in str(current_text_encoder_device).lower()):
issue (bug_risk): torch.xpu usage may not be safe on all platforms.
Add a check like hasattr(torch, 'xpu') before using torch.xpu to prevent errors on systems where it is unavailable.
@@ -325,7 +325,7 @@ def calculate_vvram_allocation_string(model, virtual_vram_str):

 def get_device_list():
     import torch
-    return ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())]
+    return ["cpu"] + [f"cuda:{i}" for i in range(torch.cuda.device_count())] + [f"xpu:{i}" for i in range(torch.xpu.device_count())]
issue (bug_risk): Unconditional torch.xpu.device_count() may cause errors if xpu is not present.
Check for 'xpu' in torch with hasattr(torch, 'xpu') before calling torch.xpu.device_count() to avoid AttributeError on systems without xpu support.
Intel GPUs are detected with the prefix xpu: instead of cuda:.
This PR allows the xpu: entries to show up in the selector.
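For illustration, on a machine with one Intel GPU and no CUDA device (an assumed setup), the patched get_device_list() would return roughly:

```python
get_device_list()
# -> ["cpu", "xpu:0"]   # xpu: entries now appear in the selector alongside any cuda:N entries
```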
Summary by Sourcery
Enable detection and selection of Intel XPU devices by adding torch.xpu availability checks, listing xpu devices, and updating allocation routines to include XPU alongside CUDA.
New Features: