Confusion between xla_model.xla_device() and runtime.xla_device() #7831

Open
bhavya01 opened this issue Aug 12, 2024 · 2 comments · May be fixed by #9200
Labels
good first issue Good for newcomers usability Bugs/features related to improving the usability of PyTorch/XLA

Comments

@bhavya01
Collaborator

Currently we have two modules exposing an xla_device function (core.xla_model and runtime). Since both modules are commonly imported in user-facing programs, having two functions with the same name can be confusing.

xla_model.xla_device has some extra logic to handle the SPMD case, but otherwise it just calls runtime.xla_device. Opening this issue to propose that we move runtime.xla_device to a utils module, rename it to something like xla_device_helper, and keep core.xla_model.xla_device for backward compatibility.
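A minimal sketch of the proposed layout, with stub logic standing in for the real runtime query (the name `xla_device_helper` comes from the proposal above; everything else here is illustrative, not torch_xla's actual implementation):

```python
import warnings


def xla_device_helper(index=None):
    """Sketch of the proposed renamed low-level helper."""
    # Placeholder device discovery; the real function queries the XLA runtime.
    return f"xla:{index if index is not None else 0}"


def xla_device(index=None):
    """Backward-compatible stand-in for core.xla_model.xla_device."""
    warnings.warn(
        "xla_device is kept for backward compatibility; new code should "
        "use the higher-level device helper",
        DeprecationWarning,
        stacklevel=2,
    )
    return xla_device_helper(index)
```

Keeping the old entry point as a thin wrapper means existing user programs keep working while the warning steers new code toward a single canonical helper.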

@bhavya01 bhavya01 self-assigned this Aug 12, 2024
@bhavya01 bhavya01 added usability Bugs/features related to improving the usability of PyTorch/XLA good first issue Good for newcomers labels Aug 12, 2024
@ysiraichi ysiraichi added enhancement New feature or request and removed enhancement New feature or request labels Apr 1, 2025
@ghpvnist ghpvnist assigned ghpvnist and unassigned bhavya01 May 19, 2025
@bhavya01
Collaborator Author

bhavya01 commented May 20, 2025

@ghpvnist It would be great if we could use torch_xla.device() where possible instead of xm.xla_device().

We exposed torch_xla.device() as a higher-level API because it is so frequently used.

@yaoshiang
Collaborator

yaoshiang commented May 20, 2025

I think we should be able to deprecate both and only use torch.device. I don't fully understand the usage of the SPMD logic (`if xu.check_env_flag('XLA_USE_SPMD'):`); it appears to simply hard-code "xla:0" when under SPMD. That branch can probably be removed: either we instruct the user to use torch.device("xla:0") under SPMD, or we capture the SPMD case elsewhere.
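The branch in question boils down to something like the sketch below, with stand-ins for the real torch_xla helpers (the truthy spellings accepted by `check_env_flag` here are an assumption, not copied from torch_xla's source):

```python
import os


def check_env_flag(name, default=""):
    # Approximation of xu.check_env_flag; the accepted truthy values
    # are an assumption for illustration.
    return os.environ.get(name, default).upper() in ("1", "ON", "YES", "TRUE", "Y")


def discover_runtime_device():
    # Stand-in for runtime.xla_device(); the real code queries the XLA runtime.
    return "xla:0"


def xla_device_sketch():
    # Under SPMD, PyTorch/XLA presents a single virtual device, so the
    # current code short-circuits to "xla:0" before normal discovery.
    if check_env_flag("XLA_USE_SPMD"):
        return "xla:0"
    return discover_runtime_device()
```

Since the SPMD branch only ever yields the fixed string "xla:0", the same result could come from torch.device("xla:0") at the call site, which is why the branch looks removable.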
