
Doesn't run on CPU #9

Open
doevent opened this issue Jun 22, 2022 · 2 comments
Comments


doevent commented Jun 22, 2022

Running the sampling script on CPU gives the following error:

Traceback (most recent call last):
  File "sample.py", line 554, in <module>
    do_run()
  File "sample.py", line 319, in do_run
    text_emb = bert.encode([args.text]*args.batch_size).to(device).float()
  File "/home/user/server/encoders/modules.py", line 102, in encode
    return self(text)
  File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/server/encoders/modules.py", line 94, in forward
    tokens = self.tknz_fn(text)#.to(self.device)
  File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/server/encoders/modules.py", line 65, in forward
    tokens = batch_encoding["input_ids"].to(self.device)
  File "/home/user/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 216, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

Command:
python sample.py --model_path erlich.pt --batch_size 1 --num_batches 1 --text "a cyberpunk girl with a scifi neuralink device on her head" --cpu
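From the traceback, the tokenizer wrapper in encoders/modules.py appears to move its output to self.device unconditionally, so torch still tries to initialize CUDA even when --cpu is passed. A minimal sketch of a device-safe pattern (the example tensor and variable names below are hypothetical, not taken from the repo):

import torch

# Use CUDA only when a GPU and driver are actually available; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Example: move tokenizer output to the selected device instead of a hardcoded "cuda".
batch_encoding = {"input_ids": torch.tensor([[101, 2023, 2003, 102]])}
tokens = batch_encoding["input_ids"].to(device)
print(tokens.device)  # prints "cpu" on a machine without an NVIDIA driver

Presumably the --cpu flag would need to propagate a device chosen this way into the encoder modules rather than leaving them pinned to CUDA.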

@afiaka87 (Collaborator)

@doevent

Noted, thanks. I'll work on a fix soon, hopefully. Although I should mention that this model is likely to take a very long time to run on CPU.

@alishan2040

@afiaka87 My question is about which GPU is ideal for training/fine-tuning the latent-diffusion model. I have tried several GPUs with 16 to 24 GB of VRAM but still ran into out-of-memory issues. Which GPU and which settings did you use to fine-tune latent diffusion? Thanks.
