Runtime error
Exit code: 1. Reason:
█████| 189M/189M [00:00<00:00, 284MB/s]
model.pth:   0%|          | 0.00/1.28G [00:00<?, ?B/s]
model.pth:  26%|███       | 336M/1.28G [00:01<00:02, 335MB/s]
model.pth: 100%|██████████| 1.28G/1.28G [00:01<00:00, 732MB/s]
special_tokens.json:   0%|          | 0.00/31.0k [00:00<?, ?B/s]
special_tokens.json: 100%|██████████| 31.0k/31.0k [00:00<00:00, 124MB/s]
tokenizer.tiktoken:   0%|          | 0.00/1.70M [00:00<?, ?B/s]
tokenizer.tiktoken: 100%|██████████| 1.70M/1.70M [00:00<00:00, 25.0MB/s]
All checkpoints downloaded
/home/user/app/app.py:30: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
  torchaudio.set_audio_backend("soundfile")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
2025-01-07 03:11:48.495 | INFO | __main__:<module>:513 - Loading models...
SPACES_ZERO_GPU_DEBUG self.arg_queue._writer.fileno()=7
SPACES_ZERO_GPU_DEBUG self.res_queue._writer.fileno()=9
2025-01-07 03:11:58.728 | INFO | tools.llama.generate:load_model:682 - Restored model from checkpoint
2025-01-07 03:11:58.729 | INFO | tools.llama.generate:load_model:688 - Using DualARTransformer
2025-01-07 03:11:58.729 | INFO | tools.llama.generate:load_model:696 - Compiling function...
2025-01-07 03:11:59.941 | INFO | tools.vqgan.inference:load_model:43 - Loaded model: <All keys matched successfully>
Traceback (most recent call last):
  File "/home/user/app/app.py", line 514, in <module>
    llama_queue, decoder_model = init_models()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 214, in gradio_handler
    raise res.value
_pickle.PicklingError: cannot pickle '_thread.lock' object
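The traceback at the bottom is the actionable part. On a ZeroGPU Space, a function decorated with `@spaces.GPU` runs in a separate worker process, and its arguments and return value are pickled across that process boundary. Here `init_models()` apparently returns `llama_queue`, a thread-safe queue whose internal `_thread.lock` cannot be pickled, so the wrapper in `spaces/zero/wrappers.py` re-raises the `PicklingError`. Below is a minimal sketch of the failure and the usual workaround, assuming `init_models()` is currently wrapped with `@spaces.GPU`; the helpers `build_llama_queue`, `load_decoder`, and `run_tts` are hypothetical placeholders, not fish-speech's actual API.

```python
# Minimal sketch, not the app's actual code. Only `spaces.GPU` is a real
# decorator (from the `spaces` package on ZeroGPU Spaces); build_llama_queue,
# load_decoder, and run_tts are hypothetical stand-ins.
import pickle
import queue

import spaces

# Root cause: queue.Queue wraps _thread.lock objects internally, and locks
# are not picklable, so they cannot cross a pickle-based process boundary.
try:
    pickle.dumps(queue.Queue())
except TypeError as exc:  # CPython raises TypeError here; the spaces wrapper
    print(exc)            # surfaces the same failure as _pickle.PicklingError
# -> cannot pickle '_thread.lock' object


def build_llama_queue() -> queue.Queue:
    return queue.Queue()  # hypothetical: stands in for the real model queue

def load_decoder() -> object:
    return object()       # hypothetical: stands in for the VQ-GAN decoder

def run_tts(text: str) -> bytes:
    return text.encode()  # hypothetical: stands in for actual synthesis

# Workaround: build the queue and load the models in the main process, with
# NO @spaces.GPU on init_models, so nothing unpicklable is ever returned
# across the worker boundary.
def init_models():
    llama_queue = build_llama_queue()
    decoder_model = load_decoder()
    return llama_queue, decoder_model

# Reserve @spaces.GPU for the inference entry point, whose input (str) and
# output (bytes) are plain picklable values.
@spaces.GPU
def synthesize(text: str) -> bytes:
    return run_tts(text)
```

Moving the queue/model construction out from under `@spaces.GPU` (or loading the models lazily inside the decorated inference function) keeps only picklable values crossing the ZeroGPU boundary and should clear the `cannot pickle '_thread.lock' object` error.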