python3 cli_demo.py chatglm
[i 0121 11:52:21.719398 00 compiler.py:956] Jittor(1.3.8.5) src: /opt/homebrew/lib/python3.11/site-packages/jittor
[i 0121 11:52:21.737304 00 compiler.py:957] clang at /usr/bin/clang++(15.0.0)
[i 0121 11:52:21.737433 00 compiler.py:958] cache_path: /Users/oujiajian/.cache/jittor/jt1.3.8/clang15.0.0/py3.11.7/macOS-14.2.1-axac/AppleM1/master
[i 0121 11:52:22.064102 00 __init__.py:227] Total mem: 16.00GB, using 5 procs for compiling.
[i 0121 11:52:22.234178 00 jit_compiler.cc:28] Load cc_path: /usr/bin/clang++
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 62%|██████████████████████████████████████████████████████████████████████████▍ | 5/8 [00:09<00:05, 1.92s/it]
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 415, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/serialization.py", line 993, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/serialization.py", line 447, in init
super().init(torch._C.PyTorchFileReader(name_or_buffer))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 419, in load_state_dict
if f.read(7) == "version":
^^^^^^^^^
File "", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/oujiajian/JittorLLMs/cli_demo.py", line 8, in
model = models.get_model(args)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/oujiajian/JittorLLMs/models/init.py", line 49, in get_model
return module.get_model(args)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/oujiajian/JittorLLMs/models/chatglm/init.py", line 48, in get_model
return ChatGLMMdoel(args)
^^^^^^^^^^^^^^^^^^
File "/Users/oujiajian/JittorLLMs/models/chatglm/init.py", line 22, in init
self.model = AutoModel.from_pretrained(os.path.dirname(file), trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 459, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2478, in from_pretrained
) = cls._load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2780, in _load_pretrained_model
state_dict = load_state_dict(shard_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 431, in load_state_dict
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '/Users/oujiajian/JittorLLMs/models/chatglm/pytorch_model-00006-of-00008.bin' at '/Users/oujiajian/JittorLLMs/models/chatglm/pytorch_model-00006-of-00008.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
python3 web_demo.py chatglm
[i 0121 11:50:56.830004 00 compiler.py:956] Jittor(1.3.8.5) src: /opt/homebrew/lib/python3.11/site-packages/jittor
[i 0121 11:50:56.847275 00 compiler.py:957] clang at /usr/bin/clang++(15.0.0)
[i 0121 11:50:56.847352 00 compiler.py:958] cache_path: /Users/oujiajian/.cache/jittor/jt1.3.8/clang15.0.0/py3.11.7/macOS-14.2.1-axac/AppleM1/master
[i 0121 11:50:57.183703 00 __init__.py:227] Total mem: 16.00GB, using 5 procs for compiling.
[i 0121 11:50:57.339595 00 jit_compiler.cc:28] Load cc_path: /usr/bin/clang++
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 62%|██████████████████████████████████████████████████████████████████████████▍ | 5/8 [00:08<00:05, 1.80s/it]
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 415, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/serialization.py", line 993, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/torch/serialization.py", line 447, in init
super().init(torch._C.PyTorchFileReader(name_or_buffer))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 419, in load_state_dict
if f.read(7) == "version":
^^^^^^^^^
File "", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/oujiajian/JittorLLMs/web_demo.py", line 26, in
model = models.get_model(args)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/oujiajian/JittorLLMs/models/init.py", line 49, in get_model
return module.get_model(args)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/oujiajian/JittorLLMs/models/chatglm/init.py", line 48, in get_model
return ChatGLMMdoel(args)
^^^^^^^^^^^^^^^^^^
File "/Users/oujiajian/JittorLLMs/models/chatglm/init.py", line 22, in init
self.model = AutoModel.from_pretrained(os.path.dirname(file), trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 459, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2478, in from_pretrained
) = cls._load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2780, in _load_pretrained_model
state_dict = load_state_dict(shard_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 431, in load_state_dict
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '/Users/oujiajian/JittorLLMs/models/chatglm/pytorch_model-00006-of-00008.bin' at '/Users/oujiajian/JittorLLMs/models/chatglm/pytorch_model-00006-of-00008.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Environment: MacBook Pro, M1 chip, 16 GB RAM.
Both cli_demo.py and web_demo.py seem to fail with the same problem. How do I solve it?
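For reference, "PytorchStreamReader failed reading zip archive: failed finding central directory" on a single shard almost always means that file (here pytorch_model-00006-of-00008.bin) is an incomplete or corrupted download: PyTorch checkpoints are zip archives, and a truncated file has no central directory. Below is a minimal sketch for finding the damaged shard(s); the directory path is taken from the traceback and is only an assumption about the local layout.

```python
import os
import zipfile

# Checkpoint directory from the traceback above (adjust if yours differs).
ckpt_dir = "/Users/oujiajian/JittorLLMs/models/chatglm"

for name in sorted(os.listdir(ckpt_dir)):
    if not name.endswith(".bin"):
        continue
    path = os.path.join(ckpt_dir, name)
    size_mb = os.path.getsize(path) / (1024 * 1024)
    # PyTorch >= 1.6 saves checkpoints as zip archives; a truncated
    # download fails this check with the same "central directory" error.
    ok = zipfile.is_zipfile(path)
    print(f"{name}: {size_mb:.1f} MB, valid zip: {ok}")
```

Any shard that prints `valid zip: False` (or is clearly smaller than its siblings) should be deleted and downloaded again from wherever the weights originally came from. The `from_tf=True` hint in the error message does not apply here, since these are PyTorch shards rather than a TF checkpoint.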