Error after entering a question #196

Open
xdicac opened this issue Feb 23, 2025 · 0 comments
Comments


xdicac commented Feb 23, 2025

The steps were as follows:
D:\programs\lib\JittorLLMs>python cli_demo.py chatglm
[i 0223 16:24:36.015000 72 compiler.py:956] Jittor(1.3.8.5) src: d:\programs\lib\jittorllms\env\lib\site-packages\jittor
[i 0223 16:24:36.083000 72 compiler.py:957] cl at C:\Users\zxd\.cache\jittor\msvc\VC_____\bin\cl.exe(19.29.30133)
[i 0223 16:24:36.083000 72 compiler.py:958] cache_path: C:\Users\zxd\.cache\jittor\jt1.3.8\cl\py3.9.13\Windows-10-10.xf4\11thGenIntelRCxe1\main
[i 0223 16:24:36.173000 72 __init__.py:227] Total mem: 15.80GB, using 5 procs for compiling.
[i 0223 16:24:37.409000 72 jit_compiler.cc:28] Load cc_path: C:\Users\zxd\.cache\jittor\msvc\VC_____\bin\cl.exe
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|████████████████████████████████████████████| 8/8 [00:56<00:00, 7.03s/it]
用户输入:你是谁  (user input: "Who are you?")
Traceback (most recent call last):
  File "D:\programs\lib\JittorLLMs\cli_demo.py", line 9, in <module>
    model.chat()
  File "D:\programs\lib\JittorLLMs\models\chatglm\__init__.py", line 36, in chat
    for response, history in self.model.stream_chat(self.tokenizer, text, history=history):
  File "C:\Users\zxd/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1259, in stream_chat
    for outputs in self.stream_generate(**input_ids, **gen_kwargs):
  File "C:\Users\zxd/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1334, in stream_generate
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "C:\Users\zxd/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1086, in prepare_inputs_for_generation
    mask_positions = [seq.index(mask_token) for seq in seqs]
  File "C:\Users\zxd/.cache\huggingface\modules\transformers_modules\local\modeling_chatglm.py", line 1086, in <listcomp>
    mask_positions = [seq.index(mask_token) for seq in seqs]
ValueError: 150001 is not in list
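For context on the final frame: `list.index` raises `ValueError` when the requested element is not present. Here the model searches each tokenized sequence for the special mask token (id 150001, which appears to be one of ChatGLM-6B's mask-token ids) and the prompt's token ids evidently do not contain it, which commonly points to a mismatch between the downloaded tokenizer and model weights. A minimal sketch of the failure mode, using made-up token ids:

```python
# Sketch of the failure mode seen in prepare_inputs_for_generation.
# The token ids in `seq` are hypothetical; only the mask id 150001
# comes from the traceback above.
mask_token = 150001            # id the model searches for
seq = [5, 64286, 12, 130001]   # made-up input ids that lack the mask token

try:
    position = seq.index(mask_token)  # same call as in modeling_chatglm.py
except ValueError as err:
    print(err)  # prints "150001 is not in list"
```

A membership check (`mask_token in seq`) before calling `index` would turn the crash into a clearer diagnostic, but the more likely fix is re-downloading the checkpoint so the tokenizer and `modeling_chatglm.py` revisions match, as the "Explicitly passing a revision is encouraged" warnings above hint.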
