Getting an error with Q3_K_M

#2 by alain401

Running llama.cpp 4145.

src/llama.cpp:5412: GGML_ASSERT(hparams.n_expert <= LLAMA_MAX_EXPERTS) failed.

Unsloth AI org

> Running llama.cpp 4145.
>
> src/llama.cpp:5412: GGML_ASSERT(hparams.n_expert <= LLAMA_MAX_EXPERTS) failed.

Is it only for Q3? What about Q2?

Yes, it is the same for both. I am using your example llama-cli command with 64 threads. Here is the trace from trying to load the Q2 model:

build: 4145 (9abe9eea) with cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_load_model_from_file: using device CUDA0 (NVIDIA RTX A4000) - 15820 MiB free
llama_load_model_from_file: using device CUDA1 (NVIDIA RTX A4000) - 15820 MiB free
llama_load_model_from_file: using device CUDA2 (NVIDIA RTX A4000) - 15820 MiB free
llama_model_loader: additional 4 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 46 key-value pairs and 1025 tensors from models/DeepSeek-V3-Q2_K_L-00001-of-00005.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek V3 BF16
llama_model_loader: - kv 3: general.size_label str = 256x20B
llama_model_loader: - kv 4: deepseek2.block_count u32 = 61
llama_model_loader: - kv 5: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 6: deepseek2.embedding_length u32 = 7168
llama_model_loader: - kv 7: deepseek2.feed_forward_length u32 = 18432
llama_model_loader: - kv 8: deepseek2.attention.head_count u32 = 128
llama_model_loader: - kv 9: deepseek2.attention.head_count_kv u32 = 128
llama_model_loader: - kv 10: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 12: deepseek2.expert_used_count u32 = 8
llama_model_loader: - kv 13: general.file_type u32 = 10
llama_model_loader: - kv 14: deepseek2.leading_dense_block_count u32 = 3
llama_model_loader: - kv 15: deepseek2.vocab_size u32 = 129280
llama_model_loader: - kv 16: deepseek2.attention.q_lora_rank u32 = 1536
llama_model_loader: - kv 17: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 18: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 19: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 20: deepseek2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 21: deepseek2.expert_count u32 = 256
llama_model_loader: - kv 22: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 23: deepseek2.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 24: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 25: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 26: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 27: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 28: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 29: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 30: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
llama_model_loader: - kv 31: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 32: tokenizer.ggml.pre str = deepseek-v3
llama_model_loader: - kv 33: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv 34: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 35: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv 36: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 37: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 38: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 39: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 40: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 41: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 42: general.quantization_version u32 = 2
llama_model_loader: - kv 43: split.no u16 = 0
llama_model_loader: - kv 44: split.count u16 = 5
llama_model_loader: - kv 45: split.tensors.count i32 = 1025
llama_model_loader: - type f32: 361 tensors
llama_model_loader: - type q2_K: 482 tensors
llama_model_loader: - type q3_K: 180 tensors
llama_model_loader: - type q4_K: 1 tensors
llama_model_loader: - type q6_K: 1 tensors
src/llama.cpp:5412: GGML_ASSERT(hparams.n_expert <= LLAMA_MAX_EXPERTS) failed
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)

I can run Llama 405B with no problem; this is a dual AMD CPU system with 384 GB of RAM.
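For reference on what that assertion checks: the metadata above reports deepseek2.expert_count = 256, and llama.cpp validates that value against a compile-time cap, LLAMA_MAX_EXPERTS, at load time. A quick way to see the cap in a given checkout (a hedged sketch; it assumes the define lives in src/llama.cpp, as the assert path suggests):

```sh
# Inspect the compile-time expert cap in the llama.cpp source tree.
grep -n "define LLAMA_MAX_EXPERTS" src/llama.cpp
# Trees predating DeepSeek V3 support define a cap below 256 (160 at the time),
# while this GGUF carries deepseek2.expert_count = 256, so
# GGML_ASSERT(hparams.n_expert <= LLAMA_MAX_EXPERTS) fails during model load.
```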

Unsloth AI org

It's possible you're using an old llama.cpp version - it's best to pull the latest code and rebuild - DeepSeek V3 support was only added a few days ago!
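For anyone hitting the same assert, a minimal update-and-rebuild sequence (a sketch assuming a git checkout and the CMake CUDA build; adjust the flags to your setup):

```sh
# Pull the latest llama.cpp and do a clean rebuild.
git pull
rm -rf build                      # drop stale objects so the raised expert cap is compiled in
cmake -B build -DGGML_CUDA=ON     # GGML_CUDA enables the CUDA backend
cmake --build build --config Release -j
```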

My fault: a full rebuild fixed it.

Thank you for your contributions to the open-source community!
