
Eval bug: CPU usage is abnormal when running deepseek-r1-671B-Q4_0 weights on an Atlas 800T A2 (NPU device) #11966

Open
woshidahunzi1 opened this issue Feb 20, 2025 · 0 comments


Name and Version

./build/bin/llama-cli --version
version: 4731 (0f2bbe6)
built with cc (conda-forge gcc 12.2.0-19) 12.2.0 for aarch64-conda-linux-gnu

Operating systems

Linux

GGML backends

CPU

Hardware

NPU(8 x 910B3):

+------------------------------------------------------------------------------------------------+
| npu-smi 24.1.rc2 Version: 24.1.rc2 |
+---------------------------+---------------+----------------------------------------------------+
| NPU Name | Health | Power(W) Temp(C) Hugepages-Usage(page)|
| Chip | Bus-Id | AICore(%) Memory-Usage(MB) HBM-Usage(MB) |
+===========================+===============+====================================================+
| 0 910B3 | OK | 97.0 49 0 / 0 |
| 0 | 0000:C1:00.0 | 0 0 / 0 9069 / 65536 |
+===========================+===============+====================================================+
| 1 910B3 | OK | 93.8 49 0 / 0 |
| 0 | 0000:C2:00.0 | 0 0 / 0 8496 / 65536 |
+===========================+===============+====================================================+
| 2 910B3 | OK | 91.5 49 0 / 0 |
| 0 | 0000:81:00.0 | 0 0 / 0 8495 / 65536 |
+===========================+===============+====================================================+
| 3 910B3 | OK | 93.4 49 0 / 0 |
| 0 | 0000:82:00.0 | 0 0 / 0 8044 / 65536 |
+===========================+===============+====================================================+
| 4 910B3 | OK | 106.3 52 0 / 0 |
| 0 | 0000:01:00.0 | 0 0 / 0 8493 / 65536 |
+===========================+===============+====================================================+
| 5 910B3 | OK | 108.5 54 0 / 0 |
| 0 | 0000:02:00.0 | 0 0 / 0 8495 / 65536 |
+===========================+===============+====================================================+
| 6 910B3 | OK | 102.0 53 0 / 0 |
| 0 | 0000:41:00.0 | 0 0 / 0 8495 / 65536 |
+===========================+===============+====================================================+
| 7 910B3 | OK | 91.8 54 0 / 0 |
| 0 | 0000:42:00.0 | 0 0 / 0 12650/ 65536 |
+===========================+===============+====================================================+

system:

aarch64
openEuler release 22.03 (LTS-SP4)
NAME="openEuler"
VERSION="22.03 (LTS-SP4)"
ID="openEuler"
VERSION_ID="22.03"
PRETTY_NAME="openEuler 22.03 (LTS-SP4)"
ANSI_COLOR="0;31"


CPU:

Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: HiSilicon
Model name: Kunpeng-920
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 4
Stepping: 0x1
CPU max MHz: 2600.0000
CPU min MHz: 200.0000
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
Caches (sum of all):
L1d: 12 MiB (192 instances)
L1i: 12 MiB (192 instances)
L2: 96 MiB (192 instances)
L3: 192 MiB (8 instances)
NUMA:
NUMA node(s): 8
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
NUMA node4 CPU(s): 96-119
NUMA node5 CPU(s): 120-143
NUMA node6 CPU(s): 144-167
NUMA node7 CPU(s): 168-191
Vulnerabilities:
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; __user pointer sanitization
Spectre v2: Not affected
Srbds: Not affected
Tsx async abort: Not affected

CANN version:

8.0RC3

Models

DeepSeek-R1-int4-sym-gguf-q4-0-inc(DeepSeek-R1-bf16-256x20B-Q4_0-00001-of-00089.gguf)

Problem description & steps to reproduce

① I installed llama.cpp with the following commands:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CANN=on -DCMAKE_BUILD_TYPE=release
cmake --build build --config release

② Then I ran this command to evaluate the model:
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./build/bin/llama-cli -m /app/DeepSeek-R1-int4-sym-gguf-q4-0-inc/DeepSeek-R1-bf16-256x20B-Q4_0-00001-of-00089.gguf -p "Building a website can be done in 10 simple steps:" -n 512 -t 192 -e -ngl 999 -sm layer

③ Problem:
I want to load all weights onto the NPU devices, but while the tensors are loading I see CPU_AARCH64 model buffer size = 350784.00 MiB and CPU_Mapped model buffer size = 3535.00 MiB, while the average CANN model buffer size is only about 1000 MiB per device. As a result, model loading and generation are extremely slow (about 0.2 tokens per second), and CPU usage climbs above 10000% while generating tokens. Checking NPU memory, the average usage per device is only about 8000 MB (out of 65536 MB), which confirms the model is not fully loaded onto the NPU devices.
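The huge CPU_AARCH64 buffer suggests the Q4_0 weights are being repacked for the AArch64 CPU backend rather than staying on the CANN devices. A possible experiment, assuming the checkout exposes the GGML_CPU_AARCH64 CMake option that controls that runtime repacking (verify it exists in your tree before relying on it):

```shell
# Rebuild with the AArch64 repack path disabled so Q4_0 tensors are not
# converted into the CPU_AARCH64 buffer type.
# (Option name assumed; confirm with: cmake -B build -LH | grep AARCH64)
cmake -B build -DGGML_CANN=on -DGGML_CPU_AARCH64=OFF -DCMAKE_BUILD_TYPE=release
cmake --build build --config release

# Re-run with far fewer threads; 192 threads spread over 8 NUMA nodes mostly
# adds contention when the compute is supposed to happen on the NPUs.
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./build/bin/llama-cli \
  -m /app/DeepSeek-R1-int4-sym-gguf-q4-0-inc/DeepSeek-R1-bf16-256x20B-Q4_0-00001-of-00089.gguf \
  -p "Building a website can be done in 10 simple steps:" \
  -n 512 -t 16 -e -ngl 999 -sm layer
```

If the CPU_AARCH64 buffer disappears from the load_tensors output after this rebuild, the repack path was the cause of both the slow load and the high CPU usage.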

First Bad Commit

No response

Relevant log output

ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./build/bin/llama-cli -m /app/DeepSeek-R1-int4-sym-gguf-q4-0-inc/DeepSeek-R1-bf16-256x20B-Q4_0-00001-of-00089.gguf -p "Building a website can be done in 10 simple steps:" -n 512 -t 192 -e -ngl 999 -sm layer 
build: 4731 (0f2bbe65) with cc (conda-forge gcc 12.2.0-19) 12.2.0 for aarch64-conda-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device CANN0 (Ascend910B3) - 62114 MiB free
llama_model_load_from_file_impl: using device CANN1 (Ascend910B3) - 62112 MiB free
llama_model_load_from_file_impl: using device CANN2 (Ascend910B3) - 62114 MiB free
llama_model_load_from_file_impl: using device CANN3 (Ascend910B3) - 62114 MiB free
llama_model_load_from_file_impl: using device CANN4 (Ascend910B3) - 62115 MiB free
llama_model_load_from_file_impl: using device CANN5 (Ascend910B3) - 62115 MiB free
llama_model_load_from_file_impl: using device CANN6 (Ascend910B3) - 62115 MiB free
llama_model_load_from_file_impl: using device CANN7 (Ascend910B3) - 62113 MiB free
llama_model_loader: additional 88 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 46 key-value pairs and 1025 tensors from /app/DeepSeek-R1-int4-sym-gguf-q4-0-inc/DeepSeek-R1-bf16-256x20B-Q4_0-00001-of-00089.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = DeepSeek-R1-bf16
llama_model_loader: - kv   3:                         general.size_label str              = 256x20B
llama_model_loader: - kv   4:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv   5:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv   6:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv   7:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv   8:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv   9:          deepseek2.attention.head_count_kv u32              = 128
llama_model_loader: - kv  10:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  12:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  13:                          general.file_type u32              = 2
llama_model_loader: - kv  14:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  15:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  16:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  17:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  18:             deepseek2.attention.key_length u32              = 192
llama_model_loader: - kv  19:           deepseek2.attention.value_length u32              = 128
llama_model_loader: - kv  20:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  21:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  22:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  23:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  24:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  25:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  26:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  27:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  28:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  29: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  30: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  31:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  32:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  33:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  34:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  35:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  36:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  37:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  38:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  39:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  40:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  41:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  42:               general.quantization_version u32              = 2
llama_model_loader: - kv  43:                                   split.no u16              = 0
llama_model_loader: - kv  44:                        split.tensors.count i32              = 1025
llama_model_loader: - kv  45:                                split.count u16              = 89
llama_model_loader: - type  f32:  363 tensors
llama_model_loader: - type q4_0:  662 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_0
print_info: file size   = 357.81 GiB (4.58 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 818
load: token to piece cache size = 0.8223 MB
print_info: arch             = deepseek2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 163840
print_info: n_embd           = 7168
print_info: n_layer          = 61
print_info: n_head           = 128
print_info: n_head_kv        = 128
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_embd_head_k    = 192
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 24576
print_info: n_embd_v_gqa     = 16384
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 18432
print_info: n_expert         = 256
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = yarn
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 0.025
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 671B
print_info: model params     = 671.03 B
print_info: general.name     = DeepSeek-R1-bf16
print_info: n_layer_dense_lead   = 3
print_info: n_lora_q             = 1536
print_info: n_lora_kv            = 512
print_info: n_ff_exp             = 2048
print_info: n_expert_shared      = 1
print_info: expert_weights_scale = 2.5
print_info: expert_weights_norm  = 1
print_info: expert_gating_func   = sigmoid
print_info: rope_yarn_log_mul    = 0.1000
print_info: vocab type       = BPE
print_info: n_vocab          = 129280
print_info: n_merges         = 127741
print_info: BOS token        = 0 '<|begin▁of▁sentence|>'
print_info: EOS token        = 1 '<|end▁of▁sentence|>'
print_info: EOT token        = 1 '<|end▁of▁sentence|>'
print_info: PAD token        = 1 '<|end▁of▁sentence|>'
print_info: LF token         = 201 'Ċ'
print_info: FIM PRE token    = 128801 '<|fim▁begin|>'
print_info: FIM SUF token    = 128800 '<|fim▁hole|>'
print_info: FIM MID token    = 128802 '<|fim▁end|>'
print_info: EOG token        = 1 '<|end▁of▁sentence|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 61 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 62/62 layers to GPU
load_tensors:   CPU_Mapped model buffer size =  3535.00 MiB
load_tensors:        CANN0 model buffer size =  1594.47 MiB
load_tensors:        CANN1 model buffer size =  1048.48 MiB
load_tensors:        CANN2 model buffer size =  1048.48 MiB
load_tensors:        CANN3 model buffer size =   917.42 MiB
load_tensors:        CANN4 model buffer size =  1048.48 MiB
load_tensors:        CANN5 model buffer size =  1048.48 MiB
load_tensors:        CANN6 model buffer size =  1048.48 MiB
load_tensors:        CANN7 model buffer size =  4321.38 MiB
load_tensors:  CPU_AARCH64 model buffer size = 350784.00 MiB
....................................................................................................
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 4096
llama_init_from_model: n_ctx_per_seq = 4096
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 10000.0
llama_init_from_model: freq_scale    = 0.025
llama_init_from_model: n_ctx_per_seq (4096) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
llama_kv_cache_init: kv_size = 4096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0
llama_kv_cache_init:      CANN0 KV buffer size =  2560.00 MiB
llama_kv_cache_init:      CANN1 KV buffer size =  2560.00 MiB
llama_kv_cache_init:      CANN2 KV buffer size =  2560.00 MiB
llama_kv_cache_init:      CANN3 KV buffer size =  2240.00 MiB
llama_kv_cache_init:      CANN4 KV buffer size =  2560.00 MiB
llama_kv_cache_init:      CANN5 KV buffer size =  2560.00 MiB
llama_kv_cache_init:      CANN6 KV buffer size =  2560.00 MiB
llama_kv_cache_init:      CANN7 KV buffer size =  1920.00 MiB
llama_init_from_model: KV self size  = 19520.00 MiB, K (f16): 11712.00 MiB, V (f16): 7808.00 MiB
llama_init_from_model:  CANN_Host  output buffer size =     0.49 MiB
llama_init_from_model:      CANN0 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN1 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN2 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN3 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN4 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN5 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN6 compute buffer size =  1187.00 MiB
llama_init_from_model:      CANN7 compute buffer size =  1187.00 MiB
llama_init_from_model:  CANN_Host compute buffer size =   167.00 MiB
llama_init_from_model: graph nodes  = 5025
llama_init_from_model: graph splits = 485
common_init_from_params: KV cache shifting is not supported for this model, disabling KV cache shifting
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 192
main: chat template is available, enabling conversation mode (disable it with -no-cnv)
main: chat template example:
You are a helpful assistant

<|User|>Hello<|Assistant|>Hi there<|end▁of▁sentence|><|User|>How are you?<|Assistant|>

system_info: n_threads = 192 (n_threads_batch = 192) / 192 | CPU : NEON = 1 | ARM_FMA = 1 | FP16_VA = 1 | DOTPROD = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 

main: interactive mode on.
sampler seed: 2391341698
sampler params: 
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
        top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = 512, n_keep = 1

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

Building a website can be done in 10 simple steps:


> 
1. Define your purpose and goals for the website.

2. Choose a domain name that reflects your brand and is easy to remember.

3. Select a reliable web hosting provider to ensure your site is accessible online.

4. Decide on a website builder or CMS (Content Management System) like WordPress, Wix, or Squarespace.

5. Design your website layout focusing on user experience and aesthetics.

6. Develop content that is engaging, relevant, and optimized for search engines (SEO).

7. Ensure your website is mobile-responsive and works across different devices.

8. Test your website for functionality, speed, and compatibility with different browsers.

9. Launch your website and promote it through social media, email marketing, etc.

10. Regularly update and maintain your website to keep content fresh and secure.


llama_perf_sampler_print:    sampling time =      40.85 ms /   242 runs   (    0.17 ms per token,  5924.26 tokens per second)
llama_perf_context_print:        load time =  536385.75 ms
llama_perf_context_print: prompt eval time =   88346.84 ms /    13 tokens ( 6795.91 ms per token,     0.15 tokens per second)
llama_perf_context_print:        eval time = 1365369.54 ms /   241 runs   ( 5665.43 ms per token,     0.18 tokens per second)
llama_perf_context_print:       total time = 1503656.68 ms /   254 tokens