
enable qwenvl2.5 graph mode on ascend #3366

Closed
wants to merge 16 commits

Conversation

jinminxi104
Collaborator

enable qwenvl2.5 graph mode on ascend

Motivation

Please describe the motivation of this PR and the goal you want to achieve through it.

Modification

Please briefly describe what modifications are made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of downstream repositories?
If so, please describe how it breaks compatibility and how downstream projects should modify their code to remain compatible with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
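
No concrete use case is filled in above. As a hypothetical illustration only (not taken from this PR), Qwen2.5-VL graph mode on Ascend would typically be exercised through lmdeploy's PyTorch engine with eager mode disabled; the model id, tp value, and image URL below are assumptions for the sketch.

    # Hypothetical sketch: run Qwen2.5-VL on an Ascend device with graph mode,
    # i.e. the PyTorch engine with eager_mode=False. Model id, tp and the
    # example image URL are illustrative assumptions, not taken from this PR.
    from lmdeploy import pipeline, PytorchEngineConfig
    from lmdeploy.vl import load_image

    pipe = pipeline(
        'Qwen/Qwen2.5-VL-7B-Instruct',          # assumed model id
        backend_config=PytorchEngineConfig(
            device_type='ascend',               # run on Ascend NPUs
            eager_mode=False,                   # False enables graph mode
            tp=1,                               # single card for this sketch
        ),
    )

    image = load_image('https://example.com/demo.jpg')  # any reachable image
    response = pipe(('describe this image', image))
    print(response)
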

Checklist

  1. Pre-commit or other linting tools are used to fix potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  3. If the modification depends on a newer version of a downstream project, this PR should be tested with all supported versions of that downstream project.
  4. The documentation has been modified accordingly, e.g. docstrings or example tutorials.

irexyc and others added 16 commits March 21, 2025 18:08
* refactor attn param

* fix lint

* fix build

* fix ut

* use creator to create rope_param

* reuse parse func

* fix ut

* fix comments

* update name

* fix dynamic

* fix deepseekv2 yarn

* use single dataclass

* fix loading workspace model
* better dist context

* can not exit

* multinode support

* better exception

* refactor

* fix local rank

* replace group

* fix dist

* remove useless code

* remove finish flag

* refactor engine and model agent

* uni executor

* wip

* tp

* fix

* less async

* circle buf

* event per block

* fast mp

* fix error handler

* remove safe wait

* context in model agent

* fix on stop

* check before init

* fix tp close

* ray ver0

* fix close

* fix remote code

* optimize ray

* better checker and logger

* pack tensor

* auto check dist

* fix mp gloo

* add timer tools

* better scheduler

* fix mp hang

* fix mp

* fix chat

* less output

* merge main

* optimize ray get output

* remove nsight runtime env

* dag

* optimize mp & lint

* optimize mp

* add base workerwrapper

* fix gather, update flags

* better return mask

* add choice

* enable mp,ray with worldsize=1

* fix mp exit

* fix mp vlm

* chat exit

* add docs

* lint

* doc

* dp check

* fix blocked fp8 moe

* remove mask

* support dp, async

* remove debug line

* fix model tp

* support sync execute

* fix chat stopwords

* refactor chat

* add warmup

* disable warmup

* dp support

* fix ut, merge main, force eager

* support qwen2/internlm2/internlm3

* support blocked fp8 all gather

* add more model support

* fix exit

* fix merge

* fix sync long context

* support process group on ray

* change dp master addr and master port

* update log level

* support moe tp

* fix tp1 dp2

* fix

* fix

* wait handle

* remove flag

* ensure execute order

* remove import

* add serve args

* force eager
* add deep gemm with tma pre allocated

* add comment

* add comment

* dispatch

* no use_deep_gemm arg

* remove DeepGemmBlockedF8

* missed op type

* latest get_best_config

* add a line of debug
* update

* update

* update

* update timeout

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

* update

---------

Co-authored-by: zhulinJulia24 <[email protected]>
* comm abstraction

* add custom

* fused rms norm

* refactor

* push-based kernel

* optimize for small hidden dims

* integration

* clean up

* export options & fix things

* allgather2d & VMM allocation

* optimize allgather2d

* remove obsolete comm utils

* handle non-multi-gpu build

* fix lint

* fix lint

* avoid using mscclpp repo (some deps are not needed)

* fix lint

* fix nccl version & clean up deps

* fix lint

* custom -> native

* rename

* fix p-lora

* fix lm head

* log fatal exception explicitly

* initial data parallel support

* sync dp + tp

* mixed `d*t0 | t1`

* refactor

* refactor

* fix ut

* refactor

* asymmetrical allreduce

* fix

* fix nccl<2.18

* fix

* fix

* fix

* fix

* fix converter

* fix converter

* fix tp for converted model

* fix tp for converted model

* assert tp size loading from workspace
* [ascend] fix multi-card distributed inference failures

* remove backend_type parameter in get_backend

* [ascend] support multi nodes

* update codes

* update codes

* rebase main
* support dp decoding with cudagraph

* fix exit

* fix tp=1 long context

* force eager on ep
merge dev into main
* add deepep

* add deepep

* fix lint

* fix and add dist check

* fix DistChecker

* delete tp restriction

* delete unused code
jinminxi104 marked this pull request as draft on March 29, 2025 16:06