huang-baixin/README.md

Hi, I'm Baixin 👋

  • 📝 I’m currently working on LLM inference
  • 💻 I’m currently learning AI Infra

Pinned

  1. llama.cpp Public

    Forked from ggml-org/llama.cpp

    LLM inference in C/C++

    C++

  2. Awesome-LLM-Inference Public

    Forked from xlite-dev/Awesome-LLM-Inference

    📖 A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batching, FlashAttention, PagedAttention, etc.

  3. EAGLE Public

    Forked from SafeAILab/EAGLE

    Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24)

    Python

  4. cuda_practices Public

    Cuda

  5. how-to-optim-algorithm-in-cuda Public

    Forked from BBuf/how-to-optim-algorithm-in-cuda

    How to optimize some algorithms in CUDA.

    Cuda

103 contributions in the last year (March 31, 2024 to April 1, 2025): 97% commits, 3% issues, 0% pull requests, 0% code review.