Single Batch Overlap for MoE Models #9660
Conversation
Interesting, I have also been doing some overlap work recently. Do we need to use a modified DeepEP or DeepGEMM?
@fzyzcjy Thanks for the comments. To overlap the Down GEMM with the Combine Send, we modified both DeepGEMM and DeepEP. As for the Shared Expert and Dispatch Recv overlap, that only required modifications to SGLang. We are currently cleaning up the DeepGEMM and DeepEP code and will submit PRs within the next two days.
(Force-pushed from 01299d2 to 878daf3.)
Sure; I mean you need to paste the corresponding DeepGEMM/DeepEP branches as well (when ready).
@fzyzcjy We updated the PR and added our modified DeepEP and DeepGEMM branches.
Got it. By the way, the speedup looks like only ~1%, so I am curious whether that is because the overlappable region is tiny or because the overhead of the overlap is large, and also how much SBO improves over the simple standard overlap-shared-with-communication. Could you please share a pair of profiles (one without overlap, one with overlap)?
@fzyzcjy Thank you for the reminder. We pasted the wrong result when creating the draft and have now updated it to the correct one.
@fzyzcjy We recorded profiles with and without overlap at a batch size of 32. Below is a screenshot of the profile of a single DeepseekV2Decoder layer on DP0_TP0_EP0 on the decode node:

[profile screenshot]
I see, yes, that looks reasonable on your H20 hardware (I do not have an H20 and thus don't know the time spent in each kernel).
Since sglang has merged PR #9340 to upgrade to DeepGEMM v2, we are working on the relevant adaptation.
This change looks great, but I am still a bit worried: (1) shall we use atomicAdd (the doc says relaxed ordering) or release ordering? (2) will the extra TMA store wait make that warp group slower (i.e., shall we signal on the next existing TMA store wait)? FYI my naive implementations are in flashinfer-ai/flashinfer#1569 (not tested yet since the nvfp4 code path has not arrived).
@fzyzcjy For (1), we will conduct a more in-depth investigation. For (2), after our tests,
FYI I am waiting for the refactored DeepGEMM (Hopper), since I need to implement DeepGEMM Blackwell and want to be aligned with your style, to avoid two conflicting styles.
@fzyzcjy We have submitted a pull request to DeepGEMM v2, deepseek-ai/DeepGEMM#183, which contains the GEMM interface and implementation required for the overlap. We would like to know if you have any suggestions for modification.
@Sulfur6 I made a tiny nit comment there |
(Force-pushed from bdedad4 to 34eda38.)
Hi @Sulfur6, how is the progress on this PR? Is it ready for merge?
@Fridge003 The local tests have passed and I am currently debugging CI. This PR depends on sgl-project/DeepGEMM#14, which is ready and awaiting approval and merge.





1. Motivation
The optimization effect of Two-Batch Overlap (TBO) is suboptimal for the Decode phase on low-compute-power cards (e.g., the H20), for two main reasons. First, on the Hopper architecture, the WGMMA block_m is 64. Consequently, when TBO is enabled with a small Decode batch size, the MLP GEMM suffers from redundant padding computation, and a positive throughput gain is only observed at larger batch sizes (e.g., 64, 128). Second, at these larger batch sizes, low-compute-power cards like the H20 fail to meet the SLA guarantees for TPOT/ITL.
Therefore, a solution is needed that improves Decode throughput even at small batch sizes. Single Batch Overlap (SBO) is such a solution.
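The padding overhead behind the first factor can be illustrated with a quick calculation (block_m = 64 as stated above; the batch sizes below are examples):

```python
import math

def wgmma_waste(batch_size: int, block_m: int = 64) -> float:
    """Fraction of MLP GEMM work spent on padding when the batch is
    rounded up to a multiple of block_m (the Hopper WGMMA tile height)."""
    padded = math.ceil(batch_size / block_m) * block_m
    return 1.0 - batch_size / padded

# TBO splits a decode batch into two micro-batches, so a batch of 32
# becomes two GEMMs of 16 rows each, padded to 64: 75% wasted work.
print(wgmma_waste(16))   # 0.75
print(wgmma_waste(64))   # 0.0 -- no waste once a micro-batch fills a tile
```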
We implement SBO for DeepSeek V3/R1 by modifying DeepEP and DeepGEMM, covering both the overlap of the Shared Expert with the Dispatch Recv and the overlap of the Down GEMM with the Combine Send.
The overlap of the Down GEMM with the Combine Send required modifications to both DeepEP and DeepGEMM; the detailed implementation is available in the branches below:
Since the latest version of SGLang depends on the branch https://github.com/sgl-project/DeepGEMM/tree/sgl, you should not use the branch from the PR above when starting SGLang. Instead, use the branch developed on top of the sgl branch: https://github.com/Sulfur6/DeepGEMM/tree/sbo.v2.sgl
2. Overlap Design
SBO implements two overlaps for the MoE layers of DeepSeek-V3/R1: one overlaps the Shared Expert computation with the Dispatch Recv communication, and the other overlaps the Down GEMM computation with the Combine Send communication.
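The resulting per-layer ordering can be sketched as follows (a hypothetical schedule for illustration; the phase and stream names are not the actual SGLang identifiers):

```python
def sbo_moe_layer_schedule():
    """Illustrative phase ordering of one SBO MoE layer.

    Each tuple is (stream, phase). A compute phase listed next to a comm
    phase runs concurrently with it, giving the two overlaps described above.
    """
    return [
        ("comm",    "dispatch_send"),
        ("compute", "shared_expert"),   # overlap 1: runs during dispatch recv
        ("comm",    "dispatch_recv"),
        ("compute", "up_gate_gemm"),
        ("compute", "down_gemm"),       # overlap 2: emits block_m chunks...
        ("comm",    "combine_send"),    # ...consumed here before GEMM finishes
        ("comm",    "combine_recv"),
    ]

phases = [phase for _, phase in sbo_moe_layer_schedule()]
print(phases[:3])
```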


The interaction between the Down GEMM and the Combine Send is structured as a producer-consumer model synchronized by signals. For each local expert, a signal unit is allocated for every block_m tokens. The Down GEMM computes the results for these block_m tokens and atomically increments the signal unit after completing a portion of the work. The Combine Send polls this signal unit; once the value reaches a threshold, it sends the corresponding block_m tokens.
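This producer-consumer scheme can be sketched in plain Python (the threads, names, and timings are illustrative; the actual mechanism is a GPU-side atomic increment polled by the DeepEP combine kernel):

```python
import threading
import time

NUM_BLOCKS = 4   # signal units for one local expert (one per block_m tokens)
THRESHOLD = 2    # increments required before a block may be sent

signals = [0] * NUM_BLOCKS
lock = threading.Lock()
sent = []        # order in which Combine Send ships token blocks

def down_gemm():
    """Producer: bumps a block's signal unit after each finished
    portion of that block's work (stands in for a GPU atomicAdd)."""
    for blk in range(NUM_BLOCKS):
        for _ in range(THRESHOLD):
            time.sleep(0.001)       # stand-in for GEMM compute time
            with lock:
                signals[blk] += 1

def combine_send():
    """Consumer: polls each signal unit and sends the corresponding
    block_m tokens once the threshold is reached."""
    for blk in range(NUM_BLOCKS):
        while True:
            with lock:
                done = signals[blk] >= THRESHOLD
            if done:
                break
            time.sleep(0.0001)      # polling interval
        sent.append(blk)

producer = threading.Thread(target=down_gemm)
consumer = threading.Thread(target=combine_send)
consumer.start(); producer.start()
producer.join(); consumer.join()
print(sent)   # [0, 1, 2, 3]: each block ships as soon as its GEMM part is done
```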
3. Modifications
New Server Arguments
- `--enable-single-batch-overlap`: add this argument to enable SBO (Single Batch Overlap).
- `--deepep-mode low_latency_overlap`: newly added DeepEP mode, mainly for SBO.
- Add an assertion that when SBO is enabled, `--moe-a2a-backend` must be `"deepep"`, and `--deepep-mode` must be `"auto"` for mixed mode or `"low_latency_overlap"` for PD disaggregation.

EPLB Manager
- Unify the EPLB distribution recorder for both the low latency and low latency overlap modes.
DeepEP Token Dispatcher
- Add a new dispatcher implementation `_DeepEPDispatcherImplLowLatencyOverlap` for the DeepEP mode `"low_latency_overlap"` used by SBO.
- Add new parameters for `DeepEPDispatcher`.
DeepGEMM Wrapper
- Add a wrapper for the masked signal GEMM for SBO.
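As a hedged illustration of what such a masked signal GEMM computes (names, shapes, and the signal layout here are hypothetical NumPy stand-ins, not the actual DeepGEMM interface):

```python
import numpy as np

BLOCK_M = 64

def masked_signal_gemm(x, w, masked_m, signals):
    """Hypothetical sketch: a per-expert GEMM that only computes the first
    masked_m[e] rows for expert e and bumps one signal unit per BLOCK_M
    rows completed, so a downstream consumer (Combine Send) can start early."""
    num_experts, max_m, _ = x.shape
    out = np.zeros((num_experts, max_m, w.shape[2]), dtype=x.dtype)
    for e in range(num_experts):
        for start in range(0, masked_m[e], BLOCK_M):
            end = min(start + BLOCK_M, masked_m[e])
            out[e, start:end] = x[e, start:end] @ w[e]
            signals[e, start // BLOCK_M] += 1   # an atomicAdd on the GPU
    return out

# Example: 2 experts; expert 1 only has 70 valid (unmasked) tokens.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 128, 8))
w = rng.standard_normal((2, 8, 4))
signals = np.zeros((2, 2), dtype=int)
out = masked_signal_gemm(x, w, [128, 70], signals)
print(signals.tolist())   # [[1, 1], [1, 1]]
```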
Model & Layer
- Add `forward_deepep_sbo` in `DeepseekV2MoE` of the deepseek_v2 model.
- Add `forward_deepgemm_signal` for the related `DeepEPMoE` layer.

4. Evaluation
4.1. Experiment Setup
4.2. Performance Evaluation
4.3. Accuracy Tests
4.4. Repro Script
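The repro script itself was not captured here; as a hedged sketch, SBO would be enabled via the new server arguments from Section 3 (the model path, parallelism size, and port below are illustrative placeholders, not values from this PR):

```shell
# Illustrative launch command; only the three SBO-related flags come from
# this PR, all other values are example placeholders. Requires the modified
# DeepEP/DeepGEMM branches referenced above.
python -m sglang.launch_server \
  --model-path deepseek-ai/DeepSeek-V3 \
  --tp 8 \
  --moe-a2a-backend deepep \
  --deepep-mode low_latency_overlap \
  --enable-single-batch-overlap \
  --disaggregation-mode decode \
  --port 30000
```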