In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
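To make the dispatch path just described a bit more concrete, here is a minimal host-side C++ sketch of how one token's routed experts could be grouped into "one IB send per target node, then NVLink fan-out". Everything here (the `plan_dispatch` function, the `Hop` struct, the 8-GPUs-per-node assumption) is hypothetical illustration; the real kernels run on the GPU and are co-designed with the gating algorithm.

```cpp
#include <cstdio>
#include <map>
#include <vector>

// Assumption for illustration: 8 GPUs per node, cross-node IB, intra-node NVLink.
constexpr int kGpusPerNode = 8;

struct Hop {
    int ib_target_gpu;               // GPU with the sender's in-node index on the target node
                                     // (receives the token's single IB copy for that node)
    std::vector<int> nvlink_fanout;  // GPUs on that node reached afterwards via NVLink
};

// Given the global GPU ids hosting a token's routed experts and the sender's
// in-node index, build one IB send per target node plus an NVLink fan-out list.
// The sender is assumed to be on a different node; same-node targets would go
// directly over NVLink and are not modeled here.
std::vector<Hop> plan_dispatch(const std::vector<int>& expert_gpus, int sender_local_rank) {
    std::map<int, Hop> per_node;  // keyed by target node id
    for (int gpu : expert_gpus) {
        int node = gpu / kGpusPerNode;
        Hop& hop = per_node[node];
        hop.ib_target_gpu = node * kGpusPerNode + sender_local_rank;  // same in-node index
        hop.nvlink_fanout.push_back(gpu);                             // final delivery over NVLink
    }
    // The paper caps this at 4 target nodes per token; that cap is enforced by the
    // node-limited gating upstream, so it is not re-checked here.
    std::vector<Hop> plan;
    for (auto& kv : per_node) plan.push_back(kv.second);
    return plan;
}

int main() {
    // Token routed to experts on GPUs 3, 5 (node 0) and 11 (node 1);
    // the sender sits on another node with in-node index 2.
    for (const Hop& h : plan_dispatch({3, 5, 11}, 2)) {
        std::printf("IB -> GPU %d, then NVLink -> ", h.ib_target_gpu);
        for (int g : h.nvlink_fanout) std::printf("%d ", g);
        std::printf("\n");
    }
    return 0;
}
```

The point of the grouping is that each token crosses IB at most once per target node, regardless of how many experts it hits on that node; the cheaper NVLink hop absorbs the per-expert fan-out.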
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
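The warp-specialization structure described above can be sketched roughly as follows. This is a minimal CUDA-style skeleton only: the helper roles, the fixed 4/4/4 warp split, and the kernel name are all placeholders I made up, and the dynamic rebalancing of warps per role that the paper mentions is omitted.

```cpp
// Rough skeleton of a warp-specialized dispatch kernel. Within one
// communication channel (the paper splits 20 SMs into 10 channels), different
// warps in the same thread block take on different communication roles.
__global__ void dispatch_channel(/* token buffers, routing metadata, queues ... */) {
    const int warp_id = threadIdx.x / 32;  // each warp is 32 threads

    if (warp_id < 4) {
        // Role 1: IB sending -- push this rank's outgoing tokens onto the IB path.
        // ib_send_warp(...);                // hypothetical helper
    } else if (warp_id < 8) {
        // Role 2: IB-to-NVLink forwarding -- take tokens that just arrived over IB
        // and forward them over NVLink to the GPU hosting the target expert,
        // without waiting on later tokens (avoids head-of-line blocking).
        // ib_to_nvlink_forward_warp(...);   // hypothetical helper
    } else {
        // Role 3: NVLink receiving -- land tokens arriving over NVLink into the
        // local expert's input buffer.
        // nvlink_recv_warp(...);            // hypothetical helper
    }
    // In the real implementation the number of warps per role is adjusted
    // dynamically according to the actual traffic, rather than fixed as above.
}
```

The combining kernel would mirror this with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation as the three roles.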
Put plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山's log. In general, GPUs within a single node talk over NVLink, while multi-node multi-GPU traffic relies on the IB network; but NVLink's bandwidth is 3.2 times that of IB, so the transfer strategy needs some optimization to exploit the asymmetry. It is a complete, end-to-end scheme.
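To make the quoted numbers concrete, here is a back-of-the-envelope check using only the figures from the paper above. One way to read the "3.2 experts per node" claim is that while one copy of a token crosses IB to a node, NVLink on that node (at 3.2× the bandwidth) has time to fan it out to roughly 3.2 experts at no extra cost.

```cpp
#include <cstdio>

int main() {
    // Nominal per-GPU bandwidths quoted in the paper.
    const double ib_gbps  = 50.0;    // cross-node InfiniBand
    const double nvl_gbps = 160.0;   // intra-node NVLink

    const double ratio = nvl_gbps / ib_gbps;  // = 3.2
    const int    max_nodes_per_token = 4;     // per-token dispatch cap

    // With a 4-node cap, up to ~4 x 3.2 = 12.8 (i.e. 13) experts are reachable
    // at the same IB cost as dispatching to 4 nodes.
    std::printf("NVLink/IB ratio: %.1f\n", ratio);
    std::printf("max experts at equal IB cost: %.1f (~13)\n",
                max_nodes_per_token * ratio);
    return 0;
}
```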
My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between communication chunk allocation/transfer and everything else running on the GPU.
The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
As an analogy: it is as if you found that the NIC driver, when copying certain memory blocks, ends up serialized with the application's threads and drags efficiency down, so instead of going through the OS-defined interface to the NIC driver, you optimize directly against the instruction set the NIC itself supports.
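For a sense of what "customized PTX instructions" can look like in practice, the snippet below uses inline PTX from CUDA C++ to issue loads and stores with the `.cs` (streaming, evict-first) cache hint, so bulk communication buffers pollute L2 less and interfere less with compute kernels sharing the GPU. This is a generic illustration of the mechanism, not DeepSeek's actual code; the paper does not disclose which instructions they customize.

```cpp
// Streaming (evict-first) global load/store via inline PTX. The .cs cache
// operator tells the hardware the data is likely touched only once, so it
// should not displace other kernels' working sets in L2.
__device__ __forceinline__ float4 load_streaming(const float4* addr) {
    float4 v;
    asm volatile("ld.global.cs.v4.f32 {%0, %1, %2, %3}, [%4];"
                 : "=f"(v.x), "=f"(v.y), "=f"(v.z), "=f"(v.w)
                 : "l"(addr));
    return v;
}

__device__ __forceinline__ void store_streaming(float4* addr, float4 v) {
    asm volatile("st.global.cs.v4.f32 [%0], {%1, %2, %3, %4};"
                 :
                 : "l"(addr), "f"(v.x), "f"(v.y), "f"(v.z), "f"(v.w)
                 : "memory");
}

// Example use: move a communication chunk while keeping its L2 footprint small.
__global__ void copy_chunk_streaming(const float4* __restrict__ src,
                                      float4* __restrict__ dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) store_streaming(&dst[i], load_streaming(&src[i]));
}
```

Note that CUDA C++ also exposes the same hints through the `__ldcs` / `__stcs` intrinsics, which reinforces the point above: dropping to PTX here is about finer control inside the CUDA toolchain, not about going around it.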