" R! R2 v2 G. W. Z0 G" z在论文里,这是第3.2.2节的内容
; E5 z6 C- _ Z! [4 o7 k
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
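(To make the routing rule concrete, here is a minimal host-side sketch of the "one IB hop, then NVLink fan-out" idea. It is my own illustration, not code from the paper: the GPUs-per-node count and all names such as GPUS_PER_NODE, NodePlan, and route_token are assumptions, and the real logic lives in warp-specialized GPU kernels co-designed with the gating algorithm.

```cpp
// Hypothetical sketch of the two-hop dispatch rule: one IB transfer per target
// node (to the GPU with the same in-node index as the sender), then NVLink
// fan-out inside that node to the GPUs hosting the routed experts.
#include <cassert>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE      = 8;  // assumption, not stated in this excerpt
constexpr int MAX_DISPATCH_NODES = 4;  // paper: a token reaches at most 4 nodes

struct NodePlan {
    int ib_target_gpu;                // hop 1: IB, same in-node index as sender
    std::vector<int> nvlink_targets;  // hop 2: NVLink, GPUs hosting the experts
};

std::map<int, NodePlan> route_token(const std::vector<int>& expert_gpu_ranks,
                                    int sender_local_idx) {
    std::map<int, NodePlan> plan;  // keyed by target node id
    for (int expert_gpu : expert_gpu_ranks) {
        int node = expert_gpu / GPUS_PER_NODE;
        NodePlan& p = plan[node];
        p.ib_target_gpu = node * GPUS_PER_NODE + sender_local_idx;
        p.nvlink_targets.push_back(expert_gpu);
    }
    // The MoE gating algorithm, not the kernel, is what enforces this bound.
    assert(plan.size() <= MAX_DISPATCH_NODES);
    return plan;
}
```

The point of the grouping is that IB traffic is one transfer per target node no matter how many of that node's experts are selected, which is why the average of 3.2 experts per node comes essentially for free over NVLink.)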
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
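(The "communication channel" idea can be illustrated with a bare-bones CUDA sketch of warp specialization: within one block, different warps run different roles instead of all warps doing the same work. Everything below is hypothetical, including the function names, the fixed role boundaries, and the missing RDMA/NVLink plumbing; in particular, the paper's dynamic warp-to-task adjustment is reduced to two launch parameters.

```cpp
// Placeholder role bodies; real kernels would issue RDMA writes over IB and
// peer-to-peer copies over NVLink here.
__device__ void ib_send_chunk(int lane)      { /* push a token chunk over IB */ }
__device__ void forward_via_nvlink(int lane) { /* copy to the expert's GPU   */ }
__device__ void recv_from_nvlink(int lane)   { /* drain the receive buffer   */ }

// One dispatch "channel": warps are split by role rather than by data.
__global__ void dispatch_channel_kernel(int warps_send, int warps_fwd) {
    int warp_id = threadIdx.x / 32;
    int lane_id = threadIdx.x % 32;

    if (warp_id < warps_send) {
        ib_send_chunk(lane_id);        // role 1: IB sending
    } else if (warp_id < warps_send + warps_fwd) {
        forward_via_nvlink(lane_id);   // role 2: IB-to-NVLink forwarding
    } else {
        recv_from_nvlink(lane_id);     // role 3: NVLink receiving
    }
}
```

Since each role is essentially a memory-movement loop, dedicating whole warps to one role avoids divergence inside a warp and lets a small, fixed SM budget (20 in the paper, presumably 2 per channel across the 10 channels) keep both networks busy while the rest of the GPU does math.)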
To put it plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same kind of scenario as the two-machine memory copy described in 唐家山's blog post. Typically, GPUs within a single node talk to each other over NVLink, while GPUs across nodes depend on the IB network, but NVLink's bandwidth is about 3.2 times that of IB, so some optimization is needed to arrive at a better transfer strategy. What the paper describes is a complete, end-to-end scheme for this.
My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between communication-chunk allocation and transfer on one side and everything else running on the GPU on the other.

The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

As an analogy: it is as if you discovered that, while copying certain memory blocks, the NIC driver ends up serializing with the application's threads and dragging down efficiency, so instead of going through the interface the operating system defines for the NIC driver, you optimize by using the instruction set the NIC itself supports directly.
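The paper does not say which PTX instructions are customized, so the snippet below is only an illustration of the kind of control PTX exposes that plain CUDA C++ assignments do not: documented cache operators such as ".cs" (cache-streaming, evict-first) let a kernel move touch-once communication data while keeping its footprint in L2 small, which matches the stated goal of reducing L2 use and interference with other SMs. The kernel name and shape here are made up.

```cpp
#include <cstdint>

// Streaming (evict-first) load/store via inline PTX; the data is marked as
// likely-touched-once so it does not crowd out the compute kernels' L2 lines.
__device__ __forceinline__ uint32_t ld_streaming(const uint32_t* p) {
    uint32_t v;
    asm volatile("ld.global.cs.u32 %0, [%1];" : "=r"(v) : "l"(p));
    return v;
}

__device__ __forceinline__ void st_streaming(uint32_t* p, uint32_t v) {
    asm volatile("st.global.cs.u32 [%0], %1;" :: "l"(p), "r"(v) : "memory");
}

// Hypothetical: copy one communication chunk with streaming semantics.
__global__ void copy_chunk_streaming(const uint32_t* __restrict__ src,
                                     uint32_t* __restrict__ dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) st_streaming(dst + i, ld_streaming(src + i));
}
```

On this reading, "auto-tuning the communication chunk size" is about choosing how much data each such copy moves per step so that the 20 communication SMs keep IB and NVLink saturated without thrashing the caches the compute kernels depend on.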