In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
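
To make the dispatch path concrete, here is a minimal host-side sketch (plain CUDA/C++) of how one token's routed experts could be grouped by node, capped at 4 nodes, and split into one IB transfer per target node plus NVLink forwards inside each node. Everything in it (GPUS_PER_NODE, MAX_DISPATCH_NODES, DispatchPlan, the expert-to-GPU layout) is an assumption for illustration, not the paper's actual data structures.

// Illustrative sketch of the node-limited hierarchical dispatch described above.
// All constants, type names and the expert layout are invented for illustration.
#include <cstdio>
#include <map>
#include <set>
#include <vector>

constexpr int GPUS_PER_NODE      = 8;  // assumed node size
constexpr int MAX_DISPATCH_NODES = 4;  // cap from the paper: at most 4 nodes per token

struct DispatchPlan {
    // One IB transfer per target node: the token first goes to the GPU with the
    // same in-node index as the sender, so IB carries one copy per node.
    std::vector<int> ib_target_nodes;
    // NVLink forwards on each target node: local GPU indices hosting the experts.
    std::map<int, std::vector<int>> nvlink_targets;  // node -> local GPU ids
};

DispatchPlan plan_dispatch(const std::vector<int>& routed_experts, int experts_per_gpu) {
    DispatchPlan plan;
    std::map<int, std::set<int>> by_node;  // node -> set of local GPUs needed
    for (int e : routed_experts) {
        int gpu       = e / experts_per_gpu;   // global GPU hosting expert e
        int node      = gpu / GPUS_PER_NODE;
        int local_gpu = gpu % GPUS_PER_NODE;
        by_node[node].insert(local_gpu);
    }
    // The gating algorithm is co-designed so that this cap holds; here we only check it.
    if ((int)by_node.size() > MAX_DISPATCH_NODES)
        std::fprintf(stderr, "routing exceeds %d nodes\n", MAX_DISPATCH_NODES);
    for (auto& [node, gpus] : by_node) {
        plan.ib_target_nodes.push_back(node);                        // one IB hop per node
        plan.nvlink_targets[node].assign(gpus.begin(), gpus.end());  // NVLink fan-out
    }
    return plan;
}

int main() {
    // Example: 8 routed experts under a hypothetical layout of 4 experts per GPU.
    DispatchPlan p = plan_dispatch({3, 17, 21, 40, 44, 70, 75, 90}, 4);
    std::printf("IB copies for this token: %zu\n", p.ib_target_nodes.size());
    return 0;
}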
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
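
The three dispatch roles above map naturally onto warp specialization. Below is a minimal CUDA sketch of the pattern: warps inside one block branch on their warp index and take on different roles. The role names and the 10-channel grid come from the paper; the warp counts, the empty helper functions and everything else are placeholders, not the actual kernel.

// Minimal warp-specialization sketch: warps in one thread block take different
// communication roles.  Helper bodies, warp counts and queue handling are
// placeholders, not DeepSeek-V3's real dispatch kernel.
#include <cuda_runtime.h>

__device__ void ib_send(int lane)              { (void)lane; /* push chunks onto the IB queue   */ }
__device__ void ib_to_nvlink_forward(int lane) { (void)lane; /* move arrived chunks onto NVLink */ }
__device__ void nvlink_receive(int lane)       { (void)lane; /* copy chunks into expert buffers */ }

// One thread block per communication channel; warps inside the block are
// partitioned by role.  num_send_warps / num_fwd_warps would be re-tuned at
// runtime according to the observed workload, as the paper describes.
__global__ void dispatch_channel_kernel(int num_send_warps, int num_fwd_warps) {
    const int warp_id = threadIdx.x / warpSize;  // which warp this thread belongs to
    const int lane    = threadIdx.x % warpSize;  // lane index inside the warp

    if (warp_id < num_send_warps) {
        ib_send(lane);                           // role 1: IB sending
    } else if (warp_id < num_send_warps + num_fwd_warps) {
        ib_to_nvlink_forward(lane);              // role 2: IB-to-NVLink forwarding
    } else {
        nvlink_receive(lane);                    // role 3: NVLink receiving
    }
}

int main() {
    // e.g. 10 channels, 8 warps (256 threads) per channel: 3 sending, 3 forwarding, 2 receiving.
    dispatch_channel_kernel<<<10, 256>>>(3, 3);
    cudaDeviceSynchronize();
    return 0;
}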
In plainer terms, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine memory-copy scenario that 唐家山 described in his log post. Typically, GPUs within a single machine talk over NVLink, while multi-machine multi-GPU setups rely on the IB network; but NVLink is 3.2 times as fast as IB, so some optimization is needed to get a better transfer strategy. This is a complete, end-to-end scheme.
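
A quick back-of-the-envelope check of those numbers (only the two bandwidth figures come from the paper; the overlap argument is my own reading of it):

// Why "3.2 experts per node" and "up to 13 experts": forwarding up to 3.2 copies
// over NVLink takes no longer than the single copy arriving over IB, so the two
// links stay fully overlapped.  Bandwidth figures are from the quoted paper.
#include <cstdio>

int main() {
    const double ib_bw     = 50.0;           // GB/s, InfiniBand per GPU
    const double nvlink_bw = 160.0;          // GB/s, NVLink per GPU
    const double ratio = nvlink_bw / ib_bw;  // = 3.2
    std::printf("NVLink fan-out per node before it becomes the bottleneck: %.1f\n", ratio);
    std::printf("expert budget at the same IB cost: 4 nodes x %.1f = %.1f (~13)\n",
                ratio, 4 * ratio);
    return 0;
}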
My understanding is that PTX is used here for finer-grained control over thread execution, so that allocating and transferring communication chunks interferes less with everything else.
The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
By analogy: it is as if you discovered that when the NIC driver copies a particular block of memory, it ends up serialized with the application's threads and efficiency drops, so instead of going through the OS-defined interface to the NIC driver, you optimize directly with the instruction set the NIC itself supports.
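
The paper does not say which PTX instructions it customizes, so the snippet below is only an assumed illustration of the kind of control PTX gives you: inline PTX loads and stores with the .cs ("cache streaming", evict-first) qualifier, so that communication buffers pass through L2 without evicting the compute kernels' working set. The kernel name and sizes are made up; only the general mechanism (PTX cache-policy qualifiers) is real.

// Streaming copy with explicit PTX cache policy.  An assumed example of the
// *kind* of customized PTX the paper refers to, not the actual instruction
// sequence used in DeepSeek-V3.
#include <cuda_runtime.h>

__global__ void streaming_copy(const unsigned* __restrict__ src,
                               unsigned* __restrict__ dst, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned v;
    // Load with the evict-first (.cs) policy instead of a plain, cache-polluting load.
    asm volatile("ld.global.cs.u32 %0, [%1];" : "=r"(v) : "l"(src + i));
    // Store with the same streaming policy.
    asm volatile("st.global.cs.u32 [%0], %1;" :: "l"(dst + i), "r"(v));
}

int main() {
    const size_t n = 1 << 20;
    unsigned *src = nullptr, *dst = nullptr;
    cudaMalloc((void**)&src, n * sizeof(unsigned));
    cudaMalloc((void**)&dst, n * sizeof(unsigned));
    streaming_copy<<<(int)((n + 255) / 256), 256>>>(src, dst, n);
    cudaDeviceSynchronize();
    cudaFree(src);
    cudaFree(dst);
    return 0;
}

The chunk-size auto-tuning mentioned in the paper would then, presumably, decide how much data each such streaming transfer moves at a time, trading L2 footprint against latency.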