In the paper, this is the content of Section 3.2.2.
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
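To make the two-hop routing concrete, here is a rough sketch of my own (not DeepSeek's code; names like kGpusPerNode and route_token are made up): a token first crosses IB to the GPU with the same in-node index on the destination node, and is then forwarded over NVLink to the GPU that actually hosts its expert.

```cuda
#include <cstdio>

// Hypothetical constants/names for illustration only.
constexpr int kGpusPerNode = 8;   // assumed node size

struct Hops {
    int ib_target_rank;       // GPU reached by the single cross-node IB hop
    int nvlink_target_rank;   // expert-hosting GPU reached by the intra-node NVLink hop
};

// Global rank = node index * kGpusPerNode + in-node GPU index.
Hops route_token(int src_gpu, int dst_node, int dst_gpu) {
    Hops h;
    // Hop 1 (IB): go to the GPU on the destination node that has the SAME
    // in-node index as the sender, so each (token, target node) pair costs
    // exactly one IB transfer.
    h.ib_target_rank = dst_node * kGpusPerNode + src_gpu;
    // Hop 2 (NVLink): forward inside the node to the GPU hosting the expert,
    // so any further fan-out stays on the faster NVLink fabric.
    h.nvlink_target_rank = dst_node * kGpusPerNode + dst_gpu;
    return h;
}

int main() {
    // A token sent from GPU 3 of its node toward an expert on (node 2, GPU 6).
    Hops h = route_token(/*src_gpu=*/3, /*dst_node=*/2, /*dst_gpu=*/6);
    std::printf("IB hop -> rank %d, NVLink hop -> rank %d\n",
                h.ib_target_rank, h.nvlink_target_rank);
    return 0;
}
```

The point of the same-in-node-index rule is that each token costs at most one IB transfer per target node; all further fan-out to the (on average 3.2) experts on that node stays on the faster NVLink.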
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
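Here is a minimal warp-specialization sketch in the spirit of the dispatch side described above (again my own illustration, not the actual DeepSeek-V3 kernel; the 2/3/5 role split is a made-up placeholder): each warp in a communication channel takes one role, IB sending, IB-to-NVLink forwarding, or NVLink receiving, selected by its warp id, and the per-role warp counts come from the host so they can be adjusted to the workload.

```cuda
#include <cuda_runtime.h>

// Role indices for the three dispatch-side tasks named in the paper.
enum Role { IB_SEND = 0, IB_TO_NVL_FORWARD = 1, NVL_RECV = 2 };

// Stub bodies: in a real kernel these would post RDMA sends, copy arrived
// tokens to peer GPUs over NVLink, and scatter tokens into local expert buffers.
__device__ void ib_send(int warp_in_role) {}
__device__ void ib_to_nvlink_forward(int warp_in_role) {}
__device__ void nvlink_receive(int warp_in_role) {}

// One block = one communication channel. warps_per_role[] is chosen by the host,
// which is where the "dynamically adjusted" warp allocation would come in.
__global__ void dispatch_channel(const int *warps_per_role) {
    int warp_id = threadIdx.x / warpSize;
    int ib  = warps_per_role[IB_SEND];
    int fwd = warps_per_role[IB_TO_NVL_FORWARD];

    if (warp_id < ib) {
        ib_send(warp_id);
    } else if (warp_id < ib + fwd) {
        ib_to_nvlink_forward(warp_id - ib);
    } else {
        nvlink_receive(warp_id - ib - fwd);
    }
}

int main() {
    int h_roles[3] = {2, 3, 5};                 // made-up 2/3/5 split over 10 warps
    int *d_roles;
    cudaMalloc(&d_roles, sizeof(h_roles));
    cudaMemcpy(d_roles, h_roles, sizeof(h_roles), cudaMemcpyHostToDevice);
    dispatch_channel<<<1, 10 * 32>>>(d_roles);  // one channel, 10 warps
    cudaDeviceSynchronize();
    cudaFree(d_roles);
    return 0;
}
```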
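The paper also mentions auto-tuning the communication chunk size. A toy version of that tuning loop might look like the following (the candidate sizes and the cudaMemcpyAsync-based "transfer" are stand-ins; the real objective would also have to account for the slowdown of the overlapped compute kernels, which this sketch does not measure):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Time one configuration: move `total` bytes in `chunk`-byte pieces on a
// dedicated "communication" stream.
static float time_chunked_copy(void *dst, const void *src, size_t total,
                               size_t chunk, cudaStream_t stream) {
    cudaEvent_t beg, end;
    cudaEventCreate(&beg);
    cudaEventCreate(&end);
    cudaEventRecord(beg, stream);
    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (off + chunk <= total) ? chunk : total - off;
        cudaMemcpyAsync((char *)dst + off, (const char *)src + off, n,
                        cudaMemcpyDeviceToDevice, stream);
    }
    cudaEventRecord(end, stream);
    cudaEventSynchronize(end);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, beg, end);
    cudaEventDestroy(beg);
    cudaEventDestroy(end);
    return ms;
}

int main() {
    const size_t total = 64ull << 20;              // 64 MiB of token data (made up)
    void *src, *dst;
    cudaMalloc(&src, total);
    cudaMalloc(&dst, total);
    cudaStream_t comm;
    cudaStreamCreate(&comm);

    const size_t candidates[] = {256u << 10, 1u << 20, 4u << 20, 16u << 20};
    size_t best = candidates[0];
    float best_ms = 1e30f;
    for (size_t c : candidates) {
        float ms = time_chunked_copy(dst, src, total, c, comm);
        std::printf("chunk %zu KiB: %.3f ms\n", c >> 10, ms);
        if (ms < best_ms) { best_ms = ms; best = c; }
    }
    std::printf("chosen chunk size: %zu KiB\n", best >> 10);

    cudaStreamDestroy(comm);
    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```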
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山's log. Generally, GPUs within a single node communicate over NVLink, while GPUs across nodes rely on the IB network; but NVLink's bandwidth is 3.2 times that of IB, so some optimizations are needed to get a better transfer strategy. It is a complete, end-to-end scheme.
My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between communication chunk allocation/transfers and other work.
The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
As an analogy, it is as if you discovered that the NIC driver, when copying particular memory blocks between machines, ends up serialized with the application's threads and drags down efficiency, so instead of going through the OS-defined interface to the NIC driver, you optimize by directly using the instruction set supported by the NIC.
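To illustrate the kind of thing customized PTX can do here (the quoted section does not spell out which instructions DeepSeek-V3 actually uses, so the .cs streaming cache operator below is only my guess at one plausible mechanism): loads and stores issued with streaming hints mark the lines as evict-first, so the communication kernel's chunks are less likely to push the compute kernels' working set out of cache.

```cuda
#include <cuda_runtime.h>

// Streaming ("evict-first") loads and stores via inline PTX. The .cs cache
// operator marks the lines as likely to be used only once, limiting how much
// the copied chunks crowd out data used by concurrently running compute kernels.
__device__ __forceinline__ float load_streaming(const float *p) {
    float v;
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(p));
    return v;
}

__device__ __forceinline__ void store_streaming(float *p, float v) {
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(p), "f"(v) : "memory");
}

// Copy one communication chunk using the streaming hints.
__global__ void copy_chunk_streaming(const float *src, float *dst, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        store_streaming(dst + i, load_streaming(src + i));
    }
}

int main() {
    const int n = 1 << 20;
    float *src, *dst;
    cudaMalloc(&src, n * sizeof(float));
    cudaMalloc(&dst, n * sizeof(float));
    copy_chunk_streaming<<<64, 256>>>(src, dst, n);
    cudaDeviceSynchronize();
    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```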