In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
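A minimal sketch (not from the paper) of the two-hop path described above, under assumed constants: 8 GPUs per node, 16 nodes, and 256 routed experts sharded evenly across expert-parallel ranks. All names and numbers here are hypothetical illustrations of the topology logic.

```cuda
#include <cstdio>

// Assumed (hypothetical) cluster layout for the example.
constexpr int kGpusPerNode   = 8;
constexpr int kNumNodes      = 16;
constexpr int kNumExperts    = 256;
constexpr int kExpertsPerGpu = kNumExperts / (kGpusPerNode * kNumNodes);

struct Hop {
    int ib_target_rank;      // hop 1: IB to the same in-node index on the expert's node
    int nvlink_target_rank;  // hop 2: NVLink to the GPU that actually hosts the expert
};

// Given the sending GPU's global rank and one routed expert id, compute both hops.
Hop route_token(int src_rank, int expert_id) {
    int src_local = src_rank % kGpusPerNode;      // sender's in-node index
    int dst_rank  = expert_id / kExpertsPerGpu;   // GPU hosting the expert
    int dst_node  = dst_rank / kGpusPerNode;
    return Hop{dst_node * kGpusPerNode + src_local,  // same in-node index on the target node
               dst_rank};                            // forwarded inside the node via NVLink
}

int main() {
    // Example: global rank 3 dispatches a token routed to expert 130.
    Hop hop = route_token(/*src_rank=*/3, /*expert_id=*/130);
    std::printf("IB hop -> rank %d, NVLink hop -> rank %d\n",
                hop.ib_target_rank, hop.nvlink_target_rank);
    return 0;
}
```

The point of the two-hop layout is that a token crosses IB at most once per target node, no matter how many experts on that node it is routed to; the intra-node fan-out rides on NVLink, which is roughly 3.2 times faster, so forwarding to up to about 3.2 expert GPUs per node stays hidden behind the IB transfer. That is where the 13-expert (4 nodes × 3.2 experts/node) ceiling at unchanged communication cost comes from.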
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
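For illustration, here is a minimal CUDA sketch of the warp-specialization pattern (Bauer et al., 2014) referenced above: within one thread block, each warp branches into a single role, and the split between roles is a launch parameter so it can be adjusted to the measured workload. The role bodies are placeholders (the real kernels drive IB/NVLink transfers); this is an outline of the pattern, not the paper's implementation.

```cuda
#include <cuda_runtime.h>

// Placeholder role bodies; the real kernels issue RDMA sends, NVLink forwards,
// and receive-side copies here.
__device__ void ib_send_role(int lane)           { /* send chunks over IB */ }
__device__ void ib_to_nvlink_forward_role(int l) { /* forward arrived chunks over NVLink */ }
__device__ void nvlink_receive_role(int lane)    { /* copy received chunks into local buffers */ }

__global__ void dispatch_kernel(int warps_for_send, int warps_for_forward) {
    int warp_id = threadIdx.x / warpSize;  // which warp this thread belongs to
    int lane    = threadIdx.x % warpSize;

    // Warp specialization: whole warps diverge into different roles and run
    // concurrently on the same SM.
    if (warp_id < warps_for_send) {
        ib_send_role(lane);
    } else if (warp_id < warps_for_send + warps_for_forward) {
        ib_to_nvlink_forward_role(lane);
    } else {
        nvlink_receive_role(lane);
    }
}

int main() {
    // Example: 10 channels, one block per channel, 8 warps (256 threads) each,
    // split 3/3/2 across the three dispatch roles.
    dispatch_kernel<<<10, 256>>>(3, 3);
    cudaDeviceSynchronize();
    return 0;
}
```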
Put plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine memory-copy scenario described in 唐家山老师's log. As a rule, GPUs within a single node talk over NVLink, while GPUs across nodes rely on the IB network, but NVLink's bandwidth is 3.2 times that of IB, so some optimization is needed to get a better transfer strategy out of the two. This is a complete, end-to-end scheme.
My understanding is that PTX is used here to tailor thread execution more precisely and to reduce the interference between communication-chunk allocation and transfer on one side and other work on the GPU on the other.

The point is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
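As a concrete, hypothetical illustration of that point: inline PTX is written from inside ordinary CUDA kernels, for example to attach a cache-policy qualifier to individual loads and stores so that streaming communication data is marked use-once and does not crowd the compute kernels' working sets out of L2. The paper does not say which instructions were customized; the sketch below only shows the mechanism.

```cuda
#include <cuda_runtime.h>

// Load with the .cs ("cache streaming", evict-first) policy.
__device__ float load_evict_first(const float* addr) {
    float v;
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(addr));
    return v;
}

// Store with the same evict-first policy.
__device__ void store_evict_first(float* addr, float v) {
    asm volatile("st.global.cs.f32 [%0], %1;" : : "l"(addr), "f"(v) : "memory");
}

// Copy one communication chunk while keeping its L2 footprint small.
__global__ void copy_chunk(const float* __restrict__ src,
                           float* __restrict__ dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        store_evict_first(dst + i, load_evict_first(src + i));
    }
}
```

CUDA exposes the same hints through the __ldcs()/__stcs() intrinsics; dropping to PTX just gives finer control over exactly which instructions carry which policy, which fits the commentary above: PTX here supplements CUDA rather than replacing it.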
By analogy: it is as if you found that the NIC driver, while copying particular memory blocks between machines, ends up serializing with the application's thread execution and dragging efficiency down, so you step around the OS-defined interface to the NIC driver and optimize directly against the instruction set the NIC itself supports.