In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
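Here is a rough CUDA sketch of the warp-specialization idea, just to show its shape. This is not the actual kernel: the ib_send_chunk / forward_via_nvlink / recv_from_nvlink helpers are empty placeholders standing in for whatever RDMA and NVLink P2P primitives the real implementation uses, and the per-role warp counts are passed in from the host rather than re-balanced on the fly as the paper describes. Each block plays the role of one communication channel, and a warp's index decides whether it spends the kernel sending over IB, forwarding IB arrivals onto NVLink, or draining NVLink arrivals.

// Hypothetical warp-specialized dispatch kernel: one thread block per
// communication channel, warps within the block split into three roles.
// The three __device__ helpers are stand-ins for real IB (RDMA) and
// NVLink primitives, not actual APIs.
#include <cuda_runtime.h>
#include <cstdio>

constexpr int WARP_SIZE = 32;

__device__ void ib_send_chunk(int channel, int chunk)      { /* RDMA write  */ }
__device__ void forward_via_nvlink(int channel, int chunk) { /* P2P forward */ }
__device__ void recv_from_nvlink(int channel, int chunk)   { /* P2P receive */ }

// send_warps / forward_warps are passed in so the host can re-balance the
// role split between launches, loosely mimicking the dynamic warp
// allocation described in the paper.
__global__ void dispatch_channel_kernel(int chunks, int send_warps, int forward_warps) {
    int channel = blockIdx.x;               // e.g. 10 channels -> 10 blocks
    int warp_id = threadIdx.x / WARP_SIZE;  // this thread's warp within the block

    for (int chunk = 0; chunk < chunks; ++chunk) {
        if (warp_id < send_warps) {
            // Role 1: push outgoing chunks to remote nodes over IB.
            ib_send_chunk(channel, chunk);
        } else if (warp_id < send_warps + forward_warps) {
            // Role 2: forward chunks that arrived over IB to the NVLink
            // peers hosting the target experts.
            forward_via_nvlink(channel, chunk);
        } else {
            // Role 3: receive chunks pushed by NVLink peers into local buffers.
            recv_from_nvlink(channel, chunk);
        }
        __syncthreads();  // coarse per-chunk hand-off between the roles
    }
}

int main() {
    // 10 channels as in the paper; 8 warps per channel in this toy setup,
    // split 3 / 3 / 2 across the three roles.
    dispatch_channel_kernel<<<10, 8 * WARP_SIZE>>>(4, 3, 3);
    cudaDeviceSynchronize();
    std::printf("done\n");
    return 0;
}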
To put it in plainer terms: the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山老师's post. In general, GPUs within one node talk over NVLink, while GPUs across nodes rely on the IB network, and NVLink is roughly 3.2 times faster than IB, so some optimization of the transfer strategy is needed to make good use of both. It is a complete, end-to-end scheme.
My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between the communication chunks being allocated and transferred and the other work on the GPU. The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
& s2 S4 Y* q* S" |( C类比一下,就好比发现网卡驱动在对拷特定内存块的时候会和应用的线程执行出现串行导致效率降低,而绕开操作系统定义的与网卡驱动的接口,直接使用网卡支持的指令集进行了优化。 |