In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
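To make the two-hop route concrete, here is a minimal host-side sketch of how the dispatch targets could be derived. The names, the 8-GPUs-per-node layout, and the structure are my own illustration of the paper's description, not code from DeepSeek:

```cuda
// Hypothetical sketch of the two-hop dispatch route described above.
// Hop 1 (IB): the token goes to the GPU with the same in-node index on each
// target node. Hop 2 (NVLink): it is forwarded inside the node to the GPUs
// that actually host its experts. Names and layout are illustrative only.
#include <cassert>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE = 8;       // assumption: 8 GPUs per node
constexpr int MAX_DISPATCH_NODES = 4;  // paper: each token reaches at most 4 nodes

struct NodePlan {
    int ib_dst_rank = -1;              // hop 1 target: same in-node index on that node
    std::vector<int> nvlink_dst_ranks; // hop 2 targets: GPUs hosting the token's experts
};

// expert_ranks: global GPU ranks owning the token's routed experts (8 in DeepSeek-V3).
// my_rank:      global rank of the GPU holding the token before dispatch.
std::map<int, NodePlan> plan_dispatch(const std::vector<int>& expert_ranks, int my_rank) {
    const int my_local = my_rank % GPUS_PER_NODE;
    std::map<int, NodePlan> plan;      // keyed by target node id
    for (int dst : expert_ranks) {
        const int node = dst / GPUS_PER_NODE;
        NodePlan& p = plan[node];
        p.ib_dst_rank = node * GPUS_PER_NODE + my_local;  // crosses IB once per node
        p.nvlink_dst_ranks.push_back(dst);                // then fans out over NVLink
    }
    // The gating algorithm is co-designed so that this cap always holds.
    assert(static_cast<int>(plan.size()) <= MAX_DISPATCH_NODES);
    return plan;
}
```

With this shape, a token crosses IB at most 4 times (once per target node) no matter how many experts it hits inside each node, which is where the "13 experts for the same communication cost as 8" headroom comes from.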
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
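The warp-specialization part can be pictured as a communication kernel whose warps take on different roles and pipeline chunks between them. Below is only a skeleton of that structure; the static role split and the launch numbers are made up for illustration, and I am not claiming this is DeepSeek's code:

```cuda
// Skeleton of a warp-specialized dispatch kernel: warps on the same SM take on
// different roles and hand chunks to each other through shared-memory queues
// (omitted here). The paper adjusts the warp count per task dynamically.
#include <cuda_runtime.h>

__global__ void dispatch_kernel(/* channel buffers, queue state, ... */) {
    const int warp_id   = threadIdx.x / warpSize;
    const int num_warps = blockDim.x / warpSize;

    // Illustrative static split into the three dispatch roles.
    const int ib_send_warps = num_warps / 3;
    const int forward_warps = num_warps / 3;

    if (warp_id < ib_send_warps) {
        // Role 1, IB sending: push token chunks over IB to the GPU with the
        // same in-node index on each target node.
    } else if (warp_id < ib_send_warps + forward_warps) {
        // Role 2, IB-to-NVLink forwarding: as chunks land from IB, copy them
        // over NVLink to the GPU hosting the target expert right away, so a
        // token is never blocked behind tokens that arrive after it.
    } else {
        // Role 3, NVLink receiving: drain incoming NVLink chunks into the
        // local experts' input buffers.
    }
}

// Launched on a small, fixed SM budget on its own stream so the rest of the
// GPU keeps running compute, e.g. (illustrative numbers only):
//   dispatch_kernel<<<20, 10 * 32, 0, comm_stream>>>();
```

Because the whole thing runs on only 20 SMs on a separate stream, the remaining SMs stay free for the overlapped compute kernels.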
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same kind of thing as the two-machine direct-copy scenario described in 唐家山's log. In general, GPUs within a single node talk over NVLink, while GPUs across nodes rely on the IB network, but NVLink's bandwidth is about 3.2 times that of IB, so some optimization is needed to get a better transfer strategy out of the two. It is a complete, end-to-end scheme.

My understanding is that PTX is used here to control thread execution more precisely and to reduce the interference between the allocation and transfer of communication chunks.

The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
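The paper does not say exactly which PTX instructions are customized, so the following is only my guess at the flavor of the trick: CUDA lets you attach cache-operator hints to individual global loads and stores (for example the .cs "streaming, likely accessed once" operator, also exposed as the __ldcs / __stcs intrinsics), so a communication copy can be told not to camp in L2 and crowd out the compute kernels it overlaps with. All names below are mine:

```cuda
// A guess at the kind of PTX-level control involved (not DeepSeek's code):
// give the communication copy streaming cache semantics so its traffic is
// evicted first and does not occupy L2 lines needed by the compute kernels
// it overlaps with.
#include <cuda_runtime.h>
#include <cstddef>

__device__ __forceinline__ uint4 load_streaming(const uint4* p) {
    uint4 v;  // 16-byte vectorized load with the "likely accessed once" hint
    asm volatile("ld.global.cs.v4.u32 {%0,%1,%2,%3}, [%4];"
                 : "=r"(v.x), "=r"(v.y), "=r"(v.z), "=r"(v.w)
                 : "l"(p));
    return v;
}

__device__ __forceinline__ void store_streaming(uint4* p, uint4 v) {
    asm volatile("st.global.cs.v4.u32 [%0], {%1,%2,%3,%4};"
                 :: "l"(p), "r"(v.x), "r"(v.y), "r"(v.z), "r"(v.w)
                 : "memory");
}

// Copy one communication chunk without letting it camp in the caches.
__global__ void copy_chunk(uint4* __restrict__ dst,
                           const uint4* __restrict__ src, size_t n_vec4) {
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n_vec4;
         i += static_cast<size_t>(gridDim.x) * blockDim.x) {
        store_streaming(dst + i, load_streaming(src + i));
    }
}
```

The chunk size handed to a copy like this is then what the paper says they auto-tune, trading overlap granularity against cache and SM interference.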
As an analogy: it is as if you discovered that, when copying certain memory blocks between machines, the NIC driver ends up serializing with the application's threads and dragging efficiency down, so instead of going through the interface the operating system defines for the NIC driver, you optimize by directly using the instruction set the NIC itself supports.