In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
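To make the routing constraint above concrete, here is a minimal host-side C++/CUDA sketch (not the authors' code) of how a token's two-hop path could be planned: routed experts are grouped by owning node (at most 4 nodes, thanks to the gating design), the IB hop lands on the GPU with the sender's in-node index on each target node, and the NVLink hop then forwards to the GPU that hosts the expert. Names such as `plan_dispatch`, `gpus_per_node`, and `experts_per_gpu`, and the contiguous expert layout, are assumptions made for illustration.

```cuda
// Hypothetical host-side sketch of the two-hop dispatch plan; not DeepSeek's actual code.
// Assumes experts are laid out contiguously: `experts_per_gpu` per GPU, `gpus_per_node` per node.
#include <cassert>
#include <map>
#include <vector>

struct Hop {
    int target_node;        // node that hosts the expert
    int ib_landing_gpu;     // first hop (IB): GPU with the sender's in-node index on that node
    int nvlink_target_gpu;  // second hop (NVLink): in-node index of the GPU hosting the expert
    int expert_id;
};

// Build one token's dispatch plan from the gating output (its routed expert ids).
std::vector<Hop> plan_dispatch(const std::vector<int>& routed_experts,
                               int src_in_node_index,
                               int gpus_per_node,
                               int experts_per_gpu,
                               int max_nodes = 4) {
    // Group the routed experts by the node that owns them.
    std::map<int, std::vector<int>> experts_by_node;
    for (int e : routed_experts) {
        int owning_gpu = e / experts_per_gpu;            // global GPU rank hosting expert e
        experts_by_node[owning_gpu / gpus_per_node].push_back(e);
    }
    // The gating algorithm is co-designed with the kernel so a token targets at most 4 nodes,
    // which is what bounds the (slower) IB traffic.
    assert(experts_by_node.size() <= static_cast<size_t>(max_nodes));

    std::vector<Hop> plan;
    for (const auto& [node, experts] : experts_by_node) {
        for (int e : experts) {
            plan.push_back(Hop{
                node,
                src_in_node_index,                       // IB delivers to (node, same in-node index)
                (e / experts_per_gpu) % gpus_per_node,   // then NVLink forwards within the node
                e});
        }
    }
    return plan;
}
```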
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
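The warp-specialization idea can be sketched as a CUDA kernel skeleton in which the warp index selects one of the three dispatch roles, with the per-role warp counts chosen at launch time from the observed workload. All names here (`dispatch_kernel`, `num_ib_send_warps`, and so on) are illustrative assumptions, and the actual IB/NVLink data movement is reduced to comments.

```cuda
// Illustrative skeleton of warp specialization for the dispatch kernel; not the real code.
// Each warp is assigned one communication role; the split between roles is tuned per launch.
#include <cuda_runtime.h>

enum Role { IB_SEND = 0, IB_TO_NVL_FORWARD = 1, NVL_RECV = 2 };

__global__ void dispatch_kernel(int num_ib_send_warps,
                                int num_forward_warps /* remaining warps do NVL_RECV */) {
    const int warp_id = threadIdx.x / 32;   // which warp inside this communication channel
    const int lane_id = threadIdx.x % 32;   // lane within the warp

    Role role;
    if (warp_id < num_ib_send_warps)
        role = IB_SEND;
    else if (warp_id < num_ib_send_warps + num_forward_warps)
        role = IB_TO_NVL_FORWARD;
    else
        role = NVL_RECV;

    switch (role) {
    case IB_SEND:
        // Pull tokens from the local send buffer and post them to the IB transport,
        // addressed to the GPU with the same in-node index on each target node.
        break;
    case IB_TO_NVL_FORWARD:
        // Watch the IB receive buffer and immediately push arriving tokens over NVLink
        // to the GPU that hosts the target expert, so later arrivals cannot block them.
        break;
    case NVL_RECV:
        // Drain the NVLink buffer into the expert's input buffer for the MoE computation.
        break;
    }
    (void)lane_id;  // lanes would cooperate on the copies; omitted in this sketch
}
```

The combine kernel would mirror this structure, with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation as the three roles.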
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the dual-machine memory-copy scenario that 唐家山老师 described in his log. Normally, GPUs within a single node talk over NVLink, while GPUs across nodes rely on the IB network; since NVLink's bandwidth is roughly 3.2 times that of IB, some optimization is needed to arrive at a better transfer strategy. What the paper describes is a complete, end-to-end scheme.
My understanding is that PTX is used here to customize thread execution more precisely, reducing the crosstalk between the allocation and transfer of communication chunks and the other work running on the SMs.
The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
As an analogy: it is as if you discovered that whenever the NIC driver copies certain memory blocks it ends up serialized with the application's threads and efficiency drops, so instead of going through the OS-defined interface to the NIC driver, you optimize by directly using the instruction set that the NIC itself supports.
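To make the PTX point a bit more tangible, below is a generic sketch of the kind of inline-PTX memory access a communication kernel can use: the `.cs` (cache-streaming, evict-first) qualifier marks loads and stores as use-once data, so moving communication chunks pollutes the L2 cache less and interferes less with the compute kernels that share it. This illustrates the general technique only; the paper does not disclose the exact instructions DeepSeek-V3 customizes, and `copy_chunk_streaming` and `chunk_int4s` are made-up names.

```cuda
// Generic example of cache-hinted loads/stores via inline PTX; not DeepSeek's actual kernels.
// The ".cs" qualifier marks an access as streaming (evict-first), reducing L2 pollution.
#include <cuda_runtime.h>

__device__ __forceinline__ int4 load_streaming(const int4* ptr) {
    int4 v;
    asm volatile("ld.global.cs.v4.s32 {%0, %1, %2, %3}, [%4];"
                 : "=r"(v.x), "=r"(v.y), "=r"(v.z), "=r"(v.w)
                 : "l"(ptr));
    return v;
}

__device__ __forceinline__ void store_streaming(int4* ptr, int4 v) {
    asm volatile("st.global.cs.v4.s32 [%0], {%1, %2, %3, %4};"
                 :
                 : "l"(ptr), "r"(v.x), "r"(v.y), "r"(v.z), "r"(v.w)
                 : "memory");
}

// Copy one communication chunk with 16-byte streaming accesses.
// `chunk_int4s` (chunk size / 16 bytes) is the kind of parameter one would auto-tune.
__global__ void copy_chunk_streaming(const int4* __restrict__ src,
                                     int4* __restrict__ dst,
                                     int chunk_int4s) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < chunk_int4s;
         i += gridDim.x * blockDim.x) {
        store_streaming(dst + i, load_streaming(src + i));
    }
}
```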