" e7 d- s; K1 d# a: g8 x% [7 U在论文里,这是第3.2.2节的内容
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to the specific GPUs that host its target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
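As a rough illustration of the "dynamically adjusted warps" idea, here is a largest-remainder proportional split in plain Python. This is my own sketch, not DeepSeek's kernel code: the task names come from the paper's dispatch description, but the warp budget and byte counts are made-up assumptions, and on a real GPU this decision happens inside the kernel itself.

```python
def allocate_warps(workload, total_warps):
    """Split `total_warps` across communication tasks in proportion to
    their measured workload (e.g. bytes currently queued per task).
    Largest-remainder rounding keeps the total exactly `total_warps`."""
    total = sum(workload.values())
    ideal = {t: total_warps * w / total for t, w in workload.items()}
    alloc = {t: int(share) for t, share in ideal.items()}  # floor
    # Hand leftover warps to the tasks with the largest fractional parts.
    leftover = total_warps - sum(alloc.values())
    for t in sorted(ideal, key=lambda t: ideal[t] - alloc[t],
                    reverse=True)[:leftover]:
        alloc[t] += 1
    return alloc

# The three dispatch-side tasks named in the paper; byte counts are invented.
load = {"ib_send": 6_000_000, "ib_to_nvlink_fwd": 3_000_000,
        "nvlink_recv": 1_000_000}
print(allocate_warps(load, total_warps=10))
# → {'ib_send': 6, 'ib_to_nvlink_fwd': 3, 'nvlink_recv': 1}
```

The only point of the sketch is that the warp split follows the observed traffic rather than being fixed per task.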
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine memory-copy scenario that 唐家山老师 described in his log. Generally, GPUs within a single node talk over NVLink, while GPUs across nodes rely on the IB network; but NVLink is about 3.2 times as fast as IB, so some optimization is needed to arrive at a better transmission strategy. This is a complete, end-to-end scheme.
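The 4-node cap can be sketched as a toy routing function. This is my own illustration under assumed sizes, not the actual gating code: I assume 8 nodes and 256 routed experts laid out contiguously across nodes, and I ignore the per-GPU layout; DeepSeek-V3's real topology and gating algorithm differ.

```python
import random

NUM_NODES = 8        # assumed cluster size, for illustration only
NUM_EXPERTS = 256    # assumed routed-expert count
TOP_K = 8            # routed experts selected per token
MAX_NODES = 4        # node cap from the paper
EXPERTS_PER_NODE = NUM_EXPERTS // NUM_NODES

def route_token(scores):
    """Pick TOP_K experts for one token, restricted to the MAX_NODES
    nodes with the highest aggregate affinity. Returns the chosen
    experts and the number of cross-node (IB) transfers: one per
    distinct target node, since NVLink fans the token out inside a node."""
    node_of = lambda e: e // EXPERTS_PER_NODE
    node_score = [0.0] * NUM_NODES
    for e, s in enumerate(scores):
        node_score[node_of(e)] += s
    allowed = set(sorted(range(NUM_NODES), key=node_score.__getitem__,
                         reverse=True)[:MAX_NODES])
    candidates = [e for e in range(NUM_EXPERTS) if node_of(e) in allowed]
    chosen = sorted(candidates, key=scores.__getitem__, reverse=True)[:TOP_K]
    ib_transfers = len({node_of(e) for e in chosen})
    return chosen, ib_transfers

random.seed(0)
chosen, ib = route_token([random.random() for _ in range(NUM_EXPERTS)])
# IB carries the token at most MAX_NODES times, regardless of TOP_K.
```

Note how the IB hop count is bounded by the node cap rather than by the expert count: that is why the paper can say TOP_K could grow to roughly 13 (4 nodes × 3.2 experts/node) at the same communication cost.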
My understanding is that PTX is used here to tailor thread execution more precisely, reducing the crosstalk between communication-chunk allocation and transfer on one side and other work on the GPU on the other.
The point is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
By analogy: it is as if you discovered that the NIC driver, when copying certain memory blocks, ends up serialized with the application's threads and loses efficiency, so instead of going through the OS-defined interface to the NIC driver, you optimize directly against the instruction set the NIC itself supports.