In the paper, this is Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to the specific GPUs that host its target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale this number up to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
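The two-hop routing described above can be sketched in plain Python. This is only an illustration of the scheme, not the actual GPU kernel; the function name `plan_dispatch` and the `(node, local_gpu)` representation are my own assumptions for the sketch. The key point it demonstrates: a token crosses IB only once per target node (landing on the GPU with the same in-node index), and any further fan-out happens over NVLink.

```python
# Illustrative sketch (not the actual kernel) of the two-hop dispatch
# routing: one IB transfer per distinct target node, then NVLink fan-out.

def plan_dispatch(src_gpu_index, target_experts):
    """target_experts: list of (node, local_gpu) pairs hosting the token's
    routed experts. Returns (ib_sends, nvlink_forwards) as lists of
    (node, local_gpu) destinations."""
    nodes = sorted({node for node, _ in target_experts})
    # The paper limits each token to at most 4 target nodes to cap IB traffic.
    assert len(nodes) <= 4, "each token may be dispatched to at most 4 nodes"
    # One IB transfer per target node, landing on the same in-node index.
    ib_sends = [(node, src_gpu_index) for node in nodes]
    # On each target node, forward over NVLink to the experts' GPUs
    # (no NVLink hop needed when IB already delivered to the right GPU).
    nvlink_forwards = [(node, gpu) for node, gpu in target_experts
                       if gpu != src_gpu_index]
    return ib_sends, nvlink_forwards
```

For example, a token on local GPU 2 routed to experts on nodes {0, 1, 3} costs only three IB sends regardless of how many experts those nodes host, which is why up to 13 experts (4 nodes × 3.2 experts/node) fit in the same communication budget as 8.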
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
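The paper only says warp counts are "dynamically adjusted according to the actual workload"; it does not give the policy. A minimal sketch of one plausible policy, proportional allocation with every task guaranteed at least one warp, might look like the following. The function name, the task names, and the leftover-handout rule are all my assumptions, not DeepSeek's implementation.

```python
# Illustrative sketch: split a fixed warp budget among the three
# dispatch-side tasks in proportion to their pending workload.

def allocate_warps(workloads, total_warps):
    """workloads: dict mapping task name -> pending bytes.
    Returns dict task -> warp count, summing to total_warps
    (assuming total_warps >= len(workloads))."""
    total = sum(workloads.values())
    # Floor of each proportional share, but never starve a task entirely.
    alloc = {task: max(1, int(total_warps * w / total))
             for task, w in workloads.items()}
    # Hand leftover warps to the most heavily loaded tasks first.
    leftover = total_warps - sum(alloc.values())
    for task in sorted(workloads, key=workloads.get, reverse=True):
        if leftover <= 0:
            break
        alloc[task] += 1
        leftover -= 1
    return alloc
```

With, say, 32 warps and dispatch workloads skewed toward IB sending, the IB-send warps get the largest share while NVLink receiving still keeps enough warps to drain incoming traffic.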
In plain terms, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the machine-to-machine copy scenario described in 唐家山's blog post. Generally, GPUs within a single machine communicate over NVLink, while multi-machine multi-GPU setups rely on the IB network; but NVLink's bandwidth is 3.2 times that of IB, so some optimization is needed to get a better transfer strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here to tailor thread execution more precisely, reducing interference between the allocation and transfer of communication chunks.

The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

As an analogy: it is as if you discovered that the NIC driver, when copying certain memory blocks, serializes with the application's thread execution and hurts efficiency, so you bypass the OS-defined interface to the NIC driver and optimize directly with the instruction set the NIC itself supports.