In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
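To make the warp-specialization scheme in the second paragraph easier to picture, here is a minimal CUDA sketch of the dispatch side. It is my own illustration rather than the paper's kernel: the role functions `ib_send`, `forward_ib_to_nvlink`, and `nvlink_receive` are hypothetical placeholders, and the warp split is passed in as fixed launch arguments, whereas the real kernels rebalance warps dynamically according to workload.

```cpp
// Minimal sketch of warp specialization inside one communication channel.
// The three role functions are hypothetical placeholders; the real kernels
// drive IB and NVLink transfers and rebalance the warp split dynamically.
#include <cuda_runtime.h>

__device__ void ib_send(int lane)              { /* push chunks onto the IB queue   */ }
__device__ void forward_ib_to_nvlink(int lane) { /* relay arrived chunks over NVLink */ }
__device__ void nvlink_receive(int lane)       { /* copy chunks into local buffers   */ }

// One block per channel in this sketch (the paper partitions 20 SMs into 10 channels).
__global__ void dispatch_channel(int warps_for_send, int warps_for_forward) {
    const int warp_id = threadIdx.x / warpSize;   // which warp am I?
    const int lane    = threadIdx.x % warpSize;   // which lane within the warp?

    // Warps are partitioned by role; the split points would normally be
    // tuned at runtime according to the measured workload of each stage.
    if (warp_id < warps_for_send) {
        ib_send(lane);                            // role (1): IB sending
    } else if (warp_id < warps_for_send + warps_for_forward) {
        forward_ib_to_nvlink(lane);               // role (2): IB-to-NVLink forwarding
    } else {
        nvlink_receive(lane);                     // role (3): NVLink receiving
    }
}

int main() {
    // Example launch: 10 channels, 8 warps (256 threads) each, split 3 / 3 / 2.
    dispatch_channel<<<10, 256>>>(3, 3);
    cudaDeviceSynchronize();
    return 0;
}
```

The point is simply that all three roles live inside one kernel and are chosen by warp index, which is how a small fixed budget of 20 SMs can carry the whole all-to-all.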
In plain terms, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same kind of scenario as the two-machine copy described in 唐家山老师's log. Normally, GPUs within one machine talk over NVLink while GPUs across machines rely on the IB network, but NVLink's bandwidth is about 3.2 times that of IB, so some optimization is needed to arrive at a better transfer strategy. What the paper describes is a complete, end-to-end scheme.
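Concretely, the "at most 4 nodes per token" rule and the two-hop IB-then-NVLink path from the excerpt can be sketched on the host side roughly as follows. This is my own illustration, not code from the paper: it assumes 8 GPUs per node and a contiguous expert layout, and `plan_dispatch`, `Hop`, and the constants are invented for the example.

```cpp
// Sketch of the two-hop dispatch path: one IB transfer per target node,
// then NVLink fan-out inside the node. Layout constants are illustrative only.
#include <cstdio>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE    = 8;
constexpr int EXPERTS_PER_GPU  = 8;   // illustrative layout, not from the paper
constexpr int EXPERTS_PER_NODE = GPUS_PER_NODE * EXPERTS_PER_GPU;

struct Hop {
    int target_node;            // IB hop: source GPU -> same in-node index on this node
    std::vector<int> gpu_ids;   // NVLink hop: forward to these GPUs inside the node
};

// Group a token's routed experts by target node; each group costs one IB
// transfer, after which NVLink fans the token out to the expert GPUs.
std::vector<Hop> plan_dispatch(const std::vector<int>& expert_ids) {
    std::map<int, std::vector<int>> by_node;
    for (int e : expert_ids) {
        int node = e / EXPERTS_PER_NODE;
        int gpu  = (e % EXPERTS_PER_NODE) / EXPERTS_PER_GPU;
        by_node[node].push_back(gpu);
    }
    std::vector<Hop> plan;
    for (auto& [node, gpus] : by_node) plan.push_back({node, gpus});
    // The gating limits each token to at most 4 distinct nodes, so
    // plan.size() <= 4 and IB carries at most 4 copies of the token.
    return plan;
}

int main() {
    // A token routed to 8 experts that happen to live on 3 nodes:
    std::vector<int> experts = {3, 70, 75, 130, 140, 150, 160, 170};
    for (const Hop& h : plan_dispatch(experts)) {
        std::printf("IB -> node %d, then NVLink -> %zu GPU(s)\n",
                    h.target_node, h.gpu_ids.size());
    }
    return 0;
}
```

The number of IB hops equals the number of distinct target nodes (at most 4), and the NVLink fan-out inside each node is what lets a token reach an average of 3.2 experts per node without any extra IB traffic.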
My understanding is that PTX is used here to customize thread execution more precisely, so that allocating and transmitting communication chunks causes less crosstalk with everything else running on the SMs.
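As for what "customized PTX (Parallel Thread Execution) instructions" might look like at the source level: the paper does not say which instructions it customizes, but CUDA allows emitting PTX through inline assembly, and PTX loads and stores accept cache operators such as `.cs` (cache streaming, evict-first), which keep one-shot communication buffers from crowding useful data out of the L2 cache. The sketch below only illustrates that mechanism under those assumptions; it is not DeepSeek's actual kernel.

```cpp
// Illustrative only: inline PTX with the ".cs" (cache-streaming) operator,
// which marks data as evict-first so that copying a one-shot communication
// buffer pollutes the L2 cache as little as possible.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void streaming_copy(const float* __restrict__ src,
                               float* __restrict__ dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v;
        // ld.global.cs / st.global.cs: load and store with the streaming
        // (evict-first) cache policy instead of the default caching policy.
        asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(src + i));
        asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(dst + i), "f"(v));
    }
}

int main() {
    const int n = 1 << 20;
    float *src, *dst;
    cudaMalloc(&src, n * sizeof(float));
    cudaMalloc(&dst, n * sizeof(float));
    streaming_copy<<<(n + 255) / 256, 256>>>(src, dst, n);
    cudaDeviceSynchronize();
    cudaFree(src);
    cudaFree(dst);
    std::printf("done\n");
    return 0;
}
```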
The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
By analogy, it is like discovering that the NIC driver, when copying certain memory blocks, ends up serialized with the application's threads and loses efficiency, so you bypass the OS-defined interface to the NIC driver and optimize directly with the instruction set the NIC itself supports.