In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
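For reference, the warp-specialization pattern in that second paragraph can be pictured roughly as follows. This is only a sketch under my own assumptions, not the paper's actual kernels: the warp-role boundaries are fixed here for illustration (the real implementation adjusts them dynamically to the workload), and the *_loop helpers are hypothetical placeholders.

```cpp
// Sketch of warp specialization inside one dispatch "communication channel".
// The paper partitions 20 SMs into 10 channels; here a single thread block stands in
// for a channel. Warp-role boundaries and the *_loop helpers are hypothetical.
__global__ void dispatch_channel_kernel(/* token buffers, routing tables, queues ... */) {
    const int warp_id = threadIdx.x / 32;  // 32 threads per warp

    if (warp_id < 4) {
        // Warps 0-3 handle IB sending: push tokens bound for remote nodes toward the GPU
        // with the same in-node index on each target node.
        // ib_send_loop(...);
    } else if (warp_id < 8) {
        // Warps 4-7 handle IB-to-NVLink forwarding: as each token arrives over IB, forward
        // it immediately over NVLink to the local GPU hosting its target expert, without
        // waiting for tokens that arrive later.
        // ib_to_nvlink_forward_loop(...);
    } else {
        // Remaining warps handle NVLink receiving: collect forwarded tokens into the
        // target expert's input buffer.
        // nvlink_recv_loop(...);
    }
}
```

The combining kernel would mirror this with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation.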
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the machine-to-machine copy scenario described in 唐家山老师's log. Generally speaking, multiple GPUs within one machine talk over NVLink, while multi-machine, multi-GPU communication relies on the IB network; but NVLink's bandwidth is 3.2 times that of IB, so some optimization is needed to get a better transfer strategy. This is a whole, integrated scheme.
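To make that transfer strategy concrete, here is a minimal host-side sketch of how I picture the per-token dispatch plan (my own illustration, not code from the paper; GPUS_PER_NODE, NodePlan, and plan_dispatch are assumed names): the token crosses IB at most once per target node, landing on the GPU with the sender's in-node index, and is then fanned out over NVLink inside that node.

```cpp
#include <cassert>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE = 8;        // assumption: 8 GPUs per node
constexpr int MAX_NODES_PER_TOKEN = 4;  // cap from the paper: at most 4 target nodes

struct NodePlan {
    int ib_target_gpu;                // remote GPU with the sender's in-node index (IB hop)
    std::vector<int> nvlink_targets;  // GPUs on that node hosting this token's experts
};

// expert_gpus: global ids of the GPUs hosting this token's routed experts
std::map<int, NodePlan> plan_dispatch(const std::vector<int>& expert_gpus, int src_gpu) {
    const int src_local = src_gpu % GPUS_PER_NODE;  // sender's in-node index
    std::map<int, NodePlan> per_node;               // target node id -> plan
    for (int g : expert_gpus) {
        const int node = g / GPUS_PER_NODE;
        NodePlan& p = per_node[node];
        p.ib_target_gpu = node * GPUS_PER_NODE + src_local;  // one IB transfer per node
        p.nvlink_targets.push_back(g);                       // NVLink fan-out inside node
    }
    // The gating algorithm limits each token to at most 4 target nodes, so IB carries at
    // most 4 copies of the token no matter how many experts it is routed to.
    // (Experts on the sender's own node would need no IB hop; omitted for brevity.)
    assert(per_node.size() <= MAX_NODES_PER_TOKEN);
    return per_node;
}
```

Under such a plan, growing from 8 routed experts toward roughly 13 (4 nodes × 3.2 experts per node on average) only adds NVLink traffic, which is why the paper says the communication cost over IB stays the same.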
My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between the communication-chunk transfers and the other compute kernels.
The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
As an analogy, it is like discovering that the NIC driver, when copying particular memory blocks between machines, ends up serializing with the application's threads and hurting efficiency, and so instead of going through the OS-defined interface to the NIC driver, you optimize by using the instruction set the NIC itself supports.
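As a concrete illustration of "PTX inside CUDA rather than instead of CUDA", here is a toy kernel that issues custom PTX through asm volatile. The .cs ("cache streaming", evict-first) qualifier is just one well-known cache-policy hint that keeps bulk copies from crowding the L2 cache; I am not claiming it is the exact instruction DeepSeek uses, only showing the mechanism.

```cpp
// An ordinary CUDA kernel; the only "custom PTX" is the two inline asm statements.
// The .cs qualifier marks the accesses as streaming (evict-first), so a large
// communication copy does not push compute kernels' data out of the L2 cache.
__global__ void copy_streaming(const float* __restrict__ src,
                               float* __restrict__ dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v;
        // Streaming load: do not keep the cache line resident.
        asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(src + i));
        // Streaming store: likewise evict-first.
        asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(dst + i), "f"(v));
    }
}
```

The kernel is still written, compiled, and launched entirely through CUDA; the inline PTX is just a way to spell out low-level details, such as the cache-eviction policy, explicitly.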