DeML OS Daily — Latest Frontier Analysis
Explore Frontier
02.23
2026
Mon
📄
Paper
SeedFlood: A Step Toward Scalable Decentralized Training of LLMs https://arxiv.org/abs/2602.18181
Jihun Kim · Gossip · Zeroth-Order Optimization

Notes

DeML OS Q & A
Deep Dive 💬
😇
What main problem does SeedFlood aim to solve in decentralized training?
It targets the communication bottleneck. In traditional decentralized training, per-message communication cost scales linearly with model size, which hinders training of large models such as LLMs. SeedFlood reduces each message to a constant size, making communication overhead negligible and independent of model dimension.
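A rough sense of scale (illustrative numbers only, not figures from the paper): broadcasting a dense fp16 update for a 7B-parameter model costs gigabytes per message, while a seed-plus-scalar message stays a handful of bytes regardless of model size.

```python
# Back-of-the-envelope payload comparison (hypothetical sizes, not from the paper)
n_params = 7_000_000_000           # e.g. a 7B-parameter model
dense_bytes = n_params * 2         # full fp16 update vector: ~14 GB per message
seed_bytes = 8 + 4                 # 64-bit seed + one fp32 scalar: 12 bytes

# The seed-based message is ~1e9x smaller, and its size does not
# grow when the model grows.
print(f"dense: {dense_bytes:,} B, seed: {seed_bytes} B, "
      f"ratio: {dense_bytes / seed_bytes:.2e}")
```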
😎
😊
How does SeedFlood use "zeroth-order updates" and "seed-reconstructible" properties to reduce communication?
Each zeroth-order update vector can be exactly reconstructed from a small random seed fed to a deterministic generator. Clients therefore broadcast only this constant-sized seed (plus a scalar), and receivers regenerate the full high-dimensional update locally, avoiding transmission of massive model update vectors.
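The mechanism can be sketched in a few lines. Below is a minimal SPSA-style two-point zeroth-order step, assuming NumPy and a toy quadratic loss; the function names and hyperparameters are illustrative, not SeedFlood's actual implementation. The key point is that sender and receiver derive the identical perturbation from the shared seed.

```python
import numpy as np

def zo_update(params, loss_fn, seed, eps=1e-3, lr=1e-4):
    """One zeroth-order (SPSA-style) step. The perturbation z is fully
    determined by `seed`, so the whole update is summarized by the
    constant-size message (seed, scalar). Hypothetical sketch."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)  # perturbation direction
    # Two-point directional derivative estimate along z
    g_scalar = (loss_fn(params + eps * z) - loss_fn(params - eps * z)) / (2 * eps)
    return params - lr * g_scalar * z, (seed, g_scalar)

def reconstruct(params, message, lr=1e-4):
    """Receiver rebuilds the same update from (seed, scalar) alone,
    without ever seeing the high-dimensional vector."""
    seed, g_scalar = message
    z = np.random.default_rng(seed).standard_normal(params.shape)
    return params - lr * g_scalar * z

# Sender and receiver end up with identical parameters.
theta = np.zeros(5)
loss = lambda p: np.sum((p - 1.0) ** 2)
theta_new, msg = zo_update(theta, loss, seed=42)
assert np.allclose(theta_new, reconstruct(theta, msg))
```

The message `msg` is two numbers, so its size is independent of the parameter count — this is the "seed-reconstructible" property the answer above describes.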
😎
🤓
What does SeedFlood compromise? What are its scenarios and limitations?
Compromises: (1) slower convergence, requiring more iterations; (2) variance from the stochastic gradient estimate, which can affect training stability. Scenarios: bandwidth-limited decentralized training of massive models (LLMs), complex network topologies, or settings with strong privacy requirements. Limitations: it relies on zeroth-order optimizers, so it is not suited to every loss function or to tasks demanding very high precision.
😎