Compromises: 1) slower convergence, requiring more iterations; 2) variance introduced by stochastic gradient estimation, which can affect training stability. Scenarios: bandwidth-limited decentralized training of massive models (e.g., LLMs), complex network conditions, or settings with strict privacy requirements. Limitations: the approach depends heavily on zeroth-order optimizers, so it is not suited to every loss function or to tasks demanding very high precision.
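To make the bandwidth/convergence trade-off concrete, below is a minimal sketch of a two-point SPSA-style zeroth-order update (the MeZO-style estimator commonly used in this setting). The function name `spsa_step` and the toy quadratic loss are illustrative assumptions, not the source's implementation; the key property it shows is that a worker only needs a random seed and a scalar loss difference to reproduce the update, rather than a full gradient vector.

```python
# Sketch of a two-point SPSA zeroth-order update (illustrative, not the
# source's exact method). Only `seed` and the scalar `grad_scale` would
# need to cross the network: peers regenerate z from the shared seed.
import numpy as np

def spsa_step(params, loss_fn, seed, eps=1e-3, lr=1e-2):
    """One zeroth-order update: perturb, measure loss twice, rescale."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)           # shared perturbation
    loss_plus = loss_fn(params + eps * z)           # forward pass only,
    loss_minus = loss_fn(params - eps * z)          # no backprop needed
    grad_scale = (loss_plus - loss_minus) / (2 * eps)  # scalar estimate
    return params - lr * grad_scale * z             # descend along z

if __name__ == "__main__":
    # Toy quadratic loss with minimum at w = [3, 3, 3, 3].
    loss = lambda w: float(np.sum((w - 3.0) ** 2))
    w = np.zeros(4)
    for step in range(2000):
        w = spsa_step(w, loss, seed=step)
    print(w)  # drifts slowly toward [3, 3, 3, 3]
</parameter>
```

The sketch also makes the compromises visible: each step moves along a single random direction, so many more iterations are needed than with backpropagation, and the randomness of `z` is exactly the variance source noted above.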