*Result*: Communication-Efficient Distributed Sparse Learning with Oracle Property and Geometric Convergence.
This article introduces two communication-efficient distributed non-convex sparse learning algorithms. Our approach allows non-convexity in both the loss function and the penalty; because of this non-convexity, local minimizers need not be unique, so it is essential to design an algorithm that provably converges to a local solution with the desired statistical properties. To meet this challenge, we relax the non-convexity of the penalty through a local linear approximation. To handle non-convexity in the loss, we employ a proximal homotopy method, starting with a relatively large regularization parameter that is gradually shrunk toward the target value. On the theoretical side, we provide an explicit statistical rate for the approximate solution at each iteration. We further establish oracle properties and asymptotic normality, giving the first asymptotic-normality result for the approximate solution produced by the algorithm. The theory thus offers practical guidance on setting the optimization error when conducting statistical inference with the approximate estimator. Computationally, the algorithm converges at a geometric rate within each middle loop. Comprehensive numerical results support the theory and demonstrate the practical applicability of the proposed algorithms in both the theoretical and computational domains. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work. [ABSTRACT FROM AUTHOR]
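The homotopy-plus-LLA strategy sketched in the abstract can be illustrated with a minimal single-machine toy implementation. This is not the paper's distributed, communication-efficient algorithm; it is a hedged sketch under assumed choices: a least-squares loss, a SCAD penalty whose derivative supplies the local-linear-approximation weights, an outer homotopy loop that shrinks the regularization parameter geometrically toward the target, a middle LLA loop, and an inner proximal-gradient loop. All function names and parameter values (`eta`, `n_lla`, `n_prox`) are illustrative assumptions, not quantities from the article.

```python
import numpy as np

def scad_derivative(beta, lam, a=3.7):
    # Derivative of the SCAD penalty (Fan & Li, 2001); these values
    # serve as the per-coordinate weights in the LLA relaxation.
    ab = np.abs(beta)
    return np.where(ab <= lam, lam,
                    np.maximum(a * lam - ab, 0.0) / (a - 1.0))

def soft_threshold(z, t):
    # Proximal operator of the weighted-l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def homotopy_lla(X, y, lam_target, eta=0.7, n_lla=3, n_prox=200):
    """Toy proximal-homotopy solver with local linear approximation.

    Outer loop: shrink the regularization parameter lam geometrically
    toward lam_target. Middle loop: refresh the LLA weights from the
    SCAD derivative. Inner loop: proximal-gradient steps on the
    resulting weighted lasso subproblem.
    """
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for (1/2n)||y - Xb||^2
    beta = np.zeros(p)
    lam = np.max(np.abs(X.T @ y)) / n      # large initial parameter
    while True:
        lam = max(lam * eta, lam_target)
        for _ in range(n_lla):
            w = scad_derivative(beta, lam)             # LLA weights
            for _ in range(n_prox):
                grad = X.T @ (X @ beta - y) / n
                beta = soft_threshold(beta - step * grad, step * w)
        if lam <= lam_target:
            return beta
```

Because the SCAD derivative vanishes on large coefficients, the final LLA subproblem leaves strong signals essentially unpenalized, which is the mechanism behind the oracle-type behavior the article analyzes rigorously.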
Copyright of Journal of the American Statistical Association is the property of Taylor & Francis Ltd and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)