Sparse optimization comprises a class of fundamental problems in the field of optimization and has important applications in science and engineering. Designing efficient algorithms for sparse optimization has long been an important topic and research direction in numerical optimization. This paper introduces a class of reduction methods for solving large-scale sparse optimization problems. By exploiting the sparsity of the gradient, the framework screens out most of the variables that do not need to be updated. Specifically, the original large-scale problem is decomposed into a series of small-scale subproblems, which greatly reduces the cost of matrix-vector multiplications in the iterative solution of each subproblem. Meanwhile, the reduced framework yields subproblems with better condition numbers, which greatly improves the stability and efficiency of the algorithm. Based on this framework, our algorithm can efficiently solve sparse-signal compressed sensing problems with tens of millions of variables and samples.
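To make the reduction idea concrete, the following is a minimal illustrative sketch, not the paper's actual method: it assumes an l1-regularized least-squares (compressed sensing) model, selects a working set from the gradient of the smooth part, and runs a standard proximal-gradient (ISTA) inner solver on the reduced subproblem. All names (e.g., `reduced_ista`) and the choice of inner solver are hypothetical.

```python
import numpy as np

def reduced_ista(A, b, lam, n_outer=20, n_inner=50):
    """Illustrative gradient-screening reduction for the assumed model
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_outer):
        g = A.T @ (A @ x - b)                     # full gradient of the smooth part
        # Working set: current support plus coordinates violating the
        # l1 optimality condition |g_i| <= lam at x_i = 0.
        work = np.flatnonzero((np.abs(g) > lam) | (x != 0))
        if work.size == 0:
            break
        A_w, x_w = A[:, work], x[work]            # data of the reduced subproblem
        step = 1.0 / np.linalg.norm(A_w, 2) ** 2  # 1/L for the reduced block
        for _ in range(n_inner):                  # ISTA on the small subproblem
            z = x_w - step * A_w.T @ (A_w @ x_w - b)
            x_w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        x[:] = 0.0
        x[work] = x_w                             # scatter the reduced solution back
    return x

# Toy usage on synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 1000))
x_true = np.zeros(1000); x_true[:10] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = reduced_ista(A, b, lam=0.1 * np.max(np.abs(A.T @ b)))
print("recovered support size:", np.count_nonzero(x_hat))
```

The savings in this sketch come from the inner iterations touching only the columns `A[:, work]`, so each inner matrix-vector product costs O(m|work|) instead of O(mn); the better conditioning of the reduced subproblem mentioned above is a property of the paper's framework, not demonstrated by this toy code.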