With the intrinsic sparsity arising from data modeling and the demand for dimensionality reduction in large-scale computation, sparse optimization has attracted significant attention from both academia and industry, with extensive efforts devoted to a variety of applications including compressed sensing, signal and image processing, machine learning, and neural networks. Since sparsity characterizations are in general nonconvex and nonsmooth, and the underlying optimization problems in practice are of huge size, traditional optimization approaches face major challenges, in both theory and algorithms, in handling large-scale sparse optimization. In this talk, we focus on how to appropriately exploit the inherent sparse and dimension-reducible structures of the optimization models through in-depth nonsmooth and variational analysis, and we propose two types of Newton-type methods with fast theoretical convergence and superior numerical performance, in terms of computation time and solution accuracy, for solving large-scale sparse optimization models.
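For concreteness, one canonical instance of such a nonconvex, nonsmooth model (an illustrative example, not necessarily the specific formulation treated in the talk) is the \(\ell_0\)-regularized least-squares problem from compressed sensing:

\[
\min_{x \in \mathbb{R}^n} \ \frac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_0,
\qquad \|x\|_0 := \#\{\, i : x_i \neq 0 \,\},
\]

where \(A \in \mathbb{R}^{m \times n}\) (typically with \(m \ll n\)) is a measurement matrix, \(b \in \mathbb{R}^m\) the observation, and \(\lambda > 0\) trades data fidelity against sparsity; the counting regularizer \(\|x\|_0\) is precisely the kind of nonconvex, nonsmooth sparsity characterization referred to above.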