This paper proposes an improved quasi-Newton penalty decomposition algorithm for minimizing continuously differentiable, possibly nonconvex, functions over sparse symmetric sets. The method approximately solves a sequence of penalty subproblems via a two-block decomposition scheme: the first block admits a closed-form solution free of sparsity constraints, while the second block is handled by an efficient sparse projection onto the symmetric feasible set. Under a new assumption on the gradient of the objective function, weaker than global Lipschitz continuity from the origin, we establish that every accumulation point of the outer iterates is basic feasible and satisfies the cardinality-constrained Mordukhovich stationarity conditions. To ensure robustness and efficiency in finite-precision arithmetic, the algorithm
incorporates several practical enhancements, including a line search strategy based on either backtracking or extrapolation and four inexpensive diagonal Hessian approximations derived from differences of previous iterates and gradients or from eigenvalue-distribution information. Numerical experiments on a diverse benchmark of 30 synthetic and data-driven test problems, including machine-learning datasets from the UCI repository and sparse symmetric instances with dimensions ranging from 10 to 500, demonstrate that the proposed algorithm is competitive with several state-of-the-art methods in terms of efficiency, robustness, and strong stationarity.