This paper studies a class of double-loop (inner-outer) algorithms for convex composite optimization. For unconstrained problems, we develop a restarted accelerated composite gradient method that attains the optimal first-order complexity in both the convex and strongly convex settings. For linearly constrained problems, we introduce inexact augmented Lagrangian methods, namely a basic method and an outer-accelerated variant, and establish near-optimal first-order complexity bounds for both. These bounds follow from a unified analysis built on new inexact proximal point frameworks that accommodate relative and absolute inexactness, acceleration, and strongly convex objectives. Numerical experiments on LASSO and linearly constrained quadratic programs demonstrate the practical efficiency of the proposed methods.