Robust optimization (RO) is one of the key paradigms for solving optimization problems affected by uncertainty. The two principal approaches to RO, the robust counterpart method and the adversarial approach, potentially lead to excessively large optimization problems. For that reason, first-order approaches based on online convex optimization have been proposed as alternatives for large-scale problems (Ben-Tal et al. 2015; Kilinc-Karzan and Ho-Nguyen 2018). However, these methods are either stochastic in nature or involve a binary search for the optimal value. We propose deterministic first-order algorithms, based on a saddle-point Lagrangian reformulation, that avoid both of these issues. Our approach recovers the O(1/epsilon^2) convergence rate of the existing methods in the general case and achieves an improved O(1/epsilon) rate for problems whose constraints are affine in both the decision variables and the uncertainty. An experiment involving robust quadratic optimization demonstrates the numerical benefits of our approach.
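For context, the display below sketches the generic robust feasibility problem and the standard identity underlying a saddle-point Lagrangian reformulation of the kind the abstract refers to; the notation ($X$ for the feasible set, $g_i$ for the uncertain constraint functions, $\mathcal{U}_i$ for the uncertainty sets, $\Delta_m$ for the probability simplex) is illustrative and not taken from the paper itself:
\[
\min_{x \in X} \; \max_{1 \le i \le m} \; \max_{u_i \in \mathcal{U}_i} g_i(x, u_i)
\;=\;
\min_{x \in X} \; \max_{\lambda \in \Delta_m, \; u_i \in \mathcal{U}_i} \; \sum_{i=1}^m \lambda_i \, g_i(x, u_i),
\]
so that enforcing robust feasibility ($g_i(x, u_i) \le 0$ for all $u_i \in \mathcal{U}_i$ and all $i$) reduces to driving the right-hand side below zero, which, when each $g_i$ is convex in $x$ and concave in $u_i$, is a convex-concave saddle-point problem amenable to deterministic first-order methods.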