In stochastic programming and stochastic-dynamic programming, discretization of random model parameters is often unavoidable. We propose a quasi-Newton learning algorithm that discretizes multi-dimensional, continuous discrete-time Markov processes to scenario lattices by minimizing the Wasserstein distance between the unconditional distributions of the process and the lattice. Scenario lattices enable accurate discretization of the conditional distributions of Markov processes while avoiding the exponential growth in the number of nodes typically observed with scenario trees. Unlike gradient descent methods, the proposed quasi-Newton method does not require manually calibrated stepsize rules; instead, it adjusts stepsizes automatically by scaling updates with an approximation of the Hessian of the distance function. Numerical results for various stochastic decision problems demonstrate that the proposed method achieves more accurate discretizations of stochastic processes and lower optimality gaps than existing first-order methods. Furthermore, the method can numerically handle higher-order Wasserstein distances, which is advantageous when discretizing high-dimensional processes.
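To make the idea of Hessian-scaled updates concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) for the one-dimensional, second-order Wasserstein case. The objective is the empirical squared Wasserstein-2 distance between a sample of the distribution and a set of lattice nodes; the gradient with respect to node i is proportional to the cell probability times the offset from the cell mean, and the Hessian diagonal is proportional to the cell probability alone, so dividing the gradient by the Hessian approximation yields a stepsize-free fixed-point update. All names (`quantize_w2`, `n_nodes`, `n_iter`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_w2(samples, n_nodes, n_iter=50):
    """Newton-like fixed-point iteration minimizing the empirical
    Wasserstein-2 distance between `samples` and `n_nodes` points.
    Illustrative sketch only, not the method from the paper."""
    # Initialize nodes from distinct random samples.
    nodes = rng.choice(samples, size=n_nodes, replace=False)
    for _ in range(n_iter):
        # Assign each sample to its nearest node (Voronoi cells).
        idx = np.abs(samples[:, None] - nodes[None, :]).argmin(axis=1)
        for i in range(n_nodes):
            cell = samples[idx == i]
            if cell.size:
                # Gradient of the W2 objective w.r.t. node i is
                # 2*p_i*(node_i - mean(cell)); the Hessian diagonal
                # is 2*p_i.  Scaling the gradient by the inverse
                # Hessian moves the node directly to the cell mean,
                # so no manually tuned stepsize is needed.
                nodes[i] = cell.mean()
    return np.sort(nodes)

samples = rng.normal(size=20_000)
nodes = quantize_w2(samples, n_nodes=5)
```

In more than one dimension, and for Wasserstein distances of other orders, the Hessian approximation no longer reduces to a closed-form cell mean, which is where a genuine quasi-Newton scheme like the one proposed becomes necessary.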