Planning under uncertainty is a frequently encountered problem, and noisy observations are a typical source of that uncertainty. Such problems can be formulated as Partially Observable Markov Decision Processes (POMDPs). However, solving a POMDP is nontrivial and can be computationally expensive when the state, action, observation, and latent state spaces are continuous. In this work, we consider a restricted POMDP in which we remove the dependency of the latent state on the controllable action. Our proposed approach uses an Extended Kalman Filter (EKF) for latent state estimation and a particle filter for goal estimation, in conjunction with an iterative Linear Quadratic Regulator (iLQR) that finds an optimal trajectory minimizing the cumulative model cost. To evaluate the feasibility and optimality of our solution, we compare this approach against the naive strategy of planning only toward the goal rolled out from the agent's immediate observation.
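The two estimators named above might be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear dynamics, Gaussian noise models, and function names are assumptions, and the iLQR planner is omitted.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, A, B, C, Q, R):
    """One EKF predict/update cycle for the latent state.
    A toy linear-Gaussian case: x' = Ax + Bu + w, z = Cx + v."""
    # Predict latent state mean and covariance forward one step
    mu_pred = A @ mu + B @ u
    Sigma_pred = A @ Sigma @ A.T + Q
    # Correct with the noisy observation z
    S = C @ Sigma_pred @ C.T + R          # innovation covariance
    K = Sigma_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - C @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred
    return mu_new, Sigma_new

def pf_step(particles, weights, obs, obs_std):
    """One particle-filter update for the goal estimate.
    Assumes an isotropic Gaussian observation likelihood."""
    # Reweight each goal hypothesis by how well it explains the observation
    lik = np.exp(-0.5 * np.sum((particles - obs) ** 2, axis=1) / obs_std ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    # Resample when the effective sample size drops below half the particles
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

In the full approach, the EKF posterior over the latent state and the particle posterior over the goal would feed the cost function that iLQR optimizes over the planning horizon.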