We present a filter line-search method for solving general nonlinear and nonconvex optimization problems. The method is of the filter variety, but uses a robust (always feasible) subproblem based on an exact penalty function to compute a search direction. This contrasts with traditional filter methods, which use a (separate) restoration phase designed to reduce infeasibility until a feasible subproblem is obtained. An advantage of our approach is therefore that every trial step is computed from subproblems that seek to reduce both the constraint violation and the objective function. Moreover, our step computation involves subproblems that are computationally tractable and utilize second-derivative information when it is available. The formulation of each subproblem and the choice of the weighting parameter are crucial for obtaining an efficient, robust, and practical method. Our strategy is based on steering methods designed for exact penalty functions, but fortified with a trial-step convexification scheme that ensures that a single quadratic optimization problem is solved per iteration. Moreover, we use local feasibility estimates that emerge during the steering process to define a new and improved margin (envelope) of the filter. Under common assumptions, we show that the iterates converge to a local first-order solution of the optimization problem from an arbitrary starting point.
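To make the filter mechanism concrete, the following is a minimal sketch of a generic filter acceptance test with a fixed margin. It is illustrative only: the function name, the pair representation `(v, f)` for constraint violation and objective value, and the margin constant `gamma` are assumptions, and the paper's improved margin based on local feasibility estimates would replace the simple proportional margin used here.

```python
def acceptable(point, filter_pairs, gamma=1e-3):
    """Return True if the trial pair (v, f) is acceptable to the filter.

    point        : (v, f) with v = constraint violation, f = objective value
    filter_pairs : list of (v_i, f_i) pairs stored in the filter
    gamma        : margin parameter (illustrative fixed value; the paper's
                   margin is instead built from local feasibility estimates)
    """
    v, f = point
    for v_i, f_i in filter_pairs:
        # The trial point must improve either infeasibility or the objective
        # by a margin proportional to the stored infeasibility v_i;
        # otherwise it is dominated by that filter entry and rejected.
        if not (v <= (1.0 - gamma) * v_i or f <= f_i - gamma * v_i):
            return False
    return True
```

For example, against a filter containing the single pair `(1.0, 5.0)`, a trial pair `(0.5, 10.0)` is acceptable because it sufficiently reduces infeasibility, while resubmitting `(1.0, 5.0)` itself is rejected by the margin.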