The tractability of optimization problems depends critically on structural properties of the objective function. Convexity guarantees that local solutions are globally optimal and enables polynomial-time algorithms under mild assumptions, yet many problems arising in modern applications, particularly in machine learning, are inherently nonconvex. Remarkably, a large class of such problems remains amenable to efficient optimization because of additional structure that weakens or generalizes convexity without forfeiting favorable algorithmic behavior. This paper surveys and systematizes notions of convexity and its generalizations, providing new comparative insights and explicit inclusion relationships among these function classes. We present a coherent taxonomy of function classes that generalize, strengthen, or relax convexity, consolidating definitions, equivalent characterizations, closure properties, and hierarchical relations that are currently scattered across the optimization, operations research, and machine learning literature. Particular emphasis is placed on quasar-convexity, a recently introduced geometric condition that captures structured nonconvexity while enabling, for many first-order methods, convergence guarantees comparable to those of convex optimization. Through explicit inclusion diagrams and systematic comparisons, we clarify the relationships among classical generalizations, geometric variants, regularity conditions, and partial convexity notions. The resulting “convexity zoo” provides a comprehensive reference for researchers seeking to understand and exploit structured nonconvexity in contemporary optimization.
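
For concreteness, the sketch below states one common formulation of quasar-convexity from the recent first-order methods literature (as used, for example, by Hinder, Sidford, and Sohoni); the parameter $\gamma$ and reference point $x^\star$ follow that formulation and are not notation fixed by this abstract. A differentiable function $f$ is called $\gamma$-quasar-convex with respect to a minimizer $x^\star$, for some $\gamma \in (0,1]$, if
\[
  f(x^\star) \;\ge\; f(x) + \frac{1}{\gamma}\,\langle \nabla f(x),\, x^\star - x \rangle
  \qquad \text{for all } x ,
\]
so that the case $\gamma = 1$ recovers star-convexity, and every differentiable convex function with minimizer $x^\star$ satisfies the condition as a special case.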