Title: Bilinear Classes: A Structural Framework for Provable Generalization in RL

Abstract: Tackling large state-action spaces is a central challenge in reinforcement learning (RL). Theoretically, there is a growing body of results showing how sample-efficient learning is possible in RL for particular model classes, e.g., State Aggregation, Linear MDPs, Linear Mixture MDPs, Block MDPs, FLAMBE, Reactive PSRs, Linear Bellman Complete models, and many more. This work introduces Bilinear Classes, a new structural framework that incorporates nearly all existing models in which a polynomial sample complexity is achievable and, notably, also includes new models, such as the Linear Q*/V* model, in which both the optimal Q-function and the optimal V-function are linear in some known feature space. Our main result provides an RL algorithm with polynomial sample complexity for Bilinear Classes; notably, this sample complexity is stated in terms of a reduction to the generalization error of an underlying supervised learning sub-problem.
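
As a concrete illustration of the Linear Q*/V* model mentioned in the abstract, the assumption can be sketched as follows; the feature maps phi, psi and the dimension d are generic placeholders introduced here for illustration, not notation fixed by the abstract:

```latex
% Linear Q*/V* model (sketch): both optimal value functions are linear
% in known feature maps. Here \phi : S \times A \to \mathbb{R}^d and
% \psi : S \to \mathbb{R}^d are known, while the parameter vectors
% w^\star, \theta^\star \in \mathbb{R}^d are unknown and must be learned.
Q^\star(s, a) = \langle w^\star, \phi(s, a) \rangle
\quad \text{and} \quad
V^\star(s) = \langle \theta^\star, \psi(s) \rangle
\qquad \text{for all } (s, a) \in S \times A.
```

Note that this is weaker than the Linear MDP assumption, which requires every Q-function (not just the optimal one) to be linear in the features; that gap is why the Linear Q*/V* model is highlighted as a new case covered by the framework.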