I show that the Parameterized Expectations Algorithm (PEA) can be naturally generalized via the bias-corrected Monte Carlo (bc-MC) operator, originally proposed for solving economic models with neural networks. When combined with a parameterized expectations approach, and under a linearity assumption on the conditional expectation, the gradient of the bc-MC loss function equals the gradient of the PEA loss in a neighborhood of the model’s solution. This yields a new variance-reduced computational approach to solving economic models, which I refer to as the bc-MC-PEA: it extends the PEA to multiple innovation draws for each draw of the state vector.
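As a rough illustration only (not the paper's implementation), the bc-MC idea of using M innovation draws per state draw can be sketched as a bias-corrected estimator of the squared conditional expectation of a residual, built from cross products of distinct draws rather than the naive squared sample mean. The function name `bc_mc_loss` and the (N, M) residual layout below are hypothetical conventions for this sketch:

```python
import numpy as np

def bc_mc_loss(f_vals):
    """Bias-corrected MC loss from an (N, M) array of residuals
    f(s_n, eps_m): N state-vector draws, M innovation draws per state."""
    M = f_vals.shape[1]
    row_sum = f_vals.sum(axis=1)
    row_sq = (f_vals ** 2).sum(axis=1)
    # Average of cross products f_i * f_j over distinct pairs i != j:
    # unlike the naive (sample mean)^2, this is an unbiased estimator
    # of E[f | s]^2, since independent draws are never squared together.
    per_state = (row_sum ** 2 - row_sq) / (M * (M - 1))
    return per_state.mean()
```

The closed-form `(row_sum**2 - row_sq) / (M*(M-1))` is algebraically identical to averaging all M(M-1) distinct cross products, which keeps the cost linear in M.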