Maria Strazzullo

Politecnico di Torino - Italy

Title: Reference-free and reference-guided reinforcement learning for evolve-filter regularization of convection-dominated flows

Abstract

This talk proposes a reinforcement learning (RL) framework for the dynamic selection of the filter parameter in Evolve–Filter (EF) regularization of incompressible turbulent flows. EF regularization is a common approach to alleviate the numerical oscillations that arise on coarse mesh discretizations which do not resolve the flow down to the Kolmogorov scale. Classically, the filter action is prescribed a priori through a single parameter, the filter radius, which is kept constant in time and often leads to over-diffusive results.
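
For orientation, a common realization of the EF step found in the literature combines a standard evolve step with a differential (Helmholtz) filter; the sketch below, with intermediate velocity v, filter radius delta, and filtered field \bar{v}, is an assumed prototype of the approach and not necessarily the exact formulation used in the talk:

\[
\text{Evolve: advance the Navier–Stokes equations on the coarse mesh to obtain } v^{n+1},
\]
\[
\text{Filter: } -\delta^{2}\,\Delta\overline{v}^{\,n+1} + \overline{v}^{\,n+1} = v^{n+1},
\qquad u^{n+1} := \overline{v}^{\,n+1}.
\]

In this prototype, the filter radius delta is precisely the parameter whose size governs how much dissipation the filter introduces.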

In contrast, the RL agent adaptively controls the filter radius in time, balancing numerical stability and physical accuracy without relying on heuristic choices. The approach is assessed on the flow past a cylinder (Re = 1000) and on decaying homogeneous turbulence (Re = 40000). Both reference-guided and reference-free reward formulations are considered. In the reference-guided setting, the agent is trained on direct numerical simulation (DNS) data over a limited time window and subsequently tested in extrapolation. In the reference-free setting, the reward is defined exclusively through physics-based indicators, removing the need for reference solutions and significantly reducing the computational cost.
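
To illustrate the two reward formulations, the hypothetical Python sketch below contrasts a reference-guided reward (mismatch with a DNS snapshot) with a reference-free one built from physics-based indicators; the specific indicators, weights, and function names are assumptions made for exposition, not the implementation presented in the talk.

```python
import numpy as np

def step_reward(u_sim, u_dns=None, w_stab=1.0, w_acc=1.0):
    """Illustrative per-step reward for an RL agent selecting the filter radius.

    u_sim : filtered velocity field produced by the Evolve-Filter step
    u_dns : matching DNS snapshot (reference-guided setting), or None
            for the reference-free setting.
    """
    if u_dns is not None:
        # Reference-guided: penalize the relative mismatch with the DNS snapshot.
        return -w_acc * np.linalg.norm(u_sim - u_dns) / np.linalg.norm(u_dns)

    # Reference-free: use only physics-based indicators, e.g. penalize
    # spurious oscillations (a crude high-frequency proxy via second
    # differences) while rewarding retained kinetic energy to discourage
    # over-dissipation.
    kinetic_energy = 0.5 * np.mean(u_sim**2)
    oscillation = np.mean(np.abs(np.diff(u_sim, n=2, axis=0)))
    return w_acc * kinetic_energy - w_stab * oscillation
```

In this sketch the reference-guided branch needs a stored DNS trajectory at every training step, while the reference-free branch is computed from the simulation state alone, which is the source of the cost reduction mentioned above.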

Results show that the RL-EF strategy prevents numerical instabilities while reducing the excessive dissipation of standard EF methods. The learned policies capture the relevant flow dynamics across scales, with the reference-free formulation performing comparably to the reference-guided one.