Interpreted machine learning in fluid dynamics: explaining relaminarisation events in wall-bounded shear flows

Journal of Fluid Mechanics (2022)

Abstract
Machine Learning (ML) is becoming increasingly popular in fluid dynamics. Powerful ML algorithms such as neural networks or ensemble methods are notoriously difficult to interpret. Here, we introduce the novel Shapley additive explanations (SHAP) algorithm (Lundberg & Lee, Advances in Neural Information Processing Systems, 2017, pp. 4765-4774), a game-theoretic approach that explains the output of a given ML model in the fluid dynamics context. We give a proof of concept concerning SHAP as an explainable artificial intelligence method providing useful and human-interpretable insight for fluid dynamics. To show that the feature importance ranking provided by SHAP can be interpreted physically, we first consider data from an established low-dimensional model based on the self-sustaining process (SSP) in wall-bounded shear flows, where each data feature has a clear physical and dynamical interpretation in terms of known representative features of the near-wall dynamics, i.e. streamwise vortices, streaks and linear streak instabilities. SHAP determines consistently that only the laminar profile, the streamwise vortex and a specific streak instability play a major role in the prediction. We demonstrate that the method can be applied to larger fluid dynamics datasets by a SHAP evaluation on plane Couette flow in a minimal flow unit focussing on the relevance of streaks and their instabilities for the prediction of relaminarisation events. Here, we find that the prediction is based on proxies for streak modulations corresponding to linear streak instabilities within the SSP. That is, the SHAP analysis suggests that the break-up of the self-sustaining cycle is connected with a suppression of streak instabilities.
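To make the attribution idea concrete, below is a minimal, hypothetical sketch (not the authors' actual pipeline) of how SHAP feature importances can be computed with the open-source shap package for a classifier that predicts relaminarisation events. The synthetic data, the feature names standing in for SSP-model amplitudes, and the choice of a tree-ensemble classifier are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's pipeline): SHAP attribution for a
# binary classifier predicting relaminarisation from flow features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical dataset: 9 SSP-like mode amplitudes per snapshot,
# label 1 if the flow relaminarises within a fixed horizon, else 0.
feature_names = [f"a{i}" for i in range(1, 10)]
X = rng.normal(size=(1000, 9))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 3]
     + 0.1 * rng.normal(size=1000) > 0).astype(int)

# Any probabilistic classifier works; a tree ensemble pairs with TreeExplainer.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Game-theoretic attribution: SHAP values decompose each prediction into
# per-feature contributions relative to the mean model output.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)

# Global importance ranking: mean absolute SHAP value per feature.
# Depending on the shap version, binary-classification output is either a
# list [class0, class1] or a 3-D array (samples, features, classes).
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(vals).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In this kind of ranking, features whose mean absolute SHAP value dominates are the ones the model relies on most, which is how the paper identifies the laminar profile, the streamwise vortex and a specific streak instability as the decisive predictors.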
Keywords
turbulent transition, machine learning, low-dimensional models