A Numerical Simulation of a Single Shock-Accelerated Particle
Univ Missouri
Abstract
[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Particle drag models, which capture macroscopic viscous and pressure effects, have been developed over the years for various flow regimes to enable cost-effective simulations of particle-laden flows. The relatively recent derivation by Maxey and Riley provided an exact equation of motion for spherical particles in a flow field based on the continuum assumption. Many models simplified from these equations have provided reasonable approximations; however, the sensitivity of particle-laden flows to particle drag demands a highly accurate drag model. To develop such a model, a 2D axisymmetric Navier-Stokes direct numerical simulation of a single particle in a transient, shock-driven flow field was conducted using the hydrocode FLAG. FLAG's capability to run arbitrary Lagrangian-Eulerian hydrodynamics coupled with solid mechanics models makes it an ideal code to capture the physics of the flow field around and inside the particle as it is shock-accelerated -- a challenging regime to study. The goal of this work is twofold: to provide a validation for FLAG's Navier-Stokes and heat diffusion solutions, and to provide a rationale for recent experimental particle drag measurements. It was found that the particle temperature and kinematic results agree closely with those predicted by well-established heat transfer and drag models, validating the numerical solutions. The rationale for the measurements, with this validation in mind, is that there is an experimental bias toward introducing smaller particles than expected.
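To illustrate the kind of reduced drag model the DNS kinematics are compared against, the sketch below integrates the momentum equation for a single particle relaxing toward a post-shock gas stream, using the Schiller-Naumann Reynolds-number correction to Stokes drag. This is not the paper's model: all gas and particle parameters are hypothetical placeholders, and the correlation and integrator are stand-ins chosen for clarity.

```python
import math

# Hypothetical post-shock gas state and particle properties (illustrative only):
rho_g = 1.9      # gas density behind the shock [kg/m^3]
mu_g = 1.8e-5    # gas dynamic viscosity [Pa*s]
u_g = 150.0      # post-shock gas velocity [m/s]
d_p = 1.0e-6     # particle diameter [m]
rho_p = 2500.0   # particle material density [kg/m^3]

m_p = rho_p * math.pi * d_p**3 / 6.0   # particle mass
A_p = math.pi * d_p**2 / 4.0           # particle frontal area

def drag_coefficient(re):
    """Schiller-Naumann correlation, commonly used for Re below ~1000."""
    if re < 1e-12:
        return 0.0
    return 24.0 / re * (1.0 + 0.15 * re**0.687)

def integrate(dt=1e-9, t_end=5e-6):
    """Explicit Euler integration of m_p dv/dt = 0.5 rho_g Cd A_p |u-v| (u-v)."""
    v_p, t = 0.0, 0.0
    while t < t_end:
        du = u_g - v_p                       # slip velocity
        re = rho_g * abs(du) * d_p / mu_g    # particle Reynolds number
        f_drag = 0.5 * rho_g * drag_coefficient(re) * A_p * abs(du) * du
        v_p += dt * f_drag / m_p
        t += dt
    return v_p
```

Comparing DNS-resolved particle velocity histories against a correlation like this one is what allows the measured lag to be attributed to particle-size bias rather than drag-model error.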
Key words
Particle-Laden Flows, Two-Phase Flow, Discrete Particle Simulation, Gas-Solid Flow