In the following set of posts I shall present what I find to be the most effective best practice guidelines for the challenge of modeling turbulence. I shall try to keep the posts as practical as I can from an engineering standpoint, and as pointed as I can towards the best practice guidelines themselves. To be able to do so without getting bogged down in the many important details underlying turbulence modeling, I will mostly link to posts or to specific notes (such as the 🔥Slideshare Alert🔥 series) I have published in the past, where anyone who wants to read more may delve as in-depth as one likes. Most of these may be found in “Communicating CFD…”.
As the saying goes: “All models are wrong, but some are useful…”. The intention of this set of posts is therefore to serve as a kind of “road map” for the average CFD practitioner, enabling the use of turbulence models with confidence, such that at the very least the possible range of validity is maintained.
Even though it will by no means be all-encompassing nor highly accurate, I hope for the essence to be captured, and even more so communicated…
Turbulence Modeling: Motivation
Our voyage to establishing this best practice “road map” begins with an overview of turbulence modeling, intended to create a common conceptual basis. We do that by looking at the main issues which are unique to turbulent flows.
In the following posts I shall focus on turbulence models derived from the Reynolds-Averaged Navier-Stokes (RANS) branch, with which most turbulence-related applications are solved nowadays, a trend that shall not expire soon. I will also supply best practice guidelines for Scale-Resolving Simulations (SRS), especially for the now emerging field of hybrid simulations, and in order to do that I will have to propose a set of best practice guidelines for Large-Eddy Simulation (LES). These will by no means be all-inclusive, but should suffice to allow a practitioner to best approach such modeling.
So… What motivates us to talk about turbulence?
Well, first and foremost, it is everywhere! A wonderful illustration of an engineering application involving turbulent flow is the flow around a Formula 1 race car. An incoming turbulent flow is subtly manipulated through the race car’s front wing’s Y250 to best interact with the “messiness” created by the wheels, such that the turbulent flow is delivered to the race car’s rear wing in a way that the most downforce is produced by the car as a whole:
Most of today’s CFD simulations are conducted with the Reynolds averaging approach, and that’s where we are first heading with our best practice guidelines.
This “workhorse” of industrial CFD is based on the Reynolds decomposition, according to which the flow variables are decomposed into mean and fluctuating quantities. After the decomposition is applied to the Navier-Stokes equations, an extra term known as the Reynolds Stress Tensor arises, and a modeling methodology is needed to close the equations.
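In standard index notation, the decomposition and the extra term it produces read as follows (a generic incompressible form, shown here only to fix ideas):

```latex
u_i = \overline{u_i} + u_i' , \qquad
\overline{u_j}\,\frac{\partial \overline{u_i}}{\partial x_j}
= -\frac{1}{\rho}\,\frac{\partial \overline{p}}{\partial x_i}
+ \frac{\partial}{\partial x_j}\!\left(\nu\,\frac{\partial \overline{u_i}}{\partial x_j}
- \overline{u_i' u_j'}\right)
```

The last term, built from the correlations of the fluctuating velocities, is the Reynolds Stress Tensor: six additional unknowns, and hence the closure problem.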
The first set of posts (divided into a few separate posts to keep the post length at bay… 🤓) will focus on the best practice guidelines for a specific kind of RANS closure models: Eddy Viscosity Models (EVMs).
RANS EVMs are models based on the Boussinesq Hypothesis (a thorough description in Understanding the Boussinesq Hypothesis and the Eddy-Viscosity Concept). There are many variants of EVMs, but I shall artificially divide them into two groups:
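For reference, the Boussinesq hypothesis in its common incompressible form relates the Reynolds stresses linearly to the mean strain rate through a single scalar eddy viscosity:

```latex
-\overline{u_i' u_j'} = \nu_t \left( \frac{\partial \overline{u_i}}{\partial x_j}
+ \frac{\partial \overline{u_j}}{\partial x_i} \right) - \frac{2}{3}\,k\,\delta_{ij}
```

so the entire closure task reduces to prescribing the eddy viscosity, which is exactly what the models below differ on.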
- Standard EVMs – models which are most frequently used, and by that I mean that if we take the total number of turbulence-containing CFD simulations, they would account for more than 95% of the simulations conducted.
- Task specific EVMs – models which are designed to perform a specific task which is supposed to improve the physical fidelity of the model (such as to model transition or capture small scale turbulent content).
Note that there is no such formal demarcation, but it will become useful to ascribe to them (very) different sets of best practice guidelines, which will also allow us, taking into account an extended range of validity or lack thereof (mostly the latter), to improve our decision making in the choice of turbulence models.
In this post (PART I), a general description of the Standard EVMs shall be presented along with meshing guidance.
The next post, “Turbulence Modeling Best Practice Guidelines: Standard EVMs – PART II”, shall present the important topic of V&V along with some in-depth, model-specific practical best practice guidelines on near-wall treatment, which essentially follow my past experience with these popular turbulence models.
Best Practice Guidelines for Standard Eddy Viscosity Models (EVMs)
There are many variants of standard EVMs as closure models. Most are predicated either on one transport equation for the eddy viscosity itself, the best known such model being the Spalart-Allmaras model, or on two transport equations for parameters which are essentially equivalent to the turbulent length scale and its time scale, such as the k-ε and its variants and the k-ω and its variants, from which the eddy viscosity may be extracted:
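In their standard forms the extraction reads:

```latex
\nu_t = C_\mu \,\frac{k^2}{\varepsilon} \quad (k\text{-}\varepsilon,\; C_\mu \approx 0.09),
\qquad
\nu_t = \frac{k}{\omega} \quad (k\text{-}\omega)
```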
There are also lower levels of turbulence modeling approaches, such as algebraic turbulence models and some which contain only one differential equation for the turbulent kinetic energy along with a pre-specified turbulent length scale. These have a very limited range of applicability, which in the past was somewhat reasonable due to scarcity of computational resources, but nowadays that is simply not an issue, so I will not cover best practice guidelines for such models, although they still have quite an educational value.
Tip No. 1: Know Thy Models – EVMs crash course
The first tip is basically all about distinguishing between what I’ve referred to as standard EVMs. As alluded to before, this post is about being practical (from an engineering standpoint), so the full details about each of the models are not going to be presented. You may read all about them in former, dedicated posts. It is important though that, at the very least, one knows the major features distinguishing these models.
The roots of this family of turbulence models are traced to the beginning of the 70’s of the previous century, and to Imperial College London, where Jones and Launder (1972) and then Launder and Spalding (1974) presented a publication which
…advocates that computational economy, range of applicability and physical realism are best served at present by turbulence models in which the magnitudes of two turbulence quantities, the turbulence kinetic energy k and its dissipation rate ϵ, are calculated from transport equations solved simultaneously with those governing the mean flow behavior.
B. E. Launder & D. B. Spalding – The Numerical Computation of Turbulent Flows (1974)
This family of turbulence models still persists as arguably the most popular, and may be found in all commercial codes available.
The Jones-Launder based k-ε (which from now on I shall refer to simply as k-ε) turbulence model variants proved to be a very robust framework for RANS simulations, performing very well (perhaps somewhat surprisingly) way beyond what could (roughly) be considered to be their theoretical range of validity.
The major feature of this model, one that should always be kept in mind, is its “wall sensitivity”. This is due to the fact that the formulation simply cannot be integrated all the way to a solid boundary. Instead, either the law of the wall must be employed to provide velocity “boundary conditions” away from solid boundaries, or some manipulation to exchange the formulation with a different one must be employed. I’ve presented a thorough explanation of the different ways this is actually done in Turbulence Modeling – Near Wall Treatment.
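To make the “wall sensitivity” discussion concrete, here is a minimal sketch of the piecewise law of the wall on which standard wall functions are built. The constants κ = 0.41 and E = 9.8 and the switch point y+ ≈ 11.25 are common choices found in many codes, not universal values:

```python
import math

KAPPA = 0.41   # von Karman constant (common choice)
E = 9.8        # log-law constant for smooth walls (common choice)

def u_plus(y_plus: float) -> float:
    """Dimensionless velocity u+ from the standard law of the wall.

    Viscous sublayer (small y+):  u+ = y+
    Log layer (large y+):         u+ = ln(E * y+) / kappa
    The approximate intersection of the two expressions (~y+ = 11.25)
    is used as the switch point, as many wall-function implementations do.
    """
    y_switch = 11.25
    if y_plus < y_switch:
        return y_plus          # linear, viscous-dominated behavior
    return math.log(E * y_plus) / KAPPA  # logarithmic behavior

# Example: the two regimes give very different u+ values, which is why
# the placement of the first cell (its y+) matters so much.
print(u_plus(1.0))    # deep in the viscous sublayer
print(u_plus(100.0))  # well inside the log layer
```

Wall functions essentially use this relation in reverse: given the first-cell velocity and distance, they back out the wall shear stress instead of resolving the steep near-wall gradients.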
Later tips shall regard specific subtleties, but for now it suffices to say that this model and its variants (Standard/Realizable/RNG) should perform very well for high Reynolds, steady, equilibrium flows without a large adverse pressure gradient (we find the latter in flows over convex surfaces, for example), for which explicitly modeling the effect of the innermost portion of the turbulent boundary layer is not necessary.
It doesn’t mean that we should not use the model for other types of flows, even some we clearly identify as outside its supposed range of validity, just that we must take extra precautions (explained in the model-specific guidelines of the following post, PART II) upon doing so.
This family of models has many extensions, and I have a post dedicated to each child of the extended family, but here I will follow my artificial demarcation and present a (very) short description of them.
Historically speaking, it was the turbulence research (and far beyond…) giant Andrey Kolmogorov who proposed the first two-equation model of turbulence. Kolmogorov chose the kinetic energy of the turbulence as his first parameter, and his second parameter was the dissipation per unit turbulence kinetic energy, namely ω. As always, Kolmogorov’s way of arriving at miraculous descriptions based on foundational principles such as dimensional analysis is amazing, but the model had flaws (some were later removed by the Imperial College team led by Launder and Spalding), and in any case it was not really intended for computation.
The k-ω family of turbulence models as we’ve learned to appreciate them today is based on the model developed by David C. Wilcox, in an effort to alleviate specific issues readily apparent in popular k-ε models, specifically, stiffness in the innermost portion of the turbulent boundary-layer, and a tendency to delay separation in adverse pressure gradient situations (the main precursor for separation in aerodynamic bodies).
The fact that the ω equation of the original Wilcox model proved to show high sensitivity to free-stream conditions led to further developments, specifically those by Florian Menter, which were mostly designed to improve the model’s suitability for aerodynamic flows with the development of the Baseline (BSL) and Shear Stress Transport (SST) k-ω models, which later found their way into most codes.
Most people, when asked about the k-ω SST, would immediately say: “The model which incorporates the k-ω model in the inner portion of the boundary layer and the k-ε model in the outer portion”, which is true, but bears no relation whatsoever to the Shear Stress Transport concept… Combining these two models into one, where we use the k-ω model in the inner portion of the boundary layer and then smoothly switch to the k-ε model as we move away from the wall and into the free-stream area, is what the BSL model is all about.
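Schematically, the BSL idea is a smooth blend of the two sets of model coefficients through a blending function:

```latex
\phi = F_1\,\phi_{k\text{-}\omega} + (1 - F_1)\,\phi_{k\text{-}\varepsilon},
\qquad F_1 \to 1 \ \text{near the wall}, \quad F_1 \to 0 \ \text{towards the free stream}
```

where each generic coefficient of the blended model is interpolated between its k-ω and (ω-transformed) k-ε values.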
Although the BSL model showed improved prediction of separation on aerodynamic bodies, it was noted, upon observing experimental evidence, that a hypothesis by Peter Bradshaw (another Imperial College giant…) could improve predictions for these non-equilibrium flows going into separation. Out came the SST…
Although the Wilcox-based k-ω without the BSL alteration appears in many codes, the problem alluded to above precludes it from being used for actual engineering applications. In short, don’t use the model unless you have a clearly defined motive for doing so…
It suffices to say at this point that using either the BSL or SST k-ω model has one clear and inherent advantage over all k-ε variants, one which should not be taken lightly: wall insensitivity.
A second advantage of using these models is of course the specific phenomena they were designed for (aerodynamic flows), namely more sensitivity to adverse pressure gradients (the same sensitivity could prove disadvantageous for other engineering applications).
The SST concept was found in many test cases to improve predictions for non-equilibrium flows such as aerodynamic flows just before going into separation, but we should be very careful in saying that, because the ability of the k-ω SST (or any linear EVM-based RANS model for that matter) to perform when such unsteady phenomena occur is very limited.
The Spalart-Allmaras (SA) Turbulence Model
The Spalart-Allmaras (SA) turbulence model was developed for aerodynamic flows. The formulation blends from a viscous sublayer formulation to a logarithmic formulation based on y+. As such, no highly non-linear damping functions for laminar/viscous sublayer modeling are in use. This means that contrary to the k-ε family of turbulence models, and just like the k-ω family, it is wall insensitive.
Although the SA turbulence model is a one-equation model, it is considered “closed”, meaning that the model possesses sufficient equations for constructing the eddy viscosity with no direct use of experimental results.
The other families of models were also initially derived to have specific beneficial qualities for very specific flow applications, but the SA model even more so: it is aimed squarely at external aerodynamic flows, and although it was improved to include corrections for features such as rotation and heat transfer, it is seldom used for applications not related to the aerospace industry.
Tip No. 2: Meshing for Standard EVMs
There are many important tips and best practices for meshing, and some general remarks about meshing are always true whether the flow is turbulent or not. I am not planning an exhaustive list, just those guidelines which interact most tightly with turbulence modeling with standard EVMs.
Also note that these guidelines will be considerably different for Task-Specific EVMs, and even more so for LES, both covered in following posts of this series, so by no means use them as general guidelines.
Near wall behavior and y+ estimation:
All EVMs are calibrated in accordance with the Law of the Wall. This means that the picture of a turbulent boundary layer over a flat plate should be kept in mind whenever we prepare the mesh to best capture near-wall behavior to the extent appropriate for the application.
Roughly speaking, we may distinguish between two near-wall modeling options:
- Flow applications that demand resolving the viscous sublayer:
This type of application is such that the innermost portion of the turbulent boundary layer (in as much as EVMs supply us with sufficient physical fidelity) is very important for obtaining valuable results.
Commonly encountered examples are: forced convection heat transfer, calculation of aerodynamic drag, and flows which may include separation and reattachment due to an adverse pressure gradient…
- The first cell away from the solid boundary (to be more precise, the center of the cell) should be located at y+<1.
- y+ may be grossly estimated a-priori through the following procedure (it’s quite easy to prepare an Excel spreadsheet calculating the different parameters). Note that the estimation is very insensitive to the length scale L, as long as the order of magnitude is kept (i.e. it could be 1 or 7 without much effect, but not 1000 instead of 1…).
- y+ is essentially solution-dependent, so as part of the post-processing you may display surface contours of y+ on the solid boundary to verify that the resolution is sufficient. If it is not, you need to adjust the first-cell height, by simply returning to the mesh and remeshing (in some mesh generators it can be done more efficiently).
- Flow applications that do not demand resolving the viscous sublayer:
We will use such a meshing approach for cases where the effect of integrating through the viscous sublayer is of secondary importance, or when the flow undergoes geometry-induced separation. Common examples are aerodynamic lift calculations and the flow surrounding a bluff body.
Essentially the process is quite the same as described above, only that the first cell away from the wall shall be positioned such that y+>30 (in the log-layer), but not too far above, since we still must have appropriate resolution to capture the wall-normal gradients in the turbulent boundary layer.
It is very important to note that the extent of the log-layer, or in other words where its upper bound stretches to, is flow dependent. Low Reynolds number flows, for example, might have a very low y+ cutoff for the log-layer.
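The a-priori y+ estimation mentioned above can just as easily be sketched in a few lines of Python instead of a spreadsheet. This is a sketch only: the flat-plate skin-friction correlation Cf = 0.026·Re^(−1/7) used here is one of several in common use, and the result is an order-of-magnitude estimate, not a guarantee:

```python
import math

def first_cell_height(rho: float, mu: float, u_inf: float,
                      length: float, y_plus_target: float) -> float:
    """A-priori estimate of the first-cell (centroid) height for a target y+.

    Uses the flat-plate correlation Cf = 0.026 * Re^(-1/7); as noted in the
    text, the estimate is only weakly sensitive to the reference length,
    as long as its order of magnitude is right.
    """
    re = rho * u_inf * length / mu             # Reynolds number
    cf = 0.026 * re ** (-1.0 / 7.0)            # skin-friction coefficient
    tau_w = 0.5 * cf * rho * u_inf ** 2        # wall shear stress
    u_tau = math.sqrt(tau_w / rho)             # friction velocity
    return y_plus_target * mu / (rho * u_tau)  # first-cell height [m]

# Example: air at sea level over a ~1 m reference length at 50 m/s,
# targeting y+ = 1 (viscous-sublayer-resolving mesh)
dy = first_cell_height(rho=1.225, mu=1.8e-5, u_inf=50.0,
                       length=1.0, y_plus_target=1.0)
print(f"first cell height ~ {dy:.2e} m")
```

The same function with `y_plus_target=30` gives the first-cell height for a wall-function mesh, since the estimate scales linearly with the target y+.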
Topology and structure of mesh guidance – turbulent boundary layer:
The most important thing to remember, and a point of consensus among CFD practitioners, is the impact of the alignment of the mesh with flow gradients on phenomena like numerical diffusion and dispersion, and on the calculation of gradients in general.
Without delving too deep into the topic, let us look at the pure convection equation:
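For a scalar u convected at a constant speed c, the one-dimensional pure convection (linear advection) equation reads:

```latex
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0
```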
The above PDE’s modified equation (the PDE which the exact solution to the discretized equations satisfies) is:
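As a concrete instance, first-order upwind in space with explicit Euler in time and Courant number σ = cΔt/Δx yields a modified equation of the form:

```latex
\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x}
= \underbrace{\frac{c\,\Delta x}{2}\,(1-\sigma)}_{\text{numerical diffusion}}
\frac{\partial^2 u}{\partial x^2}
+ \underbrace{\beta}_{\text{dispersion}}\,\frac{\partial^3 u}{\partial x^3} + \cdots
```

where the even-derivative terms act as (numerical) dissipation and the odd-derivative terms as dispersion.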
Meaning that the numerical behavior of the discretization scheme largely depends on the relative importance of dispersive and dissipative effects.
In the near-wall region, and of course considerably more pronounced in turbulent flows, the highest flow gradients are normal to the wall.
Tetrahedral meshes are obviously impossible to place with a constant face-normal direction, and therefore produce more truncation error (see the simple illustration below).
Tet meshes do present several geometric assets, such as planar faces and well defined face and volume centroids. Tet meshes can approximate almost any arbitrarily shaped geometry in great detail.
Therefore, to capture the turbulent boundary layer we use prism cells (these could be poly-prism, hex, or triangular prisms), as long as the alignment of the face normal with the flow gradients (normal to the solid boundary in a turbulent boundary layer) is maintained. Such cells are sometimes called Orthogonal Layers.
It is recommended to place at least 15 layers of prism cells one above the other, with a growth rate of 1-1.2 (due to the logarithmic behavior described in the Law of the Wall).
Always remember that it is not enough to simply create 15 orthogonal layers at the appropriate y+, but they must cover the entirety of the turbulent boundary layer. Having only 5 layers inside the boundary layer, and the other 10 stretching way beyond the boundary layer thickness is simply insufficient resolution!
Also, note that 15 is not a golden number or a sweet spot for the number of orthogonal layers, but an estimated minimum number of layers for covering the entirety of the boundary layer, and this is true even for flow applications which do not demand resolving the viscous sublayer, i.e. y+>30.
Furthermore, there are many applications for which the best practice for the number of layers shall be 30-40.
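A quick sanity check of the coverage requirement can be sketched as follows. Assumptions: geometric growth of the layer heights, and the rough flat-plate correlation δ ≈ 0.37·x·Re_x^(−1/5) for the turbulent boundary-layer thickness (an order-of-magnitude estimate only):

```python
def prism_stack_height(first_height: float, growth: float, n_layers: int) -> float:
    """Total thickness of n geometrically growing prism layers."""
    if abs(growth - 1.0) < 1e-12:
        return first_height * n_layers
    return first_height * (growth ** n_layers - 1.0) / (growth - 1.0)

def turb_bl_thickness(x: float, u_inf: float, nu: float) -> float:
    """Rough flat-plate turbulent boundary-layer thickness estimate."""
    re_x = u_inf * x / nu
    return 0.37 * x * re_x ** (-0.2)

# Example: do 15 layers at growth 1.2, starting from a 1e-5 m first cell
# (a typical y+ ~ 1 height), cover the boundary layer at x = 1 m, 50 m/s air?
delta = turb_bl_thickness(x=1.0, u_inf=50.0, nu=1.5e-5)
stack = prism_stack_height(first_height=1e-5, growth=1.2, n_layers=15)
print(f"BL thickness ~ {delta:.3e} m, prism stack ~ {stack:.3e} m")
# Here the stack falls far short of the estimated thickness: exactly the
# insufficient-coverage situation warned about above, calling for more
# layers and/or a larger growth rate.
```

Checks like this, crude as they are, catch the "5 layers inside the boundary layer" failure mode before the first solve rather than after it.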
Since nowadays most meshes are not generated with a zone-specific algorithm (constant thickness, last aspect ratio, first layer height, or whatever definition such that enough parameters are defined…), the choice of algorithm may have a large impact on the proper coverage of the entirety of the turbulent boundary layer with such orthogonal layers throughout the domain. For example, specific local surface mesh refinement may have a substantial impact on the total width of the orthogonal layers (when using an aspect-ratio-controlled algorithm, for example), even when the boundary layer itself doesn’t change much. Such anomalies should not be disregarded!
The best way to check whether the orthogonal layers cover the entirety of the turbulent boundary layer is to present contours of the Turbulent Viscosity Ratio (EVR – simply the ratio between the turbulent and the molecular viscosity). This parameter most clearly displays the extent of the boundary layer, since it has its maximum value around its midst (in contrast to following the velocity contour plot, which may be somewhat deceiving, as the velocity gradients normal to the wall fall off as the free-stream is approached).
Topology and structure of mesh guidance – outside of the turbulent boundary layer:
Outside of the turbulent boundary layer there is much more than meets the eye. The general conception is that a hexahedral mesh may be placed such that it is much more aligned with the flow gradients than a tetrahedral mesh (see the simple illustration of quad vs. triangular elements above). Still, as noted, tet meshes present several geometric assets, can approximate almost any arbitrarily shaped geometry in great detail, and their lack of alignment with flow gradients in the near-wall region is avoided through the use of prism-like orthogonal layers smoothly connected to the tet mesh covering the rest of the domain.
Furthermore, automatic volume-fill methods such as Delaunay and advancing-front have been well studied, developed and suited for tet meshes, currently providing a very robust solution for meshing ultra-complex geometries in 3D, especially when mesh morphing (for example, if the effect of ice accretion on a wing shape is to be simulated) and adaptivity are concerned.
A poly mesh, also a viable option in most mesh generators, would not be as aligned with the flow gradients for flows of structured orientation as a hexa mesh would. But due to its multiplicity of faces it has 6 optimal directions (if it has 12 faces), in contrast with hex cells which have 3, so it may be more accurate than a hex mesh for recirculating flows, even in cases which seem optimal for hex because of a Cartesian domain, such as the cubic lid-driven cavity: