“Turbulence Modeling Best Practice Guidelines: Standard EVMs – PART I” presented a general description of the standard EVMs along with meshing guidance.

The next post, “Turbulence Modeling Best Practice Guidelines: Standard EVMs – PART II”, discusses what is arguably the most important topic – V&V – along with some in-depth, model-specific practical best practice guidelines on near-wall treatment, which essentially follow my past experience with these popular turbulence models.

In Part III, I shall delve into the domain of what might be termed “task-specific EVMs” – models aimed at achieving specific fidelity and/or features unattainable by standard EVMs. Specifically, I shall focus on three of them, namely:

- The v2f model
- The Scale-Adaptive Simulation (SAS) model
- The Partially-Averaged Navier-Stokes (PANS) method

*Help develop the blog by donating on Patreon*

### The v2f Turbulence Model

For the incompressible NSE, the pressure in a fluid is by nature elliptic. *What this means is that the effect of pressure at one point will affect the entire flowfield instantaneously*.
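The elliptic character can be seen by taking the divergence of the incompressible momentum equations, which (with continuity) eliminates the time derivative and leaves a Poisson equation for the pressure:

```latex
\nabla^2 p \;=\; -\rho\,\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```

Being free of time derivatives, this equation ties the pressure at every point to the instantaneous velocity field everywhere in the domain.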

This sentence, although it presents a simplistic view of the nature of pressure, is misleading in more than one way:

- First, real fluids such as gases are actually highly compressible (regardless of the Mach number), and incompressibility is somewhat of an approximation even for liquids. It is true that even for gases at low Mach numbers a flow can act as if it were incompressible, in that we can make very accurate predictions after approximating the density as constant. Nonetheless, even at low Mach numbers, when pressure differences and density differences are both small, the density differences are of the same order of magnitude as the pressure differences. The reason we may neglect density changes but not pressure changes lies in the density’s role in the NSE (and continuity) equations. While pressure differences in the NSE (appearing under a gradient) and small velocity differences have a huge impact on the flow, a small difference in density affects the flow much less, such that even in the presence of large velocity disturbances it is justified to use the incompressibility approximation as long as the velocity is much less than the speed of sound.

- Second, writing that *the effect of pressure at one point will affect the entire flowfield instantaneously* might suggest a one-way causation, in which the pressure gradient causes acceleration and thereby induces velocity (Newton’s second law). Although this is not false, it is incomplete. NSE dependent variables such as pressure and velocity hold a reciprocal, circular relation: as the pressure gradient causes the acceleration, the acceleration sustains the pressure gradient.
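The first point above can be made quantitative with a standard scaling argument: dynamic pressure differences scale as Δp ∼ ρU², while density responds to pressure through the speed of sound, so the relative density variation scales with the Mach number squared:

```latex
\frac{\Delta\rho}{\rho} \;\sim\; \frac{\Delta p}{\rho c^2} \;\sim\; \frac{U^2}{c^2} \;=\; M^2
```

At M = 0.1 the density varies by only about one percent, which is why the incompressibility approximation is so effective at low Mach numbers.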

Given the simplistic view above, the importance of high-fidelity modeling of near-wall effects seems quite clear. Not to be vague, I shall add that the exact definition of this “near wall” region is not as important; it stands for the region in proximity to a solid boundary where the eddy-viscosity assumption of homogeneous turbulence, used to simplify the pressure-strain redistribution tensor, does not hold.

Now, to continue with my reasoning for relating the above to curvature effects, I shall address yet another issue relating to boundary-layer pressure effects. In first-order boundary-layer theory it is customary to ignore the pressure gradient normal to the surface by assuming that it is zero. Nevertheless, it is important to remember that a flat wall is a prerequisite for such an assumption; it is extremely inaccurate if the wall has pronounced curvature.

So, consistent with the local mean velocity and streamline curvature, in practical engineering applications there will always be a normal pressure gradient within the boundary layer.

#### The concept of elliptic relaxation

In the framework of 2-equation eddy-viscosity models such as the k-ε turbulence model it is possible to bypass modeling the near-wall behavior by employing the law of the wall and providing velocity “boundary conditions” away from solid boundaries (what is termed “wall functions”). In order to integrate the equations through the viscous (laminar) sublayer (y+<5), a “low-Reynolds” approach must be employed instead (low as in entering the viscous sublayer): highly non-linear damping functions must be added to the formulation to make such integration possible. This produces numerical stiffness, which is problematic to handle with linear numerical algorithms, and in any case it does not break the assumption of homogeneity, even though the wall-normal velocity, a key contributor to mixing, is severely damped in the near-wall region.
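As a concrete illustration of the wall-function idea, here is a minimal sketch (not any solver’s actual implementation; κ and B are the commonly quoted log-law constants, and real codes use more elaborate blending through the buffer layer) of estimating the non-dimensional velocity from y+:

```python
import numpy as np

KAPPA = 0.41  # von Karman constant (commonly quoted value)
B = 5.2       # log-law intercept (commonly quoted value)

def u_plus(y_plus):
    """Piecewise law of the wall: linear in the viscous sublayer,
    logarithmic in the log layer (the buffer layer is crudely bridged)."""
    y_plus = np.asarray(y_plus, dtype=float)
    viscous = y_plus  # u+ = y+ holds for y+ < ~5
    log_law = np.log(np.maximum(y_plus, 1e-12)) / KAPPA + B
    # Crude switch near y+ ~ 11, where the two branches roughly intersect
    return np.where(y_plus < 11.0, viscous, log_law)

print(u_plus([1.0, 30.0, 100.0]))
```

A wall-function boundary condition essentially inverts this relation at the first cell center to supply the wall shear stress without resolving the sublayer.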

As explained above, especially for pronounced curvature, pressure effects in the wall-normal direction render the homogeneity assumption quite inaccurate: the near-wall region is not homogeneous in this sense, and one should expect the wall-normal velocity gradient to be far from constant.

In order to overcome this drawback, the elliptic relaxation concept was devised (P. Durbin). Following the above explanation, and taking into account the mechanism by which RSM damping occurs – through inviscid blocking of the energy redistribution by the pressure fluctuations – the main idea is to construct an approximation of the two-point correlation (which is non-existent in standard eddy-viscosity formulations, as they are one-point closures) in the integral equation of the pressure redistribution. The redistribution term is then defined by a relaxation equation of an elliptic nature.
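A minimal 1-D sketch of the elliptic-relaxation idea (a toy problem, not Durbin’s full model; `elliptic_relaxation_1d` is a hypothetical helper): solving the modified Helmholtz equation L²f″ − f = −f_h with f = 0 at the walls shows how the wall condition relaxes the homogeneous value f_h over a distance of order L:

```python
import numpy as np

def elliptic_relaxation_1d(f_h, L, dy):
    """Solve L^2 f'' - f = -f_h with f=0 at both ends (central finite differences).

    f_h : homogeneous (far-field) value of the relaxed quantity, per node
    L   : elliptic relaxation length scale
    dy  : grid spacing
    """
    n = len(f_h)
    A = np.zeros((n, n))
    r = L**2 / dy**2
    for i in range(1, n - 1):
        A[i, i - 1] = r
        A[i, i] = -2.0 * r - 1.0
        A[i, i + 1] = r
    A[0, 0] = A[-1, -1] = 1.0  # Dirichlet condition: f = 0 at the walls
    b = -np.asarray(f_h, dtype=float)
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

y = np.linspace(0.0, 1.0, 101)
f = elliptic_relaxation_1d(np.ones_like(y), L=0.1, dy=y[1] - y[0])
print(f[0], f[50])  # zero at the wall, near the homogeneous value mid-channel
```

The wall effect decays smoothly over the length L instead of being imposed by stiff algebraic damping functions, which is exactly the selling point of the approach.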

The complete formulation shall appear in a following paragraph, but it is interesting to note that an elliptic nature is exploited in the k-ω turbulence model as well, as may be seen by inspecting the ω-equation in the near-wall region combined with the specified ω values at the wall:
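For reference, the near-wall behavior in question is the well-known asymptotic solution of the ω-equation approaching a smooth wall, where destruction balances viscous diffusion (β₁ being the destruction coefficient of the standard model):

```latex
\omega \;\xrightarrow{\;y \to 0\;}\; \frac{6\nu}{\beta_1\, y^2}
```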

The implication of such behavior in the case of the k-ω turbulence model is straightforward integration through the laminar sublayer, without additional numerically destabilizing damping functions or two more transport equations (which would generally cause stability issues due to the reciprocity between the variables).

#### So Why v2f?

In the v2-f model, the variable v^2 (the wall-normal velocity scale) and its source term f are added as variables to the k and ε (turbulence kinetic energy and turbulence dissipation) parameters of the k-ε eddy-viscosity turbulence model.

The model hence solves three transport equations – for the turbulence kinetic energy, the turbulence dissipation and the normal velocity squared – while a fourth, elliptic relaxation equation is solved for the source term. The reason for choosing v^2, as explained above, is its similarity to the second-moment closure of the wall-normal Reynolds stress in the near-wall region.

The derivation of the elliptic relaxation equation is quite complex, originating from the pressure-Poisson equation with the *rapid* and *slow* parts of the pressure Laplacian, and involves a Green’s function as the solution of a modified Helmholtz equation – there is no way I blog such an exhausting derivation…

The model formulation becomes:

With pressure-strain term defined:

and the relaxation equation is solved for the source term f of the normal velocity:

where the turbulent time and length scales are determined as:
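For reference, the v2-f transport and relaxation equations as commonly written in the literature (following Durbin’s formulation; coefficients and limiters vary slightly between published variants) read:

```latex
\begin{aligned}
\frac{Dk}{Dt} &= P_k - \varepsilon
  + \frac{\partial}{\partial x_j}\Big[\Big(\nu + \frac{\nu_t}{\sigma_k}\Big)\frac{\partial k}{\partial x_j}\Big] \\
\frac{D\varepsilon}{Dt} &= \frac{C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\varepsilon}{T}
  + \frac{\partial}{\partial x_j}\Big[\Big(\nu + \frac{\nu_t}{\sigma_\varepsilon}\Big)\frac{\partial \varepsilon}{\partial x_j}\Big] \\
\frac{D\overline{v^2}}{Dt} &= k f - \overline{v^2}\,\frac{\varepsilon}{k}
  + \frac{\partial}{\partial x_j}\Big[\Big(\nu + \frac{\nu_t}{\sigma_k}\Big)\frac{\partial \overline{v^2}}{\partial x_j}\Big] \\
L^2 \nabla^2 f - f &= \frac{C_1 - 1}{T}\Big(\frac{\overline{v^2}}{k} - \frac{2}{3}\Big) - C_2\,\frac{P_k}{k} \\
\nu_t &= C_\mu \overline{v^2}\, T, \qquad
T = \max\Big(\frac{k}{\varepsilon},\, C_T\sqrt{\frac{\nu}{\varepsilon}}\Big), \qquad
L = C_L \max\Big(\frac{k^{3/2}}{\varepsilon},\, C_\eta \Big(\frac{\nu^3}{\varepsilon}\Big)^{1/4}\Big)
\end{aligned}
```

Note how the time and length scales are bounded from below by their Kolmogorov estimates, so the equations remain well-behaved as k goes to zero at the wall.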

Subsequent to performing the surgical identification of the different terms in the transport and elliptic relaxation equations, it should be remembered that we are still left with some added constants to be calibrated.

In turbulence modeling, calibration of the model is at least as important as the derivation of the model itself. Calibration is achieved with the help of experimental and numerical results for the type of flow that should be modeled. The calibration process is also the first step in which the range of validity of the model is revealed to close inspection, rather than just postulated from physical reasoning.

For the v2-f turbulence model the calibrated closure constants are:

However, as sound as the physical reasoning behind the model is, the original formulation is found to be very sensitive to boundary conditions at the wall, a fact which hampers its computational use severely.

#### The ζ-f Turbulence Model

To alleviate the stiffness of the v2-f formulation, D.R. Laurence et al. devised a model whose stiffness at the wall is much less severe. The formulation is achieved by a transformation of the v2 equation to one for the ratio ζ=v^2/k, and a corresponding change in the elliptic operator for the source function f.

The transformation renders ζ not directly dependent on the turbulence dissipation ε, and the complete formulation takes the form:

Now the boundary conditions for the source function and for ζ both go to zero at the wall, which makes it possible to solve the system uncoupled. Actually, the stiffness of the original formulation could be avoided if the equations for v2 and f were solved simultaneously, but most codes (commercial or in-house) use segregated solvers.

The v2-f model is still inferior to RSM for highly 3D, swirling flows with strong secondary circulation, as it holds only one attractive feature of RSM (i.e. energy blocking). Nonetheless, recent advancements, such as the incorporation of the model in Wall-Modeled Large-Eddy Simulation (WMLES) as a hybrid RANS-LES approach, may be quite an interesting utilization of the v2-f model.

#### Additional Potential Use: Incorporation of v2-f turbulence model in hybrid RANS-LES simulations

One of the most popular hybrid RANS-LES models is Detached Eddy Simulation (DES), devised originally by Philippe Spalart. DES is based on the idea of covering the boundary layer by a RANS model and switching the model to LES mode in detached regions, thereby cutting the computational cost significantly while still offering some of the advantages of an LES method in separated regions.

Although the Spalart-Allmaras (SA) turbulence model has been widely used for DES, its near-wall damping, a result of the direct construction of the eddy-viscosity transport equation, does not distinguish between velocity components. As explained in the above paragraphs, the v2-f formulation models the suppression of wall-normal velocity fluctuations caused by non-local pressure-strain effects. This anisotropy has been shown to improve prediction of separation and reattachment.

Such a hybrid RANS-LES methodology was devised by K. Sharif (NASA). Where the RANS length scale k^3/2/ε is smaller than CDES∆, the model operates as the RANS v2-f turbulence model.

In LES mode, one transport equation for the turbulence kinetic energy suffices (since the length scale is grid-dependent), but three equations (v2, f and ε) besides the turbulence kinetic energy are required for the RANS mode. In order to achieve that, the v2-f model is reduced to a 1-equation SGS model in LES mode. This is done by a modification of the coefficient in the elliptic relaxation equation for the source term f, such that v^2 equals 2/3 k in the LES mode, as isotropic turbulence is assumed.
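A minimal sketch of the DES-style length-scale comparison underlying such a hybrid (a hypothetical helper, not Sharif’s actual code; the C_DES value is illustrative):

```python
def hybrid_length_scale(k, eps, delta, c_des=0.65):
    """Return the hybrid model length scale and the active mode.

    k     : turbulence kinetic energy
    eps   : turbulence dissipation rate
    delta : local grid spacing measure (e.g. max cell edge)
    c_des : illustrative DES calibration constant
    """
    l_rans = k**1.5 / eps   # RANS turbulence length scale
    l_les = c_des * delta   # grid-based LES length scale
    if l_rans <= l_les:
        # Grid coarser than the turbulence: stay in RANS (v2-f) mode
        return l_rans, "RANS (v2-f)"
    # Grid fine enough to resolve eddies: switch to the 1-eq SGS mode
    return l_les, "LES (1-eq SGS)"

print(hybrid_length_scale(k=1.0, eps=10.0, delta=1.0))   # coarse grid
print(hybrid_length_scale(k=1.0, eps=10.0, delta=0.01))  # fine grid
```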

*DES hybrid RANS-LES formulation based on the v2-f turbulence model*

#### Enabling the v2-f model in ANSYS Fluent

This is a simple one… to enable the model, type the following in the console: (allow-v2f-model)

### Scale-Adaptive Simulation (SAS) model

In a variety of rectilinear, steady flows (zero-adverse-pressure-gradient boundary layers, channel flows, etc.) RANS models predict the mean flow statistics well and are relatively inexpensive, but the range of flow physics they can predict with acceptable fidelity is very limited, due to the basic fact that most are essentially one-point closures.

An interesting methodology to simulate LES-like unsteadiness lies in the midst of RANS and LES, and is especially attractive for flows in which strong instabilities exist. It is termed *Scale-Adaptive Simulation (SAS)* (Menter and Egorov; also available in the Fluent code).

Menter-Egorov URANS – “Scale-Adaptive Simulation” (SAS) – is based on Rotta’s exact transport equation for kL (from 1970), and uses a relation between the integral length scale:

and the diagonal two-point correlation tensor measured at a location x with two probes at distance:
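For reference, the relation in question is commonly written (the normalization factor being the one quoted by Menter and Egorov) as:

```latex
k \cdot L \;=\; \frac{3}{16} \int_{-\infty}^{\infty} R_{ii}(\mathbf{x},\, r_y)\, \mathrm{d}r_y,
\qquad
R_{ij}(\mathbf{x},\, r_y) \;=\; \overline{u_i'(\mathbf{x})\, u_j'(\mathbf{x} + r_y \mathbf{e}_y)}
```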

Rotta’s exact transport equation for Ψ=kL reads:

Here the mean velocity is aligned with x-axis and the mean shear with y-axis.

Now enters the most important (and fun?…) part, which follows the mathematical endeavor in each and every construction of closure transport equations: the surgical identification and simplification, by physical reasoning, of the terms in the initial transport equation. The left-hand side of the above equation is readily identified as the *advection* of Ψ=kL, and while examining the equation, it is found that what distinguishes the model from other 2-equation eddy-viscosity closures is a production term in the second line, namely:

This is actually the mean flow gradient measured at the location of the second probe. Expanding it to a Taylor series:

Rotta postulated the second derivative as negligible and kept the third derivative in the expansion. The reasoning behind doing so relies on the observation that in homogeneous turbulence the correlation function inside the integral is symmetric with respect to the distance between the fixed and traversing probes (ry). The product of the correlation function and the distance ry is therefore antisymmetric, and the integral becomes zero.

Leaving the third derivative as the length-scale determining term was found by Menter and Egorov to be somewhat problematic. First, there is no actual physical reasoning to support such a large contribution from the third derivative.

Besides the physical objection presented above, keeping only the first derivative would not distinguish the transport equation from any other 2-equation closure methodology. This is the second reason, and the cause for the Menter-Egorov model variation: to include a production term (non-existent in any 2-equation RANS) that reproduces the turbulent spectrum and retains small-scale (high wave-number) behavior, owing to its actual dependency on the second derivative of the velocity – as opposed to Rotta’s suggestion to retain the third-order derivative, which was found inconsistent, as it does not retain the “law of the wall”.

The argument for retaining the second derivative is that homogeneous turbulence can only exist when the shear is constant (or absent), and only then, by definition, is the second derivative zero. So the argument is that it is an inhomogeneous term by nature, and hence it is kept as the leading-order contributor; it was also found to be consistent with the “law of the wall”.

#### The KSKL Model

The acknowledgement that the integral multiplying the second derivative must still be zero under homogeneous flow conditions led Menter and Egorov to take the ratio of the turbulent length scale to the von Karman length scale as the measure of non-homogeneity:
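The von Karman length scale is built from the ratio of the first to the second velocity derivative (written here for a simple shear; the general form uses invariants of the strain rate and of the velocity Laplacian):

```latex
L_{vK} \;=\; \kappa \left| \frac{\partial U/\partial y}{\partial^2 U/\partial y^2} \right|
```

In a homogeneous (constant-shear) flow the second derivative vanishes, L_vK becomes unbounded, and the ratio of the turbulent length scale to L_vK goes to zero, as required.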

meaning:

so the ratio goes to zero for homogeneous flows, and the Taylor expansion, after the surgical identification and simplification, takes the form:

The equation for Ψ=kL is exactly as Rotta’s except for this specific alteration:

The final form as presented by Menter and Egorov is for Φ, the square root of kL, which is proportional to the eddy viscosity. Hence the turbulence kinetic energy and √(kL) transport equations can be transformed to an eddy-viscosity transport equation by a straightforward procedure (as explained in: Understanding The Spalart-Allmaras Turbulence Model), and the 2-equation turbulence model reads:

with:

What distinguishes the KSKL model from other 2-equation closures is the following: in the latter, the turbulence length scale (which may be defined on dimensional grounds from the transported variables) will always approach the thickness of the shear layer. For the KSKL model, the source terms allow the identification of the turbulent scales as a measure not only of the thickness of the shear layer but also of non-homogeneous conditions: since the von Karman length scale is related to the strain rate, individual vortices have locally different time constants (the inverse of their turnover frequencies), and therefore from a certain size, dependent upon the local strain rate, they may not merge into a larger vortex.

This means that the von Karman length scale gives a first-order estimation of the spatial variation of the turbulent scales.
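A minimal numerical sketch (illustrative only; `von_karman_length` is a hypothetical helper) of how the von Karman length scale adapts to a shear-layer-like profile – small where the profile curvature is strong, locally large where the shear is nearly constant:

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def von_karman_length(u, dy, eps=1e-12):
    """L_vK = kappa * |dU/dy| / |d2U/dy2| for a 1-D profile u(y)."""
    dudy = np.gradient(u, dy)
    d2udy2 = np.gradient(dudy, dy)
    # Clip the curvature to avoid division by zero at inflection points
    return KAPPA * np.abs(dudy) / np.maximum(np.abs(d2udy2), eps)

y = np.linspace(-3.0, 3.0, 301)
u = np.tanh(y)  # shear-layer-like mean velocity profile
L_vk = von_karman_length(u, y[1] - y[0])
# At y = 0 the curvature vanishes -> locally very large L_vK;
# away from the centerline the curvature is strong -> small L_vK
print(L_vk[150], L_vk[200])
```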

As such, the model is a 2nd-generation URANS 2-equation model (i.e. explicitly independent of the step size of the computational grid, unlike LES – closure is achieved), able to obtain very good quantitative results for many unsteady flows even on a relatively coarse mesh.

#### Additional Potential Use: Incorporation of SAS in Hybrid RANS-LES Methodology

Many hybrid RANS-LES methods which introduce the grid spacing into the turbulence model in order to achieve LES treatment suffer from the *“Modeled Stress Depletion”* (MSD) phenomenon, related to the switch from RANS to LES on an ambiguous grid setup. In DES, for example, the hybrid formulation has a limiter switching from RANS to LES as the grid is refined. The problem with natural DES is that incorrect behavior may be encountered for flows with thick boundary layers or shallow separations. It was found that when the stream-wise grid spacing becomes less than the boundary-layer thickness, the grid may be fine enough for the DES length scale to switch the model to its LES mode without proper “LES content”, i.e. the resolved stresses are too weak (hence the term “Modeled Stress Depletion”, or MSD), which in turn reduces the skin friction and by that may cause early separation.

This does not occur in SAS, as it does not incorporate an explicit dependence of the turbulence model on the grid.

Furthermore, while the ultimate goal in hybrid RANS-LES modeling is a model that works in the RANS limit and the LES limit and smoothly connects them at their interface (be the formulation zonal or monolithic), it seems that this interface, termed “the grey area”, is the most troublesome to resolve.

The main reason for that lies in the fact that although seemingly the same form of the governing filtered equations is achieved, the nature of their derivation and their simulation objectives are fundamentally very different.

The RANS equations assume an averaging time much greater than the turbulent eddies’ time scale, hence the turbulent stresses may be replaced by their averaged effect. Usually this is done by defining an eddy viscosity (see Understanding The k-ω SST Model) proportional to the mean strain rate, resulting in a flow that is computationally very stable even in highly turbulent unsteady regions, as the effective viscosity can be orders of magnitude larger than the molecular viscosity.

On the other hand, in LES the formulation is derived by spatial filtering, separating the scales that can be directly calculated from those that must be modeled (due to the grid resolution – the “filter width”). Generally, the subgrid scales are also replaced with an effective viscosity, which must be low enough so as to not artificially damp the growth and transport of the resolved large-scale eddies that are supposed to be captured.

In the interface region, the modeled turbulent stresses formerly derived by RANS may easily be too large to maintain those unsteady features desired to be captured by LES, yet on the other hand not large enough to replace all the turbulent stresses for the upcoming RANS state.

The end result is often contamination of the LES region due to inconsistent treatment of the turbulent stresses at the interface. The “grey area” (a dedicated post shall soon be written) is indeed one of the most important issues to be resolved as far as hybrid RANS-LES methods are concerned.

Recent proposals in the field of zonal hybrid RANS-LES include the incorporation of the SAS model, both to supply unsteady content at the RANS-LES interface and, performed as a frozen simulation in the LES zones, to serve the purpose of a smooth switching at the LES-RANS interface, as the SAS model will essentially perform as RANS on coarser grids.

### Partially-Averaged Navier-Stokes (PANS) Method

In the PANS method, the so-called “partial averaging” concept is invoked, which corresponds to a filtering operation for a portion of the fluctuating scales. This concept is based on the observation that the optimum *resolved-to-modeled ratio* will change from one engineering application to another, depending on the reciprocal relations between the level of physical fidelity intended, the geometry at hand, and the computational resources available.

The most important feature at the foundation of the approach is the *averaging-invariance property* of the Navier-Stokes equations, which amounts to the fact that for any resolved-to-modeled ratio achieved by filtering (i.e. partial averaging), the *sub-filter scale stress* has the same characteristics as the Reynolds stress, **therefore closure strategies similar to those for RANS may be employed**.

This is a very attractive feature, since RANS closure strategies are very mature and well-tested – RANS has truly been the workhorse for most large-scale engineering applications – in contrast with LES closures, which are mostly algebraic and lack validation for complex engineering applications.

The original PANS model is therefore based on the 2-equation RANS modelling concept and solves two evolution equations for the *unresolved kinetic energy and dissipation*.

#### LES vs. PANS

It is widely known, going all the way back to Richardson and granted a more precise view by Kolmogorov, that in turbulence physics the large scales contain most of the kinetic energy while much of the dissipation occurs in the smallest scales. **The smaller the unresolved kinetic energy, the smaller the modeled-to-resolved ratio, and the greater both the computational effort and the physical fidelity for a suited numerical resolution. Moreover, the highest value that can be attained by the unresolved dissipation implies that the RANS and PANS unresolved scales are the same.**

The end result for the evolution equations (different coefficients and parameter definitions may be found in S. Girimaji, 2005):
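As a reference sketch (following Girimaji’s published form; transport terms abbreviated for brevity), the resolution is controlled by the unresolved-to-total ratios f_k = k_u/k and f_ε = ε_u/ε, and the unresolved-scale equations take the standard two-equation shape with a modified destruction coefficient:

```latex
\begin{aligned}
\frac{Dk_u}{Dt} &= P_u - \varepsilon_u + \text{(transport)} \\
\frac{D\varepsilon_u}{Dt} &= C_{\varepsilon 1}\,\frac{P_u\,\varepsilon_u}{k_u}
  - C^{*}_{\varepsilon 2}\,\frac{\varepsilon_u^2}{k_u} + \text{(transport)} \\
C^{*}_{\varepsilon 2} &= C_{\varepsilon 1} + \frac{f_k}{f_\varepsilon}\,\big(C_{\varepsilon 2} - C_{\varepsilon 1}\big)
\end{aligned}
```

At f_k = f_ε = 1 the standard RANS k-ε model is recovered identically.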

The PANS methodology has some very attractive features:

- The PANS methodology is based on the kinetic-energy content and the RANS 2-equation closure methodology rather than on a grid-dependent filter, rendering the model closed, in contrast to LES, which is essentially an unclosed method.

Perhaps new advances on the route to “grid-independent” LES modelling (S. Pope, U. Piomelli) shall resolve some of the issues, but it shall take some time before such methodologies find their way into general-purpose CFD codes, as most of them exploit dynamic, non-local LES concepts.
- As the sub-grid-scale filter depends not explicitly on the grid resolution but on the unresolved kinetic energy and dissipation, there is a decoupling between the physical and the numerical resolution.
- The two evolving parameters, the unresolved kinetic energy and dissipation, may be either constant (as a fraction of their RANS values) or space- and time-dependent (as in DES), rendering PANS more of an infrastructure for resolved-scale simulations than a simple modelling approach.
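In PANS the destruction coefficient of the unresolved-dissipation equation is modified as C*_ε2 = C_ε1 + (f_k/f_ε)(C_ε2 − C_ε1), following Girimaji’s published form (the standard k-ε constants used below are assumed values). A small sketch showing the RANS limit f_k = 1 and a typical LES-zone value:

```python
def c_eps2_star(f_k, f_eps=1.0, c_eps1=1.44, c_eps2=1.92):
    """PANS modified destruction coefficient for the epsilon_u equation.

    f_k   : unresolved-to-total kinetic energy ratio (1.0 -> RANS limit)
    f_eps : unresolved-to-total dissipation ratio (~1 at high Reynolds number)
    """
    return c_eps1 + (f_k / f_eps) * (c_eps2 - c_eps1)

print(c_eps2_star(1.0))  # RANS limit: recovers the standard C_eps2 = 1.92
print(c_eps2_star(0.4))  # typical LES-zone value: ~1.632
```

Lowering f_k shrinks the destruction coefficient toward C_ε1, reducing the modeled dissipation and letting a larger portion of the scales be resolved.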

PANS simulation of rotationally oscillating bridge segment (by Chalmers University)

#### Additional Potential Use: Incorporation of PANS in Zonal Hybrid RANS-LES *(L. Davidson – Chalmers University)*

A new advancement in the field of zonal hybrid RANS-LES methods is the employment of PANS’ attractive features to construct a straightforward hybrid infrastructure.

In this application, PANS is applied in the URANS sub-domain with the unresolved kinetic energy parameter set to unity, and with a tuned value of 0.4 in the LES sub-domain. As stated in former paragraphs, the imminent issue is consistently defining the RANS-LES interface layer. In this modelling approach it is done through the use of the unresolved kinetic energy gradient, which gives rise to an additional term in the momentum equations and the K equation above, **only** at the interface, acting as a forcing term in the momentum equation to create a smooth RANS-LES interface.

### IN SUM…

Task-specific EVMs promise extraordinary “on paper” features which, when optimally practiced with a **deep understanding** of the model at hand, should allow the practitioner’s objective to be attained.

Nevertheless, none of the models has to this date obtained wide use in the CFD practitioners’ community, and the V&V conducted is still too limited and case-specific.

Furthermore, although models like SAS and PANS do not impose LES grid requirements, their grid requirements are still somewhat overwhelming if high-fidelity post-processing quality is to be obtained on the fly (as opposed to the low-fidelity results from standard EVMs), mostly without much added value besides pretty, seemingly unsteady turbulent content.

My final conclusion:

Delve deeply into the backbone of theory underlying these models, then perform case-specific V&V and compare the results to a trusted benchmark LES. You might just find wonder in them!

**Stay tuned for Turbulence Modeling Best Practice Guidelines: Standard EVMs – PART IV, where model-specific guidelines shall be presented on a per-model basis.**
