As far as we know, there is only one objective physical reality, but there are many ways of describing it. Usually, we refer to these ways as “theories” or sometimes “models”.
Ever since the days of Aristotle and his contemporaries, science has discovered many such theories or models; some may be harder to grasp intuitively, but they offer greater precision and wider applicability in return.
Moreover, it is also very clear that it is not enough for these theories to have some predictive success or interesting heuristic explanations in and of themselves; they also have to fit together.
We usually prefer the vocabulary of the less fundamental theory simply because it is much more useful when we look at the system more broadly.
An example related to our field of interest involves the air in a room. We know that air is a gas, and we may describe it as having properties such as temperature, density, velocity, viscosity, pressure, etc… In other words, we specify a Thermodynamic State. Under this very useful description of air, we think of it as a continuous fluid, and all of these descriptive properties have specific values at every point in the room, regardless of whether we actually know them.
On the other hand, we also know that at a more fundamental level, air is composed of individual atoms and molecules; we even know it's mostly nitrogen and oxygen, with trace amounts of other elements.
So we could try to describe the air in our room by specifying the velocity and position of each and every one of the notoriously large number of these molecules. This even has a name: the Kinetic Theory of Gases, and at least as far as physics is concerned it is a legitimate way to describe the air in the room.
(Those acquainted with the field of combustion learn early on to separate the Thermodynamic Description, in which we are only interested in the final state of the system in a particular setup, its equilibrium (perhaps after a very long time), and not in how it got there, from the Kinetics Description, which tells you exactly how it got there. An entirely different objective…)
Pierre-Simon Laplace, an 18th-century scientist and philosopher, proposed the scientific/philosophical notion of a demon with the ability to know the precise location and momentum of every atom in a closed system (Laplace actually referred to the entire universe), such that the system's entire past and future could be calculated by the demon according to classical physics.
(Note: although some would scream CHAOS!!!… chaos actually has nothing to do with this. While chaos describes the tendency of very small discrepancies in initial conditions to grow into exponentially large differences in a later state, the demon has full and precise knowledge of the initial state…)
The specification of the state of each molecule at every moment in time is a self-contained description of the system, but in practice, missing even the slightest information about the state of some molecules would invalidate such a description. In other words, we would have to be as capable as Laplace's demon to actually gain value from describing a system in the vocabulary of the kinetic theory.
The Macroscopic Description
Just as there are equations that tell us how the molecules evolve as time progresses, there are separate equations which tell us how the fluid properties evolve in time.
Note that even though no one thinks we shall ever acquire the resources to act as an equivalent of Laplace's demon, or even come close, the macroscopic description, presenting an entirely different vocabulary for speaking about physical phenomena, gives us a very valuable description of the physical world. Aeronautical engineers help "heavier-than-air" objects fly, and atmospheric scientists give us the weather forecast by solving these macroscopically derived equations every day! This is no less than incredible!!…
Mapping between Microscopic and Macroscopic Realms
The two levels of description, the microscopic-molecular and the macroscopic-continuum, are largely autonomous, but they also have different ranges of applicability, manifested in their different vocabularies. For example, we are not allowed to talk about the temperature of an atom, or about the pressure exerted on it. This is captured by the concept of emergence: an entity is observed to have properties its parts do not have on their own, properties or behaviors that emerge only when the parts interact within a wider whole. Note that this doesn't mean an emergent description can contradict what the laws of physics allow to happen at the lower-level description; it is actually fully constrained by what the lower-level description allows, even if we do not have enough information or resources to show that it is.
Moreover, it is especially straightforward, in the case of fluid dynamics, to show that the macroscopic description can be directly obtained from the microscopic one. In other words, there is an explicit mapping from the world of molecules to that of fluids. The macroscopic description serves as an effective theory for the microscopic theory, even though, historically speaking, we invoked the vocabulary of air's pressure and velocity long before we knew air was made of molecules.
On the other hand, when mapping a state from the macroscopic realm back to the microscopic one, we find that there are many different states in the microscopic theory that describe the same state in the macroscopic theory.
As an example of this explicit mapping from the microscopic to the macroscopic realm, we may take the macroscopic property of viscosity. We consider the average number of molecules moving through a unit area in a specific direction. For an ideal gas the molecular velocities follow the Maxwellian distribution (note: this is a manifestation of many different states in the molecular theory being mapped to the same state in the fluid one), such that all directions are equally probable, and the average molecular speed is the thermal velocity. On average, half of the molecules move toward the positive side of the plane and the other half toward the negative side; taking the vertical velocity component and integrating over a hemisphere gives the total number of molecules en route toward the positive direction. We then look at a typical molecule and the path it travels without colliding, the mean free path. On its way from P to Q each molecule is said to be "typical of where it came from"; hence each molecule arriving from P carries the streamwise momentum of the layer it left. To a first-order Taylor expansion this gives the total momentum flux toward the negative side, and on the same grounds the total momentum flux toward the positive side. Summing both contributions yields the net shear stress, from which the viscosity can be read off directly. And we've mapped viscosity!!
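Since the original equations did not survive, here is a standard kinetic-theory reconstruction of the steps above (a sketch, not necessarily the author's exact algebra; n is the number density, m the molecular mass, c̄ the thermal speed, λ the mean free path, and the O(1) numerical prefactor depends on the averaging details):

```latex
% thermal speed and one-sided molecular flux through a unit area:
\bar{c} = \sqrt{\frac{8 k_B T}{\pi m}}, \qquad
\Phi = \tfrac{1}{4}\, n\, \bar{c}

% momentum carried from one mean free path below / above the plane at y
% (first-order Taylor expansion):
\dot{p}^{-} \approx \tfrac{1}{4}\, n\, \bar{c}\, m
  \left[ u(y) - \lambda \frac{du}{dy} \right], \qquad
\dot{p}^{+} \approx \tfrac{1}{4}\, n\, \bar{c}\, m
  \left[ u(y) + \lambda \frac{du}{dy} \right]

% net shear stress and the resulting viscosity estimate:
\tau = \dot{p}^{+} - \dot{p}^{-}
     = \tfrac{1}{2}\, \rho\, \bar{c}\, \lambda\, \frac{du}{dy}
\quad \Longrightarrow \quad
\mu \approx \tfrac{1}{2}\, \rho\, \bar{c}\, \lambda
```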
The continuum assumption underlying fluid dynamics would not be valid if the effects of particular molecules were important individually, rather than only in aggregate. For this not to be the case, and for the fluid-dynamics description to be valid and autonomous with respect to the "black box" of the microscopic realm, we need to ensure Scale Separation.
Scale separation is one reason why aerodynamic engineers are not too worried about whether upcoming experiments at CERN's Large Hadron Collider (LHC) will find, or fail to find, superpartners in collider experiments, strengthening or weakening the credence of supersymmetry theories; no such finding will change anything in the macroscopic realm in which they perform their calculations…
In the case of the continuum assumption and the above derivation, this means that we want the ratio of the molecular mean free path to a representative physical length scale, the Knudsen number Kn = λ/L, to be much smaller than 1.
Fortunately, for most day-to-day engineering applications serving as candidates for a fluid-dynamics inquiry, there is no problem meeting this criterion.
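As a quick numerical sanity check of this criterion, here is a short Python sketch estimating the mean free path of sea-level air and the resulting Knudsen number for a 1 m reference length (the effective molecular diameter is an assumed textbook-style value, so treat the numbers as order-of-magnitude only):

```python
import math

# assumed sea-level conditions and an assumed effective molecular diameter of air
k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 293.0          # temperature, K
p   = 101325.0       # pressure, Pa
d   = 3.7e-10        # effective molecular diameter, m (assumed value)

# mean free path from elementary kinetic theory
mfp = k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)   # on the order of 1e-7 m

L  = 1.0             # representative length scale, e.g. a 1 m wing chord
Kn = mfp / L         # Knudsen number: continuum assumption requires Kn << 1
print(f"mean free path = {mfp:.2e} m, Kn = {Kn:.2e}")
```

For a wing chord, Kn comes out many orders of magnitude below unity, which is exactly why the continuum description is so comfortably valid in everyday aerodynamics.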
How Does All This Relate to Turbulence?…
The Basis for Day-to-Day Turbulence Modeling: Mixing Length Theory
It was the German engineer Ludwig Prandtl (arguably the most prominent figure in shaping our current understanding of the phenomenology of aerodynamic flows) who first saw this connection and eloquently explained the analogy between turbulent motions and molecular mixing. Although molecules and turbulent eddies are fundamentally VERY different, this amazing analogy is directly responsible for billions of dollars in fuel-budget savings over the past 50 years…
Prandtl first hypothesized that a fluid flow could be viewed as a collection of fluid parcels moving about randomly with some characteristic speed over some characteristic length scale, while essentially retaining their momentum. This hypothesis is modeled on a similar one from the kinetic theory of (rarefied) gases, in which molecules moving about randomly (Brownian motion) according to the Maxwellian distribution, such that all directions are equally probable, with some characteristic speed (the average molecular speed being the thermal velocity) over some characteristic length scale (the mean free path), hold on to the characteristic momentum of the velocity layer they came from.
At the molecular level we may propose a decomposition of the velocity into a mean part and a random molecular part: U is defined by the mean profile U(y), and u″ is the random molecular motion. The flux of any property through the plane y=0 is proportional to the velocity component normal to the plane, which in the description above is v″. Hence the instantaneous change of momentum through a differential element dS can be written down directly.
Conducting an ensemble average removes the mean-times-fluctuation cross terms, and by definition the stress acting on y=0 may then be written in terms of the averaged molecular momentum flux. Breaking the stress into hydrostatic pressure and viscous stresses (a split that shall prove extremely useful later in decomposing aerodynamic drag into its distinct constituents) allows for a direct relation between the momentum transfer of colliding molecules and the viscous stresses defined earlier.
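The relations referred to here can be reconstructed in standard notation (a sketch under the usual assumptions; double primes denote the random molecular motion):

```latex
% decomposition into mean and random molecular parts:
u = U(y) + u'' , \qquad v = v''

% streamwise momentum carried through a differential element dS of the plane y=0:
dF_x = \rho\, (U + u'')\, v'' \, dS
\;\;\xrightarrow{\text{ensemble average}}\;\;
\langle dF_x \rangle = \rho\, \langle u'' v'' \rangle \, dS

% stress on y=0, split into hydrostatic pressure and viscous parts:
\sigma_{ij} = -p\,\delta_{ij} + \tau_{ij}, \qquad
\tau_{xy} = -\rho\, \langle u'' v'' \rangle = \mu\, \frac{dU}{dy}
```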
Prandtl now postulated the following:
Momentum transfer by molecular collisions → Momentum transfer by turbulent motion
Mean free path → Mixing length
Thermal velocity → Mixing velocity
Random (Brownian) motion → Turbulent motion
Molecular transport of momentum → Turbulent transport of momentum
It is then very straightforward to write, directly from the above correspondence, an expression for the turbulent shear stress in terms of a mixing length and the mean velocity gradient, and with it an eddy viscosity.
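The relations implied by the analogy can be reconstructed in standard notation (a sketch; primes denote turbulent fluctuations, l_m the mixing length, μ_t the eddy viscosity):

```latex
% turbulent shear stress modeled via an eddy viscosity:
-\rho\, \langle u' v' \rangle \;=\; \mu_t\, \frac{dU}{dy},
\qquad
\mu_t \;=\; \rho\, l_m^2 \left| \frac{dU}{dy} \right|
```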
This is the Boussinesq Hypothesis, the basis for eddy-viscosity models, which are, for all practical engineering purposes, almost the only type of turbulence model in use.
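As a minimal illustration, here is a Python sketch of the mixing-length closure applied to a simple linear shear profile (all names and numbers are mine, chosen purely for illustration):

```python
# Illustration of Prandtl's mixing-length closure:
#   nu_t  = l_m**2 * |dU/dy|          (eddy viscosity)
#   tau_t = rho * nu_t * dU/dy        (turbulent shear stress)

def mixing_length_stress(U, y, l_m, rho=1.225):
    """Return (nu_t, tau_t) at interior points of a 1-D mean velocity
    profile, using central differences. U and y are equal-length lists."""
    nu_t, tau_t = [], []
    for i in range(1, len(y) - 1):
        dUdy = (U[i + 1] - U[i - 1]) / (y[i + 1] - y[i - 1])
        nut = l_m**2 * abs(dUdy)        # eddy viscosity, m^2/s
        nu_t.append(nut)
        tau_t.append(rho * nut * dUdy)  # turbulent shear stress, Pa
    return nu_t, tau_t

# linear shear profile U = S*y with S = 100 1/s, mixing length l_m = 0.01 m:
y = [i * 0.01 for i in range(11)]
U = [100.0 * yi for yi in y]
nu_t, tau_t = mixing_length_stress(U, y, l_m=0.01)
```

For this constant-shear profile the closure returns a constant eddy viscosity and stress, which makes the quadratic dependence of the stress on the mean velocity gradient easy to see.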
Still, as engineers, many questions seem interesting:
- Can we actually determine which turbulence model is the most valuable to the many engineering applications we have?
- Could the modeling error due to different types of turbulence closure models, and hence our choice of turbulence model, differ even between different parts of the domain for a specific application?
- Does the abundance of turbulence modeling choices produce noticeable engineering value?…
- Some models purport to include "more physics", whether it's RANS transition modeling approaches, Scale-Adaptive Simulation (SAS), or hybrid RANS/LES. Is that consistently so from an engineering-value standpoint?
- Do we even know enough about complex phenomena, such as the transition from laminar to turbulent regimes, to incorporate them in a framework as simplistic and as limited in its range of applicability as RANS?
- Although it is extremely satisfying to present post-processing results that include turbulent content, do models such as Scale-Adaptive Simulation (SAS) and Detached-Eddy Simulation (DES) actually produce a more accurate physical description of unsteady phenomena, or is the accuracy hampered by compounding modeling errors (e.g. insufficient resolution, grey areas)?
I will try to give my honest and modest opinion to some of the questions raised above.
Considering that RANS models typically already struggle to cover even the most basic self-similar free shear flows with a single set of constants, there is little hope that even advanced Reynolds Stress Model (RSM) methodologies will eventually provide a reliable foundation for all such flows. Note also that SRS is not a specific turbulence model, but any methodology whose resolved turbulent content suffices. This means that models such as the Partially-Averaged Navier-Stokes (PANS) model and the Scale-Adaptive Simulation (SAS) model, and even some RANS transition modeling approaches, could also be counted as SRS methodologies. They are indeed built to resolve smaller scales, but whether this consistently translates into engineering value is debatable, though it may grant the engineer a qualitative view of the unsteady phenomenology. As for a replacement for LES, it is quite obvious that transition mechanisms are highly impacted by many parameters unavailable to LCTM models; an LES is essential for quantitative transition prediction.
Many more questions arise, and it seems quite troubling that no textbook answer is readily found… But the problem does not start with the Boussinesq Hypothesis, nor is it bound to the Reynolds-Averaged Navier-Stokes (RANS) methodology. The problem is much deeper, and is evident also in Large Eddy Simulation (LES) methodologies.
In contrast to RANS, where essentially all of the turbulence is modeled, in LES the large energetic scales are resolved while the effect of the small unresolved scales is modeled using a subgrid-scale (SGS) model, tuned to the almost (but certainly not quite…) universal character of these scales.
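Schematically, and in standard (not the author's own) notation, the LES decomposition and one common SGS closure read:

```latex
% filtered (resolved) field, obtained by convolution with a filter kernel G:
\bar{u}_i(x) = \int G(x - x')\, u_i(x')\, dx'

% subgrid-scale stress, the term that must be modeled:
\tau_{ij}^{\mathrm{sgs}} = \overline{u_i u_j} - \bar{u}_i\, \bar{u}_j

% e.g. the Smagorinsky eddy-viscosity closure for its deviatoric part:
\tau_{ij}^{\mathrm{sgs}} - \tfrac{1}{3}\,\tau_{kk}^{\mathrm{sgs}}\,\delta_{ij}
= -2\,(C_s \Delta)^2\, |\bar{S}|\, \bar{S}_{ij}
```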
The problem is that in turbulence there is no spectral gap, or in the words of the beautiful poem by fluid-dynamics pioneer Lewis Fry Richardson:
Big whorls have little whorls,
which feed on their velocity;
And little whorls have lesser whorls,
And so on to viscosity.
This reflects the physical notion that mechanical energy injected into a fluid generally enters at fairly large length and time scales, but this energy undergoes a "cascade" whereby it is transferred to successively smaller scales until it is finally dissipated (converted to thermal energy) at molecular scales. But the picture one also gets is that of eddies (or scales) constantly interacting throughout the spectrum… No scale separation.
And indeed, the most serious problems we have in LES arise at wave-numbers around the filter cutoff, due to the interaction between resolved and unresolved scales, which needs to somehow be captured. There have been many attempts to alleviate this problem (e.g. scale-similarity models), but these methodologies are either cumbersome to implement for engineering purposes or work only for a very narrow range of flow types.
It seems too inaccurate to regard the averages/filters applied to the NSE, and the ad-hoc reasoning behind closure models, as a higher-level description of turbulence in the way that the continuum assumption and the macroscopic description of fluid dynamics and thermodynamics relate to the lower-level microscopic description of the kinetic theory of gases.
The abundance of turbulence modeling approaches, and the race of commercial vendors to present the most extensive portfolio of turbulence models, comes with no clear, quantitative measure of the added value they bring to our description of physics and its mapping to engineering.
My intention is not to portray myself as a pessimist or an opponent of turbulence modeling. On the contrary: it actually seems that turbulence models succeed in providing engineering value well beyond what would be expected from their theoretical range of applicability…
But wishful thinking is a powerful force, and it makes sense to guard against it. It is important to be very modest and cautious about our expectations from turbulence modeling approaches. In particular, because turbulence is notoriously hard to predict, and because the theoretical range of applicability of turbulence modeling approaches is very limited, it is much preferable to deeply understand turbulence modeling down to its basic constituents before admiring the abundance of choices and testing them in what will ultimately turn out to be a narrow, application-dependent validation…