A successful CFD analysis workflow consists of the following key elements (often repeated in cycles, and sometimes supplemented with additional elements as the problem demands):
- Definition of CFD modeling goals according to the model problem at hand:
Physics gives us a description of reality, in other words a model, and the CFD modeling goals should therefore conform to the level at which that model describes reality.
- Performing a pre-calculation to establish a deep understanding of the problem at hand, obtaining bounds and a route for exploration. If one cannot consolidate a deep understanding of the physics of the model problem with pen and paper, there is little value in a CFD exploration.
- Creating a geometry description of the model problem, incorporating simplifying assumptions (excluding CFD-passive features, e.g. bolts) and approximations (e.g. symmetry).
- Defining a mesh according to accuracy (desired mesh quality), efficiency (desired cell count) and ease of generation (desired mesh topology), all the while remembering that these are not orthogonal vectors so the best compromise should be chosen.
- Setting up the domain and physics:
- Prescribing operating and boundary conditions to conform with physics of the model problem and the definition of the domain.
- Selecting appropriate physical models (turbulence, combustion, radiation, multiphase, etc…) to conform with the physics of the model problem.
- Defining material properties for the solid/fluid/mixture to conform with the physics model (e.g. constant or field-dependent).
- Prescribing initial conditions or initial values based on an “educated guess” or previous solution.
- Setting up the solver:
- Setting up the solver type (density-based or pressure-based, steady or transient) to conform with the physics of the model problem.
- Choosing the solution algorithm (a pressure-velocity coupling scheme for the pressure-based formulation, a flux evaluation methodology for the density-based formulation).
- Setting up spatial and/or temporal discretization according to the level of accuracy to be achieved, and tuning the solution controls (under-relaxation factors, intrinsic iteration loops, multigrid, etc.) to encourage or accelerate convergence (these are not orthogonal vectors either, so again the best compromise should be chosen).
- Setup of solution monitors for equation residuals and key quantitative measurements.
- Computing the solution by iteratively solving the discretized conservation equations until convergence is achieved (changes in solution variables monitored through residuals, minimization of overall imbalances, and unchanging quantities of interest).
- Examining the results based upon formally specified processes and specialized post-processing tools:
- The overall flow pattern produces qualitatively physical results.
- Key features according to the physics of the model problem are resolved.
- Flux balances are satisfied.
- Comparison of integral quantities (e.g. drag or lift) and flow statistics (according to the applicable modeling level deemed by the physical model): mean velocity profile (first-order statistics), R.M.S. profiles (second-order statistics), PSD (one-point spectral analysis), correlations (two-point spectral analysis), etc.
- Error analysis (discretization, iteration, systematic, round-off, model).
- Considering revising the model:
- Physical models: resolving physical features.
- Boundary/initial conditions: adequate domain, switch to prescribed “real” BC/IC, adjust boundary zones values.
- Mesh: replace topology, revise the boundary-layer description, change resolution, etc.
Even the above list is still somewhat partial, as specialized issues might arise and prove important, considering the vast number of simulations that could, in theory, be conceived.
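The "compute until converged" element of the workflow can be made concrete with a toy sketch of my own (not a real CFD solver): Jacobi iteration on a 1-D steady diffusion problem, monitored the way the workflow above suggests — watch a scaled residual *and* a quantity of interest (QoI), and declare convergence only when both have settled.

```python
# Toy illustration of convergence monitoring (assumed problem, not real CFD):
# solve -T'' = sin(pi*x) on [0, 1] with T = 0 at both ends by Jacobi
# iteration; the monitored QoI is the midpoint value of T.
import numpy as np

n = 51
x = np.linspace(0.0, 1.0, n)
source = np.sin(np.pi * x)
T = np.zeros(n)
h2 = (x[1] - x[0]) ** 2

qoi_prev = np.inf
for it in range(1, 50001):
    T_new = T.copy()
    T_new[1:-1] = 0.5 * (T[:-2] + T[2:] + h2 * source[1:-1])
    residual = np.max(np.abs(T_new - T))   # iteration-to-iteration change
    T = T_new
    qoi = T[n // 2]                        # monitored quantity of interest
    if residual < 1e-9 and abs(qoi - qoi_prev) < 1e-11:
        break                              # both monitors settled
    qoi_prev = qoi

# Exact solution is sin(pi*x)/pi**2, so the midpoint should approach 1/pi**2.
print(f"iterations: {it}, midpoint T: {qoi:.5f}, exact: {1 / np.pi**2:.5f}")
```

The residual alone is not a sufficient stopping criterion; slowly varying quantities of interest can keep drifting long after residuals look "small", which is why both are monitored here.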
The following set of posts shall emphasize one of the key elements in the above workflow: setting up the solver.
In what follows I shall outline and describe the characteristics of two distinct types of solvers, which I shall name pressure-based and density-based solvers. I know some might regard this as an obvious choice, as these are the two parent methodologies in ANSYS Fluent. Nevertheless, the algorithms and their controls are fairly general to most general-purpose commercial CFD software packages.
Pressure-Based vs. Density-Based Solvers
Traditionally, the pressure-based approach was developed for low-speed incompressible flows (in accordance with the rule of thumb M < 0.3), while the density-based approach was mainly used for high-speed compressible flows. Although both methods have since been extended and reformulated to solve a wide range of flow conditions beyond their traditional intent, there are still regimes in which the pressure-based solver is preferred and vice versa.
Not getting into small details (yet… 😉 ), it is customary in both methods to obtain the velocity field from the momentum equations. In the density-based approach, the continuity equation is used to obtain the density field while the pressure field is determined from the equation of state. In the pressure-based approach, on the other hand, the pressure field is extracted by solving a pressure or pressure-correction equation obtained by manipulating the continuity and momentum equations (interestingly, the continuity equation is in effect an equation for pressure, even though pressure does not explicitly appear in it).
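The density-based path can be illustrated with a tiny sketch (my own, purely illustrative, with assumed names): density arrives directly via the conserved variables of continuity, and pressure is then *recovered* algebraically from the equation of state (here, ideal gas) rather than being solved for.

```python
# Illustrative sketch (assumed names): pressure recovered from the ideal-gas
# equation of state given 1-D conserved variables (rho, rho*u, rho*E).
GAMMA = 1.4  # ratio of specific heats (air)

def pressure_from_conserved(rho, rho_u, rho_E):
    """Ideal-gas pressure p = (gamma - 1) * (rho*E - 0.5*rho*u^2)."""
    u = rho_u / rho                   # velocity from the momentum variable
    e_kinetic = 0.5 * rho * u**2      # kinetic energy per unit volume
    return (GAMMA - 1.0) * (rho_E - e_kinetic)

# Round trip: build a conserved state from known p, rho, u and recover p.
rho, u, p = 1.2, 100.0, 101325.0
rho_E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
p_recovered = pressure_from_conserved(rho, rho * u, rho_E)
print(p_recovered)  # ≈ 101325.0
```

In the pressure-based approach no such algebraic closure for pressure exists in the incompressible limit, which is exactly why a derived pressure (or pressure-correction) equation is needed.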
As in present-day practice the pressure-based solver serves as the default for most applications and handles a Mach number range of roughly 0 to 2–3, we shall kick off with its description and complete the task with the density-based approach.
Segregated Algorithms: Motivation
I shall start my discussion by presenting a slight difficulty stemming from the incompressible Navier-Stokes equations (steady-state) themselves:
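In primitive variables (with $\mathbf{f}$ denoting the body force term) these read:

$$\nabla \cdot \mathbf{u} = 0,$$

$$\rho\,(\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}.$$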
In the above, ρ and µ are the constant density and viscosity, and the last term is a body force (recognizing that if the body force arises from buoyancy, rotation or electromagnetic fields and depends upon further dependent variables, we shall add an additional equation, as in the case of temperature and the energy equation for example). The dependent variables in this equation are the velocity vector u and the pressure p.
A fundamental observation associated with the above (primitive) form of the NSE is that the momentum equations may be viewed as equations for the velocity field alone, as it can be shown that the pressure may be eliminated from them. This is a very important notion from a computational standpoint, as it sets up the framework for the development of segregated algorithms. As we would like a firm foundation for the segregated algorithms described later, I shall explain this notion with purely mathematical reasoning.
We start our discussion on this notion with a theorem taken from the field of vector calculus and named after the physicist and physician (he made quite a contribution to the field of vision…) Hermann von Helmholtz. The theorem, also known as the fundamental theorem of vector calculus, states that any sufficiently smooth, rapidly decaying vector field in three dimensions can be resolved into the sum of an irrotational (curl-free, and as such representable as the gradient of a scalar potential) vector field and a solenoidal (divergence-free) vector field. This is known as the Helmholtz decomposition.
In accordance with Helmholtz theorem:
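$$\mathbf{v} = \mathbf{u} + \nabla\varphi, \qquad \nabla \cdot \mathbf{u} = 0,$$

with $\nabla\varphi$ the irrotational (curl-free) part and $\mathbf{u}$ the solenoidal (divergence-free) part of $\mathbf{v}$.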
Furthermore, we shall continue our path toward eliminating the pressure from the momentum equation by defining a linear operator which is the orthogonal projection of v onto its divergence-free part, or plainly speaking a mapping from v to u:
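$$P\,\mathbf{v} = \mathbf{u} = \mathbf{v} - \nabla\varphi, \qquad P = I - \nabla\,\Delta^{-1}(\nabla \cdot).$$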
The above is termed the Leray projection, named after the French mathematician Jean Leray, and may informally be seen as the projection onto divergence-free vector fields.
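To make the projection concrete, here is a minimal numerical sketch (my own illustration, not anything a production solver does): on a doubly periodic unit box the projection is diagonal in Fourier space, mode by mode û ← û − k(k·û)/|k|², so the irrotational part of a velocity field can be removed directly with FFTs.

```python
# Numerical Leray projection on a 2-D periodic domain (illustrative sketch).
import numpy as np

def leray_project(u, v):
    """Orthogonal projection of the periodic field (u, v) onto
    divergence-free fields: u_hat <- u_hat - k (k . u_hat) / |k|^2."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                 # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                            # avoid 0/0; mean mode is untouched
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    kdotu = kx * uh + ky * vh                 # the i factors of div/grad cancel
    uh -= kx * kdotu / k2
    vh -= ky * kdotu / k2
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

# A divergence-free field (from a stream function) plus a pure gradient part:
n = 64
xx = np.arange(n) / n
X, Y = np.meshgrid(xx, xx, indexing="ij")
u_sol = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v_sol = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
u_irr = -np.sin(2 * np.pi * X)               # = d/dx of cos(2*pi*x)/(2*pi)

up, vp = leray_project(u_sol + u_irr, v_sol)
print(np.max(np.abs(up - u_sol)), np.max(np.abs(vp - v_sol)))  # ~ machine eps
```

The projection recovers the solenoidal part exactly (to machine precision) because each Fourier mode of the gradient part is parallel to its wavevector; this is precisely the mechanism by which pressure can be eliminated from the momentum equation.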
We have gone through a somewhat rigorous mathematical route to consolidate the logic and motivation for segregated NSE solver algorithms. In what follows I shall try to keep a motivational, albeit rigorous, path while laying the foundations for the Navier-Stokes algorithms as implemented in the ANSYS Fluent solver, followed by “best practice” guidance, a review of current (implemented and via-UDF) pressure-based and density-based NSE algorithms, and some insight into proposed future improvements to the algorithms.