By John Chawner
At the recent AIAA Science and Technology Forum 2015 (aka SciTech), I was fortunate to participate on a panel discussion related to the meshing and geometry needs identified by the CFD (computational fluid dynamics) Vision 2030 Study, hereinafter the Study [Reference 1]. Specifically, The Path To and State of Geometry and Meshing in 2030 was organized by the Meshing, Visualization, and Computational Environments (MVCE) Technical Committee in order to provide technical interchange that would illuminate the path forward for geometry and mesh generation.
The panel discussion was moderated by Dr. Hugh Thornburg (HPCMPO PETTT Technical Lead). The panelists in addition to myself were Dr. John Dannehoffer (Syracuse University), Dr. Saikat Dey (Naval Research Laboratory), Jeff Slotnick (Boeing, lead author of the Study), and Dr. Nigel Taylor (MBDA UK).
Note: In order to protect the innocent, quotes from the panel and audience are unattributed and names of specific software packages are omitted from the discussion below.
As a starting point, consider what the Study had to say about preprocessing in its section on CFD Technology Gaps and Impediments. Generating meshes was called a “principal bottleneck” in the simulation workflow and it was said to constitute the “dominant cost” in terms of human intervention.
In particular, two main impediments were noted: inadequate linkage with computer aided design (CAD) software, and poor performance and robustness. The former is not surprising, as meshing and geometry are inextricably coupled. The latter concerns the ability of meshing algorithms to produce a suitable mesh with little user input.
In particular, the Study cited a number of specific examples in support of these points (see Ref. 1 for the full list).
With that as a backdrop, the panel discussion began.
The panel kicked off with a pair of questions about the anticipated and required turnaround time to generate a mesh and the associated user skill level.
I thought the Study made the anticipated turnaround time very clear when it called for a single engineer in the year 2030 to be able to conduct a large ensemble of simulations (e.g. 1,000) from conception through post-processing within 24 hours. That should suffice for a visionary requirement; the details of that vision will not materially change the approach to achieving it.
As for the currently required time, the answer is “it depends,” even if you assume the user is the prototypical “perpetual intermediate” – someone who is neither a power user nor a novice. It depends on the application, the technique/software, the desired output, and the time constraints.
The panel and the audience squarely and repeatedly blamed meshing's problems on CAD geometry for all the reasons cited above and more. Some in the audience were of the opinion that most mesh generation software could produce a valid mesh if given a “clean” geometry. (This is similar to what is said about flow solvers; just about any solver can compute an accurate answer given a “good” grid.)
But the truth of the geometry issue is more complicated than that. There was quite a bit of discussion about what exactly is meant by “CAD data.”
CAD data can be differentiated by the software used to create it. For example, the major mechanical CAD (MCAD) software packages are widely used by larger organizations such as aerospace OEMs. For various reasons, organizations also use specialized CAD tools (called preCAD) for the generation of geometries for simulation. These organizations feel that the geometry produced by preCAD is virtually analysis-ready as opposed to MCAD geometry, which often has to be cleaned. On the other hand, some consider preCAD to be a BAND-AID® on the geometry problem despite its successes.
CAD data can be differentiated by the user. Models may be generated by the designer (a non-analyst engineer) or by the analyst. Also, in many cases the designer may create the model but another person post-processes the model before giving it to the analyst. The implication is that designers simply are not aware of the geometry needs of the analyst. Of course, MCAD software is usually so expensive to maintain that there often are not enough licenses available for analysts to use, resulting in their using preCAD (see above).
The type of geometry and its level of sophistication also vary by the stage of the design process. During conceptual design, preCAD or other design tools may dominate, resulting in geometry that may be simpler but possibly sloppier. During detailed design, MCAD may rule the day such that analysts have to deal with (overly) detailed geometry (e.g. an aircraft with all the interior seats).
Regardless of who generates it or the software used, a fundamental issue with CAD data is the form of the geometry. The form can be analytic (e.g. NURBS) or faceted (e.g. STL, 3D scans, subdivision surfaces). Regarding analytic geometry, there is also the issue of how solid models are represented and maintained (trimmed surfaces, etc.). Working with NURBS is non-trivial, and most CFD programmers lack access to suitable tools. By contrast, faceted geometry is nothing more than a mesh, with which CFD programmers already have great expertise. The problem, however, is that faceted data may be no cleaner – and perhaps actually dirtier – than NURBS.
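To illustrate why faceted geometry feels familiar to CFD programmers – and how it can still be "dirty" – here is a minimal, hypothetical Python sketch (my own illustration, not any particular tool's implementation) that parses a tiny ASCII STL fragment and flags unmatched triangle edges, one common symptom of a non-watertight model:

```python
# Toy illustration: faceted geometry is "just a mesh." Parse an ASCII STL
# fragment and check watertightness by verifying that every triangle edge
# is shared by exactly two triangles. Unmatched edges (gaps, cracks) are
# one common way faceted data turns out "dirty."
from collections import Counter

STL = """solid tet
facet normal 0 0 0
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
facet normal 0 0 0
 outer loop
  vertex 0 0 0
  vertex 0 1 0
  vertex 0 0 1
 endloop
endfacet
facet normal 0 0 0
 outer loop
  vertex 0 0 0
  vertex 0 0 1
  vertex 1 0 0
 endloop
endfacet
facet normal 0 0 0
 outer loop
  vertex 1 0 0
  vertex 0 0 1
  vertex 0 1 0
 endloop
endfacet
endsolid tet
"""

def parse_ascii_stl(text):
    """Return a list of triangles, each a tuple of three (x, y, z) vertices."""
    tris, verts = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "vertex":
            verts.append(tuple(float(p) for p in parts[1:4]))
            if len(verts) == 3:
                tris.append(tuple(verts))
                verts = []
    return tris

def unmatched_edges(tris):
    """Edges not shared by exactly two triangles (a simple 'dirtiness' check)."""
    edges = Counter()
    for tri in tris:
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            edges[tuple(sorted((a, b)))] += 1
    return [e for e, count in edges.items() if count != 2]

tris = parse_ascii_stl(STL)
print(len(tris), len(unmatched_edges(tris)))  # closed tetrahedron: prints "4 0"
```

A closed tetrahedron passes the check; drop any one facet and the three edges it bordered immediately show up as unmatched – the kind of defect that a mesher downstream must detect and repair.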
There are also techniques like isogeometric analysis (IGA) whereby the NURBS patches themselves in the geometry act as the grid.
When it comes to the nuts-and-bolts issue of how exactly the mesher gets access to the CAD data, there are basically two mechanisms: by a file or by an application programming interface (API). Both require translation of the CAD data. File-based interchange may require two translations (CAD to file, file to mesher) except in the case where the file is the CAD system's native file. API-based interchange may also require translation if the CAD system and mesher don't share a common kernel (see below). Regardless of the method, these translations are a major source of errors in geometry interchange.
Of course, API-based access can be very tightly coupled if the mesher is actually embedded in the CAD software. A practical issue related to API-based interchange is the licensing requirement of the API (i.e. does the mesher include a license for the CAD system's native API?). And CAD-embedded meshing requires the analyst to have access to the CAD system, which can be problematic as noted above.
Inside every software program that works with CAD data is its kernel, in which the CAD data is stored and manipulated. If the kernel used by the CAD software is not the same as the mesher's kernel, the translation of the geometry can often introduce errors.
Because the Study largely considers CFD within the design process (as opposed to forensic or post-design simulations), there is a strong need for geometry to go from CFD back to CAD, assuming that simulation has been used to shape the design for performance reasons. At present, few CFD/meshing software packages offer bi-directional translation except through standard files (e.g. IGES), a scenario that suffers from the problems cited above, just in the opposite direction.
Also, if the geometry is to be modified by the analyst and/or the CFD software, design intent needs to be shared so that the geometry modification can be done simply and effectively. Many geometry transfer mechanisms do not carry this information.
At least one panelist felt that many of our problems could be solved through educating our users on the myriad geometry issues to better set expectations and best practices.
Another panelist opined that CFD's influence on the major MCAD systems was virtually zero. This was echoed by an audience member who said that we should just get used to CAD data being sloppy and just figure out how to deal with it.
There was very little discussion about the benefits or drawbacks of any particular meshing technique. The point was made, however, that because of the serious nature of the meshing challenge, perhaps we should consider entirely new techniques rather than trying to bandage the current methods.
An analogy was made that perhaps the Study is meshing's “Kodak moment” in which we leave the old, analog film (i.e. current meshing methods) behind and go totally digital with respect to our geometry and meshing. Specifically, the reference here was to Cartesian grids with sufficiently high levels of refinement; that is, refinement so high that manual resolution of sharp features is unnecessary.
The Study specifically calls for adaptive mesh refinement (AMR) and that was echoed by many in the audience, some of whom claimed never to look at a mesh at all and instead rely totally on AMR to generate the grid along with the solution to capture all relevant physics. Of course, this only works as long as you start out with a sufficiently decent coarse mesh that will capture all the relevant physics. Furthermore, some in the audience were proponents of adjoint-based adaption, declaring that gradient-based adaption would only lead to the wrong answer. Finally, many in the discussion felt that a priori metrics are virtually worthless.
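The basic AMR feedback loop – compute an error indicator, split the flagged cells, repeat – can be sketched in one dimension. What follows is a toy, jump-based indicator of my own construction (a sketch under simplifying assumptions, not the adjoint-based machinery the audience favored, nor any production AMR scheme):

```python
# Toy 1-D illustration of indicator-driven adaptive refinement: bisect any
# cell whose endpoint jump in f exceeds a tolerance, repeat until no cell is
# flagged. Note the caveat from the discussion: if the initial coarse mesh is
# too coarse for any cell to "see" a feature, the indicator never triggers.
import math

def refine(nodes, f, tol=0.05, max_passes=20):
    """Return node coordinates of an adapted 1-D mesh (split where |df| > tol)."""
    for _ in range(max_passes):
        new_nodes, split_any = [nodes[0]], False
        for a, b in zip(nodes, nodes[1:]):
            if abs(f(b) - f(a)) > tol:
                new_nodes.append(0.5 * (a + b))  # bisect the flagged cell
                split_any = True
            new_nodes.append(b)
        nodes = new_nodes
        if not split_any:  # converged: every cell's jump is within tolerance
            break
    return nodes

# A steep tanh "shock" at x = 0.3; start from a deliberately coarse mesh.
f = lambda x: math.tanh(50.0 * (x - 0.3))
coarse = [i / 4.0 for i in range(5)]   # 5 nodes, 4 uniform cells on [0, 1]
adapted = refine(coarse, f)
print(len(coarse), len(adapted))       # nodes cluster near x = 0.3
```

Starting from only four cells, the refined mesh clusters nodes around x = 0.3 while leaving the smooth regions coarse; had the initial mesh been so coarse that no cell straddled the jump in a detectable way, the indicator would never fire – exactly the decent-coarse-mesh caveat noted above.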
Of course, AMR will eventually involve moving the mesh on the surface, which will require access to the geometry. See the section above for the perils associated with this.
One participant noted that the Study's technology development roadmap (Ref. 1, pg. 23) calls for “large scale parallel mesh generation” in the year 2021, but then questioned the path to that goal given that generating a 50-million-cell mesh today calls for a user with “hero status.”
Some in the audience thought most, if not all, of the challenges cited in the Study have been solved in one form or another by one technique/tool or another, with the implication that no one technique or tool has all the pieces. I would like to think that the Study's authors would have identified any one particular tool that met their vision requirements if one existed. In other words, William Gibson's quote applies: “The future is already here – it's just not very evenly distributed.”
The problem with the “future is already here” viewpoint is that the panel/audience discussion involved mostly experts in CFD and mesh generation. I believe that many of the opinions expressed were biased toward serving their peers. As we all know, putting this technology in the hands of the non-expert analyst requires work of an entirely different nature.
The moderator inquired whether meshing will remain a separate discipline or be merged into the solver. The consensus was that there will continue to be meshing specialists regardless of whether the software is stand-alone or integrated. In other words, the distinction of how the meshing software is deployed is a thin one.
Several in the audience lamented the lack of U.S. government funding for research – basic or applied – in this area, a gap the Study itself points out. Government representatives responded that the Study carries no implied funding commitment.
Without a doubt, 2030 is already here. You can find many examples of tools or techniques that meet one or some of the Study's visionary requirements under certain circumstances. The practical issue is to achieve all of the Study's requirements in a way that's widely applicable. One cannot treat all CFD solvers as a homogeneous, interchangeable lot. Suppose one meshing software package absolutely solves the CAD geometry issue. If it only generates overset structured grids, for example, it is of no use for CFD solvers that need unstructured meshes.
A second practical issue is how willing users will be to migrate to a new technology (see “Kodak Moment” above) simply for its meshing benefits. Most organizations are tightly coupled to their CFD solver due to verification, validation, infrastructure and other investments. It will take an overwhelming amount of preprocessing improvements to set all that aside.
Regardless, what is needed is advancement on the broad technology front that is faced by meshing in order to meet the 2030 visionary requirements. The best way to accomplish this is what we usually do (as cited by an audience member): break down this large problem into many smaller problems and attack them one at a time. In other words, by evolving our technologies over the next 15 years, we will find ourselves with the revolutionary capabilities that the Study envisions.
And to monitor our progress toward 2030, we should set some mileposts along the way. I am hoping that you will see steps in this direction from MVCE very soon.
If you would like to generate your meshes using Pointwise, request a free evaluation today.