This is the third in a series of informal articles about one engineer’s usage of LS-DYNA to solve a variety of non-crash simulation problems. The first was LS-DYNA: Observations on Implicit Analysis, the second was LS-DYNA: Observations on Composite Modeling, and the fourth is LS-DYNA: Observations on Material Modeling.
I come from a background in implicit analysis, where element quality can often be swept under the rug by the use of dense meshes, and since models run quickly, no one really cares about model size, i.e., ten million DOF. In contrast, what I enjoy about explicit analysis is that it demands the utmost in model preparation, from the choice of element types to the creation of clean quad- and hex-dominant meshes having the absolute minimum number of DOF.
What I have been noticing over the last couple of years in the explicit world is the creation of gigantic meshes that are justified by saying, “It runs just fine in four hours using 32 CPU cores.” Although it runs, I wonder how much time was spent debugging this beast, and whether the mesh density was justified by experience or simply by the economy of using an offshore meshing service.
What’s the Point?