Cloud-based HPC, 2 years later...

A reflection on two years of cloud-based HPC use at TEN TECH LLC
Several articles have come out lately, both in "industry" magazines and on social media from reputable software companies, discussing cloud-based HPC. The common thread is that "cloud-based HPC could be a solution for small businesses". My initial reaction: ask the authors, "Where have you been for the last two years?" My second reaction: are we, as a company, really such early adopters of promising technology? We have been a cloud-HPC consumer for two years!

To better serve our customers, TEN TECH LLC constantly monitors the software and hardware industry in search of the better, faster, stronger magic bullet. We do not hesitate to invest in technology if it makes sense, especially in the CAE world (see our CAE arsenal here). Over the years, we have acquired many high-end CAE tools such as XFlow, NX NASTRAN, Abaqus, scSTREAM and 6sigmaET. We are known for our high-fidelity models and our ability to model and study complex physics, and we push these tools to their limits very often. We also push our hardware to its limits on a daily basis. A typical Abaqus stress model or NASTRAN dynamics model for us runs between 5M and 10M DOF, while our 6sigmaET thermal models exceed 25M cells, quite often by a lot.

We are one of the few consulting companies around that does not charge customers for tools or solver computing time; we consider that our cost of doing business. We need to deliver good, meaningful data quickly and often, and a lot of it. Let's face it: the models we build are not getting any smaller, and the physics we study are not getting any simpler. We have a reputation as Subject Matter Experts to preserve, and we are constantly pushing ourselves to do more and to challenge our own comfort zone. I know this sets us apart.

A prime example of our advanced analysis expertise: finite element analysis of the flow and interaction of thermal paste and TIM (thermal interface material) under tension/compression cycles, and how it affects the thermal and mechanical behavior of electronic packages. These are complex material models and contact interactions that only a handful of solvers can tackle, one of them being Abaqus. Interestingly enough, I recently heard an "expert" at a local conference categorically state that nobody was studying this exact problem. We have been working on it for months. In fact, we are so far along that we have already correlated our results with test data and are generalizing our approach to multiple types of materials.

As one can imagine, these types of problems, and the large models we run daily, require a large amount of computing power. Nearly two years ago we sat down and evaluated our situation: we were still a small company with a limited hardware budget, we had pushed and upgraded our workstations to their limits, and we had outgrown our workgroup compute server as well… A yearly hardware refresh isn't really practical. What could possibly be the next magic bullet? Along came Rescale, offering cloud-based HPC access to popular CAE software (XFlow and NX NASTRAN at the time) in an ITAR-compliant manner (critical to us). After a few test cases, it became evident that Rescale was a great solution for our needs. And not only on the hardware side: we are also able to use on-demand licensing for certain CAE solvers, further lowering expenses.
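
For readers curious what "sending a job to the cloud" actually looks like, below is a minimal sketch of programmatic job submission to a cloud HPC platform over a token-authenticated REST call. To be clear, the base URL, payload fields, and two-step create/submit flow are assumptions made purely for illustration; they are not taken from Rescale's documentation, and the real interface will differ.

    # Minimal sketch of token-authenticated REST job submission to a cloud HPC platform.
    # The base URL, payload fields, and create/submit flow are ASSUMED for illustration;
    # this is not Rescale's documented API -- consult the platform docs for the real one.
    import os
    import requests

    API_BASE = "https://platform.example.com/api/v2"   # hypothetical base URL
    HEADERS = {"Authorization": f"Token {os.environ['HPC_API_TOKEN']}"}

    job_spec = {
        "name": "nastran_dynamic_response",            # placeholder job name
        "analysis": {"code": "nx_nastran", "command": "nastran run.dat dmp=16"},
        "hardware": {"core_count": 64, "core_type": "hpc"},   # assumed field names
    }

    # Create the job, then trigger execution (the two-step submit is an assumption).
    resp = requests.post(f"{API_BASE}/jobs/", json=job_spec, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    job_id = resp.json()["id"]
    requests.post(f"{API_BASE}/jobs/{job_id}/submit/", headers=HEADERS, timeout=30).raise_for_status()
    print(f"Submitted job {job_id}")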

It has now been two years since we started using Rescale (see our original announcement here), and we have been very successful with it. We have been able to save a considerable amount of time and even perform complex analyses that would have been too time-consuming or simply impossible with our in-house hardware. We have published, or will publish, some of our successes with Rescale; here are some examples and links to more detailed information:

  • XFlow: XFlow (like most CFD codes) scales wonderfully on clusters. By using on-demand licenses and hardware, we were able to cut our solve time by nearly 90% for a very large-scale vortex-shedding analysis of an array of space telescopes. See the full success story here.

  • NX NASTRAN: On-demand licensing, high core counts for DMP dynamic response analysis, and ultra-fast I/O on Rescale allowed us to solve some very large models in a fraction of the time it would take us in-house. Some of our findings were used as a success benchmark by Siemens PLM (read here) and presented at the prestigious NAFEMS conference (article here).

  • Abaqus/Explicit: For crash, drop, explosion, and any kind of rapid-dynamics simulation, Abaqus/Explicit is one of the top-rated codes in terms of performance and scalability. By using a large configuration on Rescale (7 TB of memory and 512 cores), we were able to perform a MIL-STD-810 bench handling simulation in a reasonable amount of time (a simplified launch sketch follows this list). Because replicating this test by simulation is so numerically intensive, it is typically done physically. Thanks to the Rescale HPC environment, we can now confidently (and affordably!) perform bench handling analysis. More in-depth information can be found here.

  • 6sigmaET: 6sigmaET is a high-performance Cartesian-grid CFD/CHT solver that scales very well on clusters. Thanks to its native interface to Rescale, we are able to submit and solve large models practically seamlessly. Our latest success story, yet to be documented, is a 50M-cell thermal analysis of a complex liquid-cooled defense electronics system that we were able to solve in a few hours, as opposed to an overnight solve on our in-house hardware.


  • Abaqus/Standard: Along with taking advantage of Abaqus' scalability, Rescale allows us to access high-end GPU hardware such as the NVIDIA Tesla K80 to further improve our throughput. Results of our success with Abaqus' GPGPU scalability were presented at the Science in the Age of Experience conference in Boston this year. Read more here.
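
For those wondering how runs like these are actually launched, here is a rough sketch of a driver script kicking off parallel solver runs. The cpus=, gpus=, and interactive options for Abaqus and the dmp= and scr= keywords for NX NASTRAN are standard execution options, but exact spellings vary by version and site installation, and every job name, core count, and file name below is a placeholder rather than actual project data.

    # Illustrative driver script: launching parallel CAE solver runs from Python.
    # Job names, core counts, and file names are placeholders, not real project data;
    # option spellings can vary with solver version and site installation.
    import subprocess

    def run_abaqus(job_name: str, cpus: int, gpus: int = 0) -> None:
        """Launch an Abaqus job with domain-level parallelism and optional GPU offload."""
        cmd = ["abaqus", f"job={job_name}", f"cpus={cpus}", "interactive"]
        if gpus:
            cmd.append(f"gpus={gpus}")   # GPGPU acceleration (Abaqus/Standard)
        subprocess.run(cmd, check=True)

    def run_nx_nastran(input_deck: str, dmp: int) -> None:
        """Launch an NX NASTRAN run with distributed-memory parallel (DMP) processing."""
        cmd = ["nastran", input_deck, f"dmp={dmp}", "scr=yes"]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Placeholder examples only:
        run_abaqus("bench_handling_explicit", cpus=512)      # large Abaqus/Explicit run
        run_abaqus("package_tim_standard", cpus=32, gpus=1)  # Abaqus/Standard with a GPU
        run_nx_nastran("dyn_response.dat", dmp=16)           # NX NASTRAN dynamic response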

We naturally have many more examples of success with Rescale, many of which we cannot share due to the nature of our activities. As we enter our third year of cloud-based HPC with Rescale, we are looking forward to even larger models and more complex simulations, such as large fluid-structure interaction co-simulations with Abaqus and sc/TETRA, or large NX Space Thermal view factor calculations for satellite orbital analysis. All possible within the comfort of "our" Rescale environment.





