Many OpenSees use cases, from embarrassingly parallel workloads to large, high-fidelity models, require high performance computing (HPC). But even today, HPC remains out of reach for many OpenSees users for a variety of reasons.
If you or your organization is able to purchase HPC hardware, the overhead of maintaining and operating it remains high. And every two years, you're buying new hardware with more RAM and faster cores.
If you’re at a university and your research group has bought into the campus supercomputing center, odds are your IDAs have been bumped by “more important” jobs from researchers who were able to buy in at platinum status. It’s back in the queue for you.
Or the supercomputing center is down for another round of hardware upgrades–just “swapping out a few nodes” from 5am to 6am on Sunday ended up taking a few days.
Running OpenSees on HPC presents a slew of problems, many of which are solved with cloud-based computing.
OpenSees Virtual Machine
On Amazon Web Services (AWS), you can launch EC2 instances with varying amounts of RAM and numbers of virtual cores (vCPUs). Pick the right combination of resources for your OpenSees analysis needs.
But every new EC2 instance is a blank slate. Installing OpenSees, MPI, etc., and all the dependencies therein is a non-trivial task.
This is where the OpenSees Virtual Machine, an Amazon Machine Image (AMI), comes in. You can launch an EC2 instance with the OpenSees AMI and have instant access to OpenSeesPy, Python, OpenSeesMP, Tcl, and MPI, as well as a few example scripts to get you started. Nothing to install. Nothing to compile.
You can run OpenSees in the cloud for a few cents to a few dollars per hour. The $1/hour price point gets you 64 GB RAM and 16 vCPU. Above this point, you can go as high as 384 GB RAM and 96 vCPU for about $5/hour. All things considered, much more affordable, reliable, and efficient than churning through HPC hardware or waiting in queues on the ground.
For more information on using an OpenSees AMI, check out the following resources.
1. OpenSees AMI in the AWS Marketplace
2. Available OpenSees AMI Instance Types
3. Steps to Launch an OpenSees AMI Instance on EC2
4. Connect to Your OpenSees AMI Instance
After you have successfully launched an OpenSees AMI on EC2, you can upload your IDA scripts to your virtual machine and run OpenSees. But first, it's a good idea to assess the scalability of your model and analysis to determine which instance type is right for you.
One of the example scripts pre-loaded on every OpenSees AMI instance is triParallel.py, a static analysis of a fine mesh of triangle elements. The model has over 100,000 nodal equilibrium equations.
To test things out, I launched two OpenSees AMI instances:
m5.8xlarge – 128 GB RAM, 32 vCPU
c5.9xlarge – 72 GB RAM, 36 vCPU
Yeah, the RAM on these instances is way too much for this model. But I wanted the high vCPU counts.
The analysis can be run from the command line of each OpenSees AMI instance using mpiexec, e.g., with 10 vCPUs:
[ec2-user examples]$ mpiexec -np 10 python triParallel.py
If you haven’t made the switch to Python yet, you can run mpiexec -np 10 openseesmp trussParallelMP.tcl on your OpenSees AMI instance.
Either way, running the analysis using 1 to 32 vCPUs (-np 1 to -np 32) gives the following run times.
The speedup peaks at about 10 vCPUs, i.e., with more vCPUs the communication between processors starts to dominate the run time. Also note that the c5 instance does slightly better than the m5 instance–this is expected as the c5 instance series is “compute optimized”.
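This kind of plateau is what Amdahl's law predicts: if a fraction s of the work is serial, the speedup on p processors is capped at 1/(s + (1-s)/p). A quick sketch, using an illustrative serial fraction that was not measured from these runs:

```python
def amdahl_speedup(p, serial_fraction):
    """Ideal speedup on p processors when a fixed fraction of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# 5% serial/communication overhead -- an illustrative guess, not measured.
for p in (1, 8, 16, 32):
    print(p, round(amdahl_speedup(p, 0.05), 2))
# -> 1.0, 5.93, 9.14, 12.55
```

Amdahl's model only saturates; when per-process communication cost also grows with p, run time can actually increase past the peak, which is consistent with the observed optimum around 10 vCPUs.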
So, for this model and analysis, either a c5.2xlarge with 8 vCPUs or a c5.4xlarge with 16 vCPUs is the sweet spot.
You can do similar assessments for your models.
Finishing this scalability assessment in under an hour only cost a couple dollars with the c5.9xlarge instances shown in the plots. No hardware to mess with, no waiting in line.
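Because EC2 bills by time, the sweet spot is really a cost-versus-time trade-off: a bigger instance that finishes faster may or may not be cheaper per analysis. A small sketch of that arithmetic, with placeholder prices and run times (check current AWS pricing for real numbers):

```python
def cost_per_run(price_per_hour, run_time_seconds):
    """On-demand cost of one analysis at a given hourly instance price."""
    return price_per_hour * run_time_seconds / 3600.0

# Placeholder numbers for illustration only -- not measured times or quoted prices.
candidates = {
    "c5.2xlarge": (0.34, 300.0),   # (assumed $/hour, assumed run time in s)
    "c5.9xlarge": (1.53, 220.0),
}
for name, (price, seconds) in candidates.items():
    print(name, round(cost_per_run(price, seconds), 4))
```

With your own measured run times from a scalability sweep, this tells you directly which instance type minimizes cost per IDA.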
The author of this post is a co-owner of SecondSees, Inc., the company that sells the OpenSees AMI.