A developer of hardware and software tests AWS cloud bursting for EDA workloads, improving runtime by up to 40% while boosting engineering productivity and reducing time to market.

Services Provided: Amazon Web Services

Business Need

This developer sought to supplement its on-premises capacity for engineering workloads with cloud bursting.

Solution

Applying NTT DATA’s Compute at Scale methodology, client engineers used automation to establish an EDA infrastructure in AWS, replicated the required data, and qualified a workload to burst into AWS, testing the outcome against the company’s on-premises benchmarks.

Outcomes

  • Improves runtime in the time-sensitive design phase by up to 40%
  • Increases ROI for expensive EDA licenses
  • Grows engineering productivity
  • Speeds time to market of quality products

Using NTT DATA Compute at Scale for AWS, this client reduced its design iteration times, bringing superior products to market faster and avoiding costly problems.

Electronic design automation (EDA) helps hardware developers design everything from circuit boards to integrated circuits. By using EDA to design chips, developers can effectively assess the billions of factors that go into the creation of their products. This company develops hardware solutions embedded in everything from laptops to smartphones and automobiles, and it uses EDA to further advance its market innovations.

To further its work and continue pushing the boundaries achievable with its innovations, the company engaged the high-performance computing (HPC) team at NTT DATA. To augment its on-premises capacity, the company sought to qualify an EDA workload to burst from its on-site data center into Amazon Web Services (AWS). In addition, the client wanted to measure the performance of the workload in the cloud, ensuring that it performed at least as well in AWS as on-premises, if not better.

Compute at Scale methodology

The NTT DATA HPC team worked closely with the client using the Compute at Scale methodology, which recommends a workload-based approach to HPC in the cloud: assessing for fit, designing and building out the needed infrastructure, and running a proof of concept, followed by a transition to production. Accordingly, the teams began with discovery, assessing which of the client’s workloads would be the best fit for AWS cloud bursting. Business value and ease of migration were the leading criteria used to measure workload suitability.

Once the ideal workload was identified (a regression suite), NTT DATA consultants worked collaboratively with the client to set up the AWS infrastructure. Specifically, the infrastructure was provisioned and managed with Infrastructure as Code (IaC), using naming services, DNS, and IBM Spectrum LSF for job scheduling, as well as infrastructure components unique to the client’s workflow, such as Jenkins and Artifactory. IaC makes it easy to set up new AWS regions in a consistent manner. The solution uses various AWS components, such as IAM roles and security groups to enhance security, and Amazon Elastic File System (EFS) to provide Network File System (NFS) file services for the LSF configuration.
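The scheduler side of such a setup can be illustrated with a small sketch. The queue name, host group, and limits below are hypothetical, not the client's actual configuration; the sketch simply generates the shape of an IBM Spectrum LSF `lsb.queues` stanza that routes burst jobs to cloud-backed hosts:

```python
# Sketch: generate a hypothetical LSF lsb.queues stanza for a cloud-burst queue.
# All names and limits are illustrative, not the client's actual settings.

def burst_queue_stanza(queue: str, host_group: str, max_jobs: int) -> str:
    """Return an lsb.queues stanza routing jobs to cloud (AWS) hosts."""
    return "\n".join([
        "Begin Queue",
        f"QUEUE_NAME   = {queue}",
        f"HOSTS        = {host_group}   # host group backed by EC2 instances",
        f"MAXJOBS      = {max_jobs}     # cap on concurrent burst jobs",
        "PRIORITY     = 30",
        "DESCRIPTION  = Cloud bursting queue for EDA regression jobs",
        "End Queue",
    ])

print(burst_queue_stanza("aws_burst", "aws_hosts", 500))
```

Generating such stanzas from code (or templating them in an IaC tool) is what keeps new regions consistent: the same definition is rendered everywhere rather than hand-edited per site.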

Cleaning the pipes for a smooth workflow

With infrastructure established, the team next ran test jobs to “pipe-clean” the workflow (iteratively running the workflow and removing errors until it ran successfully) and to tune the environment. The process gave client engineers the opportunity to test the workflow and find any changes they wanted made. The NTT DATA team quickly responded to these requests, working in concert with the client’s engineering IT staff and allowing the engineers to cleanly run their jobs in the new environment.
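The pipe-cleaning loop itself is conceptually simple: run, inspect failures, fix, repeat until clean. A generic sketch, where the job runner and fix step are placeholders standing in for real job submission and environment tuning (none of this is the client's tooling):

```python
# Sketch: iterate a workflow until it runs cleanly, bounded by an attempt limit.
# run_workflow/apply_fix are placeholders for real job submission and tuning.

def pipe_clean(run_workflow, apply_fix, max_attempts: int = 10) -> int:
    """Run the workflow repeatedly, fixing reported errors, until it succeeds.

    Returns the number of attempts taken; raises if the limit is hit.
    """
    for attempt in range(1, max_attempts + 1):
        errors = run_workflow()          # returns a list of error messages
        if not errors:
            return attempt               # clean run achieved
        for err in errors:
            apply_fix(err)               # tune environment / fix configuration
    raise RuntimeError("workflow still failing after max_attempts")

# Toy usage: a workflow that fails twice before succeeding.
state = {"failures_left": 2}

def run_workflow():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        return ["NFS mount missing"]     # illustrative error only
    return []

print(pipe_clean(run_workflow, lambda err: None))  # → 3
```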

With the pipe-cleaning process complete, the next phase of the project tested workload performance on different AWS instance types to establish benchmarks. Based on those runs, the NTT DATA team constructed a price/performance chart that showed the client the optimal instance size to use for its workloads.
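The core of such a price/performance comparison can be sketched as a cost-per-job calculation. The instance types, hourly prices, and runtimes below are made up for illustration; they are not the benchmark figures from this engagement:

```python
# Sketch: pick the instance type with the lowest compute cost per completed job.
# Prices and runtimes are illustrative, not the engagement's benchmark data.

benchmarks = {
    # instance type: (on-demand $/hour, measured job runtime in hours)
    "r5.4xlarge":  (1.008, 2.0),
    "r5.8xlarge":  (2.016, 1.1),
    "r5.12xlarge": (3.024, 0.8),
}

def cost_per_job(price_per_hour: float, runtime_hours: float) -> float:
    """Compute cost of one job run: hourly price times hours used."""
    return price_per_hour * runtime_hours

best = min(benchmarks, key=lambda inst: cost_per_job(*benchmarks[inst]))
for inst, (price, hours) in benchmarks.items():
    print(f"{inst:12s} ${cost_per_job(price, hours):.2f} per job")
print("cheapest per job:", best)
```

Note that in EDA the cheapest-compute choice is not automatically optimal: because licenses are billed per second, a pricier instance that finishes faster can still win once license cost is added to the chart, which is why runs across several instance sizes are needed.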

The teams finished the pipe-cleaning exercise more quickly than expected, which allowed them to test three additional workloads on AWS. The workloads the teams ultimately qualified were mini regression, integration regression, and full regression, as well as signoff for static timing analysis. In EDA environments, testing design iterations with advanced algorithms helps ensure a high-quality product is delivered to market. Testing iterations more quickly allows the team to reduce its design iteration times, thereby bringing superior products to market faster and avoiding costly re-spins to fix problems in silicon.

Cloud brings vast improvements

EDA applications cost more than the hardware they run on and are licensed per second. As a result, any improvement in runtime leads to improved return on investment from these expensive license assets. In addition, runtime improvements boost productivity and speed the time to market of quality products. For this reason, the team was elated that the candidate workloads saw runtime improvements of up to 40% compared to the current on-premises compute servers.
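Because licenses are billed per second, the license-cost impact of a runtime improvement is easy to estimate: seconds saved times the per-second rate. In the sketch below, the baseline runtime and license rate are hypothetical; only the 40% improvement figure comes from the results above:

```python
# Sketch: estimate license cost avoided by a runtime improvement.
# Baseline runtime and per-second rate are assumed values for illustration;
# the 40% improvement is the figure reported in this case study.

def license_savings(baseline_seconds: float, improvement: float,
                    rate_per_second: float) -> float:
    """Seconds saved (baseline * improvement) times per-second license rate."""
    return baseline_seconds * improvement * rate_per_second

baseline = 10 * 3600  # a hypothetical 10-hour regression run
saved = license_savings(baseline, 0.40, 0.05)  # assumed $0.05 per license-second
print(f"license cost avoided per run: ${saved:,.2f}")  # → $720.00
```

Multiplied across the many regression runs in a design cycle, even modest per-run savings compound, which is why runtime is the lever the team focused on.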

Based on these results, the client is preparing for a production rollout. To support it, the NTT DATA team worked closely with the client to ensure full knowledge transfer of cloud operations, including a detailed runbook and recordings of training sessions for future reference. In this way, the company can confidently extend the solution to more workloads for additional ROI and productivity gains.

Enabling some of the fastest-growing trends in technology, like IoT and AI at the edge, this EDA firm cannot afford to slow down. Nor will it, as the company marries its ongoing innovation with the Compute at Scale methodology and AWS technology to iterate faster and continue bringing life-changing technology to millions of consumers.
