When it comes to solving simulation jobs, there’s never been a wider choice of hardware. With a glass of Bordeaux in hand, Greg Corke contemplates the role the cloud has to play beyond offering near infinite compute power
We recently went on a family holiday to France, just at the time the UK was suffering from the terrible floods.
We didn’t avoid the rain altogether, but I’ve learned how enjoyable a downpour can be with freshly baked bread, cheese and a glass of Bordeaux to hand.
In planning our trip we thought long and hard about how we’d get there. Plane, train, ferry, tunnel, car. Each had its pros and cons in terms of cost, time, ease of travel and luggage, which was particularly relevant considering our seven-month-old son was in tow.
While we fell short of drawing up a full cost/benefit analysis, a hard decision was made and, despite a few hiccups along the way, we had a thoroughly enjoyable two weeks.
This all got me thinking about the ever-growing array of options available to designers and engineers for running heavy-duty simulation studies.
There used to be two choices: do it on a workstation or, if your workflows demanded it, invest in a cluster. But in a relatively short space of time the options have doubled.
GPU compute now looks to be a serious contender and then there’s the ‘cloud’ to throw an on-demand option into the mix.
But, much like a Ryanair booking form, does it have to be so confusing?
Local knowledge
I’m a big advocate of doing things locally.
Data doesn’t have to be moved around slow networks, users have complete control over their resources and with the right choice of hardware results can come back quickly.
Furthermore, GPU compute is now starting to deliver and we’re seeing significant benefits in terms of performance and workflow. But despite the immense computational power on offer, workstations will always be limited in what they can do.
After all, there are only so many processors you can cram into a desktop chassis. Those serious about simulation can take things up a notch with a dedicated cluster.
Problems are solved more quickly and designers can try out more iterations. But clusters still have their limits. This is where it makes sense to look at the cloud.
Flexible thinking
The cloud offers virtually unlimited amounts of processing power, so say the marketers.
Autodesk, a big advocate of the cloud, calls it infinite computing, but in doing so I think it is only telling half the story.
For me, infinite compute brings up visions of simulation studies on a massive scale.
Forget the aircraft wing. Let’s simulate the whole plane: fuselage, landing gear and engine. Heck, let’s even throw in the drinks trolley. Then simulate hundreds of different iterations to arrive at an optimal design.
But the cloud is also about offering flexibility and I see one of the biggest benefits being its ability to deal with varying workloads within an engineering firm.
With local hardware, it’s hard to find the right balance. Invest in too little hardware and jobs are left sitting in a queue. Invest in too much and you’ve blown the entire IT budget on processors that sit idle.
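To make that trade-off concrete, here’s a minimal back-of-envelope sketch in Python. Every figure in it is hypothetical — the node count, the £30,000 annual cluster cost, the £1.50 per node-hour cloud rate and the spiky weekly demand profile are all made up purely to illustrate the balancing act, not to reflect real prices or workloads.

# Back-of-envelope comparison: fixed in-house cluster vs on-demand cloud compute.
# All figures are hypothetical illustrations, not real prices or workloads.

# A made-up year of weekly demand, in node-hours (spiky, as engineering work tends to be)
weekly_demand = [40, 10, 0, 300, 25, 0, 0, 1500, 60, 15, 0, 0] * 4 + [20, 0, 100, 5]

CLUSTER_NODES = 4             # fixed in-house capacity
HOURS_PER_WEEK = 7 * 24
CLUSTER_ANNUAL_COST = 30_000  # hypothetical: hardware amortisation + power + admin
CLOUD_RATE = 1.50             # hypothetical: cost per node-hour, on demand

cluster_capacity = CLUSTER_NODES * HOURS_PER_WEEK  # node-hours available per week

# Fixed cluster: you pay the same whether it is busy or idle, and any demand
# beyond a week's capacity has to sit in the queue.
idle = sum(max(cluster_capacity - d, 0) for d in weekly_demand)
queued = sum(max(d - cluster_capacity, 0) for d in weekly_demand)

# On-demand cloud: you pay only for the node-hours you actually consume.
cloud_cost = sum(weekly_demand) * CLOUD_RATE

print(f"Cluster: £{CLUSTER_ANNUAL_COST:,} per year, "
      f"{idle:,} idle node-hours, {queued:,} node-hours queued beyond capacity")
print(f"Cloud:   £{cloud_cost:,.2f} for the same workload, nothing idle, nothing queued")

Change the demand profile or the prices and the answer flips, which is rather the point: how lumpy your workload is decides how much of it belongs in the cloud.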
A few years ago I visited DreamWorks Animation in California. It was a truly fascinating trip. I experienced my first ever 3D film, Monsters vs Aliens, but the thing that stuck with me was a PowerPoint slide of DreamWorks’ render schedule.
The complex Gantt chart showed how its vast compute resources were going to be deployed in the next few years. No idle time, render farms used 24/7, 52 weeks per year. Pre-production, production and stereo rendering on a number of long-term film projects planned down to a tee.
The problem is that engineering is never so predictable. Ideas don’t work and other options need to be investigated. New projects suddenly come in with incredibly tight deadlines. This all means compute demand can vary hugely from week to week, day to day, even hour to hour.
Here the cloud offers the benefit of being able to serve up compute power on demand. But the cloud doesn’t have to be solely about high-end workflows, where hundreds of huge compute jobs are solved in parallel.
It’s equally relevant to a designer who only uses simulation a few times a year or a mobile workstation user who wants access to cluster-level performance from a client’s office.
Before people confuse me with some sort of cloud evangelist, it’s important to get a few things clear. I’m well aware of the barriers to adoption — bandwidth, security and business models are the ones that immediately spring to mind.
The point I’m trying to make is that, with the cloud, it doesn’t have to be all or nothing. I think there’s still big value in having complete control over processing power, whether that’s a workstation, GPU workstation or a cluster.
But, as the technology matures and more simulation software becomes available for it, the cloud can complement these local resources as and when required.
Use the technology most appropriate to your project at any given time. On our trip to France we chose to fly. We hired a car. And then when the sun came out we rode our bikes. We could have gone by
ferry, Eurostar — even hitchhiked if we’d been feeling reckless.
The one thing I did learn: a fine glass of Bordeaux always tastes the same however you get there.