Published 17 May 2010
Posted by Al Dean
The legendary Greg Corke and I had a conference call and web demo with the Autodesk team yesterday to look at the first iteration of its cloud-based computation tools for simulation. Currently in closed, invite-only Beta, the two projects, Centaur and Cumulus, are the first pass at using remote computation technology to speed up simulation. Whether you want to call it a Cloud app, call it SaaS (Software as a Service) or call it SRDFBCSE (Stuff Running Dead Fast on a Big Computer Somewhere Else), this is a serious trend and we’re finally starting to see demonstrations of how things are going to pan out over the coming years. But let’s break it down a little and get some clarity.
Project Cumulus is a remote solver for Moldflow - that’s pretty simple. You use the standard client for Moldflow Insight (rather than the Inventor-integrated version) to carry out your traditional pre-processing and study set-up. Geometry is worked on, a mesh is created, inputs and parameters are set. You then send this data to ‘the server’ and it calculates it. Once complete, it sends back the results dataset and you use the same client software to inspect, validate and interact with the results.
Project Centaur follows a vaguely similar pattern, this time for Inventor. You download an add-in for Inventor Simulation that deals with optimization. You set up your geometry, add the loads and constraints, and choose the variables for optimization from the Inventor model (at present, you need to define each variable’s inputs). Essentially, you use a design of experiments methodology to define the values for each variable and set a goal. At present it only works with a factor-of-safety type analysis, so the inputs are pretty standard. Once done, you hit the calculate button, this sends the data to the calculation server, the optimization iterations are performed and the results are streamed back to your Inventor client on your workstation. You look at the results in a pretty clear list and choose the configuration you want. Inventor adapts the model in work to that configuration, adopting the chosen inputs for the geometry, and it’s saved. Job done.
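To make the design of experiments idea concrete, here’s a minimal sketch in Python of a full-factorial sweep: every combination of the chosen variable values is a candidate design, each one is ‘solved’, and the configuration that best meets the goal wins. The variable names, value ranges and scoring function are all illustrative assumptions of mine, not Autodesk’s API - in the real workflow the factor of safety would come back from the remote solver:

```python
from itertools import product

# Illustrative design variables and candidate values for each
# (in the real add-in you would pick these from the Inventor model).
variables = {
    "rib_thickness_mm": [2.0, 2.5, 3.0],
    "fillet_radius_mm": [1.0, 2.0],
    "wall_thickness_mm": [3.0, 4.0],
}

def factor_of_safety(design):
    # Stand-in for the remote FEA solve: a made-up linear score so the
    # sweep is runnable here; the real value would come back from the
    # calculation server.
    return (0.4 * design["rib_thickness_mm"]
            + 0.3 * design["fillet_radius_mm"]
            + 0.2 * design["wall_thickness_mm"])

# Full-factorial sweep: one simulation job per combination of values.
names = list(variables)
candidates = [dict(zip(names, combo)) for combo in product(*variables.values())]
results = [(factor_of_safety(d), d) for d in candidates]

# Goal: the best factor of safety among the candidates tried.
best_fos, best_design = max(results, key=lambda r: r[0])
```

Even this toy version shows why the approach gets heavy quickly: three variables with a handful of values each already means 12 independent solves, which is exactly the kind of job list you’d rather hand to a compute farm than a workstation.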
Before we get onto what this means and the potential here, I did want to get the thoughts of Greg Corke, our resident hardware guru, on what he saw. So, here you go - I’ll be back once you’ve read this:
Greg Corke on What the Cloud really offers
While ‘The Cloud’ is a supercomputer on a massive scale, it’s important to point out that neither of Autodesk’s Cloud-based technology previews will actually reduce the processing time of a single simulation job.
Neither of the solver codes for Project Cumulus and Project Centaur is designed for distributed cluster architectures, so single jobs won’t be any faster than on a cutting-edge multi-core workstation. It’s also important to note that the solver code in Project Centaur (Inventor) is not multi-threaded and only runs on one CPU core. The solver code used in Project Cumulus (Moldflow), while multi-threaded, offers little additional benefit when more than four cores are used on a single job. N.B. read our article on Moldflow Insight and computation.
What both Cloud-based technologies do offer is the ability to increase the number of simulations that can be carried out in parallel. This is designed to make it possible to find a much better solution to a specific problem using design of experiments. Even with the latest multi-core workstations (2 x 6 CPU cores), running Project Centaur (Inventor) on a desktop machine would be limited to 12 jobs at the same time (and doing so would probably also slow the workstation down for other tasks such as modelling). Autodesk says that select customers who have already tried out the technology are typically running 20-30 jobs in parallel, and some have tried out over 100 - we guess it depends on just how optimal you want your design to be. For Project Cumulus (Moldflow), as the solver code is multi-threaded, the benefits of using the Cloud could be even bigger.
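The scaling Greg describes is what the computing world calls ‘embarrassingly parallel’: each DOE iteration is an independent job, so throughput grows with the number of workers while no single job gets any faster. A rough sketch of the difference, using Python’s concurrent.futures with a dummy solve (the job function and timings are illustrative stand-ins, not Autodesk’s code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def solve_job(job_id):
    # Stand-in for one simulation run; the real solves take minutes or
    # hours, not a tenth of a second.
    time.sleep(0.1)
    return job_id

jobs = list(range(24))

# Workstation-like: a limited pool (12 'cores' -> 12 concurrent jobs,
# so 24 jobs run in two waves).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=12) as pool:
    list(pool.map(solve_job, jobs))
workstation_time = time.perf_counter() - start

# Cloud-like: enough workers to run every job at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=24) as pool:
    list(pool.map(solve_job, jobs))
cloud_time = time.perf_counter() - start

# Each individual job still takes the same time; only the total
# wall-clock time for the whole batch shrinks.
```

Swap the 0.1-second sleep for a seven-hour Moldflow solve and the arithmetic is the same: the batch finishes in the time of the slowest wave, not the sum of all the jobs.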
Designed to run on Clouds such as Windows Azure or Amazon Web Services, both of Autodesk’s technology previews are purely CPU-based - not that there are many (or any) combined GPU and CPU-based Clouds out there yet. However, as Moldflow Insight offers GPU compute capabilities using Nvidia’s CUDA, Autodesk said it is looking into Clouds that use both CPU and GPU, and is working with Nvidia and other GPU manufacturers to do this. It also said it is looking at a number of GPGPU technologies, not just CUDA, so one could be fairly certain that OpenCL is among them.
A few final notes on challenges
This is Labs territory, so this isn’t going to be how the final shipping or delivered version will look - that would be an unrealistic expectation. It’s a hint at where Autodesk are going. As Greg has pointed out above, the key thing to understand is that, at present, this will not speed up single-pass simulation tasks in either Moldflow or Inventor Simulation - what it’s about is scaling up the number of simulations that can be performed in parallel. A simulation run that took seven hours (as the Moldflow example did) is still going to take seven hours. The point is that you can do multiples of them at the same time.
There are many tasks in the simulation world that require multiple iterations - optimization is, of course, one of the prime examples, but anything that uses a convergence methodology is suitable, as are many others. This isn’t a particularly new concept in the computing or, indeed, simulation world - the likes of Fluent, Ansys and MSC have been doing this type of thing for years.
In terms of challenges that lie ahead, there are many. Pricing is one. There are established models for this type of service: the higher-end simulation world often works on a token basis or a CPU-hour basis - either you pay per CPU hour, or you buy a number of hours in advance and use those. Or you pay an access fee and get all-you-can-eat access to do what you want. Would Autodesk use one of these models, or bundle a certain number of CPU hours into the higher-end Inventor Suites? Who knows, and it’s almost pointless to speculate - let’s see where the technology can get to first.
Personally speaking, for existing users, the Moldflow-centric Project Cumulus looks to provide the most immediately usable functionality. Moldflow simulations are lengthy as they’re typically used on very complex parts, and the ability to carry out more studies to find the optimal configuration for a mould tool can be key.
Consider a mould tool running 24 hours a day, producing thousands of parts in a single day. Shaving a few seconds off the cycle time by running more optimization processes on gate positions, size, cooling channels, injection and machine parameters and such is going to be a huge benefit in terms of cost savings.
Let’s be clear - this isn’t about the cost of adopting a highly parallel cloud-based computation methodology.
This is about saving three seconds off an injection mould tool’s cycle time which, when you have 12 sets of the same tool producing 1,000 parts a day for three years, means you can build the same parts and get that same number of units to market 150 days sooner - a five-month jump on your competition.
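The arithmetic here scales with the fraction of the cycle you remove. As a back-of-the-envelope sketch in Python - where the 20-second base cycle time is purely my illustrative assumption, and the figures don’t attempt to match the exact numbers above - saving three seconds per cycle compounds into months of production time over a three-year run:

```python
SECONDS_PER_DAY = 24 * 60 * 60

cycle_s = 20.0      # assumed base cycle time (illustrative only)
saving_s = 3.0      # seconds shaved off each cycle by optimization
tools = 12          # identical tools running in parallel
days = 3 * 365      # three years of round-the-clock production

# Throughput before and after the optimization.
parts_per_day_old = tools * SECONDS_PER_DAY / cycle_s
parts_per_day_new = tools * SECONDS_PER_DAY / (cycle_s - saving_s)

# How many days does the new cycle time need to hit the original
# three-year build target?
target_parts = parts_per_day_old * days
days_needed_new = target_parts / parts_per_day_new
days_saved = days - days_needed_new   # roughly 164 days under these assumptions
```

The number of days saved depends entirely on the base cycle time you assume, but the shape of the result is the point: a few seconds per part, multiplied across tools and years, is measured in months.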
The problem is that the optimization process is very computationally heavy - if you want a finely detailed understanding of how your mould tool will perform, you need a much finer mesh, but that means a heavier simulation dataset. Because of that heaviness, users will often find a workable compromise between accuracy and speed of result. A cloud-based approach, where computation is much cheaper and much more powerful, would negate the need for that compromise, and true optimization could take place rather than choosing the best solution from a limited set.
In terms of Project Centaur, which offloads optimization in Inventor Simulation to a remote compute farm, it’s clear that much work needs to be done to find out what users require - not necessarily in terms of functionality in the simulation tools, but the other things that surround them. At present, the results aren’t stored or downloaded to Inventor - you run the optimization, choose the optimum configuration and run with that. As Ravi Akella, Product Manager for Digital Simulation, said in our conversation, “The results are blown away, why would you need them?” While I’m sure this was more a case of someone close to the technology getting carried away, to my mind - even though this is an early-stage technology preview - it’s unworkable and needs to change, and change quickly. Simulation is a tool that can be incredibly useful in the product development process, but that process needs to be documented and traceable.
This is a Tech Preview, allowing both users and Autodesk to try out these technologies: to see what they can do, see how the server handles the load and see what problems occur when you disconnect the client from the computation engine. That said, users still need to be able to access the data that drives decisions - what were the input parameters, and what did each iteration give you? How else are you going to know if this technology is for you? Even if you’re only ‘playing’ with it, you need to be able to see why your results are what they are. “Blowing away” results data doesn’t support that.