Professional graphics cards are capable of much more than simply spinning a model on screen. Greg Corke looks to a future where CAE calculations are carried out on the workstation chip most appropriate to the task at hand
Over the past couple of years there’s been a huge amount of hype surrounding the use of GPUs (Graphics Processing Units) to perform computationally intensive tasks usually carried out by CPUs (Central Processing Units). The thing is, there’s been very little activity from the CAD/CAM/CAE software vendors, with most of the action coming from niche areas of finance, science, and academia – so little, in fact, that I had started to wonder whether we’d ever see this technology appear in product development workflows.
A couple of weeks ago though, Autodesk announced that it is using GPU acceleration in its Moldflow Insight 2010 application to speed up the simulation of plastic flow in injection moulded components. The technology is based on Nvidia’s CUDA parallel processing architecture, which is supported by Nvidia’s Quadro graphics cards, and the development is said to have resulted in more than a 2x performance increase.
While this is certainly a significant increase, it’s not of the order of tens or hundreds – factors that have often been bandied about when graphics card manufacturers extol the benefits of using GPUs for tasks traditionally carried out by CPUs. The significant news here, though, is that Autodesk’s Moldflow development is the first implementation of GPGPU (General Purpose GPU) technology from a mainstream CAD/CAM/CAE vendor. In conversations over the past year, both Nvidia and its rival AMD have maintained that GPGPU technology is very much on the roadmap for a number of CAD/CAM/CAE software developers, but as those developments are at an early stage, neither company has been able to name names.
Not all computational tasks can be offloaded from the CPU to the GPU, in the same way that not all computational tasks can be accelerated by multiple CPU cores. Simulation and rendering, however, are commonly named as areas ripe for acceleration. For example, because a lot of CFD (Computational Fluid Dynamics) code is highly parallel and scales well over multiple CPUs, it should theoretically translate well to GPGPUs, which feature a massive array of parallel processors, of the order of hundreds.
Of course, if writing code for GPGPUs were that simple and the benefits that great, we’d already have GPU-accelerated CAE software from all the major vendors. One of the issues is that because this technology is still in its infancy, there are many paths for software developers to take. Nvidia has CUDA, AMD has its Stream Development Kit (SDK), The Khronos Group (the industry consortium behind OpenGL) is putting the final touches to OpenCL, Microsoft is developing DirectX 11 and (yes, there’s more) Intel is working on Ct and a programmable graphics engine called Larrabee, which combines GPU and CPU technology.
CAD users may start to look at higher-end graphics cards with a view to using some of the compute capacity for GPGPU calculations
In terms of momentum, Nvidia certainly has the lead with CUDA. For the last two years it has invested heavily in development, and its PR engine has been at full throttle making noise about the many CUDA implementations. While most of these have been in academia, medicine, science and niche areas of engineering, it has also made some inroads into commercial software, most notably with the GPU acceleration of Adobe’s CS4. Surprisingly, one development it has not promoted – and one which only came to light in researching this article – is that Ansys has already implemented CUDA in some of its FEA code.
The major problem with CUDA is that it only works with Nvidia hardware. And this is where OpenCL has an advantage: it is an open standard that works with Nvidia, AMD and Intel technology and can also execute across heterogeneous platforms consisting of both CPUs and GPUs. The technology is very close to completion, and OpenCL 1.0 will first appear in Mac OS X v10.6 (Snow Leopard) in September 2009.
AMD and Nvidia are among the many members of The Khronos Group, and OpenCL has the full backing of both companies. Nvidia is also keen to point out that because OpenCL code is very similar to CUDA, porting efforts will be minor.
If momentum for GPGPU in mainstream CAD/CAM/CAE applications continues to grow, this could have a major impact on how engineers and designers specify workstations. CAD users, often content with low- to mid-range graphics cards, may start to look at higher-end boards with a view to using some of the compute capacity for GPGPU calculations.
Dedicated GPGPU boards are also available (Nvidia has Tesla; AMD has FireStream), some of which are already sold by the major workstation vendors or built into dedicated personal GPGPU-based ‘supercomputers’. Furthermore, AMD and Intel are also looking at merging GPU and CPU technologies, so future workstations may be able to allocate computational resources more dynamically to the task at hand.
Most CAD/CAM/CAE software developers have been very tight-lipped about their plans for GPGPU development, probably because some are still at an early stage. But the challenge is not only porting code to the new platform; it’s whether GPGPUs will bring worthwhile performance benefits over standard multi-core CPUs. Autodesk certainly believes in its 2x speed-up in Moldflow, but is less optimistic about the benefits in other areas of its software development. GPGPU is certainly not a technology for every software developer, but I expect the next six to twelve months to deliver some very interesting news for GPGPU use in CAD/CAM/CAE.
Greg Corke expects big things from GPGPU