Harness the power

With new compute APIs out now and CPUs and GPUs starting to merge next year, smart software developers are already looking for intelligent ways to make the most of all this available power, writes Rob Jamieson
When a software company sets about designing a new version of its software, it often looks at what hardware it expects to be in the marketplace by the time all the development work has finished.

This generally means looking at current trends and then extrapolating speed and performance two years hence.

Historically, this approach has proven pretty successful, but problems arise if things don’t pan out the way the software company predicted. Take Moore’s Law, for example, which describes a doubling of transistor counts. In recent years clock speed has not increased at the same rate; instead, CPU manufacturers have boosted performance by adding more cores. This certainly caught a few companies out.

The recent slowdown in world markets also reduced spending on new hardware, and as a result a few software companies now have very high hardware requirements for their new software. As you can see, predicting future standards is not easy, but with two- to three-year software development cycles you have no real choice.

In 2008 proprietary APIs were introduced that talk directly to dedicated hardware such as CPUs and graphics processors (GPUs). These dictate that a certain piece of hardware performs a certain defined task, with no regard for whether the hardware can actually cope. If there is not enough RAM to load the file, the application can crash.

Now, in 2010, industry-standard programming schemes such as OpenCL and DirectX DirectCompute are coming to the fore. These APIs are designed to look at a task and assign it to the most suitable piece of hardware in a system, with a view to completing the computational job as quickly as possible. This could be multiple cores of a CPU, the stream compute engine of a graphics card, or both. Typical examples of high-compute problems include meshing an assembly for stress analysis or generating NC code for CAM.
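To make that concrete, here is a minimal sketch, in C against the standard OpenCL host API, of the kind of run-time decision such an application can now make: ask the driver which devices are present, prefer a GPU if one exists, and fall back to the CPU otherwise. The single-platform assumption and the bare-bones error handling are illustrative only, not a recipe from any particular CAD vendor.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    char name[256];
    cl_ulong mem = 0;

    /* Take the first available OpenCL platform (assumption: one is installed). */
    err = clGetPlatformIDs(1, &platform, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    /* Prefer a GPU; if the platform exposes none, fall back to the CPU cores. */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS)
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "No suitable OpenCL device found\n");
        return 1;
    }

    /* Report the chosen device and its memory, so the application can check
     * capability before committing a large job to it. */
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(mem), &mem, NULL);
    printf("Dispatching compute work to %s (%llu MB of device memory)\n",
           name, (unsigned long long)(mem / (1024 * 1024)));

    return 0;
}

The same query mechanism exposes properties such as memory size and compute unit counts, which is what lets a scheduler pick a target on merit rather than, as with the earlier proprietary APIs, by decree.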

At the moment, programming software to switch automatically between the available compute resources is not a trivial task, and software developers need a certain level of in-house resources to make it work. While the bigger CAD/CAM/CAE software companies are able to throw programming resources at this challenge, some of the smaller software firms are still choosing to develop for either the GPU or the CPU alone.

To help such companies make the most of these new APIs, third-party toolkits will be available in the future. These Integrated Development Environments (IDEs) are currently under development and will use industry-standard libraries and development tools to create code for OpenCL and DirectX DirectCompute, making it easier for compute-intensive tasks to be balanced automatically between GPU and CPU.
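Until those toolkits arrive, the balancing act looks roughly like the following hedged sketch: one OpenCL context spanning a GPU device and a CPU device, one command queue per device, and the work split statically between them. The 2:1 split ratio, the trivial scale kernel and the omission of error checking are all simplifying assumptions; a production balancer would measure each device and adapt.

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* A tiny data-parallel job: multiply every element of an array by a factor. */
static const char *kernel_src =
    "__kernel void scale(__global float *data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] *= factor;\n"
    "}\n";

int main(void)
{
    enum { N = 1 << 20 };
    float *data = malloc(N * sizeof(float));
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0f;

    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    /* One GPU and one CPU device from the same platform
     * (assumption: the platform exposes both). */
    cl_device_id dev[2];
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev[0], NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev[1], NULL);

    cl_context ctx = clCreateContext(NULL, 2, dev, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 2, dev, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    /* Static 2:1 split: the first two thirds of the array go to the GPU,
     * the remainder to the CPU. */
    size_t split = (N / 3) * 2;
    size_t count[2] = { split, N - split };
    float *part[2]  = { data, data + split };

    cl_command_queue q[2];
    cl_mem buf[2];
    float factor = 2.0f;

    for (int d = 0; d < 2; d++) {
        q[d] = clCreateCommandQueue(ctx, dev[d], 0, NULL);
        buf[d] = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                count[d] * sizeof(float), part[d], NULL);
        /* Kernel arguments are captured at enqueue time, so the same kernel
         * object can be re-pointed at each device's buffer in turn. */
        clSetKernelArg(kern, 0, sizeof(cl_mem), &buf[d]);
        clSetKernelArg(kern, 1, sizeof(float), &factor);
        clEnqueueNDRangeKernel(q[d], kern, 1, NULL, &count[d], NULL, 0, NULL, NULL);
        clFlush(q[d]);   /* kick the work off so both devices run concurrently */
    }

    /* Blocking reads wait for each device to finish, then the two halves of
     * the result land back in the original array. */
    for (int d = 0; d < 2; d++)
        clEnqueueReadBuffer(q[d], buf[d], CL_TRUE, 0,
                            count[d] * sizeof(float), part[d], 0, NULL, NULL);

    printf("data[0] = %.1f, data[N-1] = %.1f\n", data[0], data[N - 1]);
    free(data);
    return 0;
}

Even in this toy form it is clear why the switch is not trivial: the data has to be partitioned, the results merged and the split ratio tuned per machine, which is exactly the bookkeeping the forthcoming toolkits aim to hide.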

It’s not only the software tools that are changing; hardware is also going through an evolution, and CPUs and high-performance GPUs will soon appear on a single die. In the first instance this will affect the size, performance and power requirements of mobile devices such as netbooks, but by bolting together multiples of these integrated chips it will also be possible to build high-performance systems. All of the major chip development companies are looking to produce them.

For many years, programming new software for the future simply required looking at where the clock speed of the CPU might be in a couple of years’ time. Now, with new APIs and development methods that can harness the power of CPUs and GPUs, and a future where both compute technologies will co-exist on a single die, the game is certainly changing.

Nobody can predict the adoption levels of this new technology, but companies that adapt quickly and get on the right path could grow at a dramatic rate, even in sectors of design software that are historically slow to change. There is always the driver that if a job can be done faster and cheaper somewhere else, whether on a different platform or in different software, people will change.

Rob Jamieson is a marketing manager at AMD. He would pay big money for an OpenCL application that automatically fills out his expenses forms for him. This article is his own opinion and may not represent AMD’s positions, strategies or opinions.

robert.jamieson@amd
