Visualisation future

Where does visualisation technology go next?


We set out to discover where the design visualisation and rendering technology industry goes next. Our experts, meanwhile, let us in on some of the driving factors spurring them on to push users’ productivity and capability further.


Visualisation – in the context of design, engineering and manufacturing – has changed dramatically in recent years. But right from the start, the idea that the 3D engineering data that defines a product’s form and function could also be used to generate realistic images and animations of that product has proved to be 100% valid.

What hampered this idea were the tools. These suffered from scene set-ups that were too complex, computation times that were too lengthy and, if we’re going to be frank, pretty awful results. Anyone who was around at the tail end of the 1990s and early 2000s will be more than familiar with the state of the art of that time: a Phong-shaded, super-shiny part (irrespective of the actual material used), typically presented under a spotlight and inexplicably placed on a marble plinth.

And on top of this, it could take three hours to render out an image large enough to print in a brochure.

Fast-forward to today and we are presented with a bewildering choice of much-improved offerings: from older technologies that have made great strides forward to brand-spanking-new approaches, and from rendering engines built into our CAD system of choice to standalone tools that take you from 3D geometry to a near-photorealistic rendering in just minutes.

In short, the visualisation technology market has never been so wide, nor so diverse.

At the same time, new technologies that originated in the games and entertainment markets have started to make real inroads in the industrial sector. Head-mounted displays (HMDs) are now both affordable and capable, and GPUs can get us to results in a fraction of the time it used to take. Real materials are scanned, processed and made available to the masses.


Let’s face it: you’re spoilt for choice.

But visualisation marches on to conquer new horizons. While the concept of ‘doing a render’ is now commonplace enough to perhaps be considered best practice in all manner of organisations, there are always those looking to push the envelope, to explore what’s next.

Look at the live stage used on Disney’s The Mandalorian to dazzling effect (Google it if you haven’t already; it’s breathtaking). Look at Unreal Engine’s MetaHuman project. Look at what Nvidia is up to with Omniverse. The list of what’s coming down the line is endless.

Arthur C Clarke’s adage still holds true: “Any sufficiently advanced technology is indistinguishable from magic.” Something that one year is considered state of the art, beyond the grasp of the mainstream, offering the magic of Hollywood and so on, is quickly packaged, commoditised and made available for all of us the following year, sometimes even sooner.

At the moment, there is an interesting convergence between the technologies and engines used in the gaming industry and those in the industrial world – it won’t be long before the two collide to create a brand-new set of tools where the differentiation between play and work doesn’t really exist anymore.

So to help us identify what to expect next from visualisation technologies, we’ve asked nine experts from the field to tell us what they believe the future holds. Their thoughts are presented here and should give you some inkling of future developments.

Forewarned, as they say, is forearmed.


Gavin Bridgeman
Chief technology officer, Tech Soft 3D

From media and entertainment, to design and engineering (yes, you read that correctly), there is now a strong desire to have models on your devices look like they do in the real world.

It wasn’t long ago that CAD engineers didn’t value having lifelike renderings of their models, or that CAE engineers were happy with the digital simulations they’d been using for years. With technology like augmented reality (AR) and 3D scanning now blurring the lines between the physical and digital worlds, that’s no longer true.

One of the primary requirements to get lifelike renderings of your digital model is having more sophisticated descriptions of the materials from which it is made. The engineering world, in particular, has been stuck in the RGBA world, where surface colours are expressed as a combination of red, green and blue values, along with a transparency (also called alpha) value. That was until physically based rendering (PBR), implemented via GPU shaders, came along.

PBR via GPU originated in the gaming industry and provides a way to describe materials with equations that more accurately reflect how surfaces react to light.
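To give a flavour of what those equations look like (our illustration, not Bridgeman’s), the specular term of most PBR material models is a microfacet BRDF of the Cook-Torrance form:

```latex
f_{\mathrm{spec}}(\mathbf{l}, \mathbf{v}) =
  \frac{D(\mathbf{h}) \, F(\mathbf{v}, \mathbf{h}) \, G(\mathbf{l}, \mathbf{v})}
       {4 \, (\mathbf{n} \cdot \mathbf{l}) \, (\mathbf{n} \cdot \mathbf{v})}
```

Here, D describes the statistical distribution of microscopic surface facets, F the Fresnel reflectance and G the self-shadowing of those facets, all evaluated per pixel in a GPU shader. It’s this physical grounding that lets a single ‘roughness’ parameter behave like a real surface finish, where RGBA values could only ever tint pixels.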

With PBR came a way to have lifelike renderings of models on practically any device.

In recent years, file formats have emerged – FBX and glTF among them – that allow you to describe your models with PBR materials. The problem is that these formats are comparatively new, and not everybody has moved to PBR definitions for their materials.
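To make the contrast concrete, here’s a minimal sketch (ours, not Tech Soft 3D’s) of the same surface described both ways – first as a bare RGBA colour, then as a glTF 2.0 PBR metallic-roughness material – assembled in Python:

```python
import json

# Legacy description: a single RGBA colour and nothing else.
rgba_only = {"color": [0.72, 0.45, 0.20, 1.0]}

# glTF 2.0 PBR metallic-roughness material: the same base colour,
# plus parameters describing how the surface reacts to light.
gltf_material = {
    "name": "brushed_copper",
    "pbrMetallicRoughness": {
        "baseColorFactor": [0.72, 0.45, 0.20, 1.0],
        "metallicFactor": 1.0,    # a metal, not a plastic
        "roughnessFactor": 0.35,  # slightly blurred reflections
    },
}

print(json.dumps(gltf_material, indent=2))
```

Any glTF-aware viewer, on practically any device, can take that second description and light it plausibly – which is exactly the point.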

This raises the question: when are we going to see widespread support for PBR materials within the industry?

Ultimately, this isn’t just about rendering. It’s about being able to expand the use of 3D. For example, why can’t we take the same design model and use it to power a fantastic retail experience via AR? For design visualisation and rendering to take its next big leap forward – and in doing so, push users’ productivity and capabilities further – the challenge of giving users easy access to lifelike renderings of their 3D data will need to be addressed.


Belinda Ercan
Twinmotion product marketing manager, Epic Games

Today’s world is so fast-paced and rapidly changing that it’s not easy to keep up with the entire spectrum of tech advancements as they happen, let alone comprehend the nature of the ‘now’ of technology today.

It is hard to predict what’s next if we can’t grasp what’s now. But as Carl Sagan said, “You have to know the past to understand the present.”

When I look back, I see the convergence that is shaping the ‘now’. With highly complex design tools becoming increasingly simplified, relationships have started to form between toolsets once seen as unconnected entities.

For example, the AEC industry has typically worked in a relatively compartmentalised way. Architects, engineers, contractors, consultants and many more specialists often find themselves working in separate silos – each an island of their own, in a world where no person (or tool) should be an island any longer.

Now, with simplified creative tools, what’s happening is the convergence of technologies, allowing for more collaboration between the separate roles, where the architect can more easily work with the acoustic consultant, the engineer can collaborate with the visualiser, the gamer with the programmer, the student with the teacher, and so on.

The roles and tools of a specialist now become more accessible to the generalist, thus driving and unleashing slumbering potential for creativity and productivity in a broader pool of users. Everything’s becoming more interconnected.

So it’s actually a case of the multiverse becoming the universe – one holistic, interwoven, unified system of creators collaborating, supported by tools and the rarest, most valuable commodity of all, their own creativity.


Henrik Wann Jensen
Co-founder & chief scientist, Luxion

The most overarching effects of advancements in design visualisation and rendering technology are seen in the emergence of immersive real-time experiences, driven by development in GPU ray tracing hardware and viewing equipment for virtual, augmented and mixed reality.

These developments directly advance the visualisation capability of 3D professionals across all industries, allowing more experiential, hands-on and adaptive creation of completely realistic product appearance and function, whether in a natural environment or an expanded synthetic one.

Within this, we will see advancements in material creation and application, brought about by new methods to capture and digitise material data, with increased abilities to accurately interpret and correlate real-world appearance, lighting, sensory and environmental conditions.

Digital creations will also become more closely aligned with physical phenomena, inputs and variations, while overcoming their limitations.

This creates downstream workflow improvements for 3D professionals, and even consumers, helping them to understand the factors that affect product appearance and function, with or without immersive capabilities.

As hardware and visualisation technology continues to advance, we will see output capabilities that far exceed what was ever thought possible. In the same way that we see the speed and accuracy of product visualisation advance, we will also see the accessibility of product visualisation advance.

This will be most notable in a company’s ability to provide or present product experiences, but also show up in a consumer’s ability to realise or otherwise generate immersive product experiences.

Such accessibility will create demand for accurate appearance and function, which will in turn lead to more hardware and software constraints being overcome.


Marek Trawny
Director of product, auto and concept design, Autodesk

At Autodesk, we believe rich design visualisation and rendering will play a central role in collaboration and design decision-making of the future, as well as yield engaging new marketing and sales experiences.

As technologies used for design, engineering, visualisation and manufacturing converge, it’s becoming simple to render and view product designs, even as real-time modifications are made, through technologies like VR and extended reality (XR).

Getting there requires integrating digital technologies more deeply into creation and review workflows. We’ve been doing so for years, and will soon be able to effortlessly deliver renders to any device, from high-end VR headsets to the phone in your pocket, adding imagination and storytelling to the process of design reviews.

Multiple key stakeholders immersed in the same virtual world, regardless of device or location, have a shared virtual experience in which to view and discuss the product. This puts powerful decision-making information at the fingertips – and in the faces – of the people who keep organisations agile.

Every day, we’re overcoming infrastructure and process challenges that slow the seamless delivery of these experiences. As connected design datasets are increasingly stored in the cloud, important pieces of this puzzle come together. Tedious data preparation tasks required before creating renders are being eliminated.

Autodesk is enabling design leaders in automotive, accessories and apparel, and consumer products even to capture and archive the feedback resulting from virtual reviews, right in the original design data.

Increasing rendering efficiency pays off by enabling everyone from executives to marketing managers to potential customers to have immersive visualisations available instantly.


Sandeep Gupte
Senior director, professional solutions group, Nvidia

In today’s hyper-competitive environment, businesses strive to develop new products and get them to market as quickly as possible. As a result, manufacturers are looking to accelerate workflows at every stage.

That might involve automating unsafe and repetitive tasks on the factory floor, or using advanced engineering software to instantly explore design alternatives, or even seeing photorealistic simulations of design concepts at their earliest stages.

Whatever the solution may be, it is critical to find every way possible to work faster and more efficiently.

Additionally, working together is essential for designing and building high-quality products.

Real-time collaboration is becoming far more of a necessity, whether it is between design and engineering teams, between manufacturers and customers, or even with AI and intelligent machines.

And with so many people working remotely, collaborating with ecosystems of colleagues, contractors and vendors in globally dispersed locations becomes increasingly complex.

Over the years, Nvidia has pushed the pace of innovation with breakthrough technologies in physically based rendering, engineering simulation, immersive virtual reality and applied AI.

But professionals also need a common, open and extensible platform, which is easily accessible and can be connected to AI, design software and intelligent machines – a truly interactive, open environment in which users can seamlessly collaborate with teams around the world, all in a virtual shared space.

This is where we have directed our recent research and development efforts – in bringing all these capabilities together within a photorealistic, physically simulated and AI-driven environment.

Here, product development professionals can accelerate workflows, maintain productivity, and collaborate interactively unlike ever before.

We call it Nvidia Omniverse. As you may have seen at Nvidia GTC, we are already well along this path, working alongside companies like BMW to put Omniverse through its paces in real-life workplaces.


Thomas Teger
Co-founder & chief product officer, Swatchbook

The trend in visualisation and rendering is clear: it is all moving towards real-time visualisation, on any platform. While the traditional marketing shot is still important, people want to interact with the product: to view it, spin it, configure it and see it on their device, in 3D, AR or VR – all in real time.

Taking into consideration that a lot of companies are moving towards a ‘sell it before you make it’ process, it will be important that any product sold is tied to an actual vBOM – a virtual bill of materials – listing not just components, but also materials. This will tie together design, production, retail and consumer in a new way, and allow for a seamless bi-directional flow of data between brand and consumer.
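As an illustrative sketch only – the schema and names here are ours, not Swatchbook’s – a vBOM line item might link each component to both its visualisation geometry and a shared material record, so that design, production and retail all read from the same data:

```python
from dataclasses import dataclass

@dataclass
class MaterialRef:
    material_id: str   # key into a shared material library
    supplier: str

@dataclass
class VBomItem:
    part_number: str
    description: str
    geometry_uri: str  # e.g. a glTF asset used for real-time visualisation
    material: MaterialRef
    quantity: int = 1

item = VBomItem(
    part_number="SB-1042",
    description="Strap, left",
    geometry_uri="assets/strap_left.glb",
    material=MaterialRef(material_id="leather_tan_01", supplier="Example Tannery"),
)
```

Because the material is a reference rather than a baked-in colour, a change at the supplier end can flow back to every rendering, configurator and production document that uses it.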

In order for this to succeed, the process must be fluid, so that data can be reused from concept all the way to the consumer.

The importance of a common data model for 3D – and of tools for preparing assets for the desired platform – has never been greater.

What is new is the treatment of materials, not just from a visual standpoint, but also from a metadata standpoint. This means information relating, for example, to price, availability and sustainability.

This in turn requires that digital materials follow the same principles as geometry: the ability to use a ‘standard material description’ that allows reuse of such materials in any system.

While visualisation applications may still use their own proprietary shader models, these must be open enough to ingest and reuse any of the standard material descriptions with little to no rework.
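As a hypothetical example of what such a ‘standard material description’ might carry (the field names below are our invention), it would pair a renderer-agnostic appearance definition – territory that interchange efforts such as glTF’s PBR materials and MaterialX already approach – with the commercial metadata mentioned above:

```python
material_record = {
    "id": "leather_tan_01",
    # Renderer-agnostic appearance: PBR parameters any engine can ingest.
    "appearance": {
        "baseColor": [0.55, 0.38, 0.22, 1.0],
        "metallic": 0.0,
        "roughness": 0.7,
    },
    # Metadata that travels with the material through the pipeline.
    "metadata": {
        "price_per_m2_eur": 38.50,
        "availability": "in_stock",
        "sustainability": {"recycled_content": 0.2, "certification": "example-scheme"},
    },
}
```

The appearance half makes the material portable between renderers; the metadata half is what lets the same record answer design, sourcing and sustainability questions.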

This also calls for further democratisation of the tools used in the process – tools that allow anyone to participate in the creative process, putting their talents to work to create the best designs possible, without being limited by their knowledge of the applications they use.

And, of course, it has to work on mobile. With the power of the recently introduced iPad Pro, it would be foolish not to take advantage of the compute power and the overall user experience that a tablet with touch can deliver.


Jamie Gwilliam
Senior market development manager, AMD

Historically, rendering has been all about waiting and refining, waiting and refining. While the refining stage will always be part of creating beautiful product images, the waiting part shouldn’t be.

We’ve seen the rise of real-time workflows, where design has taken its lead from the games industry. We’ve seen the rise of more CPU cores, leading to improved rendering speeds. We’ve seen our overall understanding of what makes a good product image increase, by looking to others for inspiration and reading tutorials.

All of these advancements are driven by the hunger for speed – or, more specifically, by not wanting to wait. We simply don’t want to wait for calculations to be performed and for pixels to be generated.

One of the fastest pieces of hardware for performing calculations is the GPU, but this has historically been expensive.

Now, with the introduction of hardware-accelerated rendering on even lower-end GPUs, we see software companies adding functionality that transforms the standard design viewport into an interactive ray-traced renderer.
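The mechanism behind such viewports is progressive refinement: trace a few samples per pixel every frame and average them into an accumulation buffer, so the image resolves from noisy to clean while you keep working, rather than after a long wait. A toy sketch of that accumulation step (ours, not AMD’s), for a single pixel:

```python
import random

def sample_pixel():
    """Stand-in for tracing one ray: a noisy estimate of the pixel's radiance."""
    true_radiance = 0.6
    return true_radiance + random.uniform(-0.3, 0.3)

accumulated, count = 0.0, 0
for frame in range(64):
    accumulated += sample_pixel()
    count += 1
    # The running average converges towards the true value as samples mount up.
    # A real viewport displays this estimate every frame, and resets the buffer
    # whenever the camera or scene changes.
    estimate = accumulated / count

print(f"estimate after {count} samples: {estimate:.3f}")
```

The user sees a plausible image almost immediately and a refined one moments later – exactly the shift from ‘waiting’ to ‘refining’ described above.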

As end users, our expectations continue to rise. What was seen as high quality two years ago is now sub-par and takes too long.

As consumers, we want it all and we want it now – and we want to pay as little as possible for it. These wider expectations will continue to drive a rise in quality and ease of use.

There will always be specialist users, creating stunning marketing visuals, but we all know what makes a good image of our product, and we want to create it within our CAD product, at multiple stages in the design process, not just at the end of it, or using specialist software or hardware.

So what drives rendering advancements further? My answer: It’s our expectations. And long may that drive, which we all share, continue.


David Varela
Senior industry manager – manufacturing, Unity Technologies

At Unity, we’re seeing design visualisation play a key role throughout all steps of the product life cycle, from concept design to sales, marketing and after-sales activities. One of the specific areas in which we see rendering technology having a disruptive effect is in meeting ever-increasing customer demand for customisation on ever-decreasing timescales.

Additionally, as teams and consumers remain remote, you’ll see increased adoption of real-time 3D technology to enable new ways to conduct sales and marketing.

Examples of this might be found in virtual design centres and meeting rooms, where this technology fosters the personal connection that might otherwise be lost by relying on video-conferencing technologies alone.

With people stuck at home, we see customers using the Unity Editor to host collaborative design reviews with people located all over the world. The same tool allows teams from all sectors to create digital twins and virtual reality training and guidance applications, to simulate robotic cells, and even to train AI-driven vehicles.

Furthermore, using tools like Unity Forma, developers at automotive, transportation and engineering companies are overcoming the skills barrier and enabling users of all backgrounds to create engaging experiences for product reviews, configurators and sales applications.

The most exciting stage is yet to come. Unity’s real-time platform is gaining massive adoption from industrial users, because of its huge potential to enable and empower digital manufacturing.

We see real-time 3D transforming how products are made, by enabling companies to adopt flexible manufacturing operations, accelerate new product introduction and use AI to optimise manufacturing processes – and in these ways, help organisations to build the agility and resilience necessary to survive in today’s challenging environment.


Phillip Miller
VP of product management, Chaos

At Chaos, we see a continuum of rendering possibilities stretching out before us, from game-like, real-time rendering and VR, to real-time ray tracing of massive assemblies, to predictive simulation and, finally, to film-quality experiences.

More importantly, we believe all of this can be part of a continuous workflow that unites the efforts of designers and visualisation specialists across projects and 3D tools.

The critical design details made clear through material, surface and illumination choices will no longer be stranded in one renderer, tool or designer’s mind, but will persist throughout the project lifespan, saving time and preserving design intent.

Artificial intelligence (AI) will continue to become more widely employed in the rendering process, making tedious or multi-step setups automatic, while also suggesting new permutations for design exploration and experimentation.

Combining AI with well-curated libraries of materials and methods will enable even casual users to easily achieve the type of surface appearances that help them predict how the manufactured result will behave across surface options and lighting scenarios.

As these changes take hold, efficiency and productivity will dramatically increase. For most designers, this will mean they can explore more ideas/approaches to achieve a superior design, rather than delivering an average design more quickly. And ultimately, better designs should translate into better business.

Finally, as the rendering workflow reaches an always-on, continuous state, the ability to share results and interactive experiences with remote decision-makers will also increase, which will do wonders for design communication and accelerate the decision-making process.

