
Neural CAD accelerates design exploration


Design software should support rapid and iterative exploration of new ideas. Neural CAD engines could be the best way for designers and makers to shift into a new age of AI enhancement, writes Mike Haley of Autodesk


In the early stages of a design project, during ideation and conceptualisation, tools should accommodate the broadest methods of expression. Early specifications, outlines, rough sketches and photos (among others) are all essential in expressing ideas.

Yet, the core paradigm of CAD software, parametric CAD, has remained largely unchanged for 40 years. These engines, built using traditional software technology, rely on a keyboard, mouse and typical user interface. Creators must explicitly define parameters such as the dimensions of an object. That can restrict creativity.

Creators need technology that doesn’t obstruct their creative flow. Instead, it should enable rapid and iterative exploration of ideas, regardless of how detailed or approximate they might be.

Over the past decade, research has underscored that early-stage exploration significantly improves project outcomes such as cost, sustainability and suitability. That’s why recent advancements in neural technology are generating such excitement.


A new era approaches

Today, we are entering a transformative era in design and make technology. Picture a scenario in which a computer comprehends spoken language, sketches, three-dimensional design data and industry-specific workflows. Now enrich that scenario with a team’s decades of project knowledge.


That is the work of pioneering neural AI foundation models that focus on design and make problems. Neural CAD engines enhance legacy parametric engines, offering novel ways to explore solutions and generate geometry.

They still honour the traditional precision and control designers require, but most importantly, they also introduce entirely new and richer ways for users to express and interact with ideas.

Unlike parametric CAD, they learn and improve, constantly aligning better with creators’ needs and ways of working. Meanwhile, they can reliably reason about the three-dimensional, physical world.

Ultimately, these advancements are aimed at making creators faster, smarter and more competitive. The quicker designers can turn around their designs, the faster they can move products to market – but the impact goes beyond speed. AI-powered design tools can directly reason about the immense complexity within design creation. It is this that sets the stage for vast improvements across the entire design and make category.

Human/machine interaction

Imagine you’re a product designer exploring some early-stage product concepts. While it’s well known that AI image generators can quickly create conceptual images, now consider that instead of just an image, you can generate a highly detailed CAD model just as easily. Simply by entering text prompts or sketches, the neural CAD engine will start to produce options instantly.

The deep neural networks behind this technology directly reason through the surfaces, the edges and the topology that would satisfy any request. And the engine doesn’t just produce one model, but multiple models, allowing users to quickly explore trade-offs between different options. The result is first-class, editable CAD geometry, so a design is immediately usable.
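To make that workflow concrete, here is a minimal sketch of what driving such an engine programmatically might look like. The NeuralCADEngine class, its generate method, the prompt format and the option attributes are all hypothetical illustrations for this article, not an actual Autodesk API.

```python
# Hypothetical sketch of a text/sketch-to-CAD workflow.
# NeuralCADEngine and its methods are illustrative only, not a real API.
from dataclasses import dataclass


@dataclass
class CADOption:
    name: str
    faces: int          # number of B-rep faces in the generated model
    mass_kg: float      # rough mass estimate for quick trade-off comparison


class NeuralCADEngine:
    """Stand-in for a neural CAD foundation model."""

    def generate(self, prompt: str, sketch_path: str | None = None,
                 n_options: int = 4) -> list[CADOption]:
        # A real engine would reason over surfaces, edges and topology;
        # here we simply return placeholder options.
        return [CADOption(f"concept_{i}", faces=120 + 10 * i, mass_kg=0.8 + 0.05 * i)
                for i in range(n_options)]


engine = NeuralCADEngine()
options = engine.generate("handheld drill housing with an ergonomic grip",
                          sketch_path="grip_sketch.png", n_options=4)

# Compare trade-offs across the generated, editable models.
for opt in sorted(options, key=lambda o: o.mass_kg):
    print(f"{opt.name}: {opt.faces} faces, {opt.mass_kg:.2f} kg")
```

The point of the sketch is the shape of the interaction: one loose prompt in, several editable candidate models out, ready for side-by-side comparison.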

Buildings are a great example of the transformation these systems can provide, given their interrelated levels of detail, representations and components. Changing something in one place invariably means creating or updating associated representations and structures. Much of this work is laborious. It slows down creators’ ability to explore alternatives and rapidly make changes.

Consider an architectural massing model that outlines a proposal for a new building; as the architect manipulates the shape, the neural CAD engine responds to these changes, autogenerating floor plan layouts. Generative AI creates a simple floor plan instantly, but for this to be truly useful, an architect must control the generation.

Leveraging the multi-modal interaction enabled by neural CAD, they can lock down certain elements, such as a specific stairwell, hallway or room, and then use natural language prompts to instruct the software to change structural materials.

By combining the power of large language models, the software can recompute the location and size of columns and create entirely new floor layouts, while honouring constraints specified by the architect.
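As a rough illustration of that constraint-aware regeneration, the sketch below locks selected elements and asks a stand-in engine to rework the rest of the plan from a natural-language prompt. The FloorPlanModel class, the element names and the lock/regenerate methods are assumptions made for illustration, not a real product interface.

```python
# Hypothetical sketch of constrained floor-plan regeneration.
# FloorPlanModel and its methods are illustrative only, not a real API.
class FloorPlanModel:
    def __init__(self, elements: dict[str, dict]):
        self.elements = elements          # element id -> properties
        self.locked: set[str] = set()

    def lock(self, element_id: str) -> None:
        """Mark an element as fixed so regeneration cannot move or change it."""
        self.locked.add(element_id)

    def regenerate(self, prompt: str) -> dict[str, dict]:
        # A real neural engine would recompute column positions and layouts
        # while honouring the locked constraints; here we only simulate that
        # locked elements pass through unchanged while others are updated.
        updated = {}
        for eid, props in self.elements.items():
            if eid in self.locked:
                updated[eid] = props                        # constraint honoured
            else:
                updated[eid] = {**props, "material": "mass timber"}
        return updated


plan = FloorPlanModel({
    "stairwell_a": {"material": "concrete"},
    "hallway_1":   {"material": "concrete"},
    "office_zone": {"material": "concrete"},
})
plan.lock("stairwell_a")
new_layout = plan.regenerate("switch the structural material to mass timber")
print(new_layout)
```

The design choice being illustrated is that the architect stays in control: locked elements act as hard constraints, and the generative step only fills in what has deliberately been left open.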

In the future, these neural CAD foundation models will be customisable to an organisation’s proprietary data and processes. The integration of these models will transform the creative process, making it more intuitive, collaborative and abundant.


About the author

Mike Haley is senior vice president of research at Autodesk.

In this role, he leads the company’s industrial research, focusing on uncovering new technologies for customer-centric transformations and addressing challenges such as climate change, automation and industry convergence.

 


This article first appeared in DEVELOP3D Magazine


