Considering the complex nature of digital design, it seems odd that there isn’t more 3D input hardware. Al Dean looks at what Elon Musk of Tesla / SpaceX recently demoed and wonders if we’ll all get to play Tony Stark
Say what you like about Marvel’s Iron Man movies: if they do one thing well, it’s showing that engineers and designers can achieve greatness. No, I’m kidding.
Of course, it’s a fantastical world where money is no object, where all the future technology of the world is available to a former arms manufacturer turned superhero. And whether you despise the plot holes, his obsession with Audi (Stark wouldn’t drive an R8, would he?), or the Oracle product placements, the one thing everyone loves is the interface of his whizzy holographic design system.
He manipulates geometry, visualises it (strangely, always in wireframe) and builds it — presumably by a 3D printer that the world’s not even dreamt up yet. As a glimpse into the future, run through Hollywood’s magic machine, it’s fascinating but, ultimately, nonsense. Or is it?
Earlier this month, Elon Musk, founder of PayPal, Tesla and SpaceX, tweeted: “We figured out how to design rocket parts just with hand movements through the air (seriously). Now we need a high frame rate holograph generator.” He then promised to post a video of how they did it.
With Musk being worth around $9 billion, running an electric vehicle company and building space ships, a statement like that tends to grab everyone’s attention. A week later and the video was posted — it’s worth a look. In it you’ll see a mix of Siemens NX, a Leap Motion, then progressively more sophisticated display devices used to manipulate 3D geometry.
For the layperson, this looks incredible. For those with experience of actually doing this kind of work, it’s quickly clear that this is about geometry movement (panning, zooming, rotating) rather than actual editing.
But it’s a clue as to where things are heading.
How did we get here?
If we take a step back and look at the hardware available to those working with 3D design tools, in the early days it was bespoke to the process.
The sheer compute power (for the time) required for 2D, then 3D, geometry creation has always pushed the curve in terms of what’s available. That’s why companies like SGI came along with high-powered (and high-cost) workstations.
Anyone that used an Indigo or an Octane will tell you that these things had a look and feel that made the user feel special (even though they’d just cost twenty grand).
Alongside the high-powered computation, input hardware has always been lacking: enter the keyboard and mouse.
While they might have been plugged into a metallic purple workstation that cost a shit tonne of money, design has always been conducted with peripheral devices built around 2D workflows. Yes, there’s the likes of Wacom with its ever-expanding range of tablet devices, but again, these are focussed on working in 2D.
Of course, 3D design, in an ideal world, would require a 3D input device.
The most successful come from 3Dconnexion, formed when the producers of the SpaceBall (Labtec) and the SpaceMouse (Logitech) merged into a single organisation. These devices are still developed and loved by those that use them.
There are also haptic input devices, which use force feedback to ‘sculpt’ digital clay. A couple of vendors, such as Geomagic (now part of the 3D Systems empire), still produce these devices, but they remain niche.
There have been other attempts to get something off the ground and I’ve got a drawer in the office devoted to just such attempts. Anyone remember the Dimentor that combined the mouse with a rollerball for 3D manipulation? No. Thought not.
So, what is it about Musk’s demo that’s different?
The answer is that this is something funded by a user, rather than a vendor, using consumer grade electronics. Albeit a user worth billions with his own space ship company.
His use of the Leap Motion is interesting. This is the enabling technology and it’s cheap: I walked into Maplin and bought one for under 70 quid. Once it’s plugged into your machine and you’ve worked through the orientation process, it becomes clear that, while it’s still early days, this thing has huge potential.
Being able to manipulate what’s on screen directly is something that seems both alien and intuitive at the same time. It’s alien because it’s new, it’s intuitive because it just works. But that’s the demo.
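To make the idea concrete, here’s a minimal sketch of how a stream of hand-tracking samples might be mapped to the view operations the SpaceX demo shows (orbit and zoom, not editing). The `HandSample` structure, the pinch-to-zoom convention and the gain constants are all assumptions for illustration; the real Leap Motion SDK exposes a much richer frame model.

```python
from dataclasses import dataclass

# Hypothetical hand-tracking sample: palm position in millimetres
# relative to the sensor, plus a pinch flag. This is an assumed,
# simplified stand-in for a real tracking frame, not the Leap API.
@dataclass
class HandSample:
    x: float
    y: float
    z: float
    pinching: bool

def view_delta(prev: HandSample, curr: HandSample,
               orbit_gain: float = 0.5, zoom_gain: float = 0.01):
    """Map the change between two hand samples to a view operation.

    Open hand -> orbit (rotate) the model; pinched hand -> zoom.
    Returns a (mode, value) tuple for the viewport code to consume.
    The gains are arbitrary tuning constants.
    """
    dx, dy, dz = curr.x - prev.x, curr.y - prev.y, curr.z - prev.z
    if curr.pinching:
        # Moving the pinched hand toward the sensor zooms in.
        return ("zoom", -dz * zoom_gain)
    # A horizontal/vertical sweep orbits the camera, in degrees.
    return ("orbit", (dx * orbit_gain, dy * orbit_gain))

# Sweeping an open hand 40 mm to the right orbits the view.
print(view_delta(HandSample(0, 0, 200, False),
                 HandSample(40, 0, 200, False)))
```

The point of the sketch is that view manipulation is just a mapping from position deltas to camera parameters, which is why it arrived first; true geometry editing needs gesture vocabulary, precision and constraints that this kind of mapping doesn’t provide.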
Post demo blues
The Leap Motion is cool technology, that can’t be denied. But the reality is that there’s very little in the way of actual use for this thing yet.
At present, only Autodesk has launched drivers, for its Maya and MotionBuilder products in the media and entertainment industry. Personally, I’m amazed that no other vendor has talked about this yet.
The idea of being able to manipulate a geometry model on screen using your hands is powerful. I doubt it would change how you design — waving your arms about is tiring, to be honest. But where I can see it fitting in is presentation and design review.
Imagine working with these tools with a decent projector and your design team. It makes it more dynamic and less exclusive.
So let’s see what comes in the next 12 months; then we can all play billionaire playboy philanthropist.