Erin McDermott’s autonomous driving fears


When I attended the Consumer Electronics Show (CES) in Las Vegas this year, there was an electric buzz around autonomous driving.

Millions of development dollars are being spent. Big players and small players are filling the field. Larger players are gobbling up smaller players, too.

It could be argued that electric vehicles are a first step towards tomorrow’s AI-powered vehicle, and here at CES we even saw a company that has never built a vehicle before: Sony.

Though it was never explicitly stated, any CES attendee would have acknowledged the public expectation that robot chauffeurs are coming.

They’re inevitable and they’ll also be wondrous.

It’s the future! And, at the same time, it’s horrifying! As an R&D engineer with many years of experience in industry, including the automotive sector, I’ve seen things. I think all engineers have.

We know a physical thing can’t be mass-produced 100% correctly, 100% of the time, even with accurate planning and communication.


(“Accurate communication”? What is that, anyway?) Beyond that, automotive engineers especially know that there are no completely safe vehicles out there.

It’s not because manufacturers are evil and don’t want consumers to be safe. It’s that no one would pay for – or want to be seen in – a super-safe vehicle.

The thing would be constructed exclusively of padding, roll bars and speed limiters. It would also cost an exorbitant amount to meticulously test every millimeter of each finished product.

The saving grace of these dangerous, speeding hunks of metal that we consumers buy and sit inside is that humans are pretty good at reacting to snafus they don’t anticipate.

So when brakes fail, or your Ford Pinto bursts into flames, or grandpa escapes from the nursing home, hijacks a car and comes speeding toward you the wrong way down the highway, you can react creatively.

It’s not imperative that you already have a programmed response for these specific situations.

For an autonomous driving program, though? How does a software engineer build code to anticipate situations that are hard to imagine?

I polled my fellow engineers in the physical product space and found I wasn’t alone. Me: “What do you think of autonomous driving?”

All of them: “Terrified.” Right, ok. But the question remained: Was I being hasty, assuming bad things about the software side?

Maybe I’m not being fair. I asked my friend Keith Marcum, a software nerd/expert extraordinaire, about his personal thoughts from the coding side of the table. TL;DR: Also horror.

As he put it: “So, basically, all software is terrible. Some of it is terrible in ways that you can see. Some of it is only terrible beneath the surface.” Great!

So, at least in this respect, hardware and software are kind of similar.

Marcum made some other interesting points that I hadn’t previously considered.

It’s worth noting that, as we chatted about this topic, we were dangerously cut off on the expressway multiple times as he drove me to catch my flight to Las Vegas.

First, laypeople he speaks with often assume that the best and brightest programmers are the ones assigned to code autonomous vehicles. This is false.

The people writing car robot code are the same mediocre folks who built the first version of Apple Maps (my example, not his) – or, when you think about it, any other program that’s disappointed you lately.

Second, people put too much faith in AI. This in itself is scary.

Those who aren’t naturally fearful have a tendency to kick back and take a nap when they should be supervising their robot’s actions. (Would you nap when your teen with a brand new learner’s permit was driving? Oh, no? But you’re cool with the bot driving? Ok, great. Just checking.)

Third, there are different levels of autonomy (the SAE scale runs from Level 0, no automation, to Level 5, full automation), and this goes along with the previous point.

Humans will tend to assume that their fancy Knight Rider car is at fully automated Level 5 – even if it’s not.

Finally, if the bar is other human drivers, then that bar is low. Really low.

On that note, Marcum tapped his brakes, anticipating another driver was about to merge not only into our lane, but also into us, before going on to say:

“I don’t mean to say that humans can’t make software that’s good enough to keep humans alive in dangerous situations. We clearly can. But when we write safety-critical software, it looks totally different.”

As he explained, general software development culture is one in which coders are used to being able to build things without directly facing the consequences of bad decisions.

Think back to the 2017 Equifax data breach.

For this reason, Marcum believes that “safe autonomous vehicle software can only happen in defiance of prevailing software culture.”

In any case, I’d like to put forward one suggestion for autonomous driving development teams: Can you make sure that your programmers get to spend plenty of time driving on roads outside California? I’d hate for the bar to be set by those drivers most likely to make me require a change of pants.


Get in touch:

Erin M McDermott is director of optical engineering at Spire Starter and a digital nomad (read: vagrant).

She travels the world meeting hardware engineers who don’t know that things using light (cameras, LED illumination, LiDAR, laser processes etc) need competent design, optimisation and tolerancing, just like the rest of their widget. Get in touch at spirestarter.com or @erinmmcdermott

