There’s only so much you can hand-script into code for a car to drive itself safely on unpredictable public roads. Rely too heavily on scripted rules and you end up with a product that lacks adaptability, which is exactly what Google is reportedly trying to avoid as it develops its autonomous car tech.

Its current crop of roughly twenty prototypes seems to be pretty safe (we’ve heard no horror stories), though that’s partly down to the way they react to unusual situations: the software that controls them errs heavily on the side of caution whenever it sees anything out of the ordinary, most often just stopping dead, apparently perplexed.

In one amusing episode reported by the Mercury News, a car stopped dead when it detected a woman on a motorized scooter chasing a duck and waving a broom around…

That approach doesn’t really work, however: the researchers involved found that to move through traffic efficiently, a car needs to be assertive (decisive, though not necessarily aggressive). Otherwise it bogs down far too often and feels unnatural; people aren’t overly cautious when they drive, and neither should the machines be…

To put it another way, as Nathaniel Fairfield, one of the team leads writing Google’s self-driving software, said: “if you’re always yielding and conservative, basically everybody will just stomp on you all day.”
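To make that tradeoff concrete, here’s a minimal, purely hypothetical sketch (not Google’s actual software, and every name and number in it is made up for illustration): a toy gap-acceptance rule for merging into traffic, where a single “assertiveness” knob controls how large a gap the car insists on before pulling out, without ever dipping below a hard safety floor.

```python
# Toy gap-acceptance model (illustrative only, not Google's code).
# The car wants to merge and must decide whether an oncoming gap is big enough.
# assertiveness = 0.0 waits for enormous gaps (the overly cautious behaviour),
# assertiveness = 1.0 accepts anything above a hard safety floor.

from dataclasses import dataclass


@dataclass
class Gap:
    seconds: float  # time until the next oncoming car reaches the merge point


MIN_SAFE_GAP = 3.0   # hard safety floor, never crossed (assumed value)
CAUTIOUS_GAP = 10.0  # what a maximally timid policy holds out for (assumed value)


def accepts(gap: Gap, assertiveness: float) -> bool:
    """Return True if the car takes this gap.

    assertiveness interpolates the required gap between CAUTIOUS_GAP and
    MIN_SAFE_GAP; safety is never traded away, only hesitation.
    """
    required = CAUTIOUS_GAP - assertiveness * (CAUTIOUS_GAP - MIN_SAFE_GAP)
    return gap.seconds >= required


if __name__ == "__main__":
    traffic = [Gap(2.0), Gap(4.5), Gap(7.0), Gap(4.0)]  # typical busy-road gaps
    for a in (0.0, 0.5, 1.0):
        taken = sum(accepts(g, a) for g in traffic)
        print(f"assertiveness={a:.1f}: merges on {taken} of {len(traffic)} gaps")
```

With the timid setting the car never finds a gap it likes on a busy road, which is exactly the “bogging down” behaviour described above; turning the knob up lets it take smaller, but still safe, gaps and keep moving.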

Which brings us back to the legal problem…

Regardless, Google co-founder Sergey Brin expects self-driving cars to be commercially available by 2017. That’s not a lot of time to engineer something this complicated well, so we hope the work done so far (Google has been at it since 2012) has been fruitful.

Besides, it would be unnatural of us to fully accept that timeline without solid proof, and even then we’d want to challenge it before coming anywhere close to accepting it.
