"It's already happening. Us human beings … We had a chance to make something of ourselves and we blew it! OK. Another civilization is gonna take a turn. We take machines and stuff them full of information. And they become smarter than we are."
Those scripted words were spoken by actor Buddy Hackett more than 50 years ago in the Disney film The Love Bug. The film's familiar Volkswagen Beetle takes on the world as the first autonomous vehicle, making its own decisions about where it wants to go and how quickly it gets there.
So where are we on the road to autonomy? Some industries and transportation support groups suggest that it's just around the corner.
Those who are not embedded in the industry, reading blogs or watching news broadcasts and advertisements about "AI" writing prose and code, sense that autonomous vehicles are going to flood the highways within the next few years.
Running a light Level 5 AV within a few mapped miles, however, is a lot different from a vehicle navigating any road or highway across the U.S. like our fantasy Bug.
It's going to take quadrillions upon quadrillions of machine-learning events before the machine learns to mimic human cognitive decision-making and performs complex tasks that rely heavily on human reasoning.
Of course, there is programming helping the learning process along, but that is only part of the challenge. There is a ton of machine learning to perform before anything truly goes AI.
For example: At an average four-way intersection, you might have four cars, four bikes and four pedestrians. Factor in the different states of a traffic signal or signage, and that one intersection can generate more than 131,000 different scenarios. Multiply that by the more than 250,000 regulated intersections in the country (according to the National Highway Traffic Safety Administration). That's a lot of events, and a lot of learning going on.
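To get a feel for the scale those numbers imply, here is a quick back-of-the-envelope multiplication. The 131,000-scenario figure and the 250,000-intersection count come from the article itself; everything else is just arithmetic, not a traffic model.

```python
# Rough scale check: scenarios per intersection times regulated
# intersections nationwide. Figures are those cited in the article.
SCENARIOS_PER_INTERSECTION = 131_000   # one four-way intersection
REGULATED_INTERSECTIONS = 250_000      # NHTSA count cited above

total_events = SCENARIOS_PER_INTERSECTION * REGULATED_INTERSECTIONS
print(f"{total_events:,}")  # 32,750,000,000
```

Nearly 33 billion distinct intersection events, before you ever leave the intersections for highways, parking lots or rural roads.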
What does this have to do with advanced driver assistance systems (ADAS)? Plenty!
ADAS is the backbone of autonomy — no matter the propulsion mode. And this data collection starts with you: the daily driver.
If your car or truck has an antenna, the manufacturer is uploading and downloading data on its driver-assist components, as well as ingesting how the driver interacts with other cars and trucks, because the autonomous vehicle will not be driving alone, at first.
So, to provide the most accurate data for machine-learning purposes, vehicles that receive ADAS calibration/recalibration, whether at a dealership or an aftermarket shop, need that process to be dead-on.
That means alignment first, calibration/recalibration second — in the same visit, not days or weeks later. If the alignment is out of spec (say, due to a pothole or curbing incident), the off-angles can adversely affect the efficacy of the calibration/recalibration for the vehicle-to-driver ping upon lane departure, automatic emergency braking or adaptive cruise control, to name a few.
That means checking the ride height and zeroing out the thrust angle. No more "set the toe and let it go" mindset.
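The "zero the thrust" step above can be sketched numerically. In a typical alignment, the thrust angle is half the difference of the individual rear toe readings; the toe values and the one-degree window below are illustrative assumptions that echo the article's tolerance, not OE specs — real service data governs in the bay.

```python
def thrust_angle(rear_left_toe_deg: float, rear_right_toe_deg: float) -> float:
    """Thrust angle in degrees: half the difference of the rear toe readings.
    Sign convention here is arbitrary; follow OE service data in practice."""
    return (rear_left_toe_deg - rear_right_toe_deg) / 2.0

def ok_to_calibrate(thrust_deg: float, tolerance_deg: float = 1.0) -> bool:
    """Gate the ADAS calibration on the thrust angle being effectively zero.
    The one-degree default mirrors the article; actual specs are per-vehicle."""
    return abs(thrust_deg) <= tolerance_deg

# Illustrative example: a curbed rear wheel leaves toe readings of
# +0.4 and -0.3 degrees, skewing the thrust line.
t = thrust_angle(0.4, -0.3)
print(round(t, 2), ok_to_calibrate(t))  # 0.35 True
```

Note the gate: even a thrust angle inside tolerance should be documented before calibration, since the forward-facing camera and radar reference the vehicle's thrust line, not its body centerline.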
Another practice that can miscue the calibration/recalibration is a technician "forcing" a reset when the vehicle's environment during the ADAS calibration process isn't right.
I'm talking about providing the proper space for static calibration/recalibration, or navigating a feature-rich landscape for those cars and trucks requiring a dynamic approach. Both scenarios, again, report back to the OE's bank of ADAS machine-learning data.
The end-user support system — professional technicians — needs to address ADAS for what it is: a set of machine-learning devices on the road to successful autonomy.
Follow directions to the letter, and in order of application. Use professional equipment in proper working order. And document, document, document.
Remember: The system is geared to operate within one degree.