Computers have come a long way, and now that the U.S. National Highway Traffic Safety Administration (NHTSA) has released a letter claiming that the artificial intelligence component of Google's Level 4 autonomous cars can be considered the driver, they can literally cross continents all by themselves.
Google's Level 4 full self-driving automation vehicles earn their status by performing all safety-critical driving functions and by being capable of monitoring roadway conditions for an entire trip. L4s don't have steering wheels, brakes, or gas pedals.
The NHTSA admitted that the current U.S. Federal Motor Vehicle Safety Standards (FMVSS) were drafted before the invention of autonomous cars and were outdated in that they assumed the driver would be human. Tesla's autopilot mode and the surge of autonomous research and development underway at most major auto industry leaders have clearly indicated a changing of the times.
"[Google is] the only company so far committed to L4 because their objective is to completely eliminate the human, and thus human error, from driving," explained Praveen Chandrasekar, research manager at Frost & Sullivan. "Ford and GM are thinking about similar levels" of autonomous driving, he continued.
Tesla's newest update is projected to make it possible for drivers to fly to the other side of the country and summon their cars to them. The cars will be able to pull out of the driveway and drive themselves across the country to a set location.
Google had struggled to remain in accordance with the FMVSS by offering alternative interpretations of what a driver is and of where the driver's seating position lies.
"The next question is whether and how Google could certify that the SDS (self-driving system) meets a standard developed and designed to apply to a vehicle with a human driver," stated the NHTSA.
U.S. Transportation Secretary Anthony Foxx commented that the NHTSA's interpretation "is significant, but the burden remains on self-driving car manufacturers to prove that their vehicles meet rigorous federal safety standards."
Others, however, suspect the outcome may be the result of lobbying by the powerful tech company. John Simpson, a consumer advocate at Consumer Watchdog, calls the NHTSA's interpretation "outrageous."
"Google's own numbers reveal its autonomous technology failed 341 times over 15 months, demonstrating that we need a human driver behind the wheel who can take control," he said. "You'll recall that the robot technology failed 272 times, and the human driver was scared enough to take control 69 times."
This is not to say that the NHTSA is officially allowing Google to invent whatever it pleases and release it onto the road. The NHTSA has simply delineated that Google's requests "present policy issues beyond the scope and limitations of interpretations and thus will need to be addressed using other regulatory tools or approaches." The question of autonomous driving is a big one, and it may take more than a few government agencies (and courts) to determine how legislation should proceed.
Then there's the issue of insurance. According to Chandrasekar, "currently, insurance is decided based largely on the driver and minimally on the vehicle… [liability] is a huge factor, and that will need to be carefully analyzed as OEMs will end up being largely responsible… This is why OEMs like Volvo, Audi and Mercedes-Benz have stated that in their L3 vehicles they'll assume all liability when the vehicle is driving itself."