
Honda sets its sights on Level 4 autonomy by 2025


Honda has ramped up its self-driving car efforts this week, publicly setting its sights on achieving Level 4 autonomy by 2025. This would see the company's cars handle all driving tasks without human intervention and, according to Honda, complete a critical step in its contribution toward a future of accident-free roads.

Honda had previously announced plans to introduce Level 3 autonomy by the year 2020, which would mean highly automated freeway driving. The new announcement takes things one step further and is yet another sign of how quickly these technologies, and the industry as a whole, are advancing.

The idea of autonomy levels first emerged in 2014 by way of a report from SAE International (the Society of Automotive Engineers). By categorizing the varying capabilities of this new breed of vehicle, the report provides a framework and a common language that developers and the general public can refer to as we progress along the road toward full autonomy.


Full autonomy is described as Level 5, where the steering wheel is optional and the seats might even face backwards to form a mobile lounge room in your car. Level 4, which Honda has in its crosshairs, would mean a car that can be driven by a human but doesn't ever need to be. It will call out for human assistance if needed, say if it encounters rough weather or unusual conditions, but by and large this constitutes a true self-driving car. Daimler and Waymo are two other examples of companies targeting Level 4 autonomy.
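For readers who want the whole ladder in one place, the following is a minimal illustrative sketch of the six SAE automation levels, written in Python for convenience. The one-line summaries are paraphrased and the names are our own shorthand; SAE's J3016 report remains the authoritative definition.

    # Illustrative sketch only (not Honda or SAE source material): the six SAE
    # automation levels as an enum, with paraphrased one-line summaries.
    from enum import IntEnum

    class SAELevel(IntEnum):
        NO_AUTOMATION = 0           # human performs all driving tasks
        DRIVER_ASSISTANCE = 1       # a single assist, e.g. adaptive cruise control
        PARTIAL_AUTOMATION = 2      # steering plus speed, human supervises at all times
        CONDITIONAL_AUTOMATION = 3  # self-drives in limited settings, human takes over on request
        HIGH_AUTOMATION = 4         # no human fallback needed within its operating domain
        FULL_AUTOMATION = 5         # drives anywhere a human driver could

    def human_fallback_required(level: SAELevel) -> bool:
        """Levels 0-3 still rely on a human to retake control when asked."""
        return level <= SAELevel.CONDITIONAL_AUTOMATION

    # Honda's stated 2025 target maps to SAELevel.HIGH_AUTOMATION (Level 4).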

Honda demonstrated its progress this week in Japan with media on hand, where a driving test saw a vehicle equipped with radar sensors, LIDAR and multiple cameras navigate a closed track simulating multi-lane freeway traffic. Another test saw an autonomous vehicle use AI and cameras, with no GPS or LIDAR, to navigate a complex urban environment and predict and avoid obstacles.

You can see these tests playing out in the video below.

Source: Honda

4 comments
guzmanchinky
With 100 people dying every single day in the US alone, this cannot come soon enough. I am also relieved by the fact that, at age 46, by the time I reach my parents' age I won't be confined to the house because I can't drive anymore.
Derek Howe
Wow, they are a full 5 years behind Tesla...they better hurry up. At least they are working on it...but then again, they don't have a choice if they want to exist in 10 years.
MQ
Autonomous systems... Hilarious description of level 5, of course you may as well read a newspaper, at this level of autonomy there isn't even an emergency stop button for you to push any more, your fate is out of your hands.
An alternate interpretation may be... - of course open to debate and definition of terms: Level 5 autonomy removes the human from the loop totally, ie. no supervisory role for the human (unreliable meatbags), merely define the mission parameters (like google maps routing without using the options) and the system plans and executes... (real system wide level 5 autonomy would delegate mission parameters to the "Master Controller".)
As soon as the human makes a high-level intervention - at the execution or planning level - this rapidly degrades the autonomy through levels 4-2 (merely disapproving of a system intention and ordering a "recalculation of intentions" is level 4 (human on the loop / supervisor [authorise/revoke]) - oops, crash while having a debate with the machine). The human choosing one of multiple scenarios (kill/no-kill, go/no-go, left/right/straight-ahead, stop/no) is level 3 (human on the loop / operator), allowing interaction like altering system behavior using a touchscreen etc.
If the human were to use a "Fly-by-wire" (FBW) (computer augmented) control input, this would be back to a level 2 instantly (human /-IN-the-loop [series control]), at this level there is still no direct control by the human of things like actual steering, braking or throttle inputs these are all abstracted.
Level 1 Autonomy is fairly familiar where we switch cruise control (or other assistance tools) on and off at will (parallel manual and automatic control) to assist reducing workload (optional control loop) while maintaining manual control / direct override at all times.....
Level 0 IS NOT an Autonomy level, this is full manual control (to the point of no climate control, braking assistance, cruise control or power steering. That would be like returning to the stone age) with NO automated systems at all.
The big leap in terms of control architecture is level 1-2. Once we relinquish direct manual control by including a physical abstraction layer, we are 100% relying on the automation system inserted between the human and the machine to work perfectly. This step is a key design change and often cannot be overridden at will. From there to level 4 is almost trivial (with today's computing power), but to get to level 5 requires sophisticated "AI"-like systemwide interaction (higher level traffic management).
At all levels below 5 there is a need for the operator to retake control in "extraordinary" situations, often with a fraction of a second's notice. This is an extraordinarily difficult thing to do in a close-quarters traffic environment due to the lack of situational awareness and practice by a "standby" operator. Driving is the most dangerous thing we do, and all it takes is a 10th of a second of inattention at the wrong moment, and it all ends badly...
Deres
This seems more realistic than most projects from Google ... Note that to attain autonomous driving, they will use 2 out of 3 independent sensors (cameras, LIDAR and millimeter-wave radar), and that each sensor will be at least doubled for redundancy. As a result, such a technology will be highly expensive, as you will need at least six sensors frontally and probably 12 to cover the rear and sides, plus the computing power to analyse all the data from those data-rich sensors in real time.
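As a rough illustration of the "2 out of 3" redundancy idea raised in the comment above, a voting check like the sketch below could decide whether a detection is trusted. The sensor names and the agreement rule here are assumptions made for illustration, not Honda's actual fusion logic.

    # Toy sketch of 2-out-of-3 sensor voting (an illustrative assumption, not Honda's design):
    # an obstacle is only acted on when at least two independent sensor types agree.

    def obstacle_confirmed(camera: bool, lidar: bool, radar: bool) -> bool:
        """Return True when at least two of the three modalities report the obstacle."""
        return sum((camera, lidar, radar)) >= 2

    # Example: camera and radar agree while LIDAR misses (say, in heavy rain)
    print(obstacle_confirmed(camera=True, lidar=False, radar=True))  # True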