As prepared for
Toyota Advanced Safety Seminar
Wednesday, Sept. 4, 2014
Ken Koibuchi, General Manager of Intelligent Vehicle Development, Toyota Motor Corporation
Good morning everyone.
As I believe you will see from my presentation today, I have the most fun job in the company.
In fact, safety is fun…and I can prove it.
Today I will offer you a progress report
- on what Toyota has been doing recently
- and where we are planning to go in the future…
- in the area of intelligent vehicle technology.
The Toyota Advanced Active Safety Research Vehicle
- was unveiled at the Consumer Electronics Show in Las Vegas,
- nearly two years ago.
That vehicle, capable of fully automated driving,
- was developed mostly at the Toyota Research Institute North America
- only a few miles from here
- and was meant to offer a future reference point
- for a broad active-safety strategy at Toyota.
Toyota believes that someday,
- in specific traffic environments,
- vehicles will be capable of full automation
- but with drivers always in control.
As is the case with zero traffic fatalities,
- full automation is a target difficult to achieve.
In striving for perfection,
- we can make safety advancements at a quicker pace.
Our goal is to bring high-level driving assist systems
- to the mass market
- as soon as possible.
One big question will be: will the masses be ready?
Toyota spends about a million dollars an hour on R&D
- because we take a broad approach
- to find answers to challenging questions.
In the area of advanced safety,
- we have been comprehensive in exploring all avenues
- that will get us closer to zero traffic fatalities.
In fact, the new era in mobility…is already in motion.
In the course of researching and developing automated driving technology, we have succeeded in developing…
- “Automated” active safety systems and
- “connected” or Intelligent Transport Systems,
- which are both high-level driving assist technologies.
Connected vehicle technologies
- require either Vehicle-to-Vehicle or
- Vehicle-to-Infrastructure communications.
Whereas automated driving technologies…
- rely on processing information
- solely from on-board sensors, GPS and cameras.
At last year’s ITS World Congress in Tokyo,
- we demonstrated an Automated Highway Driving Assist (AHDA) vehicle on public roads
- equipped with new cooperative-adaptive cruise control
- and lane tracing features.
The system is capable of assisting the driver with steering, braking and acceleration tasks
- by maintaining an inter-vehicle wireless connection with vehicles in close proximity.
This year, we will demonstrate
- a similar functionality, with certain updates to match U.S. road conditions,
- but without vehicle-to-vehicle communications.
Instead, this year’s AHDA vehicle
- relies solely on automated driving technologies
- to navigate through downtown Detroit.
The aim of both systems is to develop
- a predictable and trustworthy relationship
- between the vehicle and the driver.
Not only does that mean
- that the vehicle can take control from the driver in a critical situation…
- it also means that the vehicle can alert the driver
- that the driver should take control
- of a function or maneuver…
- that the vehicle has not been able to properly process.
This year’s system also uses a driver awareness feature
- that scans eye movement and hands-on-the-wheel
- and alerts the driver if it believes
- that insufficient attention is being paid
- to the road ahead.
In other words, a full-time backseat driver.
The AHDA Lexus GS
- that we will demonstrate for you tomorrow in downtown Detroit
- will cover a course of diverse conditions.
This newest edition of the AHDA concept series
- is an example of our progress in intelligent vehicles,
- where high-level automation
- has been refined and improved
- and made more marketable.
Toyota plans to market technologies based on the AHDA advanced driving support system by the middle of this decade.
Since the 2013 CES,
- we have also been moving quickly
- with advancements in the AASRV concept series.
What I am about to show-and-tell
- is part of the fun stuff I mentioned in my opening.
A universal truth is that
- Europe, Asia and North America are major markets
- with wildly diverse traffic environments
- that must be tailored, or programmed,
- into the awareness, processing and reaction functions
- of automated driving technology.
The original AASRV was tailored to American traffic environments
- with ample room and high speeds.
As research and refinement progressed,
- we needed to shift to a traffic environment
- that was less expansive,
- with slower speeds
- and much different signage and infrastructure.
In the video we will show you today
- we have three vehicles, all three featuring
- our current level of fully automated driving equipment.
The grey vehicle is being tested at its home base in the general Ann Arbor area
- on specific public highways…
- where open, high-speed sections
- are the predominant environment,
- along with quite a bit of rural road
- where signs and signals and stripes
- might not be in abundance.
The white vehicle spends most of its days at our test facility near Mount Fuji…
- where it is driven dawn to dusk
- confronted with demanding, decision-making scenarios
- such as blind hilltops and blind corners with merging traffic,
- and light-into-darkness tunnels with pedestrians on narrow walks.
These vehicles feature significantly enhanced software compared with our earlier AASRV
- that allows them to read and identify objects
- such as vehicles, bicycles and pedestrians in Japan’s crowded traffic environment
- with greater accuracy and less variation.
The vehicles are designed to perform three main functions:
- Fully acknowledge the 360-degree surrounding situation;
- Automatically take the safest route;
- Amend, or follow up on, driver error with display, alarm and/or automatic reaction.
To accomplish this, they must
- verify three-dimensional data via highly accurate recognition technology and improved mapping,
- and extract and properly identify
- all vehicle types, bikes and pedestrians.
The red vehicle that you see in this video
- demonstrates a completely new and more demanding test environment in which to operate:
- the crowded boulevards around the Imperial Palace in Tokyo.
This route was recently used
- to demonstrate this technology to Prime Minister Abe.
Please note…that is not Prime Minister Abe behind the wheel.
Camera vehicles in front of and behind the red target vehicle
- record how it navigates around parked buses,
- road work and pedestrians in fully automated mode.
What you also see is the data overlay
- of what the vehicle is seeing,
- and how that matches up
- with what the camera vehicles are seeing.
Finally, there is the mapping overlay
- that allows a viewing comparison using GPS, as well as on-board maps, LIDAR and cameras,
- and gives us a more three-dimensional view of the accuracy of this system.
In fully automated mode, this vehicle
- understands and obeys basic safety rules,
- keeps in its lane, maintains a safe distance
- and avoids a crash by anticipating the predictable movements
- of dissimilar objects.
A large bus will probably not suddenly lurch into your path,
- whereas a motorcycle might suddenly zip past.
The point of this, of course,
- is that to successfully navigate through such congested traffic environments,
- the automated vehicle must gather as much “good data” as possible, as quickly as possible…
- from as many sources as possible…
- while steering, braking and accelerating within extremely close tolerances.
Simply put, the better the data “input”, the better the processing and the better the maneuver.
Even in conditions that are challenging for the onboard sensors, such as in the rain, the vehicle processes information from many sources and judges how to drive at an appropriate speed.
We’re getting better, but there is still much improvement necessary in the following four key areas.
The first is that we need to set realistic expectations
- on when this technology goes primetime,
- which is the social component.
We need to educate all stakeholders on the benefits and challenges,
- rules and regulations and
- integration of what Kuzumaki-san identified earlier as
- the three pillars of people, traffic environment and the vehicle.
Then there are the human factors.
Rather than making it seem like the driver can simply take a nap at the wheel,
- we need drivers to understand
- that there will be task-sharing involved,
- handing controls back and forth
- and that over-confidence must be avoided.
Also, the vehicle dynamics control systems themselves
- must be bullet-proof in reliability,
- with highly advanced electronics platforms
- and a special focus on system and cyber security.
Finally, and most obviously, we need a smarter car.
This is a technology currently getting straight “A”s in high school.
But, to deliver on the expectations of its parents…
- it will need to graduate cum laude from a major university,
- with minor degrees in next-generation…
- real-time 3D mapping,
- further refined recognition technologies
- and a dose of complementary ITS infrastructure.
So…what’s next is the part where the car gets smarter,
- components get smaller and cheaper,
- decision making algorithms get more comprehensive
- and operational software, higher in performance.
Later today you will have the opportunity to actively explore two pieces of that ongoing research that we are just now starting to integrate:
- A new miniature Light Detection and Ranging (LIDAR) system
- that may eventually replace the gumball machine on the roof;
- And a new 3D heads-up display system
- which builds on the company’s philosophy
- that advanced safety technology should work as a “teammate”
- that informs the driver of critical information
- including vehicle status, traffic conditions and road signs
- on the front windshield, in an almost-hologram format.
So this is where we are…and as you see, it’s a very fun place to be.
Thank you for your attention today.