As prepared for:
Los Angeles Auto Show
Toyota Press Conference
Wednesday, November 20, 2013
Thanks, Bob. Good afternoon, everyone.
For those of you not familiar with the Collaborative Safety Research Center…
- it is a unique research model developed by Toyota
- that includes 26 projects…
- with 16 universities and research hospitals.
At the CSRC,
- we are focused on how to better protect people in crashes,
- and how we can help prevent collisions from happening in the first place.
We also believe in the importance
- of openly sharing our findings as widely as possible.
Ultimately, everyone has a stake in highway safety,
- regardless of the nameplate on their vehicle.
Today, I will discuss three projects:
- two are specific to CSRC;
- the other, a project between Toyota Motor Sales and Microsoft.
The thread that runs through all three projects…
- is a set of three important research ideas…
- at the forefront of how we, as an industry
- are trying to reduce highway injuries and fatalities.
Last week, the National Highway Traffic Safety Administration announced
- that about 1 in 10 fatal accidents in 2012
- involved driver distraction.
There is still much to be learned in this area.
The first important research idea that we and others are pursuing
- is that, instead of talking only about driver distraction…
- it might also be meaningful…
- to talk about driver awareness.
Driver distraction can be visual.
It can be manual and it can be cognitive.
What is important is…
- eyes on the road…
- hands on the wheel…
- and brain engaged and aware.
These three elements are always present…
- at some level…
- even though they are often
- talked about separately.
The question is…
- how do we stay properly stimulated…
- and relaxed…
- to be sufficiently aware while driving?
The second important research idea
- is that cars
- are an interaction
- of multiple screens,
- informing the driver…
- of her environment,
- inside and outside of the vehicle.
- Originally, there was the windscreen,
- and rear window…
- and the rear and side-view mirrors.
We now have multiple gauge clusters,
- large information screens…
- and heads-up displays…
- all feeding us information…
- and competing for our attention.
As our staging here today suggests…we are all becoming screen junkies.
For many of us…our lives are one big multi-task.
While sitting at the kitchen table having that first cup of coffee in the morning…
- we talk on the phone…
- while we are texting…
- while we are downloading a video…
- of a dog eating peanut butter.
Problem is … we don’t want to stop when we get into our cars to go to work.
For the last 100 years, the car has played the role of a functional tool, dutifully responding to our human needs and input.
That relationship has forever changed.
Not only can cars see things and react more quickly than humans…they are becoming intelligent.
In fact, we now find ourselves at a point
- where perhaps the most important focus of all…
- and my third important research idea…
- may be on what is often called…
- the driver-vehicle interface.
In truth, it should more aptly be called… the driver-vehicle relationship.
People relate to electronic devices socially. They build bonds with them. And like any human-to-human connection, they have an emotional effect on people.
We are now capable of creating a true inter-relationship between the driver and an intelligent vehicle.
And it will have a profound effect on saving lives on the highway.
Today, I want you to start thinking of the car…
- and the driver…
- as teammates;
- sharing the common goal of saving lives.
As teammates, they have individual roles. Each must rely on the other to perform consistently and accurately.
The best teammates learn from each other. They watch, listen and remember. They adapt. They communicate. And they assist, when needed. Over time, a foundation of trust is built.
You can’t trust what you don’t know.
And as trust is built, more tasks can be shared or reassigned.
Together, the teammates are building
- a common situational awareness
- of their driving environment.
So, driver awareness…
- cars as multiple screens…
- and driver-and-vehicle as teammates.
Let me demonstrate.
A little more than a year ago, Toyota and Microsoft got together to explore a simple proposition.
- What if we could begin a conversation between the driver and the vehicle…
- before the driver… starts driving?
What if the car could recognize the driver,
- and begin a conversation
- on a window screen
- as he approaches?
For example, if the driver walks-up…
- on a weekday afternoon,
- the car could ask if the destination was first daycare…
- and then home.
The driver…using hand gestures, voice, or a key fob…responds.
The car then proceeds
- to alert him of a traffic jam along the way,
- offer an alternate route,
- inform him it is low on fuel,
- and pinpoint on a map, the closest gas station.
By addressing these tasks… distractions, if you will…
- before the car is set into motion,
- the driver is given more mental bandwidth…
- and, potentially, a higher level of awareness
- to the task of driving safely.
It also accomplishes something else. The car is showing that it wants you, the driver, to be safe.
It recognizes you. It remembers you and your priorities.
This is the kind of trust that will be needed
- to allow higher and higher levels
- of driver-assist technology…
- as we move closer to forms
- of fully automated driving.
The Toyota Driver Awareness Research Vehicle was developed to do just that.
DARV (“darvy”) presents what is essentially another… highly useful screen,
- dedicated to the all-important pre-launch,
- or preparation-portion of driving a car.
Integrating Microsoft Kinect technology and rear projection…
- the system can be programmed
- to recognize more than one individual.
For example… as parent and child approach the vehicle…
- DARV can initiate a game with the child…
- to see how quickly seatbelts can be buckled.
Conceivably, the side window could also be used while parked
- for entertainment…
- watching the pregame show
- at a… side-gate party… at the stadium.
This first-step of the DARV project
- explores the vehicle-driver relationship on the outside.
Next steps will move inside the vehicle.
By recognizing who is onboard and where…
- tasks could be shared among the passengers.
In motion, the driver might be locked out
- of specific tasks,
- like operating the navigation system…
- whereas a teammate
- in the front-passenger seat would not be.
The DARV vehicle
- clearly demonstrates
- how people can relate to electronic devices.
When the device works well…
- it builds trust…
- and is used more often…
- with predictably more effective results.
This notion is the focus of a recently completed
- collaborative study…
- between the Massachusetts Institute of Technology’s AgeLab
- and CSRC.
- To date, research on how voice commands affect driver attention
- has been limited.
This study was meant to expand our understanding of the human factors of voice command, while developing data that could fill in research gaps and support NHTSA in the development of its Phase III distraction guidelines.
This research project is not an evaluation of current technology.
Rather, it is meant to research how drivers interact with these systems to develop continuous improvements.
The MIT AgeLab study investigated…
- how, or whether,
- the cognitive demands imposed by voice command…
- impacted a driver’s focus on the road ahead.
The researchers found
- that cognitive demands…
- were actually lower than expected.
They believe this is due, in part,
- to drivers often compensating
- by slowing down,
- changing lanes less frequently,
- or increasing the distance to vehicles ahead.
However, in some of the voice interactions…
- the number of drivers taking their eyes off the road
- to perform voice command tasks
- was higher than expected.
The study found
- that this situation is often more pronounced
- among older drivers,
- some of whom turned their bodies
- toward the voice command’s graphical interface.
Finally, the study showed
- that the issue of driver attention
- will require considerable further research.
Why are these findings significant?
We humans are so hard-wired for voice…
- that the use of voice,
- by the driver…and by the car…
- increases social response.
And this creates expectations for the driver.
Drivers will put effort into working with a system they think of as a teammate.
They will use words that the system understands…
- especially when they feel that the system
- is trying to understand them.
But if the system does not provide a good experience,
- trust could be reduced.
This idea of building trust, by sharing tasks
- is being taken to a new level
- with a collaborative project
- between CSRC …
- and Stanford University
- led by Clifford Nass.
Cliff was the first person I ever heard talking about driving as a collection of screens.
As some of you may know, Cliff passed away on November 2nd.
He was a friend and a colleague, and a pioneer in the study of how humans interact with machines, especially the effects of multiple screens and multi-tasking.
He contributed greatly to our knowledge of human factors.
His work will continue…but he will be greatly missed.
The premise of the Stanford study is that as we move closer to automated forms of driving…
- we must understand
- how the car and the driver interact;
- how they hand-off control of the vehicle…
- back-and-forth… in critical situations.
This is accomplished with Stanford’s innovative and unique driving simulator.
The system combines
- EEG sensors to track brain activity,
- skin sensors to measure emotional arousal
- and eye-tracking headgear
- to follow directional glances.
The system can determine what’s happening inside the car…
- with what’s happening outside the car…
- with what’s happening…
- with the driver’s brain and behavior.
The simulator can
- shift from fully automated control…
- to driver in full control,
- to mixed control.
The driver’s focus and level of attention are measured, including…
- emotional state, and reaction time…
- in a variety of scenarios.
For example, how does the driver respond…
- to a sudden “takeover now!” alert…
- compared to a “takeover now” that says why…
- compared to a suggestion of taking over…
- within a specific timeframe.
After driving for a prolonged period
- in fully automated mode,
- how are a driver’s abilities affected?
Are reaction times and situational awareness diminished?
Should the mode of communication from the car to the driver
- be auditory, visual, or haptic…
- or a combination?
These are questions that need to be answered…
- not only to help build a product…
- but also to build a foundation of understanding
- and guidelines…
- for how we proceed with further research…
- into the human factors of automated vehicles.
For CSRC…it’s an important and fascinating project,
- and we will keep you posted on its findings.
I will end today with a short announcement about a fourth project that’s really a lot of fun.
But before that: I’ve covered a lot of ground.
It’s a lot to absorb and there is a lot of good stuff that I was unable to touch on.
On the screens behind me are the websites to go to for much more information.
Also, with us today are…
- Bruce Mehler from MIT,
- and our CSRC Human Factors researcher, Jim Foley.
We will be around after the press conference for more discussion.
Finally, the Toyota Teen Video Challenge invites teens from across the country to create a short video to encourage their friends to make safer decisions behind the wheel for a chance to win a $15,000 scholarship.
This year’s winning 30-second video was made by Ryan Johnston. It was re-shot by a Discovery Channel film crew and will air on Discovery networks nationwide.
Here it is.
Ladies and gentlemen, the creator of the winning entry for 2013, Mr. Ryan Johnston.
Ryan, congratulations on your win and thank you for helping us spread the word among teens about the importance of safe driving.
And thank you all for joining us today.