36: The Psychology of Driving

The driver-vehicle relationship is changing quickly. From sensors and cameras to AI and other smart tech, it’s starting to seem like vehicles are more computer than car. In this episode, Toyota research scientists John Lenneman, Ph.D., and Josh Domeyer, Ph.D., join us to address common curiosities and answer questions—while posing many more.

That’s the nature of the job, it seems. Both are proud, self-admitted nerds, passionate about problem solving and discovery, especially as they relate to safety in the auto industry and beyond. They detail their academic journeys to Toyota and explain, in layman’s terms, the whys and hows behind their work, and that of the research community in general.

Grab a notepad and tune in to The Psychology of Driving for this Toyota-powered introductory course on behavioral research, and explore the resource links to learn more about specific studies or organizations discussed in this episode.

Intro:  [Intro]

[00:00:32]Tyler Litchenberger:  Alright, everybody, welcome back to Toyota Untold. I am Tyler.

[00:00:36]Kelsey Soule:  And I’m Kelsey.

[00:00:38]Tyler Litchenberger:  Kelsey, guess what? I got a new car, and you got a new car, too. How was it, picking up your new car?

[00:00:46]Kelsey Soule:  Oh, my gosh. I felt like I really leveled up, because I just upgraded my 4Runner, and I like shout out to the Parts and Accessories Department for really zhushing up my vehicle, because I had like every accessory possible, and I’m thrilled.

[00:01:01]Tyler Litchenberger:  Amazing. Amazing. So, both of us, we’re living our best lives.

[00:01:06]Kelsey Soule:  What was your car again?

[00:01:08]Tyler Litchenberger:  I got a new Land Cruiser, so I switched from Lexus. I’m no longer part of the Lexus family, although I love you, Lexus. I got a Toyota Land Cruiser and it is amazing. Amazing. I got the Heritage edition, so it has the cool wheels and stuff like that. So, I just love it so much. So, I’m actually in Georgia. I brought the kids to visit my parents. We haven’t seen them in like a year.

[00:01:32] But I always laugh, because we brought the new Land Cruiser, and my dad always complains about the most random things, like where cup holders are placed or like how many cup holders there are. Like it was so funny that I’m in Georgia, because we are talking about the psychology of design, and like where things are put, and like how decisions are made about vehicles, and why things are the way they are within vehicles.

[00:02:03]Kelsey Soule:  A lot of people probably don’t realize this, but there’s a lot of thought put behind why they put everything where it goes, how big it is, how big it’s not. Like I have a safe in my center console, which apparently is important to TRD 4Runner owners, because they like to do overlanding, camping, et cetera, and you have to have a place to put your valuables. And then, obviously, the cup holder conversation, I have a cool feature in mine where it has like multiple sizes for the cup holders. So, I thought that was pretty tight.

[00:02:36]Tyler Litchenberger:  You can go and get that Big Gulp.

[00:02:38]Kelsey Soule:  Right. I’m excited to hear from our guests today, because I didn’t do so well in psychology in college, not my top subject, but these people have really nerded out on what it means to build a vehicle based on human psychology.

[00:02:54]Tyler Litchenberger:  Awesome. Alright. So, when we started talking about this episode of Toyota Untold, we thought we were going to do a deep dive into the world of self-driving cars, and that’s cool, too, really, science fiction turned into reality, but as we started to develop this idea, we kept coming back to how it relates to the world of human psychology, which, Kelsey, we’ll get you there in this episode. We realized that there’s this whole world of research and study into the way people think about, and interact with, our cars. And so, John Lenneman and Josh Domeyer are two research scientists from Toyota’s own CSRC, and they talk to us about the exciting sorts of research that they’re doing and the ways that they do it.

[00:03:39]Kelsey Soule:  So, let’s get into it.

[00:03:45]John Lenneman:  I’m John Lenneman. I’m a Senior Principal Research Scientist in Toyota’s Collaborative Safety Research Center. I’ve been with the company for almost five years and I’ve been in the auto industry for about 20 years or so.

[00:03:59]Josh Domeyer:  And I’m Josh Domeyer. I’m a Research Scientist. I joined Toyota in 2011 when the Collaborative Safety Research Center was kicked off. And so, I’ve been involved in different research initiatives since then and have largely focused on driver distraction and human use of automation as it relates to vehicles.

[00:04:18]John Lenneman:  A lot of people are really surprised when they hear about people like Josh and I working in an automotive company like Toyota: “So, you guys are psychologists and you’re working for an automotive company, what do you do there?”

[00:04:33]Kelsey Soule:  That’s a great question, but before we examine the surprisingly deep relationship between cars and our psyche, we wanted to get to know Josh and John a little bit more. John explains how studying to be a psychologist led him to a job here at Toyota.

[00:04:46]John Lenneman:  When I first started my higher education, I went to Central Michigan University for undergraduate studies and I majored in psychology, minored in mathematics, thought I wanted to be a clinical psychologist. And it was a path that I pursued up until the last semester of my senior year, when I heard that a gentleman by the name of Rick Bax was going to be joining the CMU Psychology Department, and he was in a field called Human Factors.

[00:05:16] And Human Factors is just one of the terms this field goes by. It’s also called engineering psychology, ergonomics, cognitive ergonomics, usability, sometimes, universal design. Most people have never heard of human factors. And in a nutshell, it’s simply trying to take what we can learn about human performance, human cognition, perception, human behavior, and then designing technologies to fit the human, as opposed to just taking an engineering approach, which some people might take as designing the technology, and then forcing the human to fit the technology.

[00:05:59] We are trying to do research so we can fit the technology to the human’s cognitive capabilities and limitations, perceptual, behavioral, et cetera. So, I gave up on the clinical approach and my clinical dream, and decided to do research with him, just with a more general master’s degree, just to get to know the field. I actually started out doing research in aviation. And it was pretty exciting research, and I’ve been in love with human factors ever since.

[00:06:32]Tyler Litchenberger:  It’s always amazing to me how far and wide the pull of Toyota can be. John is far from the first person to join us from the aviation industry, for example. Our staff at Toyota come from so many different areas and disciplines besides just automotive engineering, so I was fascinated to hear about Josh’s career path, too.

[00:06:51]Josh Domeyer:  I can go back a little bit further than John even. I was a computer geek in high school, and so there’s a lot of troubleshooting involved in technology. You’re installing different things in a computer, something will break, it’s really exciting when you can get it to work. When I went to Central Michigan University, similarly to John, I discovered that this same philosophy of troubleshooting can be applied to how people use technology, right?

[00:07:18] So, the most interesting troubleshooting that I can think of is actually examining human behavior and trying to figure out how we can use that information to make technology better for people and also improve their lives in different ways. I began to apply this thinking in the same lab as John to driving. And specifically, I focused on humans and automation at the time.

[00:07:42] Although it wasn’t automated vehicle research, it was more basic research of how technology can guide the behavior of people to make better decisions. And that’s evolved over the time that I’ve been at Toyota, initially focusing on driver distraction, and then looping back to some of my initial work with automation and trying to figure out how to make it more compatible with people.

[00:08:05]John Lenneman:  As you can tell, Josh and I, we’ve got a history that overlaps quite a bit. Josh was an undergraduate, I think, when I was a graduate student, so we’ve got quite the history. And then, I spent some time in the automotive industry with another company before I joined Toyota and joined Josh in 2016.

[00:08:22]Kelsey Soule:  If you want to get to know Josh and John a little bit more after the show, we will plug their website, so you can learn more.

[00:08:28]John Lenneman:  We both have our personal videos on the CSRC website that you guys could link to, that just explains more about us in a very high level. They’re like two minutes each. It’s going to be nerdy stuff, but yeah.

[00:08:41]Josh Domeyer:  Stuff that I like.

[00:08:42]John Lenneman:  Oh, yeah, we’re nerds. I know. We love it. Yeah.

[00:08:46]Tyler Litchenberger:  Check out the show notes for this episode to find the link to those videos. But first, we wanted to pick up on something that John said there, when he called himself a “nerd”. What does being a nerd even mean these days? If you’re talking about being passionate about a specific topic, then I guess we’re all nerds on this podcast. I asked him if that nerdiness comes naturally with this line of work.

[00:09:08]John Lenneman:  The nerdy thing about what we do is taking as deep a dive as we can and getting into the weeds. Like when I say getting into the weeds, I’m not thinking a few weeds in a little pot, I’m thinking just the murkiest water, like you can barely see in front of you. And we are just deep into that mud. It might seem like, eh, just design it so it’s usable, but no, you’ve got to be a nerd to get that deep into something, understand it’s a lot more complex than that, and be willing to go into that mud.

[00:09:38]Josh Domeyer:  Yeah. Sometimes, when we talk about neuroergonomics, we’re literally talking about brain waves.

[00:09:44]John Lenneman:  That’s so fun.

[00:09:44]Josh Domeyer:  How that can influence design, right? And it’s super difficult, and like neither John nor I understand all of it, but like even just attempting to do that, it’s really interesting.

[00:09:58]John Lenneman:  Oh, yeah. When Google Scholar sends me an alert to an article about an event-related potential study, looking at some effect and how it’s manifested and processed by the brain, I see that article and I save it. And next thing, I’ve got a queue like 2,000 articles long that I’m never going to read, because everything looks so exciting. It’s fantastic stuff.

[00:10:21]Kelsey Soule:  Alright. Hearing that kind of enthusiasm, I think it’s safe to say that John and Josh are both in the right jobs. But I was curious about what life might have looked like for them if they hadn’t found their way to Toyota.

[00:10:31]John Lenneman:  Honestly, if I wasn’t doing what I’m doing right now, I probably would be finishing up an MD program and maybe being like a resident somewhere. I actually contemplated going back and actually took some classes, like actually organic chemistry and stuff to fulfill some of the undergraduate requirements to then prepare for the MCAT about five or six years ago before I found this position at Toyota. That’s how much I love Toyota and this position. It was so exciting.

[00:11:02] It pulled me away from about a year and a half of preparing to take the MCAT to then apply as a 40-year-old student. And my goal there was still to look at some of the neurology of driving. So, there’s actually some pretty cool driving research being done, and we’re actually collaborating with one of the groups right now at the University of Nebraska Medical Center. And they really inspired me to go back and maybe pursue an MD. So, honestly, if I wasn’t doing what I’m doing right now, I would probably not be in this profession.

[00:11:37]Josh Domeyer:  I have a similar story to John’s, which is that I started off interested in clinical psychology also, but it usually becomes very apparent when you’re in a psychology program that maybe there’s something else you can do with psychology that’s equally interesting. And I honestly don’t know what I would have ended up doing, but a lot of what I do right now is in the robotics realm and technology generally.

[00:12:03] And I think maybe the career wouldn’t be that much different, it would just be a different technology than I’m applying it to. Generally, I think people want to try to avoid having their hobby be their job, because you want to have something that’s fun. And especially getting into psychology and learning that there are these connections, it was like, oh, okay, I can do something that’s similar, but not the same. And so, that’s what was alluring about it.

[00:12:28]Tyler Litchenberger:  Thankfully, Josh and John didn’t go down those paths and they found their way to the CSRC. They both explained what the CSRC actually is and how it fits into Toyota as a company.

[00:12:38]John Lenneman:  Josh and I are a part of the Collaborative Safety Research Center, or CSRC for short. CSRC is a research group whose mission is essentially to do safety-related research and push it out into the public domain for the good of all society. We push this out in the form of primarily publications and presentations. So, if you Google CSRC, you might find some research articles or proceedings from conferences that we are authors on.

[00:13:07] We’re not always first authors on these papers. They are in collaboration with university partners. We’ve collaborated with Stanford, MIT, Virginia Tech, University of Michigan, University of Washington, University of Nebraska Medical Center, University of Wisconsin. And we collaborate with these university partners on a number of different projects and topics.

[00:13:29]Josh Domeyer:  John and I are specifically a part of the Human Technology Integration Group, which largely focuses on this human factors perspective. The name is meant to evoke kind of this human-centric perspective where we begin with models of human behavior and sort of use that to inform the development of technology. And that’s primarily the research that we’re focused on.

[00:13:53]John Lenneman:  Essentially, what we want to do is improve the effect of technology on all people through developing human-centered solutions through our public communication and through education that may come out of the research that we do.

[00:14:06] And we’re looking to foster innovation of human-centered products, and to translate our research, our cognitive, perceptual, behavioral research, sometimes, into best practices, guidelines, or standards that can be used by engineers in development. We’re also really trying to use more advanced methods and tools in our research, to enable machine learning approaches, for example. And of course, we really value, and this was true from 2011 when it began to even today, being advocates for the integration of human-centered R&D-type work in Toyota, industry, and society in general.

[00:14:46]Josh Domeyer:  Yeah. One approach that John and I take with this that’s core to the CSRC mission is not only promoting the research that gets published and talked about in the research community, but it’s actually designed to also motivate the safety perspective across the industry. In partnering with these universities, we can motivate new research topics, because we have insight into different industry needs. And then, by partnering with these collaborators across the country, we can marry the academic and the industry needs together in a way that might motivate other people to do research into these topics. It’s not just we’re focusing on Toyota, we really are focused on society as a whole and really promoting that safety message.

[00:15:29]Kelsey Soule:  Working for the greater good is truly a noble cause, but as much as Toyota is striving to achieve what’s best for the world, there must be a benefit to sharing our knowledge with competitors beyond the humanitarian side of things.

[00:15:40]John Lenneman:  The consumers of our products, the drivers of our vehicles, we understand that they don’t live in a bubble. Not everybody drives a Toyota vehicle. As much as we’d like it, not everybody does. So, our consumers, the drivers of our vehicles and our products, live in a greater society. And it’s to the benefit of society, and of Toyota consumers and drivers themselves, if we push this knowledge into the public domain so that the whole driving system benefits. I think across all industry, and I’m confident saying this, everybody would agree that zero deaths, and there’s actually a consortium for this, Road to Zero, is a goal we should all strive for. By pushing the research that we do into the public domain, what we’re hoping is that that goal someday can be achieved.

[00:16:31]Josh Domeyer:  And there are some things here that are precompetitive in the sense that they require some coordination amongst automotive manufacturers. A good example is some of the research that I’ve been focused on: how automated vehicles might interact with pedestrians. If you have thousands of vehicles that choose to communicate in different ways, it could be very confusing. And the automotive industry has always been focused on safety, but when we get together to promote these sorts of standards and best practices, we can actually improve safety for society in general.

[00:17:06] A lot of what we do, too, in the standard space is actually commonizing like how you talk about things, right? A lot of terms and definitions development, which isn’t necessarily super entertaining, but if you have people using different language when they’re talking about safety or even different methods of measuring safety, most of these get really complicated very quickly. And so, having that common language and even common goals as an industry can always help focus attention a little bit.

[00:17:34]John Lenneman:  And that reflects one of our missions, which is we’re advocating for human-centered approaches. And so, it could be in the form of just using a common language, really trying to push that.

[00:17:43]Josh Domeyer:  There are other people in the Collaborative Safety Research Center who have funded some of these organizations to actually produce data sets which are publicly available. One of the biggest challenges with human research is that often the data isn’t available, because it’s proprietary within companies. And so, one of the things that we’ve done is leverage our ability to fund lots of human annotation of data and other things like that to actually produce data sets that can be analyzed by the whole community, leading to just more robust knowledge.

[00:18:15]John Lenneman:  Another example of the precompetitive nature of our work is some of the consortiums or organizations we’re a part of. We join various groups with other OEMs, or suppliers, or companies, like Mcity, or MIT’s AVT Consortium. AVT stands for Advanced Vehicle Technologies. But the idea is that groups like that are pulled together, and the various researchers and engineers from the different companies get together and talk, at a precompetitive and noncompetitive level, about what are the issues that we’re all facing? And what knowledge do we want to gain through the research that Mcity, or MIT’s AVT group, or other groups like that can provide for us and do for us? So, there are lots of examples of not just Toyota, but other companies, doing work in precompetitive ways.

[00:19:05]Tyler Litchenberger:  So, this is where it gets interesting. We always think of psychology as just being about theorizing and exploring the ways that our brains work. And that doesn’t necessarily gel with examining reams of data and the more hard-science approach that seems to be involved here.

[00:19:21]John Lenneman:  Our work is actually a lot harder than some people might think. It’s not just getting like opinions, and thoughts, and feelings. We’re doing some pretty enhanced deep dives into human cognition, perception, et cetera. Of course, that type of research can be hard, but the really hard thing, I believe, is taking what we learn about human cognition, perception, behavior, and turning those into engineering requirements or product requirements.

[00:19:49] So, if there’s anything we know, it’s that there’s a whole heck of a lot of variation in humans. It’s a theory-driven science. We don’t have a whole lot of laws about how the brain works and how people behave, so it’s fuzzy. It’s an art when we’re trying to take what we learn about human cognition and perception and turn that into product requirements for engineers, who then actually have to build a system that has hard numbers behind it.

[00:20:14]Josh Domeyer:  One example of that is actually, there’s a lot of theory about how the human eye perceives the world and how you make decisions based on that information. So, you can think about, as something approaches you, it expands in your visual field, right? You feel a threat, because that’s coming toward you. And so, we can take things like that and model the human behavior in how they assess risk and all that stuff based on these perceptual processes.

[00:20:41] And then, we have to turn around, and say, okay, now that we understand the psychology of this, what does it mean for how you design this technology? And you might design it in a way that maybe it amplifies that risk or provides some information that augments the risk that’s being evaluated by the system. If something’s approaching quickly, you might alert somebody. It’s that simple.
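The looming cue Josh describes, an object expanding in the visual field as it approaches, can be sketched in a few lines of code. This is purely an illustrative toy, not Toyota’s actual alert logic; the function names and the threshold value are invented for the example.

```python
import math

def visual_angle(width_m, distance_m):
    """Visual angle (radians) an object of the given width subtends at the eye."""
    return 2 * math.atan(width_m / (2 * distance_m))

def looming_rate(width_m, distance_m, closing_speed_mps, dt=0.1):
    """Approximate expansion rate of the visual angle (rad/s) by finite difference."""
    theta_now = visual_angle(width_m, distance_m)
    theta_soon = visual_angle(width_m, distance_m - closing_speed_mps * dt)
    return (theta_soon - theta_now) / dt

# Hypothetical rule: alert the driver when the expansion rate crosses a threshold.
LOOM_THRESHOLD_RAD_S = 0.02  # illustrative value only

def should_alert(width_m, distance_m, closing_speed_mps):
    return looming_rate(width_m, distance_m, closing_speed_mps) > LOOM_THRESHOLD_RAD_S
```

A car 1.8 meters wide closing at 10 m/s from 10 meters away looms far faster than the same car seen from 100 meters, which is exactly the kind of perceptual signal a warning system could amplify.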

[00:21:03] But tying it to these core psychological concepts is really important, because without theory, you don’t have any anchoring to how to design the next product, right? You might design this product fine by testing this one versus this one, but then you have to design a new product, you have to say, oh, okay, what did we learn before? We learned this about human psychology, and that’s how you carry from one concept to another one.

[00:21:30]Kelsey Soule:  It’s fascinating how different areas of automotive research have evolved over the years. We’re always pushing to innovate and use the most cutting-edge technology in every field that we can, so why should psychology be any different? Given how long they’ve been at the CSRC, I was excited to find out how psychological research has evolved over the last decade.

[00:21:50]John Lenneman:  The last 10 years or so, we’re not just changing, say, interfaces, we’re changing the task itself, right? The task is changing, going from manual driving to, in the future state, maybe pure automated driving. And then, we’ve got this space that we’re starting to live through in between, which is this interaction between the vehicle and the human. And so, the fact that the driving task is starting to change so dramatically just adds a whole other layer of complexity on top of it.

[00:22:24]Josh Domeyer:  We are evolving into this scenario where it’s like this human-automation interaction, and even Toyota has adopted this philosophy of kind of a relationship with automation, in terms of Toyota Teammate and that sort of thing. And it’s really changed how we have to investigate these topics from a human behavior standpoint, because it used to be the case that you could do the scientific thing that should be familiar to a lot of people, which is you change a variable and you see how that affects people.

[00:22:53] But now, we’re entering this space where we have AI that’s interacting with people. There’s no variable to change. It’s a relationship where they’re developing over time, people are evolving with the technology. And so, it ends up being the situation where you can’t do these little small tests, but you actually have to model not only the human behavior, but you have to have a model of the automation behavior, and then you see how those models interact.

[00:23:17] And then, from that, you can gain insights about what safety issues might emerge. And it’s a really fascinating view of technology interaction, because it’s very difficult and it really leverages this kind of troubleshooting thing that I was talking about, where you really have to dive deeper into the theory and try to understand what is it that we’re trying to help people do fundamentally.

[00:23:42]John Lenneman:  Josh said models of human and models of AI or the automation, but how do we build those models? Those models are built on data. So, we need to gather that data, and make sense of that data, and feed that into the model. So, there’s just an absolute ton of work to be done.

[00:23:59]Tyler Litchenberger:  It’s clear that data is an incredibly valuable resource for Josh and John, and it’s one of these things where the more of it that you can get, the better. Quality data on driving habits must be difficult to obtain. Our time inside our cars is usually quite a private experience after all. I wanted to know how Josh and John actually go about finding the information that they need.

[00:24:21]John Lenneman:  Sometimes, we’ll just do survey-type research. But the research that I described, we primarily did through what we call naturalistic driving research, where we observe drivers in their environment. When they’re actually using the system or driving on the road, we can instrument the vehicles in many different ways. A lot of our collaborative partners do this for us, and we’ll monitor drivers out there on the road for extended periods of time. I mentioned we can use video cameras, and we can put a bunch of different sensors on the cars. But basically, what we’re doing is monitoring human behavior in the environment where the system is actually used. Some of our research also uses simulation.

[00:25:03]Josh Domeyer:  One of the tools that’s emerged in the last probably five years that has been very valuable for vehicle pedestrian interaction research is actually virtual reality. So, we can actually put people in these immersive environments.

[00:25:15]John Lenneman:  We’ve used driving simulators that we’ll say are pretty low fidelity, which is literally like a gaming steering wheel and a monitor on a desktop with gaming pedals and brakes. We’ve used those before, whereas we’ve also done research in some pretty high-fidelity simulators. We’ve got 360-degree simulation screens around a whole vehicle in a room, with an actual steering wheel and actual pedals; it’s pretty much a real car with really high-resolution graphics. And in these simulators we can collect a lot of driving performance measures. Of course, we can also, at any point, collect subjective data, on a scale from one to seven, how did this affect you, or those types of Likert-type questions.

[00:26:04]Josh Domeyer:  Somebody has that VR headset on, and what they’re seeing is a vehicle approaching them, and they have to make a decision about whether or not to cross the road. And we can actually use these to manipulate little variables to figure out how a vehicle should stop to be communicative to pedestrians. So, if it stops like this, it’s really clear that it’s stopping, that’s great, and I feel comfortable crossing the road, whereas other stopping styles might make the person feel uncomfortable.

[00:26:32] And you can’t really do these in the real world, and that’s why it’s really useful to have these virtual reality setups for this purpose. You can test out scenarios that you observe in the real world but that might not be safe to recreate, and you can essentially simulate the data in a way that allows you to identify some safety issues that might not emerge if you tried to do it as an experiment.

[00:26:53]John Lenneman:  We’ve also done research where we’ve collected what we call physiological measures. So, EKG, your heart rate and other heart-related-type measures, or EEG, which is electroencephalography, so, we’ll say, brain wave activity. Others include sweat rate, breathing rate, et cetera.

[00:27:12]Josh Domeyer:  You may have heard recently that it’s really hard for automated vehicles to get enough data just by driving around, because the number of miles that you need to travel is just so high that you can’t possibly test every safety scenario. And even the things that we’re testing are not these sorts of average effects. There are outliers. They emerge because the environment, other vehicles, and your vehicle create a situation where there’s some sort of safety issue. And so, the way around this is through these simulation methods. And by simulating, you can examine some of those outlier cases that really matter in the real world but that you wouldn’t previously have been able to study.
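To make the outlier point concrete, here is a toy Monte Carlo sketch of the idea: simulate many vehicle-pedestrian encounters and count the rare “near miss” combinations. Every distribution and threshold here is invented for illustration; real studies fit such models to naturalistic data, and this is not Toyota’s actual methodology.

```python
import random

random.seed(42)  # reproducible toy run

def simulate_encounter():
    """One simulated vehicle-pedestrian encounter with made-up distributions."""
    vehicle_speed = random.gauss(13.0, 2.5)   # m/s, roughly urban arterial speed
    accepted_gap = random.gauss(4.0, 1.5)     # s, time gap the pedestrian accepts
    reaction_time = random.gauss(1.5, 0.4)    # s, driver brake reaction time
    # Call it a near miss when the accepted gap barely exceeds the driver's
    # reaction time while the vehicle is moving quickly.
    return (accepted_gap - reaction_time) < 0.5 and vehicle_speed > 10.0

def near_miss_rate(n=50_000):
    """Fraction of simulated encounters that end up as near misses."""
    return sum(simulate_encounter() for _ in range(n)) / n
```

Even though the average simulated encounter is benign, a large sample surfaces the tail cases, which is why simulation scales where real-world test miles cannot.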

[00:27:55]John Lenneman:  Because we’re developing these simulations, and we’re using more machine learning and similar types of approaches in our research, it’s even more important that we get out into the field and do this naturalistic-type research, where we’re collecting data on the road, understanding how people drive by using sensors that are integrated into the vehicles, or, and we’re getting to Josh’s point earlier, by sharing data, these data sources that companies and research institutions are putting out into the public domain now. It’s extremely important that we continue to do that, because we have a whole bunch of tools now that can capitalize on it. So, it’s a really cool ecosystem, this scientific community consortium. I don’t know if people even realize it, but we are all essentially working together in the spirit of CSRC.

[00:28:47]Kelsey Soule:  Now that we’re using the most up-to-date research methods and technology, how are we actually applying that information? What are we doing with it? Josh gave me an example of just one way the CSRC’s work has influenced the automotive landscape.

[00:29:00]Josh Domeyer:  When I joined CSRC in 2011, I already mentioned that I was focused on driver distraction research, and that began to evolve in about 2014, 2015 when we started to recognize some challenges with human use of automation. And so, at that time, I started thinking deeply about how vehicle automation will interact with pedestrians. And it’s interesting, because if you pay attention a little bit when you’re driving on the road, you’ll notice that you move in certain ways to accommodate other people.

[00:29:32] You might, in fact, stop and gaze directly at a pedestrian to indicate that they can cross. And so, there’s a lot of these social behaviors that people exhibit while they’re driving. And the real challenge and the thing that was fascinating to me from the beginning was that these become explicit with automated vehicles. You don’t have a human to imbue these social behaviors in the driving.

[00:29:58] It’s now up to us to imbue the automation itself with these social behaviors. And around 2015, 2016, we began some research at the University of Wisconsin-Madison to actually look at how people behave on the road as a means of communication. So, not necessarily hand-waving, eye contact, all of those things, but simply, how do you move the vehicle to communicate? And this led to a bunch of other projects, one with the MIT AgeLab, where we actually looked at real behaviors on the road and saw the different communication strategies that people used.

[00:30:37] And one of the interesting things that came out of this earlier research is that people rarely use gestures, or eye contact, or these sorts of things to communicate. And generally, your communication is more based on how a pedestrian positions him or herself on the curb, or how the driver decides to stop. A couple of years later, in 2017, I joined the University of Wisconsin-Madison to pursue this topic. And what we’ve discovered in the last few years as I finished my PhD is that this behavior component of communication is key to making human-compatible vehicle automation.

[00:31:18] And even extending beyond that, I think most people think of this issue as safety and efficiency, which of course are very important for this, but we started to look at different concepts that might also matter, trust or fairness in terms of how long the pedestrian or the vehicle is waiting, or comfort even, because these things ultimately influence the acceptance of the technology or even whether we’re improving the lives of everybody that the vehicle might interact with. And so, I continue this research topic to this day and continue some work with the University of Wisconsin-Madison and the MIT AgeLab.

[00:31:59]Tyler Litchenberger:  It’s obviously a good thing to do great work in any subject, but what good is all of it if the end result is creating a product that the general public has no interest in using, like too few cup holders, too many cup holders? And what about if they simply don’t know how to use it?

[00:32:14]John Lenneman:  Acceptance is really important, because that eventually, we believe, leads to utilization of the technology. And in the end, that’s what we want to do. We want to develop technology that people will use, and then they reap the safety benefits by using that technology. Technology in general is both getting simpler and more complex, which is weird, but it’s true. The one thing that’s, I think, universal is there’s some level of understanding of technology that’s required to operate.

[00:32:43] It can be very minimal. It could be a lot. But at some point, you need to understand how a system works to be able to use it in some way, shape, or form. That’s often referred to as a mental model. So, a mental model is just one’s understanding of system operation. So, I did a lot of research starting in like 2016, 2017, and still continue to do that research today, looking at mental models, how do people develop their mental models? And how do these mental models evolve over time as people would use the technologies that are integrated into the vehicle?

[00:33:19] Now, of course, when we’re designing systems, when systems are designed, you want them to be designed so that they’re as intuitive as possible, right? Ideally, someone can take something out of the box, someone can just hop right into it and use it, and it’s just immediately understandable. But as technology gets more complex, that increases the odds that somebody might have to have some form of education or something like that in order to use the technology. And so, we’re doing this research so that we can, if consumer education is needed in some way, shape, or form, create more effective consumer education programs.

[00:33:54]Kelsey Soule:  The idea of consumers not understanding their technology really interested me. I’m sure, at one point or another, we’ve all lived with a gadget where the clock is permanently set to midnight, because we can’t figure out how to set it. Honestly, when daylight saving time happens, I just wait until it comes back.

[00:34:10]Tyler Litchenberger:  There you go.

[00:34:10]Kelsey Soule:  Not because I don’t know how to change it, but because it’s just effort. Okay. So, apparently, the CSRC has done specific studies into this very thing.

[00:34:21]John Lenneman:  We observed about 55 drivers over six months. We got them about a week or two after they purchased a Lexus or another vehicle, and for 10 or 11 of them, we installed various sensors into the vehicle: cameras, accelerometers, GPS systems, things that can track how much you’re braking, how hard you’re braking, how fast you’re going, and of course get some sort of video image of the environment around them.

[00:34:53] So, in this research, what we did was basically, we asked them, immediately after they purchased their vehicle, what do you know about the latest technologies? What do you know about lane centering? What do you know about adaptive cruise control, lane departure warning, collision mitigation systems? And we got some sort of assessment of what they knew, and then we were able to track them. We interviewed them every two weeks over the course of up to six months, so that we were actually able to see, what was their understanding at the beginning of the research?

[00:35:23] And then, how did their understanding of the technology evolve over time? So, what was their understanding at the end, at six months later? Something else we looked at was what sort of sources of information did they actually turn to when they were trying to learn about their technology over the course of six months? Did they get some sort of like dealership education when they actually purchased the vehicle? Did they go watch a YouTube video, or read an article, or something like that, or how often did they even just simply check their owner’s manual? And we learned a whole bunch of stuff.

[00:35:56]Tyler Litchenberger:  This research project found that we can all be sorted into one of five groups based on how we learn and how we engage with or consume educational information. John broke each category down for us.

[00:36:08]John Lenneman:  We have some what we labeled as expert, skilled, or moderate learners. And these are people that have some knowledge, experts have more knowledge than moderate, obviously, and skilled in between, but some knowledge of the system or of the systems and the technologies when they first purchased the vehicle, and then they have an ability to learn about the technologies over time.

[00:36:30] So, their mental models, their understanding of the technologies, evolve pretty well over time. And that makes sense. A lot of us can get in the car and we already know a lot about the technology, because we’ve seen commercials, et cetera, and we can learn about it. There was a group of people that didn’t really know anything about the technologies at all, but the good news is they’re still able to learn. But then, the final group, and this is the interesting group, is what we called misinformed. These are misinformed learners.

[00:37:04] And these are individuals who didn’t really know how the systems work. Whether when they bought the vehicle or six months later, they couldn’t learn about the technology, because they didn’t want to learn, because they thought they already knew everything they needed to know about the technology in order to operate it, and they were pretty darn confident in their own knowledge, but their understanding of the technology was actually wrong. So, there’s this cohort of people that we’ve identified that are probably prime targets for future consumer education.

[00:37:41]Kelsey Soule:  The Greek philosopher Socrates famously said, the only thing that I know is that I know nothing. It sounds like the opposite of that is true when it comes to misinformed learners. I wonder where that misplaced confidence stems from. One theory is that we apply our knowledge of other, similar technologies on the assumption that they’ll behave in the same way. John told us about how technology has to be designed to anticipate the ways that it may be misused or not fully understood. More often than not, we have to use the technologies people are already familiar with as a springboard.

[00:38:13]John Lenneman:  The technologies that are more consistent with technologies that are already in the vehicle, people tended to grasp. So, for example, adaptive cruise control. They’ve already had cruise control for a number of years. So, even if it’s their first time driving a vehicle with adaptive cruise control, they were able to get the gist of it. There are other issues with adaptive cruise control that we are researching and I think other companies are researching as well, and that’s the gap acceptance and stuff like that.

[00:38:40] But in general, people understand how it works. Now, whether or not they decide to use it, that’s another story. So, a collision mitigation system, right? So, basically, some sort of technology that helps you mitigate the collision if you’re going to have one. Those are newer, especially relative to cruise control. And the problem is they’re relatively infrequent.

[00:39:02] So, even if they did get some education, let’s say the dealer did a great job immediately explaining what a collision mitigation system is and how it operates, they may not actually experience the collision mitigation system for four months. So, by then, they don’t necessarily know how it works. Now, the question also becomes, do they need user education? Do they need to understand how that works?

[00:39:28] Because that is an example of a technology that essentially maybe does the work for you. So, whether or not people need to understand it, that’s another question that we’re trying to answer. So, the question is, to what extent do people actually have to understand the technology in order to use it correctly? And that applies to essentially everything. You don’t understand how your cellphone works in super, super great detail.

[00:39:54] I’m sure there’s a certain level that you’re kind of like, I know it enough, so that I can use the app, so I can make my calls, et cetera, maybe troubleshoot a little bit, but beyond that, I’m taking it into the repair shop or I’m just trading it in and getting a new model. A lot of people don’t know how a refrigerator works, but they know how to put their milk in and out of the fridge. So, there’s a certain level of understanding that people have to have or they tend to have for all system operation.

[00:40:22]Tyler Litchenberger:  It’s almost as though what the general public will accept or embrace from technology doesn’t necessarily correlate with that technology’s value or reliability. An intuitive decision obviously makes sense to us, but it might not always be the right choice, logically speaking.

[00:40:37]Josh Domeyer:  We had some research where we actually manipulated how the vehicle moved back and forth on the road. And so, if the vehicle held straight, right in the center of the lane, people’s trust scores were higher than if it meandered a little bit, when people had lower trust. And what was interesting about this is this trust rating was related to how much the drivers decided to look at the road. I think people find this interesting, because once you start thinking about these systems of interaction between the environment, the person, and the automation, you start to think about solutions differently. You’re thinking, how do we make the environment, or the person, or the system altogether lead to better safety outcomes generally?

[00:41:21]Kelsey Soule:  Collecting data like this may be incredibly useful and worthwhile research, but I wanted to know how often John and Josh discover a breakthrough where they can really celebrate. Do they ever get to crack open a bottle of champagne?

[00:41:32]Tyler Litchenberger:  Popping bottles, Kelsey.

[00:41:34]Kelsey Soule:  Champagne campaign.

[00:41:37]John Lenneman:  I would say that there’s not necessarily a lot of firm answers in our research area, and that’s basically because there’s a whole heck of a lot of variability. So, the second we try to present something as a firm answer, there’s always this exception to the rule. And the thing is we don’t want to ignore the exception to the rule. That’s not in the spirit of our profession, human factors. It’s sometimes called universal design for a reason.

[00:42:03] We try to capture as much and be essentially inclusive as much as possible. So, what we really try to do is present our data and create a business case for a design in some way, shape, or form, but we need to balance that business case. We need to balance our recommendations, guidelines, et cetera, with other factors. Let’s say just even like the design feature.

[00:42:29] We want people to use the technology. And the reality is, if people think a technology or feature is ugly, if there’s some sort of aesthetic component to it, maybe that influences whether or not they actually use the technology. I would like to think that aesthetics doesn’t influence whether or not someone’s going to use a safety feature, but we need to balance our recommendations with those things. So, in the end, our goal is to optimize utilization of the technology.

[00:43:00]Josh Domeyer:  John is right. There’s a lot of variability in human behavior. We don’t have the same luxuries that the harder sciences have. We deal in the world of uncertainty.

[00:43:10]John Lenneman:  One of the challenges in our industry is that you can still build a product potentially without our input, and that’s unfortunate, and hopefully, that doesn’t happen, but that’s the reality, right? And so, what I consider myself sometimes, I have the role of a marketer. So, I generate results, and maybe in my mind, I can see how they apply to the product and the product development process in some way, shape, or form. But now, I need to turn around, and sell that, market that to whoever is the recipient of that data, of those findings. Hopefully, that’s not too hard of a sell. Matter of fact, hopefully, that’s not even a sell that’s necessary at all, but the reality is sometimes, we need to illustrate how important our findings are.

[00:44:01]Tyler Litchenberger:  Before we spoke to John and Josh, we assumed they take cars and technologies that already exist and conduct their research around them, but one of the most interesting things they told me was that that’s not the case. They go so deep with this stuff that there’s almost no way of knowing what problems you’re going to ultimately have to go and solve. Doing research is the only reliable way to discover what the problems are in the first place.

[00:44:25]John Lenneman:  We’re constantly trying to push, both in our profession and in Toyota, to learn as much as we can about human cognition, perception, or behavior as early as possible upfront, and then design the solution based on our learnings.

[00:44:41]Josh Domeyer:  One of the things that has emerged from the Vehicle-Pedestrian Interaction research is early on, almost every discussion of the topic that I saw started with the premise that we needed to introduce some sort of signaling device to the outside of the vehicle to communicate with pedestrians. So, you have turn signals right now, you might think of like a new system that kind of informs pedestrians when they should cross the road or something like that.

[00:45:08] But one of the things that has emerged, based on some of the CSRC research and based on research by others, is that it might not be the case that these signaling devices are needed. They typically have a benefit for acceptance, but they don’t often result in a benefit for crossing behavior, or in leading the pedestrian to check more to make sure that there’s not another vehicle. These interfaces simply don’t do those types of things. And what really emerged was this vehicle behavior perspective, where people based most of their decisions about whether to cross on whether the vehicle had stopped, or on its stopping pattern.

[00:45:51]John Lenneman:  That’s an example of people coming up with a solution without necessarily, at a deep enough level, understanding exactly the problem, the human-perception-cognition-behavior-type problems. So, we are constantly both in Toyota and human factors professionals in general trying to have an influence as early on in the design process as possible.

[00:46:15]Kelsey Soule:  But it isn’t all purely theoretical. Josh gave me another example of their research that has a direct impact on the way that cars are designed and built.

[00:46:24]Josh Domeyer:  I wasn’t involved in it directly, but early in the CSRC, for the human factors research in general, we did a lot of work with the MIT AgeLab on the distraction potential of technology in vehicles, specifically voice interfaces and some visual-manual interfaces that people might interact with while driving. And some of the tools that we developed during that to assess things like cognitive distraction and visual-manual distraction, they didn’t necessarily end up in a vehicle, per se, but those methods are generally used to evaluate the interfaces and improve their safety.

[00:47:01] And similarly, there are other projects within CSRC that aren’t John’s and mine specifically that sort of deal with understanding the safety benefits of different technology, usually kind of test methods that might be used by a third-party organization to test the safety of something. But generally, I think our projects have more of this high level; they might not be the product itself, but they might have led to a better, safer product.

[00:47:28] There was really fascinating early research that we were doing in vehicle automation probably back in 2014 or 2015 with Stanford University, where we had people, and remember, there’s very little research at this time about human interaction with specifically automated vehicles. And we set them up in a simulator where we removed the sort of steering behavior that’s necessary for controlling the vehicle.

[00:47:55] And one of the observations that we made at the time is that when you remove this lateral control from their interaction, all of a sudden, the driver checks out, because they’re not in direct control of the vehicle. And since then, other research has emerged talking about this coupling of sort of perception and action that could be really important for vehicle automation.

[00:48:19] And it wasn’t because of this research, but many other researchers have looked into how to develop interfaces to improve that connection between the driver and the vehicle since then. But to me, these small results about lateral control of the vehicle, and how removing it led to complacency with the automation at the time, were really interesting, because you could feel it when you were in the simulator. And as I said, there’s been a lot of good research since then on interface design and how to keep people engaged in driving and those sorts of topics.

[00:48:56]Tyler Litchenberger:  We’ve spent a lot of time going over what the CSRC has done, so it was time to shift gears and turn our direction to the future. Josh told us about the issues faced when trying to ensure their data is relevant and future proof.

[00:49:09]Josh Domeyer:  In general, technology will almost always move quicker than the behavioral science around it. And so, it’s like this constant catch-up, and I’ll reemphasize it, this is one of the reasons that theory is incredibly important. I think sometimes, engineers look at theory, and think, oh, what is it? What does it mean in practice or whatever? But in the way that we talk about theory, it’s not applied versus theory; theory is the basis for how you make decisions about applying things, and it’s the toolset that human factors has to inform good design. And so, that’s really the struggle that we face, is we have to look at new technology, and say, okay, we have all this theory that’s been built over the last 50 years, what about this applies to this particular product?

[00:50:00]John Lenneman:  And then, the products are changing, and they’re changing faster than humans. Humans don’t evolve nearly as fast as technology. And yeah, the theory is foundational. So, getting back to what we talked about earlier, developing models of human behavior and models of automation, the theory is the foundation behind it. And so, it’s got to be solid. And we’re always looking to see if we can contribute even to the foundational theory in some way, shape, or form as well.

[00:50:26] So, we are nonstop students, to be honest. So, we need to stay on top of the research. I know Josh and I read way too many journal articles; my Google Scholar feed and all that stuff is just constantly kicking research articles at me. And I try to read as many as I can, because there can be some real enlightening findings that can have some real application to, maybe it’s not the product, but maybe it’s our research process. So, maybe it’s the development of a model that we want to test. So, we are nonstop students. By the end of our lifetimes, we’ll have done the work of probably about 20 PhDs.

[00:51:07]Josh Domeyer:  The nonstop student thing to me is very enjoyable. In CSRC, I had the opportunity to get a PhD. There’s a reason that I pursued it and it’s not necessarily just a good sort of career learning opportunity, but it’s really about learning in a fairly challenging way the appropriate way to do the research, and actually contribute to the research community, and to be part of it. And curiosity is like a major component of this sort of job.

[00:51:40]John Lenneman:  Yeah. And it’s really working in science, right? You’d like to think that by the time someone graduates with a bachelor’s degree, they can read a journal article and really take away what the meaning is, et cetera. But I think it really takes digging, getting deep into the weeds in a topic in a certain area, to be able to really understand the science behind it, and then how to apply it.

[00:52:04]Josh Domeyer:  You probably have to read 100 papers before you understand where an individual paper fits in the grander vision of science.

[00:52:12]John Lenneman:  Yeah. And that’s important. But just that simple awareness is important. We need to understand, and I think Josh and I both know, that when we read one article and it says that they found this, it doesn’t necessarily mean that’s the end-all, be-all. That’s one article. That’s one finding in a pool of, let’s say, 20 articles that maybe have come out in the last month. And beyond those 20 articles, there are maybe another 80 journal articles that didn’t get published, because they didn’t find any significant findings. So, we also need to understand that, then figure out, how does the fact that we understand that impact what we’re going to do, from making recommendations, et cetera, to product development?

[00:52:56]Kelsey Soule:  It’s hard enough to predict the future at the best of times, but it sounds like it’s almost impossible in this field of research. That said, we love a challenge, so I wanted to see if Josh or John had any guesses as to topics they may study going forward.

[00:53:09]Tyler Litchenberger:  Cup holders.

[00:53:11]Josh Domeyer:  There’s a concept in human factors of operational, tactical, and strategic behavior that people exhibit. And this isn’t universally true, but you can think about vehicle automation as operating at this operational level, where it will prevent you from getting into a crash or those sorts of things, but it doesn’t operate at this tactical or strategic level. It doesn’t choose your route, or it doesn’t strategically go a little bit faster to get in front of the car, to stay out of the blind spot or whatever.

[00:53:40] And that’s really interesting from a human factors standpoint, because probably, and I don’t think there’s any data on this, but I would think that a lot of crashes, and safety issues, and comfort issues are actually at this strategic and tactical level, whereas a lot of the collision avoidance stuff is at this operational level, right? And so, as human factors researchers, we have to think about control at these different levels, encourage behaviors, make them more natural, make them more accepted by society. It’s a really interesting way to look at these problems.

[00:54:13]Tyler Litchenberger:  And of course, we had to bring it back to the original idea for this episode: self-driving cars. When you see the future depicted in movies, what are the two things you always get? Flying cars and self-driving cars, right? Flying cars are still firmly in the world of sci-fi, but self-driving cars are kind of already here. I wanted to know what John and Josh think the future of automated vehicles looks like.

[00:54:39]John Lenneman:  Do we believe that there’s going to be a future state where everybody’s driving an automated vehicle? I don’t know. I think a lot of companies might say that’s the goal. But the reality is it’s a ways away. The turnover in the fleet takes years and years just to turn over 90, 95%. And there might always be, and who knows from a policy perspective whether they’re going to allow it, or how they’ll handle someone that wants to drive their 1960s or ’70s muscle car because they like the manual driving.

[00:55:06] And as a society, do we want to afford that? I don’t know. Those are probably policy decisions that need to be made. They’re probably thinking about it now; they need to be made years down the road. Well, what that does is it illustrates that there’s what we sometimes call a mixed fleet society. And that is a huge challenge, where you’ve got a society, an ecosystem, where there’s people driving manual vehicles, and there’s people driving what, in the industry, we call level one or level two automated driving, which is simply, there’s some automation, but the driver really still has to control the vehicle.

[00:55:43] But then, maybe there’s a future state where there are also vehicles, a small percentage, where the vehicles do most of the driving. Those are the higher, level three and four technologies. That state, probably, it’s coming. It’s essentially here already. And it’s going to stick around for a very long time. So, it’s definitely going to stick around for as long as I am in the industry and have a job.

[00:56:09] So, I can’t forecast, and I don’t think we can or should forecast, what the future state is going to be, except I feel pretty comfortable that this mixed fleet society, which is going to have a whole heck of a lot of challenges, a lot of research questions that we need to address, is going to be around for a very long time, and potentially, maybe forever. But then, how do you design the infrastructure? How does a manually driven vehicle interact with a fully autonomous vehicle, that type of stuff?

[00:56:39]Kelsey Soule:  Alright. That’s enough about the future. The CSRC exists in the here and now, and they’re doing great work right this second that we want to champion. John tells us all about one of their current projects.

[00:56:49]John Lenneman:  With the University of Michigan, we’re doing some research in an area called roadmanship. Roadmanship is a concept that was introduced by the RAND Corporation a few years ago. And it’s essentially just adding a layer of courtesy on top of safety. It’s not just designing an automated vehicle so it’s safe, but also designing the automation so it is courteous to other drivers. We don’t want our automated vehicles to make other drivers feel nervous, for example.

[00:57:22] So, we’re exploring this concept of roadmanship with the University of Michigan. And right now, what we’re doing is looking through video feeds that are just posted on YouTube. There’s a video feed from Jackson Hole, Wyoming. There’s another video feed from a town in New Hampshire. The Jackson Hole, Wyoming feed shows an intersection where we’re monitoring drivers as they make left turns.

[00:57:53] The video feed in New Hampshire is from a camera on a pole overlooking a roundabout. It’s actually a five-point roundabout. It’s a pretty interesting roundabout. And what we’re doing there is simply trying to understand, how do people accept or reject the gaps that they either turn left into between two vehicles or merge into in a roundabout? The idea there is, if we understand how people drive, how likely are they to cut into, or cut off? We’ve all had scenarios where you’re driving down a road, and a person takes a left turn in front of you, and there’s plenty of space, but that person’s going really slow, and you think, come on, get it over with, now I have to slow down.

[00:58:43] That’s bad roadmanship. Another example of bad roadmanship, according to the courtesy definition, is in highway driving. Maybe there’s a little bit of traffic, and you definitely don’t want to tailgate, but some people might say it’s bad roadmanship if you’re leaving too big of a gap. There’s been situations, probably for some of us, where we’re following a vehicle, and the vehicle we’re following is leaving such a huge gap between itself and the vehicle in front of it. That might be considered bad roadmanship, so this is the next thing we’ve looked at. We’re trying to understand driving behavior and create models, so then we can maybe feed those into future automated driving systems.

[00:59:27]Tyler Litchenberger:  Oh, my God, I love this. The idea of taking all the bad drivers we put up with when we get behind the wheel and replacing them with courteous ones sounds almost too good to be true. Well, maybe it isn’t. After all, it sounds like the CSRC has some pretty impressive backing at the moment.

[00:59:42]John Lenneman:  Some of the projects that we are working on right now are actually in collaboration with other companies and with some federal institutions, actually. I’ll say specifically the USDOT, Department of Transportation, working with them through what’s called their University Transportation Center. So, we have partnerships right now that exist directly with other companies and with the federal government, all working the same problem, answering the same questions.

[01:00:10]Josh Domeyer:  The UTC partnerships are specifically designed to have industry-government collaboration through these entities.

[01:00:18]John Lenneman:  Yeah. We often get asked to, if not participate on projects, support them in some way, shape, or form, whether it’s just some sort of written letter of support, just to give our moral support, yes, we think it’s a topic that should be addressed. Sometimes, we’re asked to actually participate, and it doesn’t necessarily have to be through funding; they’re not asking for funding all the time. So, sometimes, we fund a lot of projects, but I do have projects that I’ve worked on where we’re not funding, we’re just actually providing our knowledge and expertise. And that goes all the way back as long as I’ve been in the industry, that sort of collaboration with various government organizations.

[01:00:56]Kelsey Soule:  Whoo. They were not kidding when they were talking about this being a deep dive. We hope you’ve enjoyed nerding out with us about this topic. There’s so much to explore here, and the landscape of this research is constantly evolving and changing. So, who knows? Maybe we’ll come back to this subject one day. Thank you so much to Josh Domeyer and John Lenneman, who are our guests this week, and thanks so much for listening to another edition of Toyota Untold. I’m Kelsey.

[01:01:20]Tyler Litchenberger:  And I’m Tyler. See you next time.

[01:01:24] This podcast is brought to you by Toyota Motor Sales USA, Inc. and may not be reproduced or redistributed in whole or in part without prior permission of Toyota. The opinions expressed in this podcast are those of the guests and our hosts, and do not necessarily reflect the views or opinions of Toyota. Please note that Toyota is not responsible for any errors, or the accuracy, or timeliness of the content provided. Used with permission. All rights reserved worldwide.

