Video length is 19:47

Optimising Vehicle Dynamics Development with MATLAB

Chris Johnston, Jaguar Land Rover

On-road or off, Jaguar Land Rover’s vehicles are renowned for their outstanding steering, ride, and handling. In this talk, Chris will describe the challenges in systematically quantifying and optimising vehicle dynamics across the company’s product lines. He will then explain how his team of vehicle dynamics and software engineers develops and applies a suite of advanced MATLAB®-based tools that vastly improve productivity and decision-making power.

Recorded: 2 Oct 2018

All around us is evidence that the world we're engineering is becoming more complex, and conferences like this are a great opportunity to show how that complexity is manifesting itself in science and engineering and in the industries in which we work.

So let's take a look at it. Whether it's aerospace and defense, or computing, or robotics and AI, where blink and you'll miss the daily advances. And if you don't know the company on the right-hand side, they're called Boston Dynamics. I'm sure many of you will know them, but their pace of innovation is incredible. They're all over YouTube, so go and check them out. They're a little bit scary as well, so worth a look.

So what does this rising complexity that we're seeing in the industries that we work look like at JLR specifically? Our customers demand cleaner, safer vehicles for the future. State-of-the-art combustion engines are giving way to all-electric propulsion. And on the right there, you can see the new Jaguar I-PACE.

Even the most modern propulsion units are being superseded day by day. Human-machine interfaces are being designed to look simpler but are actually becoming vastly more complex. And on the left, a very nice Jaguar E-Type interior. It looks complicated, but it isn't. The market demands the look and feel of the right-hand side, but that simple look belies the complexity that lies behind the dashboard.

It's worth noting the complexity of that dashboard as well: there are multiple screens, and it's really not easy to make sure it all looks simple and actually works together. It is quite complicated to do. On the left-hand side, all of those switches are single-purpose. They each do one thing, so it looks complicated, but it's not. And Stirling Moss himself would not recognize Jaguar's return to motorsport.

And it's not just our vehicles that are getting more complex. The way we go about engineering those vehicles and the tools we use to do it are, too. On the right-hand side, you can see a VR wall where a component is being viewed in 3D before it has been prototyped or even manufactured.

And all of this is underpinned by a megatrend of code bases increasing around the world day by day. What you're looking at here, hopefully clearly enough, is millions of lines of code along the horizontal axis, plotted against some familiar code bases. You can see how these have grown over time: a modern operating system is over 100 million lines of code, and the Range Rover is no exception.

I'm going to leave this slide up for a second because there's a really great quote from Bill Gates and it describes what we've been seeing over the last few decades of accelerated innovation and change. "Never before in history has innovation offered promise of so much to so many in so short a time." Wise words.

So what does it actually take to engineer one of these complex vehicles, such as the modern Range Rover? I'm going to tell you a bit about how we do this at JLR. It's called the Product Creation and Delivery System. We start with PS, Project Start, an initial concept or idea for a vehicle, and what we end up with is a finished vehicle, a car rolling off the production line. We call that Job 1, the first job.

Between those two points, we have gateways, checkpoints, if you like, that check we're doing our engineering and have done what we needed to do. At any given point in time, we could have several of these vehicles, several of these programs, in development concurrently. And we're developing systems and components as well, optimizing them not just for one vehicle but for many vehicles. So a question to you: How would you characterize the performance of one of these vehicles at any point in time? How would you measure it?

So to do our engineering between these gateways, we use something called systems engineering. I'm sure many of you will be familiar with the systems engineering V. You can't quite make it out here, but I'll briefly explain. On the top left of that V, you have what are called business requirements, and that's what the customer wants. For a Jaguar, we might say, for example, we want class-leading steering. Then we'll break that down and create some system requirements from it, our engineering targets, if you like. We'll break that down further into subsystem targets and component targets, and then we'll actually go and make something.

When we've made something, we want to know that we're going to meet the customer's requirements as we go back up the other side of that V. So we have tests, tests at every single level, multiple tests, checking that we've got what we wanted, what we designed in the first place.

So we test all aspects of the vehicle to quantify performance. Each test has hundreds of metrics, and a metric is a post-processed result, a scalar value, if you like, and each attribute involves hundreds of tests. I'm from driving dynamics, or vehicle dynamics, and that's just one attribute. What about powertrain, electrical, thermal, noise, vibration, harshness, durability, reliability? There are actually over 20 main attributes at JLR, so you get the picture. It's very complicated, with lots of different tests for us to do, and it takes a huge number of engineers to look after all of that complexity.
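(To make the idea of a metric concrete, here is a minimal MATLAB sketch of reducing a time-series test signal to one scalar value. The synthetic yaw-rate signal and the 90% rise-time definition are illustrative assumptions, not one of JLR's actual tests or metrics.)

    % Illustrative only: a synthetic step-steer yaw-rate response reduced
    % to a single scalar metric (90% rise time).
    t = (0:0.001:5)';                       % time vector [s]
    yawRate = 6*(1 - exp(-t/0.25));         % synthetic yaw-rate response [deg/s]

    steadyState = mean(yawRate(t > 4));     % settled value over the final second
    riseIdx = find(yawRate >= 0.9*steadyState, 1, 'first');
    metric = t(riseIdx);                    % the post-processed scalar metric [s]
    fprintf('Yaw-rate 90%% rise time: %.3f s\n', metric);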

So during these gateway reviews, there are judgment calls that have to be made, comparisons and tradeoffs, and we're trying to balance all of those attributes together using the same components, so making those decisions can be quite difficult. And there are lots of QA loops as well, which can run into days and weeks, checking that we've done our engineering correctly.

So it's meeting after meeting, and it leads to a kind of fatigue and paralysis, with a huge cost in terms of program time, because it's very difficult to understand all of that collectively. And that's what these gateways are for: making sure that at that point in time, we've done the engineering we needed to do to be confident that the car we get out of this engineering process will be saleable, competitive, and will meet the customer's needs.

So let me give you an example of the kind of complex tradeoff that this process leads to. What size of battery is required if we're going to do a new program, for example? You might not think that's too complicated, but if the objective is range, then just think about the things that are impacted in answering that question: charge time, occupant space, steering, ride, handling, aero, cost. The number of engineers it takes to look after and engineer all of that is huge. So traditionally, what would a gateway review look like? Well, at every gateway, we do this.

So why am I painting this picture? Why is it so hard? Well, it's the complexity, the amount of data, the deadlines, the issues, and sometimes, the big egos in the room. And fundamentally, the reason it's hard is because it's complicated, and managing all of that complexity is really difficult. So there's high stress in these meetings.

There are lots of attributes being tracked and, traditionally, they have to be balanced during these meetings to make sure that we're getting the right thing out in the end. It's high stress. Decisions need to be made. We're checking whether we're meeting our targets against those metrics I was talking about earlier. So it's meeting after meeting, fatigue, and questions from senior managers such as: Where's the data? What's our performance compared to last time? What about compared to the competition?

So what happens is, if you haven't got that data to hand during these gateway meetings, there's another meeting the next week, and then another the week after that, because you haven't answered the question. It ends up being a huge cost in terms of program time.

Back when I started at JLR about six years ago, I was working on the Jaguar XE program, and I used to sit in these sorts of frayed meetings and ask myself questions like: What's actually being moved on during this lengthy discussion we've just had? What tangible progress is being made at this point? And importantly, what decisions are being made during this meeting? I couldn't always answer those questions. So how did we go about improving this situation? And what do gateway reviews look like these days?

So skip forward six years, and we've worked hard with MathWorks to create an ecosystem of apps. What these apps do is bring people together centrally; we can collaborate in one space. It's a huge productivity driver for us and a competitive differentiator for JLR. My team developed a set of engineering analysis tools, and we release these using the app store mechanism that you can see here, to hundreds and hundreds of engineers around the business, and that number is still growing.

One of those applications is called the Application Toolbox for Objective Metrics, or ATOM, and you can see it on the screens. All of those tests and metrics I was explaining earlier, you can see them in that big, long list, and that list is actually really, really long. It's thousands of lines long, not just what you can see here.

And on the right is all of the data. Against each metric, we have the data for each program and each competitor, and we can pull that data in at any point. So if you're in a meeting and you get asked a question that you can't answer without some more data, you can simply pull it in during the meeting, answer the question, and make the decision you need to make in that meeting.

And this list is filtered at the moment on just one particular attribute, but on the very left-hand side, in the blue panel, you can see a list of tick boxes that allow you to filter these metrics in a different way, so you can look at all of the other attributes at JLR that we need to balance against. And along the top there are some tabs, one for each vehicle derivative, each model, if you like, within a program, so you can look at those as well. This particular one is for the F-PACE, a bit of an old program, so I'm not showing you anything you couldn't already see.
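(As a rough sketch of the kind of filtering those tick boxes perform, one can imagine the metrics held in a MATLAB table and reduced by a logical index. The column names, attribute names, and values below are invented for illustration and are not ATOM's real schema.)

    % Hypothetical metrics table; names and numbers are invented.
    metrics = table( ...
        ["Steering"; "Ride"; "Steering"; "Handling"], ...
        ["OnCentreGain"; "PrimaryRideRMS"; "ParkingEffort"; "MaxLatAccel"], ...
        [3.1; 0.42; 2.4; 8.9], ...
        'VariableNames', {'Attribute', 'Metric', 'Value'});

    % A tick-box filter reduces to a logical index on the table:
    steeringOnly = metrics(metrics.Attribute == "Steering", :);
    disp(steeringOnly)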

Another view of ATOM. Not only have you got the scalar metrics, the numbers, but you can see plot data, too. You can click on any one of the metrics and see the plots behind that data, supplementary data if you need to dig into a little bit more of the detail. One further view.

So what this is looking at is the test data itself, not the post-processed data: time-series data, plotted on a spider plot on the right there, and on the left-hand side you have a tree. You can pull in physical test data and compare it to CAE test data at the same time. And on the bottom you've got some rows with colors, the green and the red and the blue. That's the metadata associated with the test, so you might need to know: When was the test done? Who did the test? Or even what the weather was like that day. So you have the data to hand.
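(A minimal sketch of overlaying physical and CAE time histories while keeping the test metadata alongside the data, using an ordinary line plot rather than the spider plot shown. The signals, field names, and values are assumptions for illustration only.)

    % Illustrative overlay of physical versus CAE time-series data.
    t = (0:0.01:3)';                                   % time [s]
    physYaw = sin(2*pi*0.5*t) + 0.02*randn(size(t));   % measured signal (synthetic)
    caeYaw  = 0.95*sin(2*pi*0.5*t);                    % simulated signal (synthetic)

    % Metadata travels with the test data (hypothetical fields).
    meta = struct('Driver', 'A. Tester', 'Date', datetime(2018,10,2), 'Weather', 'Dry');

    plot(t, physYaw, t, caeYaw);
    legend('Physical test', 'CAE simulation');
    xlabel('Time [s]'); ylabel('Yaw rate [deg/s]');
    title(['Step steer, ' meta.Weather ', ' char(meta.Date)]);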

So I'm going to tell you about how the gateway meetings are run these days. Yeah, great picture. We can now make live, data-driven decisions during meetings, with live "what-if" discussions where senior managers can make calls in that meeting. We can collaborate centrally in one tool. This drives efficiency and quality, and it has the added benefit of improving morale, because we're not in the dark anymore, as opposed to repeated meetings, program delays, and judgment calls that have to be made. Because of that, hopefully, everybody's blood pressure is just that little bit lower.

So I'm going to tell you how we did this as well. We're in a pretty good place now, but it hasn't been easy to get there, so I'll tell you about how we organized the tools and the team. First off, we tried to gather like-minded people who could take an initial idea and make it a reality. We trained them and empowered them to build high-quality, elegant tools that capture their expertise. And crucially, we decided to take responsibility for what other people didn't want to do.

Secondly, we adopted Agile. We started to work in sprints. We created a product backlog so we knew what our customers wanted. We communicate regularly, we refine our processes, and we know what we did yesterday, what we're doing today, and what we're doing tomorrow.

What Agile allows you to do is get something out there fast, and that's important. You should start small and dream big, because over time, if you keep going, that dream will become a reality. In our case, we needed both types of engineer: the software tool engineer and the person who knew the engineering method, the vehicle dynamics engineer in my case. When you have just one or the other, good things don't always happen. If you partition the knowledge of how to build the software from the knowledge of the engineering we need to do on the program, it doesn't always work. So we decided to put those people in the same place, in the same building; we put the tool developers literally right next to their users.

So lastly on this slide, and this is why that graphic is there on the left: you won't get it right the first time. I've got it wrong many, many times. You probably won't get it right the second or third time either, if you're going to develop a tool that's actually of benefit to people rather than one you force them to use. You have to keep going around the loop, learning from what you're doing wrong, listening to the feedback and incorporating it into the tool, and eventually you will get something right.

So why did we choose MATLAB? Well, ready-made engineering libraries mean you don't have to reinvent the wheel. I can't tell you how useful that is. My engineers can build their own tools. They have access to model-fitting, visualization, simulation, optimization, and signal-processing toolboxes and functions, and a unit testing framework in MATLAB, which means that we can release software confident that it's going to work. It's robust. SVN and Git integration in MATLAB's context menus improves efficiency in terms of source control and, of course, there's support from a world-class development team at MathWorks.
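(As a flavour of that unit testing framework, below is a minimal class-based test. The metric function riseTime90 is a hypothetical example, written as a local function in the same file so the sketch is self-contained; run it with results = runtests('RiseTimeTest').)

    % RiseTimeTest.m: a minimal sketch using MATLAB's unit testing framework.
    classdef RiseTimeTest < matlab.unittest.TestCase
        methods (Test)
            function firstOrderResponse(testCase)
                t = (0:0.001:5)';
                y = 1 - exp(-t/0.25);    % known first-order step response
                % Analytic 90% rise time of 1 - exp(-t/tau) is tau*log(10)
                testCase.verifyEqual(riseTime90(t, y), 0.25*log(10), 'AbsTol', 1e-2);
            end
        end
    end

    function tr = riseTime90(t, y)
    % Hypothetical metric under test: time to first reach 90% of the final value
    tr = t(find(y >= 0.9*y(end), 1, 'first'));
    end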

So what MATLAB does is put the power in the hands of the engineer. It's time saved not having to write low-level code; a unit testing suite that means, really, I can sleep at night, so that's better as well; and a release and update system that is bespoke to JLR. And anyone can pick this up. It's engineering code that we're writing, so you can see how the tool works. It's not compiled, so we can see what's going on, and, as I said, anyone can pick it up. And MathWorks are a pretty nice bunch to work with, too.

I'm going to quote David Sampson at this point. He said, "The most valuable thing we have are our tests, the MathWorks tests, because they characterize how our software behaves." What's the JLR equivalent of that? Well, at JLR, we engineer our vehicles using DNA: our steering DNA, our ride DNA. What we're doing is putting that DNA into MATLAB code, and that's massively powerful. We understand what we're doing, and we can do it better next time.

So what does the future hold? Where complexity is increasing like crazy, live, data-driven decision-making in meetings is crucial to keep that machine working, and building software with the engineers who know the problem best enables that to happen. At JLR, we've yet to experience a vehicle program that is simpler than the last, but we're more prepared than ever for the complexity of tomorrow's products. We know now that building engineering tools is fundamental to managing that complexity. Plus, it's fun, and you can have a really big impact in the place you work.

I want to finish by saying this: Take responsibility where you can. There's real power in taking responsibility. If you choose to accept that things are not optimal around you, and that you can personally do something about it with the people you work with and in the companies you work for, then you can make change happen. Take it personally and you can have a big, big impact. And I'd also say, if it's difficult, then that's good.

So where are you on this journey? I'd love to hear what you're going to do next, and I'm genuinely interested, so please do come and have a chat afterwards. I'll probably be here for most of the day. Thank you for listening.
