Keynote: Technology Strategies for Next-Gen Vehicles
Sanjeev Madhav, Tata Consultancy Services
Hear about the role of AI and machine learning, next-gen computing, distributed infrastructure (edge and cloud), virtualization and infotainment systems, and the interesting use cases that can emerge with these technologies.
Published: 8 Dec 2023
Good morning, all. Sanjeev here. I have been part of automotive electronics for about 18 years of my career. I started off with a lot of embedded software development around railroad automation, set-top boxes, smart cards, and a few other things. So what I will do today is share a bit of perspective on the technologies we see coming up. It's not going to be anything very new in terms of which technologies these are; you would have heard of them umpteen times. But what I'll try to do is share what we are seeing and what experiences we are having with these technologies, and that might enable you to think through, from your own perspectives, how and what we need to do for the future.
So quickly, which are these technologies I'm going to talk about? AI and ML, of course, the most common and talked-about thing, but specifically what we are doing in this area and what trends we are seeing. Next-gen compute: a lot of high-performance SoCs coming from Qualcomm and others, and how that works in the automotive world. And distributed infrastructure, with edge- and cloud-related implementations.
What to do where: whether it's something you need to do on the vehicle side, on the cloud, or somewhere in between. How do you make those decisions? And virtualization has already been talked about; Mani covered a little bit of this. It's something that has existed from the good old days of model-based development, but more so today, with customers trying to decouple the hardware and software life cycles, specifically to reduce the cycle time of development, whether for an ECU or for a whole vehicle.
And finally, the immersive experience. That's more related to the HMIs and intuitive user interfaces, and some of the sound-related technologies coming up in the future. So I'll cover a little bit of the perspectives there. One trend you can see, which I always relate to, is that much of this is being enabled by technologies from the enterprise world, whether it's containerization or microservices.
All these technologies have always existed in the enterprise world. What's happening today is that, with the advent of high-performance compute coming from the NVIDIAs and Qualcomms, more of these technologies are being ported into the vehicle as well as into the server-side applications. And more of these are likely to come in the future, from the trend I'm seeing.
Quickly moving to artificial intelligence and machine learning, one story I would like to narrate is about how we went about annotating and labeling about 200,000 kilometers of global vehicle driving data. This was working with one of our customers. We got a lot of data collected from various sensors, whether LiDAR, camera, radar, or others, and from different geographies, let's say Europe, Japan, and the US, and we used machine learning and neural networks to overcome the problem of identifying the different objects on the road. Primarily, the AI/ML was used for identifying objects: vehicles, for example.
You have the cyclists, the pedestrians, all of this. And finally we also had a lot of sensor fusion to do. This data was simultaneously collected from different sensors, whether it's the LiDAR or the camera. Then we had to do a lot of processing to come out with the actual annotations, which were used by the customer, A, for validating the algorithms they were developing; and B, for improving the learning of their neural networks. When we developed these neural networks, we had almost six or seven of them created.
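To make the fusion step concrete: before any annotation can be fused, samples from sensors that tick at different rates have to be paired up in time. A minimal sketch of nearest-timestamp matching between a LiDAR stream and a camera stream (the timestamps and the skew threshold here are hypothetical, not from the actual pipeline):

```python
from bisect import bisect_left

def align_frames(lidar_ts, camera_ts, max_skew=0.05):
    """Pair each LiDAR timestamp with the nearest camera timestamp.

    Returns (lidar_index, camera_index) pairs whose skew is within
    max_skew seconds; unmatched LiDAR frames are dropped.
    """
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = bisect_left(camera_ts, t)
        # Candidates: the camera frames just before and just after t.
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(camera_ts)),
            key=lambda k: abs(camera_ts[k] - t),
        )
        if abs(camera_ts[best] - t) <= max_skew:
            pairs.append((i, best))
    return pairs

# Synthetic example: LiDAR at 10 Hz, camera at roughly 30 Hz.
lidar = [0.0, 0.1, 0.2]
camera = [0.00, 0.033, 0.066, 0.099, 0.132, 0.165, 0.198]
print(align_frames(lidar, camera))  # [(0, 0), (1, 3), (2, 6)]
```

Real pipelines layer calibration and coordinate transforms on top of this, but time alignment of this kind is the first step before fused boxes can be produced.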
We faced a lot of challenges with respect to problem solving in AI and ML. Everyone thinks, OK, training done, we are good to go; objects on the road will be identified well, and so on and so forth. But actually, we had huge challenges with the quality of the data being used for training. It is highly essential in machine learning applications to have good-quality training data which is diverse, which represents all the scenarios we want to handle, and also to pay attention to the amount of data, and the weights, coming from different geographies.
For example, when we took the model we had used in the US and Europe to Japan, things didn't work. There were a lot of differences. The number of pedestrians on the road in Japan is way higher than in some of the European geographies, so we had to tune our models specifically to handle the load of pedestrians crossing the road in Japan.
The signposts we encountered while doing the annotation were also far more numerous than in the rest of the world, in Europe or the US, which needed to be appropriately adjusted when doing the sensor fusion and modeling. So there were a lot of challenges around tuning the networks, putting the right balances in, and then having multiple variations for day, for night, for urban, suburban, and hilly areas, the curving roads. So while AI/ML is often understood as a very easy thing to do, we did face a lot of challenges.
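One common way to put "the right balances" in when class mixes differ by geography is inverse-frequency class weighting of the training loss. A sketch under hypothetical label counts (the numbers below are illustrative, not the project's data):

```python
def class_weights(label_counts):
    """Inverse-frequency weights, normalized so they average to 1.

    label_counts: dict mapping class name -> number of labeled
    instances. Rarer classes get proportionally larger weights,
    so the loss does not get dominated by the majority class.
    """
    total = sum(label_counts.values())
    n = len(label_counts)
    return {c: total / (n * k) for c, k in label_counts.items()}

# Hypothetical label mixes: pedestrians are far more common in Japan.
europe = {"vehicle": 8000, "pedestrian": 1500, "cyclist": 500}
japan = {"vehicle": 5000, "pedestrian": 4000, "cyclist": 1000}

w_eu = class_weights(europe)
w_jp = class_weights(japan)
# Pedestrians are rarer in the Europe mix, so they get a larger
# weight there than in the Japan mix.
print(round(w_eu["pedestrian"], 3), round(w_jp["pedestrian"], 3))
```

Weights computed per geography like this are one simple way to retune a shared network for a region without retraining from scratch on a completely rebalanced dataset.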
So in the future, one problem for MathWorks to think through is how to come up with neural nets that are already trained for different geographies and different scenarios, which may be used as libraries going forward, integrated with the normal development process, so that all these learnings are captured. Many of you will have heard about occlusions: basically, overlapping data between multiple vehicles. You have one vehicle in front and one vehicle behind; the LiDAR data just gives you distances, and suddenly you are choosing the wrong distance calculation for the vehicle. So a lot of surprises, a lot of things to be learned and enhanced. That's what I wanted to mention about autonomous driving and the annotation-related area.
The other area, which we'll talk about in a little more detail during our session today, is predictive maintenance. We used to collect a lot of diagnostic data from different vehicles coming from the OEMs, and then we did the learning around it to understand what kinds of problems are faced by the vehicles over the life cycle. So the vehicle is launched in the market, then you start collecting the diagnostic data, and you start seeing certain trends: certain issues in certain geographies, certain issues with certain components. How to predict the problems we are seeing, based on that machine learning, is something we have done a lot of work on. It is a very useful business case, especially from the design standpoint: understanding the trend of the components, the issues we are facing in design, and how warranties for individual components should be designed.
But let me reiterate the point I want to make: good-quality and diverse data. Again, in this diagnostic scenario, there are many cases where the diagnostic trouble codes we get from some of these systems are rather rudimentary data and are not actually useful for making inferences. So good quality of data is necessary.
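The trend-spotting described above can be sketched very simply: count each diagnostic trouble code per month across the fleet and flag codes whose latest-month rate jumps against the historical baseline. This is an illustrative toy, with invented data and thresholds, not the actual analytics:

```python
from collections import Counter

def rising_dtcs(monthly_reports, factor=2.0, min_count=10):
    """Flag trouble codes whose latest-month count jumped.

    monthly_reports: list of lists of DTC strings, one list per month,
    oldest first. A code is flagged when its latest-month count is at
    least `factor` times its average over earlier months and above
    `min_count` (filtering out rudimentary/noisy codes).
    """
    *history, latest = monthly_reports
    latest_counts = Counter(latest)
    baseline = Counter()
    for month in history:
        baseline.update(month)
    flagged = []
    for code, count in latest_counts.items():
        avg = baseline[code] / max(len(history), 1)
        if count >= min_count and count >= factor * max(avg, 1):
            flagged.append(code)
    return sorted(flagged)

# Synthetic fleet data: P0420 spikes in the latest month.
months = [
    ["P0420"] * 5 + ["P0171"] * 8,
    ["P0420"] * 6 + ["P0171"] * 7,
    ["P0420"] * 15 + ["P0171"] * 8,
]
print(rising_dtcs(months))  # ['P0420']
```

A real deployment would normalize by fleet size and mileage and feed richer signals than raw DTC counts, which is exactly why the quality of the incoming diagnostic data matters so much.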
That's a quick overview of what I wanted to cover within artificial intelligence and machine learning. Some of the other technologies include next-gen compute. In this space we have NVIDIA, we have Qualcomm, we have Intel, all the chip manufacturers. I've met the automotive leads of almost all of these chip manufacturers across the globe. They are constantly developing next-generation chips, and we are working with them to define POCs and usage of these technologies. One thing you'll observe is that GPUs are big, especially linked to the AI/ML topic I just talked about, where a lot of compute, and parallel compute, is coming into place.
For one of our problems, we actually used a lot of GPUs to run inference over a lot of data through parallel processing. The lesson learned from that, I would say, is to choose where you need to use the GPU, how you use the GPU, and where you have to use multi-core CPUs; defining what kind of compute you will use for which kind of application is absolutely essential when you do the designs. For example, we have done optimizations where we started off with a design that required around 280 NVIDIA GPUs, and we came down to about 20 or 25 GPUs actually required when we finally started the production process. What happened in between?
We realized that, A, the cost was extremely high, so that drove the efficiency-improvement thought process. Second, we realized that all these chip manufacturers have their own libraries (for example, NVIDIA has its own Python libraries) which, used in the right manner, actually help improve the efficiency of the algorithms.
Third, decide when to use the GPU and when not to; there are some processes which can happen without GPU processing. And finally, we were able to achieve much more efficient compute for going through a huge amount of data. So the important lessons learned, I would say, are about when to use the GPU and for what kinds of applications. That's critical when it comes to GPU utilization versus accelerated processing, and it's another trend you have to apply carefully, understanding when and how it is needed.
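The "when to use the GPU" decision often reduces to a break-even calculation: the GPU only pays off once the per-item speedup amortizes its fixed launch and transfer overhead. A deliberately crude cost model with hypothetical timings (not measurements from the project):

```python
def pick_device(n_items, cpu_per_item_ms, gpu_per_item_ms, gpu_overhead_ms):
    """Crude break-even model for placing a batch workload.

    The GPU wins only when its per-item advantage amortizes the fixed
    launch/transfer overhead. All timings are illustrative inputs.
    """
    cpu_ms = n_items * cpu_per_item_ms
    gpu_ms = gpu_overhead_ms + n_items * gpu_per_item_ms
    return "gpu" if gpu_ms < cpu_ms else "cpu"

# Small batches stay on the CPU; large batches justify the GPU.
print(pick_device(10, 2.0, 0.1, 50.0))    # cpu: 20 ms vs 51 ms
print(pick_device(1000, 2.0, 0.1, 50.0))  # gpu: 2000 ms vs 150 ms
```

Even a model this simple makes the design conversation concrete: it forces you to measure per-item times and overheads per stage of the pipeline, which is how reductions like 280 GPUs down to 20-odd become visible.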
Moving further, this is on distributed infrastructure. Everyone talks today about the software-defined vehicle: what to do within, let's say, the central compute, what needs to be done in the zonal ECUs, which features would go partially in which particular area, how to distribute; basically, how to do the feature allocation from different perspectives.
Another important thing a lot of customers are thinking through is, from an end-to-end feature point of view, what can be done on the vehicle, what needs to be done off-board, and what could be done with something called mobile edge computing, an edge computing server for local scenarios. For example, from the in-car perspective, a lot of configuration and variant management is happening through off-board-to-on-board communication, where you are pushing through OTA. A lot of variant management is happening there.
Infotainment is another area where you typically want to do a lot of upgrades to the UI, maybe themes pushed through per the user experience, so those kinds of activities are also happening.
The other design criterion, when you decide what needs to go where, is latency. Which features have low-latency requirements and which can happen off-board is something that needs to be carefully thought about while doing the designs in this area. For example, some of the service-related warnings or alerts, like the oil change, can be computed off-board and pushed to the onboard side with the data.
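One way to capture that design rule is a simple placement function driven by each feature's latency budget. The tier thresholds below are illustrative assumptions, not measured figures:

```python
def place_feature(latency_budget_ms, needs_fleet_data=False):
    """Pick an execution tier from a feature's latency budget.

    Thresholds are illustrative: onboard for hard real-time loops,
    a mobile edge server for interactive local features, and cloud
    for anything tolerant of wide-area round trips (or anything
    that needs fleet-wide data anyway).
    """
    if latency_budget_ms < 20:  # safety/control loops
        return "onboard"
    if needs_fleet_data or latency_budget_ms >= 200:
        return "cloud"   # service alerts, variant management, OTA
    return "edge"        # interactive, locally scoped features

print(place_feature(5))                          # onboard
print(place_feature(500))                        # cloud
print(place_feature(80))                         # edge
print(place_feature(50, needs_fleet_data=True))  # cloud
```

In practice the decision also weighs connectivity loss, data volume, and cost, but writing the rule down per feature is what turns "what goes where" from a debate into a design.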
Remote diagnostics is another such scenario, where you can take control of, or understand, the diagnostic information coming from different vehicles and then design how to diagnose the vehicle remotely: either providing an online fix or planning the next scheduled service for the vehicle. So a lot of activities need to be done, but what needs to be done where, and how that design is done, is very crucial from the distributed architecture standpoint.
Coming to virtualization, yes, a lot has been talked about it. When I think way back to 2005, 2006, we actually designed a virtual vehicle where we had the ECUs all put on a network. They were all PCs, into which you would inject scenarios and do the validation in what you would call a digital framework. At that time we had various tricks to play in terms of synchronizing the clocks and all of that to enable real-time functional testing.
Now, most OEMs today, especially the Chinese OEMs, have been dramatically reducing the cycle time for vehicle development. That has created a big requirement from global OEMs to reduce their vehicle development cycle time as well. It starts, of course, with reducing the cycle time of ECU or component development. And here again, decoupling the hardware and software life cycles is going to be extremely crucial: the hardware life cycle has its own productization timeline, while the software happens in parallel, where we can validate a lot of things.
With that in mind, a lot of virtualization has come into place. New tools and methodologies have emerged where you can do both controller-level virtualization and virtualization of some of the peripherals and ASICs; in short, putting the entire ECU through the virtualization cycle, thereby enabling quality testing to happen right within the software life cycle, without needing to integrate with the actual hardware that goes on the product. That's the other trend around virtualization. Shift-left is a keyword we have been hearing for quite some time, but it is gaining all the more importance in today's world, where cycle times have to be dramatically reduced.
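What a shift-left virtual ECU enables, at its smallest, is testing application logic against simulated requests before any hardware exists. A toy software-in-the-loop stub for one UDS-style diagnostic service (ReadDataByIdentifier, service 0x22, with the standard VIN identifier 0xF190); this is an illustration of the idea, not any customer's implementation:

```python
class VirtualEcu:
    """Minimal software-in-the-loop ECU stub (illustrative only).

    Handles one UDS-style request, ReadDataByIdentifier (0x22), for
    the VIN data identifier, so the handler logic can be regression-
    tested long before the real hardware is available.
    """
    VIN_DID = 0xF190  # UDS data identifier for the VIN

    def __init__(self, vin):
        self.vin = vin

    def handle(self, request: bytes) -> bytes:
        if len(request) >= 3 and request[0] == 0x22:
            did = (request[1] << 8) | request[2]
            if did == self.VIN_DID:
                # Positive response: 0x62 + DID echo + data bytes.
                return bytes([0x62, request[1], request[2]]) + self.vin.encode()
        # Negative response: 0x7F + service + NRC serviceNotSupported.
        return bytes([0x7F, request[0], 0x11])

ecu = VirtualEcu("TESTVIN0123456789")
resp = ecu.handle(bytes([0x22, 0xF1, 0x90]))
print(resp[:3].hex(), resp[3:].decode())  # 62f190 TESTVIN0123456789
```

The same test suite that exercises this stub can later run against the real controller over a transport layer, which is the essence of decoupling the software life cycle from the hardware one.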
My final slide is about what the roles of these technologies will be and what each of the players or stakeholders in the automotive industry is planning to do, or doing. If you start with the component suppliers: they had their silver box, and they were providing more of it to the OEMs, with a lot of tribal knowledge around different domains. What I see for them as the future would probably be more productization of software.
They have a lot of component knowledge and a lot of domain-specific knowledge. Instead of providing their own silver boxes to the OEM, they will probably need to settle on some sort of software component, containerized, which will work on a high-performance compute platform wherever it is placed, provide access to the required feature functions, and move toward more of a software-as-a-product architecture going forward.
Car manufacturers will be doing more integration of these software components coming from the different tier ones. They will probably have the platforms coming directly from the semiconductor manufacturers, where a certain amount of base or platform software and ready-to-use SDKs will be available to integrate all of this together. So the car manufacturers will be doing more of the integrating.
And, second, they will focus on the user experience features, whether it is the HMI or the digital cockpit, whatever you want to name it, and on the advanced driver assist features, L1, L2, L3, L4, and all the new areas coming up with the vehicle. The cloud service providers will continue to define what can be done off-board, coming up with more IoT services specifically for connected cars, and the cross-industry players will come up with their containerized feature apps.
So that's a quick overview of what each of these stakeholders will do. And people like us at TCS, the engineering service providers, will constantly be building this ecosystem: working with the semiconductor manufacturers, understanding what the next chips are going to do and what their features will be, quickly building accelerators around them, taking those accelerators to the OEMs and the tier ones, and making them ready for productization or a faster go-to-market.
Finally, the tool providers. This is what I would call the digital thread: connecting all the development processes across the entire life cycle, seamlessly integrated, ensuring there is traceability right from concept to production, and probably even covering the post-production issues that are likely to come up.
So that's a quick overview of what I think the different technologies are going to do. To recap: AI/ML, the chips and next-gen compute, virtualization, the immersive experience, all of that coming together. Each one of the stakeholders, I would think, will be drawing their own roadmaps of how these technologies will be utilized and where they are trending. And if you observe one other thing here, the line between them is blurring. I can tell you that certain semiconductor manufacturers are actually thinking of developing software directly for the OEM, and this includes the domain software as well.
So there is a lot of blurring of lines happening, where who is going to do what tomorrow will be more technology-driven, rather than having the silos we see among the different stakeholders in the industry today. That's where I think the future will go, maybe five or ten years down the line. So that's a quick overview of what I wanted to cover in my keynote today. Thanks a lot for listening in. And thank you, MathWorks, for the opportunity to present here.