
Verify an Airborne Deep Learning System

Since R2023b

This example, based on the work in [5,6,7], shows how to verify a deep learning system for airborne applications. The example covers the development and verification activities required by DO-178C [1], ARP4754A [2], and prospective EASA and FAA guidelines [3,4]. To verify that the system complies with aviation industry standards and prospective guidelines through activities such as requirements tracing, testing, and reporting, see Runway Sign Classifier: Certify an Airborne Deep Learning System (DO Qualification Kit).

You explore a case study of a runway sign classification (RSC) system that receives images from a forward-facing camera, and then detects airport runway signs in those images using object detection networks. This figure summarizes the system for the lowest design assurance level (DAL D).

According to ARP4754A [2], if two or more dissimilar components independently implement a system function, then you can lower the design assurance level of those system components. So, you can develop DAL C systems using two or more dissimilar DAL D components. To reduce the chance of similar errors, the two implementations must be sufficiently independent, as proposed in [6,7].

This figure summarizes the system for a DAL C assurance level using two DAL D components.

For example, the two DAL D deep neural network (DNN) components could be:

  1. A YOLO v3 object detector constructed and trained in MATLAB® and saved in MAT format.

  2. A Faster R-CNN object detector constructed and trained in a non-MATLAB framework and saved in a non-MATLAB format (see the import sketch after this list).
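
For illustration, this minimal sketch loads both detectors in MATLAB. The file names yolov3Detector.mat and fasterRCNN.onnx are hypothetical placeholders, not files shipped with the example, and importing an ONNX network requires the Deep Learning Toolbox Converter for ONNX Model Format support package.

    % Load the MATLAB-trained YOLO v3 detector from MAT format.
    data = load("yolov3Detector.mat");                  % hypothetical file name
    yoloDetector = data.detector;

    % Import the externally trained Faster R-CNN network from ONNX format.
    % The import returns a dlnetwork object, not a ready-made detector.
    rcnnNet = importNetworkFromONNX("fasterRCNN.onnx"); % hypothetical file name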

In this example, you compare the performance of several YOLO-based object detector networks and verify their behavior. You use saliency maps to explain the detections and out-of-distribution (OOD) detection to determine whether the network is operating within the bounds of its training data. For the YOLO-based object detector network with the best performance, you generate code for CPU and GPU targets, and you also generate code for the out-of-distribution discriminator to monitor the network at run time. You also compare the performance and inference speed of the original network and the generated MEX files. To integrate the two dissimilar DNN components into a Simulink® model of the system architecture described above, see Runway Sign Classifier: Certify an Airborne Deep Learning System (DO Qualification Kit).
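
For the OOD step, the Deep Learning Toolbox Verification Library provides the networkDistributionDiscriminator function. This minimal sketch, assuming a trained dlnetwork net and a datastore XTrain of in-distribution training images (both placeholders), fits an energy-based discriminator and then tests new inputs:

    % Fit a distribution discriminator from the in-distribution training data.
    % Passing [] for the OOD data sets the threshold from the ID data alone.
    discriminator = networkDistributionDiscriminator(net, XTrain, [], "energy");

    % At run time, flag inputs that fall outside the training distribution.
    tf = isInDistribution(discriminator, XNew);         % XNew is a placeholder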

This example consists of a project with four folders. The Data, Learning, and Implementation folders each contain live scripts. To run this example successfully, you must run the live scripts in this order:

  1. Data Management

  2. Data Traceability

  3. Data Analysis

  4. Data Reviews

  5. Data Allocation

  6. Data Preparation

  7. Learning Management

  8. Estimate Anchor Boxes

  9. YOLO v2 Train

  10. YOLO v3 Train

  11. YOLO v4 Train

  12. Evaluation of MATLAB-based YOLO Models

  13. Model Detection Explanations via Saliency Maps

  14. Create Out-of-Distribution Discriminator

  15. Test Out-of-Distribution Discriminator

  16. Model Implementation

  17. CPU Code Generation

  18. GPU Code Generation

  19. Implementation Evaluation

  20. Run-Time Out-of-Distribution Monitoring

Data Management

Data sets for deep neural network (DNN) training, validation, and testing are specific to your AI system. Following the assumptions in [6], you treat the DNN training and validation data sets as equivalent to software requirements that specify the behavior of the DNN model. For information about data development activities, including collection, labeling, and traceability, see Data Management.

Because the training and validation data sets serve as software requirements, you must verify their accuracy, consistency, traceability, and correctness.

The Data Management live script also shows how to perform these tasks:

  • Verify data consistency by using data validity analysis methods (a minimal sketch follows this list).

  • Verify data accuracy by using the Image Labeler app.
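
As one illustration of a data validity check, this sketch scans a labels table for basic consistency, confirming that each labeled image is readable and that every bounding box lies inside its image. The table gTruth and its imageFilename and sign columns are hypothetical names for this sketch; the live script's actual methods may differ.

    % gTruth is assumed to be a table with an imageFilename column (a cell
    % array of file paths) and a "sign" column of [x y w h] bounding boxes.
    for k = 1:height(gTruth)
        info = imfinfo(gTruth.imageFilename{k});        % errors on corrupt files
        boxes = gTruth.sign{k};
        assert(all(boxes(:,1) + boxes(:,3) - 1 <= info.Width) && ...
               all(boxes(:,2) + boxes(:,4) - 1 <= info.Height), ...
               "Box outside image: " + gTruth.imageFilename{k});
    end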

Learning Management and Network Training

Learning management includes training preparation activities such as specifying the model architecture, the training algorithm, and initial hyperparameter values. The AI model that you produce as a result of network training represents the software design from which you generate software source code.
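
For concreteness, this sketch shows one plausible way to specify the architecture and initial hyperparameters for a candidate detector, here a tiny YOLO v4 detector from Computer Vision Toolbox. The datastore trainingData and the class list are assumptions for this sketch, not the example's actual configuration.

    % Estimate anchor boxes from the labeled training data.
    numAnchors = 6;
    anchors = estimateAnchorBoxes(trainingData, numAnchors);
    [~, idx] = sort(anchors(:,1) .* anchors(:,2), "descend");
    anchors = anchors(idx, :);
    anchorBoxes = {anchors(1:3,:); anchors(4:6,:)};   % one cell per detection head

    % Specify the model architecture and initial hyperparameters.
    classes = "sign";                                 % hypothetical class list
    detector = yolov4ObjectDetector("tiny-yolov4-coco", classes, anchorBoxes, ...
        InputSize=[416 416 3]);
    options = trainingOptions("adam", ...
        InitialLearnRate=1e-3, MaxEpochs=50, MiniBatchSize=16, ...
        Shuffle="every-epoch");

    % The trained detector is the software design for code generation.
    detector = trainYOLOv4ObjectDetector(trainingData, detector, options);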

For more information about learning management and model training activities, see Learning Management. For information about model testing as an essential part of the AI workflow, see Model Testing and Selection in Learning Management.

Model Implementation

Use model implementation activities to produce source code and build executable code that meets these requirements:

  • Executable on host hardware for evaluation

  • Ready for deployment on embedded hardware

For information about code generation and related model optimization activities such as quantization, see Model Implementation. Verifying the model implementation involves testing the executable code. This example reuses the model testing data set to test the implementation. For more information, see Implementation Evaluation in Model Implementation.
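
As a sketch of the CPU path, you can wrap the trained detector in an entry-point function and generate a MEX file with MATLAB Coder. The entry-point name, MAT file name, and input size here are assumptions, not the example's actual files.

    % detectRunwaySigns.m -- hypothetical entry-point function
    function [bboxes, scores] = detectRunwaySigns(img)
    %#codegen
    persistent detector
    if isempty(detector)
        % Load the trained detector saved in a MAT file (hypothetical name).
        detector = coder.loadDeepLearningNetwork("trainedDetector.mat");
    end
    [bboxes, scores] = detect(detector, img);
    end

Generate and inspect the MEX file:

    % Generate a MEX target that uses the MKL-DNN library for the CPU.
    cfg = coder.config("mex");
    cfg.TargetLang = "C++";
    cfg.DeepLearningConfig = coder.DeepLearningConfig("mkldnn");
    codegen -config cfg detectRunwaySigns -args {ones(416,416,3,'single')} -report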

References

[1] RTCA DO-178C. "Software Considerations in Airborne Systems and Equipment Certification." RTCA SC-205, EUROCAE WG-12. https://www.rtca.org/.

[2] SAE ARP4754A. "Guidelines for Development of Civil Aircraft and Systems." SAE International. https://www.sae.org.

[3] Soudain, Guillaume. "EASA Concept Paper: First Usable Guidance for Level 1 Machine Learning Applications." Technical report, European Aviation Safety Agency, 2021. https://www.easa.europa.eu/en.

[4] Balduzzi, Giovanni, Martino Ferrari Bravo, Anna Chernova, Calin Cruceru, Luuk van Dijk, Peter de Lange, Juan Jerez, et al. "Neural Network Based Runway Landing Guidance for General Aviation Autoland." Technical report DOT/FAA/TC-21/48, Federal Aviation Administration, 2021.

[5] Dmitriev, Konstantin, Johann Schumann, and Florian Holzapfel. “Toward Certification of Machine-Learning Systems for Low Criticality Airborne Applications.” In 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), 1–7. San Antonio, TX, USA: IEEE, 2021. https://doi.org/10.1109/DASC52595.2021.9594467.

[6] Dmitriev, Konstantin, Johann Schumann, and Florian Holzapfel. “Toward Design Assurance of Machine-Learning Airborne Systems.” In AIAA SCITECH 2022 Forum. San Diego, CA & Virtual: American Institute of Aeronautics and Astronautics, 2022. https://doi.org/10.2514/6.2022-1134.

[7] Dmitriev, Konstantin, Johann Schumann, Islam Bostanov, Mostafa Abdelhamid, and Florian Holzapfel. "Runway Sign Classifier: A DAL C Certifiable Machine Learning System." In 2023 IEEE/AIAA 42nd Digital Avionics Systems Conference (DASC), 1–8. IEEE, 2023.
