
Verify an Airborne Deep Learning System

Since R2023b

This example shows how to verify a deep learning system for airborne applications. You explore a case study of a runway sign classification (RSC) system that receives images from a forward-facing camera, and then detects airport runway signs in those images using object detection networks. This figure summarizes the system.

[Figure: preprocess_dnn_image.png — overview of the runway sign classification system]

You compare the performance of the trained networks and, for the best-performing network, generate code for CPU and GPU targets. Finally, you compare the detection performance and inference speed of the original network and the generated MEX files.
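
For orientation, this rough sketch shows how such a timing comparison might look in MATLAB. The detector variable detector, the test image datastore imdsTest, and the MEX function name rscDetector_mex are assumptions for this sketch and are not taken from the shipped live scripts.

    % Hedged timing sketch: compare the original detector with a generated MEX
    % function (assumed names: detector, imdsTest, rscDetector_mex).
    numImages = 50;
    tNet = 0; tMex = 0;
    reset(imdsTest);
    for k = 1:numImages
        I = read(imdsTest);
        tic; detect(detector,I); tNet = tNet + toc;    % original network
        tic; rscDetector_mex(I); tMex = tMex + toc;    % generated MEX function
    end
    fprintf('Mean inference time: network %.1f ms, MEX %.1f ms\n', ...
        1000*tNet/numImages, 1000*tMex/numImages);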

This example is based on the work in [5], which includes the development and verification activities required by DO-178C [1], ARP4754A [2], and prospective EASA and FAA guidelines [3,4]. In this example, you carry out deep-learning-specific activities that support certification against these standards. To verify that the system complies with the full aviation industry standards and prospective guidelines, see Runway Sign Classifier: Certify an Airborne Deep Learning System (DO Qualification Kit).

This example consists of a project with four folders. The Data, Learning, and Implementation folders each contain live scripts. To run this example successfully, you must run the live scripts in this order.

  1. Data Management

  2. Data Traceability

  3. Data Analysis

  4. Data Reviews

  5. Data Allocation

  6. Data Preparation

  7. Learning Management

  8. Estimate Anchor Boxes

  9. Train YOLO v2 Network

  10. Train YOLO v3 Network

  11. Train YOLO v4 Network

  12. Model Evaluation

  13. Model Implementation

  14. Generate Code on CPU

  15. Quantize Model and Generate Code on GPU

  16. Implementation Evaluation

Data Management

Data sets for deep neural network (DNN) training, validation, and testing are specific to your artificial intelligence (AI) system. Following the assumptions in [6], you treat DNN training and validation data set requirements as equivalent to software requirements that specify the behavior of a DNN model. For information about data development activities, including collection, labeling, and traceability, see Data Management.

Because training and validation data sets are subject to software requirements, you must verify their accuracy, consistency, traceability, and compliance in accordance with DO-178C.

The Data Management live script also shows how to perform these tasks:

  • Verify data consistency by using data validity analysis methods; a sketch of similar checks appears after this list.

  • Verify data accuracy by using the Image Labeler app.
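
For orientation, the sketch below shows what simple checks of this kind can look like in MATLAB. The image folder name and the groundTruth variable gTruth (exported from the Image Labeler app) are assumptions for this sketch; the live script can use different checks.

    % Hedged sketch of basic data validity checks (assumed folder name and gTruth variable).
    imds = imageDatastore("RunwaySignImages");   % assumed image folder

    % Flag image files that cannot be read (corrupt or truncated files).
    for k = 1:numel(imds.Files)
        try
            imread(imds.Files{k});
        catch
            warning("Unreadable image file: %s", imds.Files{k});
        end
    end

    % Summarize the number of labeled boxes per runway sign class.
    boxCounts = varfun(@(col) sum(cellfun(@(b) size(b,1), col)), gTruth.LabelData);
    disp(boxCounts)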

Learning Management and Network Training

Learning management includes training preparation activities such as specifying the model architecture, the training algorithm, and initial hyperparameter estimates. The AI model that you produce as a result of network training represents the software design from which you generate software source code.
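
As a loose illustration of these preparation activities, the sketch below estimates anchor boxes and defines training options. The training data variable trainingData and the hyperparameter values are assumptions for this sketch, not the settings used in the live scripts.

    % Hedged sketch of training preparation (assumed variable trainingData:
    % a datastore of training images and box labels).
    numAnchors = 6;
    anchorBoxes = estimateAnchorBoxes(trainingData, numAnchors);   % initial anchor estimate

    options = trainingOptions("adam", ...          % training algorithm
        InitialLearnRate=1e-3, ...                 % assumed initial hyperparameters
        MiniBatchSize=16, ...
        MaxEpochs=80, ...
        Shuffle="every-epoch", ...
        VerboseFrequency=30);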

For more information about learning management and model training activities, see Learning Management. For information about model testing as an essential part of the AI workflow, see Model Testing and Selection in Learning Management.

Model Implementation

Use model implementation activities to produce source code and build executable code that runs on host hardware for evaluation and is ready for deployment on embedded hardware.
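
As a rough sketch of the host-side build, the following commands generate a CPU MEX function from an entry-point function. The entry-point function name rscDetect and the input image size are assumptions for this sketch.

    % Hedged sketch of CPU MEX generation (assumed entry-point function rscDetect.m
    % that loads the detector and calls detect on the input image).
    cfg = coder.config("mex");
    cfg.TargetLang = "C++";
    cfg.DeepLearningConfig = coder.DeepLearningConfig("mkldnn");
    codegen -config cfg rscDetect -args {ones(224,224,3,'single')}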

For information about code generation and related model optimization activities such as quantization, see Model Implementation. Verifying the model implementation involves testing the executable code. This example reuses the model testing data set to test the implementation. For more information, see Implementation Evaluation in Model Implementation.
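
The sketch below outlines the kind of quantization and GPU code generation calls involved. The detector variable, the calibration datastore calibrationData, the entry-point function rscDetect, and the cuDNN/INT8 settings are assumptions for this sketch rather than the exact configuration used in the live scripts.

    % Hedged sketch of INT8 quantization followed by GPU MEX generation.
    quantObj = dlquantizer(detector, ExecutionEnvironment="GPU");
    calibrate(quantObj, calibrationData);                  % collect dynamic ranges
    save("quantObj.mat", "quantObj");

    cfg = coder.gpuConfig("mex");
    cfg.DeepLearningConfig = coder.DeepLearningConfig("cudnn");
    cfg.DeepLearningConfig.DataType = "int8";
    cfg.DeepLearningConfig.CalibrationResultFile = "quantObj.mat";
    codegen -config cfg rscDetect -args {ones(224,224,3,'single')}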

References

[1] RTCA DO-178C. "Software Considerations in Airborne Systems and Equipment Certification." RTCA SC-205, EUROCAE WG-12. https://www.rtca.org/.

[2] SAE ARP4754A. "Guidelines for Development of Civil Aircraft and Systems." SAE International. https://www.sae.org.

[3] Soudain, Guillaume. "EASA Concept Paper: First Usable Guidance for Level 1 Machine Learning Applications." Technical report, European Aviation Safety Agency, 2021. https://www.easa.europa.eu/en.

[4] Balduzzi, Giovanni, Martino Ferrari Bravo, Anna Chernova, Calin Cruceru, Luuk van Dijk, Peter de Lange, Juan Jerez, et al. "Neural Network Based Runway Landing Guidance for General Aviation Autoland." Technical report DOT/FAA/TC-21/48, Federal Aviation Administration, 2021.

[5] Dmitriev, Konstantin, Johann Schumann, and Florian Holzapfel. "Toward Certification of Machine-Learning Systems for Low Criticality Airborne Applications." In 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), 1–7. San Antonio, TX, USA: IEEE, 2021. https://doi.org/10.1109/DASC52595.2021.9594467.

[6] Dmitriev, Konstantin, Johann Schumann, and Florian Holzapfel. "Toward Design Assurance of Machine-Learning Airborne Systems." In AIAA SCITECH 2022 Forum. San Diego, CA & Virtual: American Institute of Aeronautics and Astronautics, 2022. https://doi.org/10.2514/6.2022-1134.
