Bottom-Up Approach to Model Analysis
Simulink® Design Verifier™ software works most effectively at analyzing large models using a bottom-up approach. In this approach, the software analyzes smaller model components first, which can be faster than analyzing the entire model with the default test suite optimization setting, Auto.
The bottom-up approach offers several advantages:
It allows you to solve the problems that slow down error detection, test generation, or property proving in a controlled environment.
Solving problems with small model components before analyzing the model as a whole is more efficient, especially if your model contains unreachable components that you can discover only in the context of the full model.
You can iterate more quickly—find a problem and fix it, find another problem and fix it, and so on.
If one model component has a problem, for example, if it is unreachable in simulation, the software might be unable to generate tests for all the objectives in a large model.
Try this workflow with your large model:
Use the Test Generation Advisor to identify analyzable model components and generate tests for these components. For more information, see Use Test Generation Advisor to Identify Analyzable Components.
Fix any problems by adding constraints or specifying block replacements.
After you analyze the smaller components, reapply the required constraints and substitutions to the original model. Analyze the full model.
When you finish a bottom-up analysis, you have a top-level model that Simulink Design Verifier can analyze quickly.
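The workflow above can also be scripted from the MATLAB® command line. The following is a minimal sketch, not a definitive implementation: the model name `myLargeModel` and subsystem name `ControllerA` are hypothetical, the option values are illustrative, and the exact meaning of the returned status code may vary by release.

```matlab
% Configure a test-generation analysis. sldvoptions and sldvrun are part
% of Simulink Design Verifier.
opts = sldvoptions;
opts.Mode = 'TestGeneration';
opts.MaxProcessTime = 600;   % bound the time spent on each analysis

% Step 1: analyze a smaller component first.
% ('myLargeModel/ControllerA' is a hypothetical atomic subsystem path.)
[status, files] = sldvrun('myLargeModel/ControllerA', opts);

% Step 2: after fixing any problems found at the component level
% (constraints, block replacements), analyze the full model.
% (status == 1 is assumed here to indicate a completed analysis.)
if status == 1
    [status, files] = sldvrun('myLargeModel', opts);
end
```

The point of the two-step structure is that problems surface on the small, fast component run first, so the expensive full-model run starts from a model whose known issues are already constrained or replaced.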
Reuse of Analysis Results from Subsystems at the System Level
This section explains how Simulink Design Verifier results obtained at the unit level generalize to the system level. In certain circumstances, you can use this generalization as a replacement for running Simulink Design Verifier at the system level, or to restrict the checks that you need to run at the system level.
These points describe how Simulink Design Verifier generalizes the results on the unit level to the system level:
When design error objectives are proven valid, or when you find dead logic, at the unit level, the same results hold at the integration level: the objectives remain valid and the logic remains dead. Without the system context, analysis at the unit level allows a less constrained set of behaviors than the unit experiences when running at the system level. In other words, a design error objective that is proven valid in the less constrained setting is also valid in the more constrained setting.
When you find design errors, or an absence of dead logic, at the unit level, the results might differ at the integration level. You must then reanalyze these objectives at the integration level.
These limitations apply when you reuse subsystem analysis results at the system level:
If the configuration parameter values differ between the unit level and the system level, the Simulink Design Verifier results might change at the system level.
If the floating-point Inf/NaN check is run at the unit level, the inputs to the unit are assumed to be finite. Similarly, if the subnormal check is run at the unit level, the inputs to the unit are assumed to be normal. If you need to consider Inf/NaN or subnormal values as inputs at the unit level, consider either disabling these checks or analyzing at the integration level. For more information, see Assumptions and Limitations.
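One way to disable these checks for a unit-level design error detection run is through sldvoptions. This is a sketch that assumes the option names DetectInfNaN and DetectSubnormal are available in your release; `myUnitModel` is a hypothetical model name.

```matlab
% Design error detection options for a unit-level run.
opts = sldvoptions;
opts.Mode = 'DesignErrorDetection';

% Disable the checks whose unit-level results rely on the finite and
% normal input assumptions described above.
opts.DetectInfNaN    = 'off';
opts.DetectSubnormal = 'off';

% Run the analysis on the unit ('myUnitModel' is hypothetical).
sldvrun('myUnitModel', opts);
```

With these checks disabled at the unit level, the remaining unit-level results do not depend on the finite and normal input assumptions; run the Inf/NaN and subnormal checks themselves at the integration level instead.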
If you use the sldvextract function to extract a unit for analysis, Simulink Design Verifier in some cases inserts a Data Store Memory block and Data Store Read and/or Data Store Write blocks. For more information, see Analyze Subsystems That Read from Global Data Storage. This can lead to different simulation behavior at the unit level. Additionally, the data store access violation checks might produce different results.
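As a rough sketch, you can extract a unit with sldvextract and then inspect the extracted model for inserted Data Store blocks. The subsystem path `myModel/Unit` is hypothetical, and the find_system query shown is just one way to list such blocks.

```matlab
% Extract the subsystem into a new model for analysis; the second
% argument opens the extracted model so you can inspect it.
extractedModel = sldvextract('myModel/Unit', true);

% List any Data Store Memory blocks in the extracted model, including
% ones sldvextract inserted to stand in for global data stores that the
% unit reads from or writes to.
dataStoreBlocks = find_system(extractedModel, ...
    'BlockType', 'DataStoreMemory');
disp(dataStoreBlocks)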