When using logged signals to export data from a simulation, I have been specifying the signal logging sample time parameter. The motivation is to limit the impact on memory and storage when running the simulation many times (tens of thousands of runs). However, the main issue is that the final operating point is not logged unless it happens to line up exactly with the specified logging sample time.
Is there a way to specify a sample time for a logged signal and also include the final operating point in the same signal log, even if the termination time doesn't fall on a logging timestep?
Demo of Ask
To demo what I'd like, I drafted this simple Simulink model of a point mass subject to gravity. Below is the model diagram:![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1624753/image.png)
Some background info on this model:
- Fixed-step solver: ode4
- Fixed step size: 0.001 s (1000 Hz)
- Stop Simulation is triggered when h [m] crosses zero; for this demo that occurs at t = 6.4340 s
Here is a scope output of it running nominally:
Run Cases
In each case, timing info is gathered by:

```matlab
h = out.logsout.get('h [m]').Values
```
Case 1: Leaving signal logging sample time as inherited (outputs at simulation rate):
Case 2: Specifying signal logging sample time as 0.1 [s]:
Case 3: Specifying signal logging sample time as 1 [s]:
Intuitively, these results make sense, since nothing is logged until the next specified logging timestep, but I would like a method for including the final point at t = 6.4340 s in the signal log as well.
Possible Alternative & Issues:
- Duplicate every logged signal: have one at the desired output rate and one at the simulation rate, with the latter limited to one data point
- This would make everything messier. Every signal of interest would need to be copied, and each pair would need to be stitched together in post-processing
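To illustrate the stitching this alternative would require, here is a rough sketch of the post-processing step. The signal names and the one-sample logging of the second copy are assumptions for illustration, not something the model above currently does:

```matlab
% Hypothetical names: 'h [m] decimated' logged at 0.1 s, and
% 'h [m] final' limited to the last data point of the run.
hDec   = out.logsout.get('h [m] decimated').Values;
hFinal = out.logsout.get('h [m] final').Values;

% Append the final operating point only if the decimated log
% does not already end at the termination time.
if hDec.Time(end) < hFinal.Time(end)
    h = timeseries([hDec.Data; hFinal.Data(end,:)], ...
                   [hDec.Time; hFinal.Time(end)]);
else
    h = hDec;
end
```

This would have to be repeated for every signal pair, which is the messiness referred to above.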
- Downsample as a post-processing step
- The motivation for setting the signal logging sample time was memory and storage limitations, so it is preferable to avoid methods that even temporarily increase resource allocation
- This would also increase overall run time by adding a processing step for each signal of each run, whereas saving off just one additional point would not
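For completeness, the downsampling step would look something like the sketch below. It assumes 'h [m]' was logged at the full simulation rate, which is exactly the memory cost this alternative is trying to avoid:

```matlab
% Downsample a full-rate log to a 0.1 s grid, keeping the final point.
hFull = out.logsout.get('h [m]').Values;
tKeep = (0:0.1:hFull.Time(end)).';       % desired logging grid
if tKeep(end) < hFull.Time(end)
    tKeep(end+1) = hFull.Time(end);      % retain the termination time
end
h = resample(hFull, tKeep);              % timeseries resample
```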
- "Pause" the model until current time equals a logged time
- Figuring out this logic for each subsystem and model reference may be trickey, and it could break some realistic effects.
- Also, a quick test by putting the whole model inside an enabled subsystem yielded no difference in result as above, so a more complex fix would be needed here.
Other Notes:
- It looks like there is a model configuration option for saving the final operating point of the states, but I would want every logged signal at the final operating point, not just the states.
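For reference, I believe the configuration option mentioned above corresponds to these model parameters (the model name here is a placeholder). As noted, this covers states only, not arbitrary logged signals:

```matlab
% Save the final states / operating point of a model named 'myModel'
% (hypothetical name). This captures states, not logged signals.
set_param('myModel', 'SaveFinalState', 'on', ...
                     'FinalStateName', 'xFinal', ...
                     'SaveOperatingPoint', 'on');
```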