Why does the maximum scan/sampling rate differ significantly between MATLAB and Simulink Desktop Real Time?
Justinus Hartoyo on 25 Aug 2021
I'm working with Simulink Desktop Real Time (SDRT) to receive analog inputs from sensors and send control signals. The DAQ board is an NI PCIe 6343. The SDRT model is currently quite simple - just analog input and frequency output blocks (see attached file). However, the PC keeps crashing whenever I set the sample period to 1 ms. I use external mode, and I've tried running the model step by step (build --> connect --> run). The build and connect steps completed successfully, then my PC crashed several seconds after I clicked run. If I set the sample period to, e.g., 2 ms, the model runs without problems.
While this might simply be due to the limitations of the PC, I recalled that other students have run MATLAB scripts that perform similar tasks on the same PC. Here's a snippet of the script:
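(The snippet itself did not survive in this copy of the thread. As a rough, hypothetical reconstruction of what such a script typically looks like - the device name "Dev1" and the exact channels are assumptions - a Data Acquisition Toolbox script acquiring 8 analog inputs at a 0.1 ms sample period would be along these lines:)

```matlab
% Hypothetical sketch, not the original script: buffered acquisition of
% 8 analog input channels with the Data Acquisition Toolbox interface.
d = daq("ni");                         % connect to NI hardware
addinput(d, "Dev1", 0:7, "Voltage");   % 8 analog inputs (ai0-ai7, assumed names)
d.Rate = 10000;                        % 10 kHz, i.e. 0.1 ms between scans
data = read(d, seconds(1));            % acquire 1 s of buffered data
```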
The full script shows that they were measuring 8 analog inputs at a 0.1 ms sample period, which seems to be the default setting (in contrast, the default sample time for SDRT blocks is 0.1 s). So why is there such a big difference in viable sampling rates between MATLAB and SDRT?
A couple of things that I've explored:
- Commented out I/O blocks in the SDRT model, then ran the model with a 1 ms sample period. The runs completed successfully with up to 6 active (uncommented) blocks.
- Checked the configuration parameters to ensure that the fixed-step size is consistent with the sample period - I assigned the step size and sample period using the same MATLAB variable.
Another thing I'm going to try is combining similar analog inputs into one block, as shown in the default SDRT example model. However, I'm not sure how much improvement this would yield. My final model will include at least double the number of I/O signals, plus many computations. So, while I don't strictly need a 1 ms sample period, I'd like to have significant headroom when I finally add those other elements.
Jan Houska on 25 Aug 2021
For maximum performance of a Simulink Desktop Real-Time model, please use just a single Analog Input block and specify a vector of channels as its parameter. This produces a vector of analog input signal values that you can then split into individual signals using a Demux or similar block. Using just a single Analog Input block allows an optimized algorithm to read all the analog input channels together, which can help a lot depending on the data acquisition hardware used. Specifically, the NI PCIe 63xx series is known to benefit significantly from this optimization, and you should be able to get well below a 1 ms sample period this way.
As to why Simulink Desktop Real-Time's maximum sample rates are lower than Data Acquisition Toolbox's, this is because the products do very different things. Data Acquisition Toolbox reads the data and stores it in a buffer without any processing. This makes it possible to use a hardware buffer on the board, which is first filled with the analog input values and only transferred to the host computer after the data acquisition process ends. The data can therefore be acquired at the maximum speed of the A/D converter, but there is no way to use any sample immediately, before the whole buffer is read.
In contrast, Simulink Desktop Real-Time lets you process every single data point immediately after it is read, which requires transferring it to the host computer right away. This of course takes additional time and decreases the maximum achievable sample rate, but it allows closed-loop scenarios where every data point read can be used immediately, e.g. to compute an analog output value that can in turn affect the input value read in the next step.
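(To make the contrast concrete, here is a hedged sketch of that kind of per-sample closed loop, written with Data Acquisition Toolbox on-demand read/write calls purely for illustration - the device name, channels, and control law are assumptions, and SDRT itself would do this inside the generated real-time code rather than in a script:)

```matlab
% Illustrative only: each scan makes a round trip to the host before the
% next one starts - that per-sample round trip is what limits the rate.
d = daq("ni");
addinput(d,  "Dev1", "ai0", "Voltage");  % sensor input   (assumed channel)
addoutput(d, "Dev1", "ao0", "Voltage");  % actuator output (assumed channel)
for k = 1:1000
    sample = read(d);            % single on-demand scan, transferred now
    u = -0.5 * sample.Dev1_ai0;  % placeholder control law (assumption)
    write(d, u);                 % output takes effect before the next read
end
```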
Good Luck, Jan