How can I synthesize the Stereo Image Rectification Simulink model for Intel Arria?

1 view (last 30 days)
The example page says at the bottom that "This design was synthesized for the Intel Arria 10 GX (115S2F45I1SG) FPGA." I tried using the HDL Workflow Advisor for this board, but in the Target Platform Interfaces step I couldn't find a Pixel Control bus input or output.
Also, I want to create the IP Core first.

Accepted Answer

Steve Kuznicki on 13 Jan 2021
If you want to target a Zynq device, I would work towards that.
There are a few ways that you can create the IP Core and deploy it for use in Vivado IPI (IP Integrator). This really depends on what video format your camera acquisition IP Cores deliver: MIPI, Camera Link, AXI4-Stream Video, or some other custom format.
Since there is no default platform target that supports two pixelcontrol bus ports, you will need to do one of the following:
1) Create your own custom reference design plugin_rd.m file that has two pixel streaming inputs/outputs (a minimal sketch follows this list), or
2) Wrap the top-level subsystem (HDLStereoImageRectification) in another subsystem that contains your "adaptor" to your stereo pair of streaming interfaces. This wrapper would have the same interface as your input/output video acquisition IP Cores, or
3) Split the pixelcontrol bus out into its five control signals and generate a generic IP core (a small illustration appears at the end of this answer). You would then need to develop other IP Cores in Vivado (or Simulink) to translate to and from the pixel-control protocol. This option is helpful if you mainly want to synthesize the code to get an accurate resource estimate.
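For option 1, the reference design plugin is a plugin_rd.m function that returns an hdlcoder.ReferenceDesign object. Below is a minimal sketch of what such a plugin could look like for a Vivado-based design; the board name, tool version, Tcl file name, base address, and the block/port names of the surrounding block design (left_cam_adapter_0, right_cam_adapter_0, display_adapter_0) are placeholders that you would replace with your own, and a Quartus-based plugin would follow the same pattern.

function hRD = plugin_rd()
% Hypothetical reference design with two pixel-stream inputs and a
% pixel-stream output; all names and addresses below are placeholders.
hRD = hdlcoder.ReferenceDesign('SynthesisTool', 'Xilinx Vivado');

hRD.ReferenceDesignName  = 'Stereo Pixel Stream Design';
hRD.BoardName            = 'ZedBoard';     % placeholder board
hRD.SupportedToolVersion = {'2019.1'};     % placeholder tool version

% Tcl script that builds the surrounding block design
% (camera adapters, VDMA, video output, ...)
hRD.addCustomVivadoDesign('CustomBlockDesignTcl', 'system_top.tcl');

% AXI4-Lite interface for the generated IP core's registers
hRD.addAXI4SlaveInterface( ...
    'InterfaceConnection', 'axi_interconnect_0/M00_AXI', ...
    'BaseAddress',         '0x40010000', ...
    'MasterAddressSpace',  'processing_system7_0/Data');

% Expose the left/right pixel data ports of the surrounding design so the
% DUT ports can be mapped to them in the Target Platform Interfaces step.
% The five pixelcontrol signals of each stream need interfaces added in
% the same way.
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Left Pixel In', ...
    'InterfaceType',       'IN', ...
    'PortName',            'left_pixel_in', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'left_cam_adapter_0/pixel_out');
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Right Pixel In', ...
    'InterfaceType',       'IN', ...
    'PortName',            'right_pixel_in', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'right_cam_adapter_0/pixel_out');
hRD.addInternalIOInterface( ...
    'InterfaceID',         'Rectified Pixel Out', ...
    'InterfaceType',       'OUT', ...
    'PortName',            'rect_pixel_out', ...
    'PortWidth',           8, ...
    'InterfaceConnection', 'display_adapter_0/pixel_in');
end

The plugin_rd.m file is then registered through HDL Coder's board and reference design registration functions so that the Workflow Advisor can list it as a target platform.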
If you are looking at the (Xilinx) AXI4-Stream Video interface, then our SoC Blockset product provides an HDL Workflow Advisor IP Core Generation target that allows multiple video interface inputs. This would generate an IP Core that has two AXI4-Stream Video input/output interfaces.
In the end, if you really want to deploy this, you will need to determine what video format your input cameras deliver. Typically, stereo applications rely on sensors interfaced directly to the FPGA (e.g., MIPI CSI-2 or an LVDS interface).
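As a small illustration of option 3: the pixelcontrol bus carries five boolean control signals (hStart, hEnd, vStart, vEnd, valid) alongside the pixel data. In Simulink you would split and rebuild the bus with the Pixel Control Bus Selector and Pixel Control Bus Creator blocks from Vision HDL Toolbox; the MATLAB sketch below shows the same idea with arbitrary flag values.

% Build a pixelcontrol structure from its five flags (values are arbitrary)
ctrl = pixelcontrolstruct(true, false, true, false, true);

% Split it back into the five control lines that would become plain
% scalar ports on a generated IP core
[hStart, hEnd, vStart, vEnd, valid] = pixelcontrolsignals(ctrl);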

More Answers (1)

Yuval Levental on 18 Apr 2021
Answering my own question: I recently learned that the Bit Concat block can concatenate two pixel streams into one pixel stream, provided they are integer streams. The combined stream can be split back into two pixel streams with the Bit Slice block once it enters the subsystem (a small numeric sketch follows the links below).
https://www.mathworks.com/help/hdlcoder/ref/bitconcat.html
https://www.mathworks.com/help/hdlcoder/ref/bitslice.html
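To make the packing and unpacking concrete, here is a small MATLAB sketch of the arithmetic that the Bit Concat and Bit Slice blocks perform for a pair of 8-bit pixel streams; the pixel values and widths are only illustrative.

% Two 8-bit pixels taken from the left and right streams
pixL = uint8(200);
pixR = uint8(55);

% "Bit Concat": pack both pixels into one 16-bit word,
% left pixel in the upper byte, right pixel in the lower byte
packed = bitor(bitshift(uint16(pixL), 8), uint16(pixR));

% "Bit Slice": recover the two pixels inside the subsystem
pixL_out = uint8(bitshift(packed, -8));         % upper 8 bits -> 200
pixR_out = uint8(bitand(packed, uint16(255)));  % lower 8 bits -> 55

This assumes the two camera streams share the same timing, so a single pixelcontrol bus can accompany the packed stream.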

Release

R2020b
