Authoring a Reference Design for Live Camera Integration with Deep Learning Processor IP Core

This example shows how to create an HDL Coder™ reference design that contains a generated deep learning processor IP core. The reference design receives live camera input and uses a deployed series network to classify the objects in the camera input. This figure is a high-level architectural diagram of the reference design as implemented on the Xilinx™ Zynq™ UltraScale+™ MPSoC ZCU102 Evaluation Kit.

The user IP core block (a behavioral sketch of the preprocessing steps follows this list):

  • Extracts the region of interest (ROI) based on ROI dimensions from the processing system (PS) (ARM).

  • Performs downsampling on the input image.

  • Zero-centers the input image.

  • Transfers the preprocessed image to the external DDR memory.

  • Triggers the deep learning processor IP core.

  • Notifies the PS (ARM) processor.
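
This behavioral MATLAB sketch illustrates the first three preprocessing steps. The 227-by-227 network input size, the ROI variables, and trainedMean are illustrative assumptions, not values defined by this example:

% Behavioral model of the user IP core preprocessing (values illustrative)
roi = frame(y:y+h-1, x:x+w-1, :);   % extract the ROI from the PS-supplied dimensions
img = imresize(roi, [227 227]);     % downsample to the assumed network input size
img = single(img) - trainedMean;    % zero-center using the training-set mean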

The deep learning processor IP core accesses the preprocessed inputs, performs the object classification, and loads the output results back into the external DDR memory.

The PS (ARM):

  • Takes the ROI dimensions and passes them to the user IP core.

  • Performs post-processing on the image data.

  • Annotates the object classification results from the deep learning processor IP core on the output video frame.

You can also use MATLAB® to retrieve the classification results and verify the generated deep learning processor IP core. The user DUT for this reference design is the preprocessing algorithm (User IP Core). You can design the preprocessing DUT algorithm in Simulink®, generate the DUT IP core, and integrate the generated DUT IP core into the larger system that contains the deep learning processor IP core. To learn how to generate the DUT IP core, see Run a Deep Learning Network on FPGA with Live Camera Input.

Generate Deep Learning Processor IP Core

Follow these steps to configure and generate the deep learning processor IP core for the reference design.

1. Create a custom deep learning processor configuration.

hPC = dlhdl.ProcessorConfig

To learn more about the deep learning processor architecture, see Deep Learning Processor IP Core Architecture. To get information about the custom processor configuration parameters and how to modify them, see getModuleProperty and setModuleProperty.
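
For example, this hedged sketch adjusts the convolution and fully connected module parallelism with setModuleProperty; the thread-number values are illustrative, not tuned for this design:

hPC.setModuleProperty('conv', 'ConvThreadNumber', 16);  % parallel convolution threads (value illustrative)
hPC.setModuleProperty('fc', 'FCThreadNumber', 4);       % parallel fully connected threads (value illustrative)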

2. Generate the Deep Learning Processor IP core.

To learn how to generate the custom deep learning processor IP, see Generate Custom Processor IP. The deep learning processor IP core is generated by using the HDL Coder™ IP core generation workflow. For more information, see Custom IP Core Generation (HDL Coder).

dlhdl.buildProcessor(hPC)
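
dlhdl.buildProcessor also accepts name-value arguments that control the generated project. As a hedged sketch, assuming your release supports the ProjectFolder and ProcessorName arguments (both names are assumptions here; check the dlhdl.buildProcessor reference page):

% Assumed name-value arguments; values illustrative
dlhdl.buildProcessor(hPC, 'ProjectFolder', 'dl_prj', 'ProcessorName', 'myDLProcessor')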

The generated IP core files are located at cwd\dlhdl_prj\ipcore, where cwd is your current working directory. The ipcore folder contains an HTML report located at cwd\dlhdl_prj\ipcore\DUT_ip_v1_0\doc.

The HTML report contains a description of the deep learning processor IP core, instructions for using the core and integrating the core into your Vivado™ reference design, and a list of AXI4 registers. You will need the AXI4 register list to enter addresses into the Vivado™ Address Mapping tool. For more information about the AXI4 registers, see Deep Learning Processor IP Core Report.

Integrate the Generated Deep Learning Processor IP Core into the Reference Design

Insert the generated deep learning processor IP core into your reference design. After inserting it, you must:

  • Connect the generated deep learning processor IP core AXI4 slave interface to an AXI4 master device such as a JTAG AXI master IP core or a Zynq™ processing system (PS). Use the AXI4 master device to communicate with the deep learning processor IP core.

  • Connect the vendor-provided external memory interface IP core to the three AXI4 master interfaces of the generated deep learning processor IP core.

The deep learning processor IP core uses the external memory interface to access the external DDR memory. The image shows the deep learning processor IP core integrated into the Vivado™ reference design and connected to the DDR memory interface generator (MIG) IP.

Connect the External Memory Interface Generator

In your Vivado™ reference design, add an external memory interface generator (MIG) block and connect the generated deep learning processor IP core to the MIG module through an AXI interconnect module. The image shows the high-level architectural design and the Vivado™ reference design implementation.

Create the Reference Design Definition File

The following code shows the contents of the ZCU102 reference design definition file plugin_rd.m for the above Vivado™ reference design. For more details on how to define and register a custom board, see Define Custom Board and Reference Design for Zynq Workflow (HDL Coder).

function hRD = plugin_rd(varargin)
% Parse the configuration for the target board and design
config = ZynqVideoPSP.common.parse_config( ...
    'ToolVersion', '2019.1', ...
    'Board', 'zcu102', ...
    'Design', 'visionzynq_base', ...
    'ColorSpace', 'RGB' ...
);
% Construct the reference design object
hRD = hdlcoder.ReferenceDesign('SynthesisTool', 'Xilinx Vivado');
hRD.BoardName = ZynqVideoPSP.ZCU102Hdmicam.BoardName();
hRD.ReferenceDesignName = 'HDMI RGB with DL Processor';
% Tool information
hRD.SupportedToolVersion = {'2019.1'};
...
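
The elided portion of plugin_rd.m typically adds the custom block design Tcl file and registers the AXI4 interfaces that the IP core generation workflow uses. This hedged sketch shows what that can look like; the Tcl file name, interconnect port, base address, and address space are illustrative assumptions for a ZCU102-style design, not values from this example:

% Register the custom Vivado block design (file name illustrative)
hRD.addCustomVivadoDesign('CustomBlockDesignTcl', 'system_top.tcl');
% Expose an AXI4 slave interface for the user DUT (values illustrative)
hRD.addAXI4SlaveInterface( ...
    'InterfaceConnection', 'axi_interconnect_0/M02_AXI', ...
    'BaseAddress', '0xA0000000', ...
    'MasterAddressSpace', 'zynq_ultra_ps_e_0/Data');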

Verify the Reference Design

After creating the reference design, use the HDL Coder™ IP core generation workflow to generate the bitstream and program the ZCU102 board. You can then use MATLAB® and a dlhdl.Workflow object to verify the deep learning processor IP core, or you can use the HDL Coder™ workflow to prototype the entire system. To verify the reference design, see Run a Deep Learning Network on FPGA with Live Camera Input.
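
This hedged MATLAB sketch shows one way to verify the deployed processor from MATLAB; the trainedNet variable, the dlprocessor.bit bitstream name, and the Ethernet interface are assumptions for illustration:

% Connect to the board and deploy the network onto the DL processor IP core
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');
hW = dlhdl.Workflow('Network', trainedNet, ...
    'Bitstream', 'dlprocessor.bit', 'Target', hTarget);
hW.compile;
hW.deploy;
% Classify one preprocessed frame and profile the result
[prediction, speed] = hW.predict(img, 'Profile', 'on');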
