Specifying the Color Space

For most image acquisition devices, the video format of the video stream determines the color space of the acquired image data, that is, the way color information is represented numerically.

For example, many devices represent colors as RGB values. In this color space, colors are represented as a combination of various intensities of red, green, and blue. Another color space, widely used for digital video, is the YCbCr color space. In this color space, luminance (brightness or intensity) information is stored as a single component (Y). Chrominance (color) information is stored as two color-difference components (Cb and Cr). Cb represents the difference between the blue component and a reference value. Cr represents the difference between the red component and a reference value.

The toolbox can return image data in grayscale, RGB, and YCbCr. To specify the color representation of the image data, set the value of the ReturnedColorSpace property. To display image frames using the image, imagesc, or imshow functions, the data must use the RGB color space. Another MathWorks® product, the Image Processing Toolbox™ software, includes functions that convert YCbCr data to RGB data, and vice versa.
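As a sketch of that conversion round trip, assuming Image Processing Toolbox is installed and that frame is an RGB image already acquired with getsnapshot:

    % Assumes frame is an M-by-N-by-3 RGB image, for example one
    % returned by getsnapshot, and that Image Processing Toolbox
    % is installed.
    ycc = rgb2ycbcr(frame);   % convert RGB data to YCbCr
    rgb = ycbcr2rgb(ycc);     % convert back to RGB for display
    imshow(rgb)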

Note

Some devices that claim to support the YUV color space actually support the YCbCr color space. YUV is similar to YCbCr but not identical. The difference between YUV and YCbCr is the scaling factor applied to the result. YUV refers to a particular scaling factor used in composite NTSC and PAL formats. In most cases, you can specify the YCbCr color space for devices that support YUV.

You can determine your device's default color space by querying vid.ReturnedColorSpace, where vid is the video input object, as shown in step 2 of the example below. There may be situations when you want to change the color space. The example shows a case where the default color space is rgb, and you change it to grayscale (step 3).

The following example illustrates how to specify the color space of the returned image data.

  1. Create an image acquisition object — This example creates a video input object for a generic Windows® image acquisition device. To run this example on your system, use the imaqhwinfo function to get the object constructor for your image acquisition device and substitute that syntax for the following code.

    vid = videoinput('winvideo',1);
  2. View the default color space used for the data — The value of the ReturnedColorSpace property indicates the color space of the image data.

    vid.ReturnedColorSpace
    
    ans = 
    
    rgb
  3. Modify the color space used for the data — To change the color space of the returned image data, set the value of the ReturnedColorSpace property.

    vid.ReturnedColorSpace = 'grayscale';
    vid.ReturnedColorSpace
    
    ans = 
    
    grayscale
  4. Clean up — Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid

Converting Bayer Images

You can use the ReturnedColorSpace and BayerSensorAlignment properties to control Bayer demosaicing.

If your camera uses Bayer filtering, the toolbox can return either the raw Bayer pattern or demosaiced color data. When you set the ReturnedColorSpace property to 'bayer', the Image Acquisition Toolbox™ software demosaics the Bayer pattern returned by the hardware, interpolating the Bayer pattern encoded image into a standard RGB image.

In order to perform the demosaicing, the toolbox needs to know the pixel alignment of the sensor. This is the order of the red, green, and blue sensors and is normally specified by describing the four pixels in the upper-left corner of the sensor. It is the band sensitivity alignment of the pixels as interpreted by the camera's internal hardware. You must get this information from the camera's documentation and then specify the value for the alignment.
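For example, if the camera's documentation states that the upper-left 2-by-2 block of the sensor reads green, red / blue, green, you would configure the object as follows (a sketch; the valid BayerSensorAlignment values are 'gbrg', 'grbg', 'bggr', and 'rggb'):

    % Sketch: configure demosaicing before acquiring Bayer data.
    % 'grbg' means the upper-left 2-by-2 block of sensor pixels is
    %    G R
    %    B G
    % Check your camera's documentation for the correct value.
    vid.ReturnedColorSpace = 'bayer';
    vid.BayerSensorAlignment = 'grbg';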

If your camera can return Bayer data, the toolbox can convert it to RGB data automatically, or you can perform the conversion yourself. The following two examples illustrate both use cases.

Manual Conversion

The camera in this example has a Bayer sensor. The GigE Vision™ standard allows cameras to inform applications that the data is Bayer encoded and provides enough information for the application to convert the Bayer pattern into a color image. In this case, the toolbox automatically converts the Bayer pattern into an RGB image.

  1. Create a video object vid using the GigE Vision adaptor and the designated video format.

    vid = videoinput('gige', 1, 'BayerGB8_640x480');
  2. View the default color space used for the data.

    vid.ReturnedColorSpace
    
    ans = 
    
    rgb
  3. Create a one-frame image img using the getsnapshot function.

    img = getsnapshot(vid);
  4. View the size of the acquired image.

    size(img)
    
    ans = 
    
    480  640  3 
  5. Sometimes you might not want the toolbox to convert the Bayer pattern into a color image automatically. For example, a number of different demosaicing algorithms exist, and you might want to use one other than the toolbox's, or you might want to process the raw data further before converting it into a color image.

    % Set the color space to grayscale.
    vid.ReturnedColorSpace = 'grayscale';
    
    % Acquire another image frame.
    img = getsnapshot(vid);
    
    % Now check the size of the new frame acquired using grayscale.
    size(img)
    
    ans = 
    
    480  640 

    Compare the size output in steps 4 and 5: the RGB image has three bands, while the raw (grayscale) image has only one.

  6. You can optionally use the demosaic function in the Image Processing Toolbox to convert Bayer patterns into color images.

    % Use the demosaic function to convert the raw Bayer image img
    % into an RGB image colorImage. The 'gbrg' argument specifies
    % the sensor alignment.
    colorImage = demosaic(img, 'gbrg');
    
    % Now check the size of the new color image.
    size(colorImage)
    
    ans = 
    
    480  640  3
  7. Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid

Automatic Conversion

The camera in this example returns data that is a Bayer mosaic, but the toolbox cannot detect this, because the DCAM standard provides no way for the camera to communicate it to software applications. You must determine it from the camera's specifications or manual. The toolbox can convert the Bayer encoded data to RGB data automatically, but you must configure it to do so.

  1. Create a video object vid using the DCAM adaptor and the designated video format for raw data.

    vid = videoinput('dcam', 1, 'F7_RAW8_640x480');
  2. View the default color space used for the data.

    vid.ReturnedColorSpace
    
    ans = 
    
    grayscale
  3. Create a one-frame image img using the getsnapshot function.

    img = getsnapshot(vid);
  4. View the size of the acquired image.

    size(img)
    
    ans = 
    
    480  640 
  5. The value of the ReturnedColorSpace property is grayscale because Bayer data is single-banded and the toolbox doesn't yet know that it needs to decode the data. Setting the ReturnedColorSpace property to 'bayer' indicates that the toolbox should decode the data.

    % Set the color space to Bayer.
    vid.ReturnedColorSpace = 'bayer';
  6. In order to properly decode the data, the toolbox also needs to know the alignment of the Bayer filter array. This should be in the camera documentation. You can then use the BayerSensorAlignment property to set the alignment.

    % Set the alignment.
    vid.BayerSensorAlignment = 'grbg';

    The getdata and getsnapshot functions will now return color data.

    % Acquire another image frame.
    img = getsnapshot(vid);
    
    % Now check the size of the new frame acquired returning color data.
    size(img)
    
    ans = 
    
    480  640  3

  7. Always remove image acquisition objects from memory, and the variables that reference them, when you no longer need them.

    delete(vid)
    clear vid