Image Acquisition Toolbox™
User's Guide
R2021a
Contents

Getting Started
1
Image Acquisition Toolbox Product Description . . . . . 1-2

Introduction
2
Toolbox Components Overview . . . . . 2-2
Introduction . . . . . 2-2
Toolbox Components . . . . . 2-3
The Image Processing Toolbox Software Required to Use the Image Acquisition Toolbox Software . . . . . 2-3
The Image Acquisition Tool (GUI) . . . . . 2-4
Supported Devices . . . . . 2-4
Previewing Data . . . . . 2-7
Introduction . . . . . 2-7
Opening a Video Preview Window . . . . . 2-7
Stopping the Preview Video Stream . . . . . 2-8
Closing a Video Preview Window . . . . . 2-9
Previewing Data in Custom GUIs . . . . . 2-9
Performing Custom Processing of Previewed Data . . . . . 2-11

Image Acquisition Support Packages
4
Image Acquisition Support Packages for Hardware Adaptors . . . . . 4-2

Connecting to Hardware
5
Getting Hardware Information . . . . . 5-2
Getting Hardware Information . . . . . 5-2
Determining the Device Adaptor Name . . . . . 5-2
Determining the Device ID . . . . . 5-3
Determining Supported Video Formats . . . . . 5-4
Use Advanced Property Support in the GigE Vision and GenICam GenTL Interfaces . . . . . 5-18
Advanced Property Support . . . . . 5-18
Change Properties While the Acquisition Is Running . . . . . 5-18
Dynamic Accessibility and Readability . . . . . 5-19
Dynamic Constraints . . . . . 5-20
Grouped Selector Properties . . . . . 5-21
Deleting Image Acquisition Objects . . . . . 5-28

Working with Acquired Image Data
7
Image Acquisition Overview . . . . . 7-2

Using the From Video Device Block in Simulink
9
Open Image Acquisition Toolbox Block Library . . . . . 9-2
From the Command Line . . . . . 9-2
From the Simulink Library Browser . . . . . 9-2
Troubleshooting . . . . . 10-13
Set GigE Properties . . . . . 11-6
Use GigE Commands . . . . . 11-7
Using VideoDevice System Object to Acquire Frames . . . . . 14-4
Kinect for Windows Metadata . . . . . 14-5

Troubleshooting
16
Troubleshooting Overview . . . . . 16-2
GenICam GenTL Hardware . . . . . 16-24
Device Discovery . . . . . 16-24
Calculating the Length of a Pendulum in Motion . . . . . 17-49

Functions
18

Properties
19

Blocks
20
1
Getting Started
The best way to learn about Image Acquisition Toolbox capabilities is to look at a simple example.
This chapter introduces the toolbox and illustrates the basic steps to create an image acquisition
application by implementing a simple motion detection application. The example cross-references
other sections that provide more details about relevant concepts.
Image Acquisition Toolbox provides functions and blocks for connecting cameras to MATLAB® and
Simulink®. It includes a MATLAB app that lets you interactively detect and configure hardware
properties. You can then generate equivalent MATLAB code to automate your acquisition in future
sessions. The toolbox enables acquisition modes such as processing in-the-loop, hardware triggering,
background acquisition, and synchronizing acquisition across multiple devices.
Image Acquisition Toolbox supports all major standards and hardware vendors, including USB3
Vision, GigE Vision®, and GenICam™ GenTL. You can connect to machine vision cameras and frame
grabbers, as well as high-end scientific and industrial devices.
Product Overview
In this section...
“Introduction” on page 1-3
“Installation and Configuration Notes” on page 1-3
“The Image Processing Toolbox Software Required to Use the Image Acquisition Toolbox Software”
on page 1-4
“Related Products” on page 1-4
“Supported Hardware” on page 1-4
Introduction
The Image Acquisition Toolbox software is a collection of functions that extend the capability of the
MATLAB numeric computing environment. The toolbox supports a wide range of image acquisition
operations, including:
• Acquiring images through many types of image acquisition devices, from professional grade frame
grabbers to USB-based webcams
• Viewing a preview of the live video stream
• Triggering acquisitions (includes external hardware triggers)
• Configuring callback functions that execute when certain events occur
• Bringing the image data into the MATLAB workspace
Many of the toolbox functions are MATLAB files. You can view the MATLAB code for these functions
using this statement:
type function_name
You can extend Image Acquisition Toolbox capabilities by writing your own MATLAB files, or by using
the toolbox in combination with other toolboxes, such as the Image Processing Toolbox™ software
and the Data Acquisition Toolbox™ software.
The Image Acquisition Toolbox software also includes a Simulink block, called From Video Device,
that you can use to bring live video data into a model.
Installation and Configuration Notes

To determine if the Image Acquisition Toolbox software is installed on your system, type this command at the MATLAB prompt.

ver
When you enter this command, MATLAB displays information about the version of MATLAB you are
running, including a list of all toolboxes installed on your system and their version numbers.
For information about installing the toolbox, see the MATLAB Installation Guide.
For the most up-to-date information about system requirements, see the system requirements page,
available in the products area at the MathWorks Web site (www.mathworks.com).
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons.
The Image Processing Toolbox Software Required to Use the Image Acquisition Toolbox Software

The Image Acquisition Toolbox product, including the Image Acquisition Tool, requires you to have a license for the Image Processing Toolbox product starting in R2008b.
If you already have the Image Processing Toolbox product, you do not need to do anything.
If you do not have the Image Processing Toolbox product, the Image Acquisition Toolbox software
R2008a and earlier will continue to work. If you want to use R2008b or future releases, and you have
a current active license for the Image Acquisition Toolbox software, you can download the Image
Processing Toolbox product for free. New customers will need to purchase both products to use the
Image Acquisition Toolbox product.
Related Products
MathWorks provides several products that are relevant to the kinds of tasks you can perform with the
Image Acquisition Toolbox software and that extend the capabilities of MATLAB. For information
about these related products, see www.mathworks.com/products/imaq/related.html.
Supported Hardware
The list of hardware that the Image Acquisition Toolbox software supports can change in each
release, because hardware support is frequently added. The MathWorks Web site is the best place to
check for the most up-to-date listing.
To see the full list of hardware that the toolbox supports, visit the Image Acquisition Toolbox product
page at the MathWorks Web site www.mathworks.com/products/imaq and click the Supported
Hardware link.
Image Acquisition Tool (GUI)
To open the tool, type imaqtool at the MATLAB command line, or select Image Acquisition on the
Apps tab in MATLAB. The tool has extensive Help in the desktop. As you click in different panes of
the user interface, the relevant Help appears in the Image Acquisition Tool Help pane.
Most of the User's Guide describes performing tasks using the toolbox via the MATLAB command
line. To learn how to use the desktop tool, see “Getting Started with the Image Acquisition Tool” on
page 3-5.
Getting Started Doing Image Acquisition Programmatically

In this section...
“Overview” on page 1-6
“Step 1: Install Your Image Acquisition Device” on page 1-7
“Step 2: Retrieve Hardware Information” on page 1-7
“Step 3: Create a Video Input Object” on page 1-9
“Step 4: Preview the Video Stream (Optional)” on page 1-10
“Step 5: Configure Object Properties (Optional)” on page 1-11
“Step 6: Acquire Image Data” on page 1-14
“Step 7: Clean Up” on page 1-16
Overview
This section illustrates the basic steps required to create an image acquisition application by
implementing a simple motion detection application. The application detects movement in a scene by
performing a pixel-to-pixel comparison in pairs of incoming image frames. If nothing moves in the
scene, pixel values remain the same in each frame. When something moves in the image, the
application displays the pixels that have changed values.
The example highlights how you can use the Image Acquisition Toolbox software to create a working
image acquisition application with only a few lines of code.
Note To run the sample code in this example, you must have an image acquisition device connected
to your system. The device can be a professional grade image acquisition device, such as a frame
grabber, or a generic Microsoft® Windows® image acquisition device, such as a webcam. The code
can be used with various types of devices with only minor changes.
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons.
To use the Image Acquisition Toolbox software to acquire image data, you must perform the following
basic steps.
Step 1 on page 1-7: Install and configure your image acquisition device
Step 2 on page 1-7: Retrieve information that uniquely identifies your image acquisition device to the Image Acquisition Toolbox software
Step 3 on page 1-9: Create a video input object
Step 4 on page 1-10: Preview the video stream (Optional)
Step 5 on page 1-11: Configure image acquisition object properties (Optional)
Step 6 on page 1-14: Acquire image data
Step 7 on page 1-16: Clean up
Step 1: Install Your Image Acquisition Device
Generic Windows image acquisition devices, such as webcams and digital video camcorders, typically
do not require the installation of a frame grabber board. You connect these devices directly to your
computer via a USB or FireWire port.
After installing and configuring your image acquisition hardware, start MATLAB on your computer by
double-clicking the icon on your desktop. You do not need to perform any special configuration of
MATLAB to perform image acquisition.
Step 2: Retrieve Hardware Information
To connect to a device, the toolbox needs several pieces of information: the name of the adaptor, the device ID, and the video format. The following table lists this information. You use the imaqhwinfo function to retrieve each item.

Adaptor name — An adaptor is the software the toolbox uses to communicate with an image acquisition device via its device driver. See “Determining the Adaptor Name” on page 1-8.

Device ID — The device ID is a number that the adaptor assigns to uniquely identify each image acquisition device with which it can communicate. See “Determining the Device ID” on page 1-8.

Note Specifying the device ID is optional; the toolbox uses the first available device ID as the default.

Video format — The video format specifies the image resolution (width and height) and other aspects of the video stream. Image acquisition devices typically support multiple video formats. See “Determining the Supported Video Formats” on page 1-9 for more information.

Note Specifying the video format is optional; the toolbox uses one of the supported formats as the default.
Determining the Adaptor Name
To determine the name of the adaptor, enter the imaqhwinfo function at the MATLAB prompt
without any arguments.
imaqhwinfo

ans =

    InstalledAdaptors: {'dcam'  'winvideo'}
In the data returned by imaqhwinfo, the InstalledAdaptors field lists the adaptors that are
available on your computer. In this example, imaqhwinfo found two adaptors available on the
computer: 'dcam' and 'winvideo'. The listing on your computer might contain only one adaptor
name. Select the adaptor name that provides access to your image acquisition device. For more
information, see “Determining the Device Adaptor Name” on page 5-2.
Determining the Device ID
To find the device ID of a particular image acquisition device, enter the imaqhwinfo function at the
MATLAB prompt, specifying the name of the adaptor as the only argument. (You found the adaptor
name in the first call to imaqhwinfo, described in “Determining the Adaptor Name” on page 1-8.) In
the data returned, the DeviceIDs field is a cell array containing the device IDs of all the devices
accessible through the specified adaptor.
Note This example uses the DCAM adaptor. You should substitute the name of the adaptor you would
like to use.
info = imaqhwinfo('dcam')
info =
Determining the Supported Video Formats
To determine which video formats an image acquisition device supports, look in the DeviceInfo
field of the data returned by imaqhwinfo. The DeviceInfo field is a structure array where each
structure provides information about a particular device. To view the device information for a
particular device, you can use the device ID as a reference into the structure array. Alternatively, you
can view the information for a particular device by calling the imaqhwinfo function, specifying the
adaptor name and device ID as arguments.
To get the list of the video formats supported by a device, look at SupportedFormats field in the
device information structure. The SupportedFormats field is a cell array of character vectors where
each character vector is the name of a video format supported by the device. For more information,
see “Determining Supported Video Formats” on page 5-4.
dev_info = imaqhwinfo('dcam',1)
dev_info =
DefaultFormat: 'F7_Y8_1024x768'
DeviceFileSupported: 0
DeviceName: 'XCD-X700 1.05'
DeviceID: 1
VideoInputConstructor: 'videoinput('dcam', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('dcam', 1)'
SupportedFormats: {'F7_Y8_1024x768' 'Y8_1024x768'}
Step 3: Create a Video Input Object
To create a video input object, use the videoinput function at the MATLAB prompt. The
DeviceInfo structure returned by the imaqhwinfo function contains the default videoinput
function syntax for a device in the VideoInputConstructor field. For more information about the device
information structure, see “Determining the Supported Video Formats” on page 1-9.
The following example creates a video input object for the DCAM adaptor. Substitute the adaptor
name of the image acquisition device available on your system.
vid = videoinput('dcam',1,'Y8_1024x768')
The videoinput function accepts three arguments: the adaptor name, device ID, and video format.
You retrieved this information in step 2. The adaptor name is the only required argument; the
videoinput function can use defaults for the device ID and video format. To determine the default
video format, look at the DefaultFormat field in the device information structure. See “Determining
the Supported Video Formats” on page 1-9 for more information.
Instead of specifying the video format, you can optionally specify the name of a device configuration
file, also known as a camera file. Device configuration files are typically supplied by frame grabber
vendors. These files contain all the required configuration settings to use a particular camera with
the device. See “Using Device Configuration Files (Camera Files)” on page 5-9 for more
information.
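For illustration, passing a camera file in place of the video format might look like this (the adaptor name and file name are hypothetical):

```matlab
vid = videoinput('matrox', 1, 'myCameraSettings.dcf');  % hypothetical adaptor and camera file
```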
To view a summary of the video input object you just created, enter the variable name vid at the
MATLAB command prompt. The summary information displayed shows many of the characteristics of
the object, such as the number of frames that will be captured with each trigger, the trigger type, and
the current state of the object. You can use video input object properties to control many of these
characteristics. See “Step 5: Configure Object Properties (Optional)” on page 1-11 for more
information.
vid
Step 4: Preview the Video Stream (Optional)
Note This step is optional at this point in the procedure because you can preview a video stream at
any time after you create a video input object.
To preview the video stream in this example, enter the preview function at the MATLAB prompt,
specifying the video input object created in step 3 as an argument.
preview(vid)
The preview function opens a Video Preview figure window on your screen containing the live video
stream. To stop the stream of live video, you can call the stoppreview function. To restart the
preview stream, call preview again on the same video input object.
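The preview, stop, and restart sequence can be sketched as:

```matlab
preview(vid)       % open the Video Preview window with the live stream
stoppreview(vid)   % freeze the live stream; the window stays open
preview(vid)       % restart the stream in the same window
```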
While a preview window is open, the video input object sets the value of the Previewing property to
'on'. If you change characteristics of the image by setting image acquisition object properties, the
image displayed in the preview window reflects the change.
The following figure shows the Video Preview window for the example.
To close the Video Preview window, click the Close button in the title bar or use the closepreview
function, specifying the video input object as an argument.
closepreview(vid)
Calling closepreview without any arguments closes all open Video Preview windows.
Step 5: Configure Object Properties (Optional)

This section

• Describes the types of image acquisition objects on page 1-12 used by the toolbox
• Describes how to view all the properties on page 1-12 supported by these objects, with their current values
• Describes how to set the values on page 1-13 of object properties
The toolbox uses two types of objects to represent the connection with an image acquisition device:
A video input object represents the connection between MATLAB and a video acquisition device at a
high level. The properties supported by the video input object are the same for every type of device.
You created a video input object using the videoinput function in step 3 on page 1-9.
When you create a video input object, the toolbox automatically creates one or more video source
objects associated with the video input object. Each video source object represents a collection of one
or more physical data sources that are treated as a single entity. The number of video source objects
the toolbox creates depends on the device and the video format you specify. At any one time, only one
of the video source objects, called the selected source, can be active. This is the source used for
acquisition. For more information about these image acquisition objects, see “Creating Image
Acquisition Objects” on page 5-6.
To view a complete list of all the properties supported by a video input object or a video source
object, use the get function. To list the properties of the video input object created in step 3, enter
this code at the MATLAB prompt.
get(vid)
The get function lists all the properties of the object with their current values.
General Settings:
DeviceID = 1
DiskLogger = []
DiskLoggerFrameCount = 0
EventLog = [1x0 struct]
FrameGrabInterval = 1
FramesAcquired = 0
FramesAvailable = 0
FramesPerTrigger = 10
Logging = off
LoggingMode = memory
Name = Y8_1024x768-dcam-1
NumberOfBands = 1
Previewing = on
ReturnedColorSpace = grayscale
ROIPosition = [0 0 1024 768]
Running = off
Tag =
Timeout = 10
Type = videoinput
UserData = []
VideoFormat = Y8_1024x768
VideoResolution = [1024 768]
.
.
.
To view the properties of the currently selected video source object associated with this video input
object, use the getselectedsource function in conjunction with the get function. The
getselectedsource function returns the currently active video source. To list the properties of the
currently selected video source object associated with the video input object created in step 3, enter
this code at the MATLAB prompt.
get(getselectedsource(vid))
The get function lists all the properties of the object with their current values.
Note Video source object properties are device specific. The list of properties supported by the
device connected to your system might differ from the list shown in this example.
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = input1
Tag =
Type = videosource
To set the value of a video input object property or a video source object property, you reference the
object property as you would a field in a structure, using dot notation.
Some properties are read only; you cannot set their values. These properties typically provide
information about the state of the object. Other properties become read only when the object is
running. To view a list of all the properties you can set, use the set function, specifying the object as
the only argument.
To implement continuous image acquisition, the example sets the TriggerRepeat property to Inf.
To set this property, enter this code at the MATLAB prompt.
vid.TriggerRepeat = Inf;
To help the application keep up with the incoming video stream while processing data, the example
sets the FrameGrabInterval property to 5. This specifies that the object acquire every fifth frame
in the video stream. (You might need to experiment with the value of the FrameGrabInterval
property to find a value that provides the best response with your image acquisition setup.) This
example shows how you can set the value of an object property by referencing the property as you
would reference a field in a MATLAB structure.
vid.FrameGrabInterval = 5;
To set the value of a video source object property, you must first use the getselectedsource
function to retrieve the object. (You can also get the selected source by searching the video input
object Source property for the video source object that has the Selected property set to 'on'.)
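As a sketch, the equivalent lookup through the Source property might read (the number of sources is device specific):

```matlab
sources = vid.Source;                                       % array of video source objects
vid_src = sources(strcmp(get(sources, 'Selected'), 'on'));  % the currently selected source
```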
vid_src = getselectedsource(vid);
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Step 6: Acquire Image Data

After you create the video input object and configure its properties, you can acquire data. Acquiring image data typically involves these steps:
• Starting the video input object — You start an object by calling the start function. Starting an
object prepares the object for data acquisition. For example, starting an object locks the values of
certain object properties (they become read only). Starting an object does not initiate the
acquiring of image frames, however. The initiation of data logging depends on the execution of a
trigger.
The following example calls the start function to start the video input object. Objects stop when
they have acquired the requested number of frames. Because the example specifies a continuous
acquisition, you must call the stop function to stop the object.

start(vid)
• Triggering the acquisition — To acquire data, a video input object must execute a trigger.
Triggers can occur in several ways, depending on how the TriggerType property is configured.
For example, if you specify an immediate trigger, the object executes a trigger automatically,
immediately after it starts. If you specify a manual trigger, the object waits for a call to the
trigger function before it initiates data acquisition. For more information, see “Acquiring Image
Data” on page 6-2.
In the example, because the TriggerType property is set to 'immediate' (the default) and the
TriggerRepeat property is set to Inf, the object automatically begins executing triggers and
acquiring frames of data, continuously.
• Bringing data into the MATLAB workspace — The toolbox stores acquired data in a memory
buffer, a disk file, or both, depending on the value of the video input object LoggingMode
property. To work with this data, you must bring it into the MATLAB workspace. To bring multiple
frames into the workspace, use the getdata function. Once the data is in the MATLAB
workspace, you can manipulate it as you would any other data. For more information, see
“Working with Image Data in MATLAB Workspace” on page 7-9.
Note The toolbox provides a convenient way to acquire a single frame of image data that doesn't
require starting or triggering the object. See “Bringing a Single Frame into the Workspace” on page
7-7 for more information.
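That single-frame shortcut is the getsnapshot function:

```matlab
img = getsnapshot(vid);   % returns one frame; no start or trigger required
imshow(img)               % requires Image Processing Toolbox
```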
To run the example, enter the following code at the MATLAB prompt. The example loops until a
specified number of frames have been acquired. In each loop iteration, the example calls getdata to
bring the two most recent frames into the MATLAB workspace. To detect motion, the example
subtracts one frame from the other, creating a difference image, and then displays it. Pixels that have
changed values in the acquired frames will have nonzero values in the difference image.
The getdata function removes frames from the memory buffer when it brings them into the MATLAB
workspace. It is important to move frames from the memory buffer into the MATLAB workspace in a
timely manner. If you do not move the acquired frames from memory, you can quickly exhaust all the
memory available on your system.
Note The example uses functions in the Image Processing Toolbox software.
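A minimal sketch of the acquisition-and-display loop described above, assuming the video input object vid has been configured and started as in the earlier steps (the 100-frame limit is illustrative):

```matlab
figure                                   % create the display window
while vid.FramesAcquired <= 100          % loop until about 100 frames acquired
    data = getdata(vid, 2);              % bring two frames into the workspace
    diff_im = imabsdiff(data(:,:,:,1), data(:,:,:,2));  % difference image
    imshow(diff_im)                      % display pixels that changed
    drawnow                              % force the figure window to update
end
```

The stop call that follows ends the continuous acquisition.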
stop(vid)
Note that drawnow is called after imshow to ensure that the figure window is updated. This is good
practice when updating a GUI or figure inside a loop.
The following figure shows how the example displays detected motion. In the figure, areas
representing movement are displayed.
In the example, the getdata function returns the image frames in the variable data as a 480-by-640-
by-1-by-10 array of 8-bit data (uint8).
whos
Name Size Bytes Class
The height and width of the array are primarily determined by the video resolution of the video
format. However, you can use the ROIPosition property to specify values that supersede the video
resolution. Devices typically express video resolution as column-by-row; MATLAB expresses matrix
dimensions as row-by-column.
The third dimension represents the number of color bands in the image. Because the example data is
a grayscale image, the third dimension is 1. For RGB formats, image frames have three bands: red is
the first, green is the second, and blue is the third. The fourth dimension represents the number of
frames that have been acquired from the video stream.
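For example, to extract one frame from such an array and display it (using the data variable from this example):

```matlab
frame5 = data(:, :, :, 5);   % the fifth acquired frame, 480-by-640 uint8
imshow(frame5)
```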
Step 7: Clean Up
When you finish using your image acquisition objects, you can remove them from memory and clear
the MATLAB workspace of the variables associated with these objects.
delete(vid)
clear
close(gcf)
For more information, see “Deleting Image Acquisition Objects” on page 5-28.
2
Introduction
This chapter describes the Image Acquisition Toolbox software and its components.
Toolbox Components Overview
In this section...
“Introduction” on page 2-2
“Toolbox Components” on page 2-3
“The Image Processing Toolbox Software Required to Use the Image Acquisition Toolbox Software”
on page 2-3
“The Image Acquisition Tool (GUI)” on page 2-4
“Supported Devices” on page 2-4
Introduction
Image Acquisition Toolbox enables you to acquire images and video from cameras and frame
grabbers directly into MATLAB and Simulink. You can detect hardware automatically, and configure
hardware properties. Advanced workflows let you trigger acquisitions while processing in-the-loop,
perform background acquisitions, and synchronize sampling across several multimodal devices. With
support for multiple hardware vendors and industry standards, you can use imaging devices, ranging
from inexpensive Web cameras to high-end scientific and industrial devices that meet low-light, high-
speed, and other challenging requirements.
Once you establish a connection to a device, you can acquire image data by executing a trigger. In the
toolbox, all image acquisition is initiated by a trigger. The toolbox supports several types of triggers
that let you control when an acquisition takes place. For example, using hardware triggers you can
synchronize an acquisition with an external device. “Acquiring Image Data” on page 6-2 describes
how to trigger the acquisition of image data.
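As a sketch, configuring a manual trigger looks like this (the adaptor name and device ID are placeholders for your hardware):

```matlab
vid = videoinput('winvideo', 1);   % placeholder adaptor name and device ID
triggerconfig(vid, 'manual')       % acquire only on an explicit trigger call
start(vid)                         % prepare the object; no logging yet
trigger(vid)                       % initiate data logging now
```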
To work with the data you acquire, you must bring it into the MATLAB workspace. When the frames
are acquired, the toolbox stores them in a memory buffer. The toolbox provides several ways to bring
one or more frames of data into the workspace where you can manipulate it as you would any other
multidimensional numeric array. “Bringing Image Data into the MATLAB Workspace” on page 7-3
describes this process.
Finally, you can enhance your image acquisition application by using event callbacks. The toolbox has
defined certain occurrences, such as the triggering of an acquisition, as events. You can associate the
execution of a particular function with a particular event. “Using Events and Callbacks” on page 8-2
describes this process.
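For instance, a callback that runs each time a trigger executes can be sketched as (the message is arbitrary):

```matlab
vid.TriggerFcn = @(obj, event) disp('Trigger executed.');  % runs at each trigger event
```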
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons.
Toolbox Components Overview
Toolbox Components
The toolbox uses components called hardware device adaptors to connect to devices through their
drivers. The toolbox includes adaptors that support devices produced by several vendors of image
acquisition equipment. In addition, the toolbox includes an adaptor for generic Windows video
acquisition devices.
The Image Processing Toolbox Software Required to Use the Image Acquisition Toolbox Software
If you already have the Image Processing Toolbox product, you do not need to do anything.
If you do not have the Image Processing Toolbox product, the Image Acquisition Toolbox software
R2008a and earlier will continue to work. If you want to use R2008b or future releases, and you have
a current active license for the Image Acquisition Toolbox software, you can download the Image
Processing Toolbox product for free. New customers will need to purchase both products to use the
Image Acquisition Toolbox product.
The Image Acquisition Tool (GUI)
To open the tool, type imaqtool at the MATLAB command line, or select Image Acquisition on the
Apps tab in MATLAB. The tool has extensive Help in the desktop. As you click in different panes of
the user interface, the relevant Help appears in the Image Acquisition Tool Help pane.
Most of the User's Guide describes performing tasks using the toolbox via the MATLAB command
line. To learn how to use the desktop tool, see “Getting Started with the Image Acquisition Tool” on
page 3-5.
Supported Devices
The Image Acquisition Toolbox software includes adaptors that provide support for several vendors of
professional grade image acquisition equipment, devices that support the IIDC 1394-based Digital
Camera Specification (DCAM), and devices that provide Windows Driver Model (WDM) or Video for
Windows (VFW) drivers, such as USB and IEEE® 1394 (FireWire, i.LINK®) Web cameras, Digital video
(DV) camcorders, and TV tuner cards. For the latest information about supported hardware, visit the
Image Acquisition Toolbox product page at the MathWorks Web site (www.mathworks.com/
products/image-acquisition).
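To see which adaptors are installed and which devices the toolbox detects on your system, you can use the imaqhwinfo function:

% List installed adaptors.
info = imaqhwinfo;
info.InstalledAdaptors
% Get details about devices available through one adaptor,
% for example winvideo (if that support package is installed).
devInfo = imaqhwinfo('winvideo');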
The DCAM specification, developed by the 1394 Trade Association, describes a generic interface for
exchanging data with IEEE 1394 (FireWire) digital cameras that is often used in scientific
applications. The toolbox's DCAM adaptor supports Format 7, also known as partial scan mode. The
toolbox uses the prefix F7_ to identify Format 7 video format names.
Note The toolbox supports only connections to IEEE 1394 (FireWire) DCAM-compliant devices using
the Carnegie Mellon University DCAM driver. The toolbox is not compatible with any other vendor-
supplied driver, even if the driver is DCAM compliant.
You can add support for additional hardware by writing an adaptor. For more information, see
“Support for Additional Hardware” on page 15-2.
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons.
Setting Up Image Acquisition Hardware
In this section...
“Introduction” on page 2-5
“Setting Up Frame Grabbers” on page 2-5
“Setting Up Generic Windows Video Acquisition Devices” on page 2-5
“Setting Up DCAM Devices” on page 2-6
“Resetting Your Image Acquisition Hardware” on page 2-6
“A Note About Frame Rates and Processing Speed” on page 2-6
Introduction
To acquire image data, you must perform the setup required by your particular image acquisition
device. In a typical image acquisition setup, an image acquisition device, such as a camera, is
connected to a computer via an image acquisition board, such as a frame grabber, or via a Universal
Serial Bus (USB) or IEEE 1394 (FireWire) port. The setup required varies with the type of device.
After installing and configuring your image acquisition hardware, start MATLAB on your computer by
double-clicking the icon on your desktop. You do not need to perform any special configuration of
MATLAB to acquire data.
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons.
Resetting Your Image Acquisition Hardware
If you connect another image acquisition device to your system after MATLAB is started, you can use
imaqreset to make the toolbox aware of the new hardware.
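For example, after connecting a new camera:

% Delete any existing image acquisition objects and rescan hardware.
imaqreset
% Confirm that the toolbox now sees the new device.
imaqhwinfo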
A Note About Frame Rates and Processing Speed
Devices that support industry-standard video formats must provide frames at the rate specified by the
standard. For RS170 and NTSC, the standard dictates a frame rate of 30 frames per second (30 Hz).
The CCIR and PAL standards define a frame rate of 25 Hz. Nonstandard devices can be configured to
operate at higher rates. Generic Windows image acquisition devices, such as webcams, might support
many different frame rates. Depending on the device being used, the frame rate might be
configurable using a device-specific property of the image acquisition object.
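For instance, you can inspect the video source object to see whether a frame rate property is available; the property name and its availability vary by device:

vid = videoinput('winvideo');
src = getselectedsource(vid);
% List the configurable properties of the video source object.
propinfo(src)
% If the source exposes a FrameRate property, set it, for example:
% src.FrameRate = 15;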
The rate at which the Image Acquisition Toolbox software can process images depends on the
processor speed, the complexity of the processing algorithm, and the frame rate. Given a fast
processor, a simple algorithm, and a frame rate tuned to the acquisition setup, the Image Acquisition
Toolbox software can process data as it comes in.
Previewing Data
In this section...
“Introduction” on page 2-7
“Opening a Video Preview Window” on page 2-7
“Stopping the Preview Video Stream” on page 2-8
“Closing a Video Preview Window” on page 2-9
“Previewing Data in Custom GUIs” on page 2-9
“Performing Custom Processing of Previewed Data” on page 2-11
Introduction
After you connect MATLAB to the image acquisition device, you can view the live video stream using
the Video Preview window. Previewing the video data can help you make sure that the image being
captured is satisfactory.
For example, by looking at a preview, you can verify that the lighting and focus are correct. If you
change characteristics of the image, by using video input object and video source object properties,
the image displayed in the Video Preview window changes to reflect the new property settings.
The following sections provide more information about using the Video Preview window.
Instead of using the toolbox's Video Preview window, you can display the live video preview stream in
any Handle Graphics® image object you specify. In this way, you can include video previewing in a
GUI of your own creation. The following sections describe this capability.
Opening a Video Preview Window
The following example creates a video input object and then opens a Video Preview window for the
video input object.
vid = videoinput('winvideo');
preview(vid);
The following figure shows the Video Preview window created by this example. The Video Preview
window displays the live video stream. The size of the preview image is determined by the value of
the video input object's ROIPosition property. The Video Preview window displays the video data at
100% magnification.
In addition to the preview image, the Video Preview window includes information about the image,
such as the timestamp of the video frame, the video resolution, the frame rate, and the current status
of the video input object.
Note Because video formats typically express resolution as width-by-height, the Video Preview
window expresses the size of the image frame as column-by-row, rather than the standard MATLAB
row-by-column format.
Note The Image Acquisition Toolbox Preview window and the Preview window that is built into the
Image Acquisition Tool support the display of up to 16-bit image data. The Preview window was
designed to only show 8-bit data, but many cameras return 10-, 12-, 14-, or 16-bit data. The Preview
window display supports these higher bit-depth cameras. However, larger bit data is scaled to 8-bit
for the purpose of displaying previewed data. To capture the image data in the Preview window in its
full bit depth for grayscale images, set the PreviewFullBitDepth property to 'on'.
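For example, for a camera that returns higher bit-depth grayscale data (the adaptor name here is only an example):

vid = videoinput('winvideo');
% Capture previewed grayscale data at its full bit depth.
vid.PreviewFullBitDepth = 'on';
preview(vid);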
Stopping the Preview Video Stream
This example creates a video input object and opens a Video Preview window. The example then calls
the stoppreview function on this video input object. The Video Preview window stops updating the
image displayed and stops updating the timestamp. The status displayed in the Video Preview window
also changes to indicate that previewing has been stopped.
vid = videoinput('winvideo');
preview(vid)
stoppreview(vid)
To restart the video stream in the Video Preview window, call preview again on the same video input
object.
preview(vid)
Closing a Video Preview Window
To close a Video Preview window, call the closepreview function, specifying the video input object as an argument.
closepreview(vid)
To close all currently open Video Preview windows, use the closepreview function without any
arguments.
closepreview
Note When called without an argument, the closepreview function only closes Video Preview
windows. The closepreview function does not close any other figure windows in which you have
directed the live preview video stream. For more information, see “Previewing Data in Custom GUIs”
on page 2-9.
Previewing Data in Custom GUIs
To use this capability, create an image object and then call the preview function, specifying a handle
to the image object as an argument. The preview function outputs the live video stream to the image
object you specify.
The following example creates a figure window and then creates an image object in the figure, the
same size as the video frames. The example then calls the preview function, specifying a handle to
the image object.
% Create a video input object.
vid = videoinput('winvideo');

% Create the figure window, without the default toolbar,
% menu bar, and figure numbering.
figure('Toolbar','none',...
       'Menubar', 'none',...
       'NumberTitle','Off',...
       'Name','My Preview Window');

% Create an image object the same size as the video frames.
vidRes = vid.VideoResolution;
nBands = vid.NumberOfBands;
hImage = image( zeros(vidRes(2), vidRes(1), nBands) );

preview(vid, hImage);
When you run this example, it creates the GUI shown in the following figure.
Custom Preview
Performing Custom Processing of Previewed Data
1 Create the function you want executed for each image frame, called the update preview window
function. For information about this function, see “Creating the Update Preview Window
Function” on page 2-11.
2 Create an image object.
3 Configure the value of the image object's 'UpdatePreviewWindowFcn' application-defined
data to be a function handle to your update preview window function. For more information, see
“Specifying the Update Preview Function” on page 2-12.
4 Call the preview function, specifying the handle of the image object as an argument.
Note If you specify an update preview window function, in addition to whatever processing your
function performs, it must display the video data in the image object. You can do this by updating the
CData of the image object with the incoming video frames. For some performance guidelines about
updating the data displayed in an image object, see Technical Solution 1-1B022.
When preview calls the update preview window function you specify, it passes your function the
following arguments.
obj: Handle to the video input object being previewed.
event: A data structure containing the following fields:
    Data: Current image frame, specified as an H-by-W-by-B array, where H is the image height and W is the image width, as specified in the ROIPosition property, and B is the number of color bands, as specified in the NumberOfBands property.
    Resolution: Character vector specifying the current image width and height, as defined by the ROIPosition property.
    Status: Character vector describing the status of the video input object.
    Timestamp: Character vector specifying the time associated with the current image frame, in the format hh:mm:ss:ms.
    FrameRate: Character vector specifying the current frame rate of the video input object in frames per second.
himage: Handle to the image object in which the data is to be displayed.
The following example creates an update preview window function that displays the timestamp of
each incoming video frame as a text label in the custom GUI. The update preview window function
uses getappdata to retrieve a handle to the text label uicontrol object from application-defined
data in the image object. The custom GUI stores this handle to the text label uicontrol object — see
“Specifying the Update Preview Function” on page 2-12.
Note that the update preview window function also displays the video data by updating the CData of
the image object.
function mypreview_fcn(obj,event,himage)
% Example update preview window function.
% Get a handle to the text label from application-defined data.
ht = getappdata(himage,'HandleToTimestampLabel');
% Display the frame's timestamp in the text label.
set(ht,'String',event.Timestamp);
% Display the video data in the image object.
set(himage,'CData',event.Data);
Specifying the Update Preview Function
To use an update preview window function, store a function handle to your function in the
'UpdatePreviewWindowFcn' application-defined data of the image object. The following example
uses the setappdata function to configure this application-defined data to a function handle to the
update preview window function described in “Creating the Update Preview Window Function” on
page 2-11.
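A minimal sketch of that configuration step, assuming hImage is the handle to your image object and mypreview_fcn is your update preview window function:

% Store a handle to the update preview window function in the
% image object's application-defined data.
setappdata(hImage, 'UpdatePreviewWindowFcn', @mypreview_fcn);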
This example extends the simple custom preview window created in “Previewing Data in Custom
GUIs” on page 2-9. This example adds three push button uicontrol objects to the GUI: Start
Preview, Stop Preview, and Close Preview.
In addition, to illustrate using an update preview window function, the example GUI includes a text
label uicontrol object to display the timestamp value. The update preview window function updates
this text label each time a frame is received. The example uses setappdata to store a handle to the
text label uicontrol object in application-defined data in the image object. The update preview
window function retrieves this handle to update the timestamp display.
% Create a video input object.
vid = videoinput('winvideo');

% Create the figure window, without the default toolbar,
% menu bar, and figure numbering. Keep the figure handle.
hFig = figure('Toolbar','none',...
       'Menubar', 'none',...
       'NumberTitle','Off',...
       'Name','My Custom Preview GUI');

% Create an image object the same size as the video frames.
vidRes = vid.VideoResolution;
imWidth = vidRes(1);
imHeight = vidRes(2);
nBands = vid.NumberOfBands;
hImage = image( zeros(imHeight, imWidth, nBands) );

% Create a text label in which to display the timestamp.
hTextLabel = uicontrol('Style','text','String','Timestamp',...
    'Units','normalized',...
    'Position',[0.85 -.04 .15 .08]);

% Create the push buttons.
uicontrol('String', 'Start Preview',...
    'Callback', 'preview(vid)',...
    'Units','normalized',...
    'Position',[0 0 .15 .07]);
uicontrol('String', 'Stop Preview',...
    'Callback', 'stoppreview(vid)',...
    'Units','normalized',...
    'Position',[.17 0 .15 .07]);
uicontrol('String', 'Close',...
    'Callback', 'close(gcf)',...
    'Units','normalized',...
    'Position',[0.34 0 .15 .07]);

% Specify the size of the axes that contains the image object
% so that it displays the image at the right resolution and
% centers it in the figure window.
figSize = get(hFig,'Position');
figWidth = figSize(3);
figHeight = figSize(4);
hAxes = gca;
hAxes.Units = 'pixels';
hAxes.Position = [ ((figWidth - imWidth)/2)...
                   ((figHeight - imHeight)/2)...
                   imWidth imHeight ];

% Configure the update preview window function and store a handle
% to the text label for it to use.
setappdata(hImage,'UpdatePreviewWindowFcn',@mypreview_fcn);
setappdata(hImage,'HandleToTimestampLabel',hTextLabel);

preview(vid, hImage);
When you run this example, it creates the GUI shown in the following figure. Each time preview
receives a video frame, it calls the update preview window function that you specified, which updates
the timestamp text label in the GUI.
3 Using the Image Acquisition Tool GUI
The Image Acquisition Tool provides a desktop environment that integrates a preview/acquisition
area with Acquisition Parameters so that you can change settings and see the changes dynamically
applied to your image data.
Note The right pane in the tool is the Desktop Help pane. As you work in the tool, the Help provides information for the part of the interface that you are working in. If the Desktop Help is closed, you can open it by selecting Desktop > Desktop Help.
The Image Acquisition Tool Desktop
• Hardware Browser – Shows the image acquisition devices currently connected to your system.
Each device is a separate node in the browser. All of the formats the device supports are listed
under the device. Each device's default format is indicated in parentheses. Select the device
format or camera file you want to use for the acquisition. When the format is selected, you can
then set acquisition parameters and preview your data. See “Selecting Your Device in Image
Acquisition Tool” on page 3-7 for more information about using the Hardware Browser.
• Preview window – Use to preview and acquire image data from the selected device format, and to
export data that has been acquired in memory to a MAT-file, the MATLAB Workspace, VideoWriter,
or to tools provided by the Image Processing Toolbox software. See “Previewing and Acquiring
Data in Image Acquisition Tool” on page 3-22 for more information about using the Preview
window.
• Acquisition Parameters – Use these tabs to set up general acquisition parameters, such as
frames per trigger and color space, device-specific properties, logging options, triggering options,
and region of interest. Settings you make on any tab will apply to the currently selected device
format in the Hardware Browser. See “Setting Acquisition Parameters in Image Acquisition Tool”
on page 3-9 for more information about using the Acquisition Parameters. Also see the Help
for each tab while using the tool for more details. When you click any tab, the help for that tab will
appear in the Desktop Help pane.
• Information pane – Displays a summary of information about the selected node in the Hardware
Browser.
• Session Log – Displays a dynamically generated log of the commands that correspond to actions
taken in the tool. You can save the log to a MATLAB code file or copy it.
• Desktop Help – Displays Help for the pane of the desktop that has focus. Click inside a pane for
help on that area of the tool. For the Acquisition Parameters pane, click each tab to display
information about the settings for that tab.
If the Desktop Help is closed, you can open it by selecting Desktop > Desktop Help.
Getting Started with the Image Acquisition Tool
For information on the parts of the Desktop GUI or how to open the Tool, see “The Image Acquisition
Tool Desktop” on page 3-2.
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Tool, you must install the adaptor
that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-
2 for information about installing the adaptors using MATLAB Add-Ons.
The Hardware Browser shows the image acquisition devices currently connected to your
system. If the device you want to use is not connected to your system, plug it in and then select
Tools > Refresh Image Acquisition Hardware to display the new device in the Hardware
Browser.
2 Choose the format to work with.
The nodes listed under the device name are the formats the device supports. They may
correspond to the different resolutions and color spaces that your device supports, or to different
video standards or camera configurations. This information comes from your device adaptor.
Select the format you want to use.
3 Preview to check that the device is working and the image is what you expect.
If necessary, physically adjust the device to achieve the desired image area, or use the Region of
Interest tab to define the acquisition region.
4 Decide how many frames you want to acquire.
The number of frames that will be acquired when you start the acquisition depends on the settings in the Frames Per Trigger field on the General tab and the Number of Triggers field on
the Triggering tab. For example, if you set Frames Per Trigger to 4 and Number of Triggers
to 2, the total number of frames acquired will be 8.
If you just want a snapshot of one frame, leave the default settings of 1 in both of those fields. If
you want a specific number of frames, use the fields to set it.
Alternatively, you can set the tool to acquire continuously and use the buttons in the Preview
Window to manually start and stop the acquisition. This is discussed in a later step.
5 Set any general or device-specific parameters you need to set, on those tabs of the Acquisition
Parameters pane, or use the default settings.
6 Choose your log mode, which determines where the acquisition data is stored.
On the Logging tab, use the Log To field to choose to log to memory, disk, or both. Disk logging
results in a saved VideoWriter file. If you choose memory logging, you can export your data after
the acquisition using the Export Data button on the Preview Window.
For more information on logging, see the Help for the Logging tab in the Desktop Help pane in
the tool.
7 Start the acquisition by clicking the Start Acquisition button.
– If you set Trigger Type (on the Triggering tab) to Immediate, the tool will immediately start
logging data.
– If you set Trigger Type to Manual, click the Trigger button when you want to start logging
data.
8 Stop the acquisition.
– If you set Frames Per Trigger (on the General tab) to 1 or any other number, your acquisition
will stop when that number of frames is reached.
– If you set Frames Per Trigger to Infinite, click the Stop Acquisition button to stop the
acquisition.
Note that you can also click Stop Acquisition to abort an acquisition if a specific number of frames was specified.
9 Optionally you can export data that was saved to memory.
You can export the data that has been acquired in memory to a MAT-file, the MATLAB Workspace,
VideoWriter, or to the Image Tool, Image File, or Movie Player tools that are provided by the
Image Processing Toolbox software using the Export Data button. For more information, see the
“Exporting Data” section of the Desktop Help on the Preview Window in the Desktop Help
pane in the tool.
10 Optionally you can save your configuration(s), using the File > Save Configuration or File >
Export Hardware Configuration menus. For more information about these commands, see the
“Image Acquisition Tool Menus” section of the Help on the Hardware Browser in the Desktop
Help pane in the tool.
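The frame-count arithmetic described in step 4 also applies at the command line, where Frames Per Trigger and Number of Triggers correspond to the FramesPerTrigger and TriggerRepeat properties; this sketch assumes a winvideo device:

vid = videoinput('winvideo');
vid.FramesPerTrigger = 4;
vid.TriggerRepeat = 1;   % total triggers = TriggerRepeat + 1 = 2
start(vid);
wait(vid);
% Total frames logged = 4 frames/trigger x 2 triggers = 8.
numFrames = vid.FramesAvailable;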
Selecting Your Device in Image Acquisition Tool
Select the device format or camera file you want to use for the acquisition by clicking its name in the
tree. When the format is selected, you can then set acquisition parameters and preview your data.
Under the device name in the Hardware Browser, you will see a node that says Click to add camera file... if the device supports the use of camera files.
1 In the Hardware Browser, single-click the node under your device name that says Click to add
camera file....
2 In the Specify camera file dialog box, type the path and name of the file, or click the Browse
button to locate it, and then click OK.
The camera file will then become a new node under the device, similar to any of the formats
listed under a device. You can then set acquisition parameters, preview, and acquire data using
it.
Note The tool ignores hardware trigger configurations included in a camera file. To configure
hardware triggering, use the Trigger tab in the Acquisition Parameters pane.
Setting Acquisition Parameters in Image Acquisition Tool
• General – Use to set up general acquisition parameters, such as frames per trigger and color
space.
• Device Properties – Use to view or change device-specific properties.
• Logging – Use to set up logging options, such as logging mode, which determines whether your
acquired data is logged to memory, disk, or both. If you want to generate a VideoWriter file of your
data, use the Disk Logging option on this tab.
• Triggering – Use to set up triggering options, such as number of triggers and trigger type. If you
want to do manual triggering using the Trigger button, use the Trigger Type option on this tab.
• Region of Interest – Use to set a Region of Interest (ROI) if you only want to use part of an
image.
For more detailed information about the settings on each tab, see the Help topic for the tab while
using the tool. When you click a tab, the corresponding topic will appear in the Desktop Help pane.
Note Once you have changed parameters of a device, you can restore the device's default
parameters by selecting the device format in the Hardware Browser and right-clicking Clear
Selected Hardware Configuration.
• If you want your acquisition to be a specific number of frames per trigger, use the default of 1
frame, or use the arrows to select the number of frames or type in the number.
• If you want to acquire frames continuously, set the Frames Per Trigger to infinite and then
use the Stop Acquisition button to stop the acquisition, or do manual triggering using the
Triggering tab.
The number of frames that will be acquired when you start an acquisition depends on what is set in
the Frames Per Trigger field on the General tab and the Number of Triggers field on the
Triggering tab. For example, if you set Frames Per Trigger to 4 and Number of Triggers to 2, the
total number of frames acquired will be 8.
Note that if you set Frames Per Trigger to infinite, you cannot set Number of Triggers on the
Triggering tab.
Note Some devices need a few frames to warm up, or may always skip the first frame. If your device
does that, change the number of frames accordingly to adjust for that. You can also adjust for camera
warm-up by using manual triggering on the Triggering tab.
Additionally, if the default color space is grayscale, a value of bayer will be available in the
Returned Color Space field for some devices, and the Bayer Sensor Alignment field will also be
displayed. Use the drop-down list to select one of the four possible sensor alignments. This feature
allows the tool to demosaic Bayer patterns returned by the hardware and interpolate them into
standard RGB color images. For more information about this feature, see the
BayerSensorAlignment property reference page.
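The command-line equivalent uses the ReturnedColorSpace and BayerSensorAlignment properties; the adaptor name and alignment value here are only examples:

vid = videoinput('winvideo');
vid.ReturnedColorSpace = 'bayer';
% Choose the alignment matching your sensor:
% 'gbrg', 'grbg', 'bggr', or 'rggb'.
vid.BayerSensorAlignment = 'gbrg';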
The Selected source field specifies the name of the selected source for the current device. Many
device adaptors only have one input source, so for example, this might show something like input1,
port1, or input0 by default. If your device supports multiple source names, they will appear in the
drop-down list.
Changes you make in the Properties area are applied to your acquisition or preview dynamically. For
example, to change the exposure for the camera you are using, edit the value in the Exposure
property text field or use the slider to change it. You will immediately see the change in the Preview
window if you are previewing at the time, or in the next acquisition when you click the Start
Acquisition button.
Click the Reset to defaults button to undo any modifications you made and restore the default
settings of the device.
Property Help
To get help on any of the properties in the Device Properties tab, right-click a property and select
What's This?. A Help window opens and displays the help for the selected property, as well as the
rest of the properties, which are available by scrolling. This is the same information as the device
help you access using the imaqhelp command. For more detailed information on device-specific
properties, see your device's documentation.
If FrameRate appears in the Properties area, that means your device has a FrameRate property.
The information in the table comes from your device. The value set there will be the frame rate that
your device uses, in frames per second.
If FrameRate does not appear in the list, your device does not support that property.
Use the Log to options to select where to log your acquisition. Select one of the following:
• Memory — Acquisition is logged to memory. This means that the acquired data that you do not
otherwise save (using Export Data) will be logged to your system's memory, and will be available
to you only during the acquisition session. The data will be lost if you do another acquisition, or
you close the tool without exporting the data. This is the default setting.
• Disk — Acquisition is logged to disk as a VideoWriter file, in the location you specify in the Disk
logging area. This means that the acquired data will be logged to disk and will be available to you
there after the acquisition session. After selecting Disk, the Disk logging area becomes editable
and you can enter or browse to the location and name the file.
• Disk and memory — Acquisition will be logged to both disk, in the location you specify in the
Disk logging area, and memory.
Disk Logging
If you select Disk or Disk and memory in the Log to options, the Disk logging area becomes
editable so you can designate a file and location to save to.
Note Disk logging generates a VideoWriter file. If you select a VideoWriter profile that generates an AVI file, note that AVI files are limited to a bit depth of 8 bits per pixel for each band. Do not log higher bit-depth data to an AVI file; if you do, the data is scaled and then logged as 8-bit data.
1 Click the Browse button to select a location for the file, or enter the name of the location.
2 In the Save dialog box, browse to the location and then enter a name in the File name field, and
click Save.
Uncompressed AVI is the default profile for color devices and Grayscale AVI is the default
profile for monochrome devices, so the .avi extension is appended to the name on the Logging
tab initially, and the other fields become editable. You can change the profile in step 4.
3 Optionally select Automatically increment filename if you want the tool to name subsequent
acquisitions using the same root name, plus an incremented number. For example, if you enter
the file name experiment.avi and then select this option, it will be replaced by
experiment_0001.avi, followed by experiment_0002.avi, etc.
This option is useful if you need to acquire multiple videos of one or more subjects. For example,
a lab technician might want to acquire 10 seconds of video on a group of five different cultures
and save them for later analysis. The technician may want resulting file names such as
sample_0001.avi, sample_0002.avi, etc.
4 You can use any of the profiles offered by VideoWriter. Accept the default profile (Uncompressed
AVI for color devices and Grayscale AVI for monochrome devices) or select another. Currently
supported profiles are:
• 'Motion JPEG 2000' — Compressed Motion JPEG 2000 file. Can log single-banded
(grayscale) data as well as multi-byte data.
• 'Archival' — Motion JPEG 2000 file with lossless compression.
• 'Motion JPEG AVI' — Compressed AVI file using Motion JPEG codec.
• 'Uncompressed AVI' — Uncompressed AVI file with RGB24 video.
• 'MPEG-4' — Compressed MPEG-4 file with H.264 encoding (systems with Windows 7 or
macOS 10.7 and later).
• 'Grayscale AVI' — Uncompressed AVI file with grayscale video. Only used for
monochrome devices.
• 'Indexed AVI' — Uncompressed AVI file with indexed video. Only used for monochrome
devices.
5 Additional logging options appear dynamically after you select a profile.
If you select Motion JPEG 2000 or Archival as your profile, you can set the Compression
Ratio, Frame Rate, Lossless Compression, and MJ2 Bit Depth options. Accept the default
values or change them.
If you select Motion JPEG AVI or MPEG-4 as your profile, you can set the Frame Rate and
Quality options. Accept the default values or change them.
If you select Uncompressed AVI or Grayscale AVI as your profile, you can set the Frame
Rate option. Accept the default value or change it.
If you select Indexed AVI as your profile, you can set the Frame Rate and Colormap options.
Accept the default value for Frame Rate or change it. You must enter a Colormap value. See
the “VideoWriter Options” section below for more information.
VideoWriter Options
• Compression Ratio is a number greater than 1 that specifies the target ratio between the
number of bytes in the input image and the number of bytes in the compressed image. The
data is compressed as much as possible, up to the specified target. This is only available for
objects associated with Motion JPEG 2000 files. The default is 10.
• Frame Rate is the rate of playback for the video in frames per second. The default is 30. If
your device has a set frame rate, that will be used instead.
• Lossless Compression is a Boolean value (logical true or false) only available for objects
associated with Motion JPEG 2000 files. If you select true, VideoWriter uses reversible mode
so that the decompressed data is identical to the input data, and ignores any specified value
for CompressionRatio. The default is false for the Motion JPEG 2000 profile, and true
for the Archival profile.
• MJ2 Bit Depth is the number of least significant bits in the input image data, from 1 to 16.
This is only available for objects associated with Motion JPEG 2000 files. If you do not specify
a value, VideoWriter sets the bit depth based on the input data type. For example, if the input
data is an array of uint8 or int8 values, MJ2BitDepth is 8.
• Quality is a number from 0 to 100. Higher quality numbers result in higher video quality and
larger file sizes. Lower quality numbers result in lower video quality and smaller file sizes.
Only available for objects associated with the Motion JPEG AVI profile. The default is 75.
• Colormap is a value that specifies the intensity of red, green, and blue for the image. Type in
a value, such as hsv(128), or the name of one of the built-in MATLAB colormaps, such as
jet or hot. For a list of the built-in colormaps, see the colormap function in the MATLAB
documentation.
6 After setting your profile and options, start your acquisition to log a VideoWriter file to disk.
AVI files are limited to a bit depth of 8 bits per pixel for each band. Avoid logging higher-bit data to a
profile that creates an AVI file; if you do, the data is scaled and then logged as 8-bit data. The
Archival and Motion JPEG 2000 profiles do not have this limitation.
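The disk-logging behavior described above can also be set up from the MATLAB command line. The following sketch assumes a device on the winvideo adaptor with device ID 1 and uses the Motion JPEG 2000 profile; substitute the adaptor, ID, and profile appropriate for your hardware.

```matlab
% Sketch: log an acquisition to disk through VideoWriter.
% The adaptor name and device ID are assumptions; use imaqhwinfo
% to find the values for your hardware.
vid = videoinput('winvideo', 1);

% Motion JPEG 2000 can hold data with more than 8 bits per band,
% so it avoids the AVI 8-bit limitation noted above.
logfile = VideoWriter('acquisition.mj2', 'Motion JPEG 2000');
logfile.FrameRate = 30;          % playback rate in frames per second
logfile.CompressionRatio = 10;   % target compression ratio (the default)

vid.LoggingMode = 'disk';        % GUI: Logging tab, Log To = Disk
vid.DiskLogger = logfile;

start(vid);   % frames are written to acquisition.mj2 as they are logged
wait(vid);    % block until the acquisition finishes
delete(vid);  % release the device
```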
Setting Up Triggering
Use the Triggering tab to set up triggering options.
Setting Acquisition Parameters in Image Acquisition Tool
The total number of frames that will be acquired when you start an acquisition depends on what is set
in the Frames Per Trigger field on the General tab and the Number of Triggers field on the
Triggering tab. For example, if you set Frames Per Trigger to 4 and Number of Triggers to 2, the
total number of frames in the acquisition will be 8.
If you want an acquisition that consists of a finite number of frames, set the Number of
Triggers to any number, or use the default of 1 trigger.
If you want to control the start and stop of the acquisition, regardless of the number of frames
acquired, select infinite. With an infinite number of triggers, you stop the acquisition manually by
clicking the Stop Acquisition button in the Preview window.
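The two fields map onto properties of the video input object at the MATLAB command line. Note that the Number of Triggers field corresponds to the TriggerRepeat property plus 1. The adaptor name and device ID below are placeholders for illustration.

```matlab
% Sketch: Frames Per Trigger and Number of Triggers programmatically.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions

vid.FramesPerTrigger = 4;   % GUI: Frames Per Trigger
vid.TriggerRepeat = 1;      % GUI: Number of Triggers = TriggerRepeat + 1 = 2

start(vid);
wait(vid);                  % total frames = 4 x 2 = 8
frames = getdata(vid, 8);   % retrieve all eight frames
delete(vid);
```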
Selecting the Trigger Type
The default Trigger Type of Immediate means that when you start an acquisition using the Start Acquisition
button, the acquisition begins immediately.
If you change the setting to Manual, the Trigger button is activated in the Preview window, and you
use it to start the acquisition.
To use manual triggering:
1 Select your device format and optionally click Start Preview to preview the device.
2 Optionally set any acquisition parameters and stop the preview.
3 Select Manual in the Trigger Type field on the Triggering tab.
4 Click the Start Acquisition button to get live feed from the device.
The Trigger button is activated in the Preview window once the acquisition starts.
5 Click the Trigger button when you want to start logging data.
If you have a defined number of triggers (not infinite), then the acquisition will stop when you
have acquired that number of frames, based on the Frames Per Trigger field on the General
tab.
If Number of Triggers is set to infinite, use the Stop Acquisition button to stop the
acquisition.
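Manual triggering has a direct command-line equivalent using triggerconfig and trigger, sketched below with an assumed adaptor and device ID:

```matlab
% Sketch: manual triggering from the command line.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
vid.FramesPerTrigger = 4;

triggerconfig(vid, 'manual');   % GUI: Trigger Type = Manual

start(vid);    % the device runs, but no frames are logged yet
% ... wait until the scene or device is ready ...
trigger(vid);  % GUI: clicking the Trigger button; logging begins here
wait(vid);
frames = getdata(vid);
delete(vid);
```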
If your device supports hardware triggering, that option will also appear in the Trigger Type field.
To use hardware triggering:
1 Select your device format and optionally click Start Preview to preview the device.
2 Optionally set any acquisition parameters and stop the preview.
3 Select Hardware in the Trigger Type field on the Triggering tab.
4 Select your Trigger Source. This indicates the hardware source that is monitored for trigger
conditions. When the condition specified in Trigger Condition is met, the trigger is executed
and the acquisition starts. Trigger Source is device-specific. The drop-down list will show the
mechanisms your particular device uses to generate triggers. For example, it might be something
like Port0 and Port1, or OptoTrig and TTL.
5 Select your Trigger Condition. This specifies the condition that must be met, via the Trigger
Source, before a trigger event occurs. Trigger Condition is device-specific. The drop-down list
will show the conditions your particular device uses to generate triggers. For example, it might
be something like risingEdge and fallingEdge.
6 Click the Start Acquisition button to get live feed from the device.
7 When the Trigger Condition is met, the acquisition begins.
If you have a defined number of triggers (not infinite), then the acquisition will stop when you
have acquired that number of frames, based on the Frames Per Trigger field on the General
tab.
If Number of Triggers is set to infinite, use the Stop Acquisition button to stop the
acquisition.
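Hardware triggering follows the same pattern with triggerconfig. The source and condition names below ('TTL', 'risingEdge') are device-specific examples only; call triggerinfo on your object to list the values your device actually supports.

```matlab
% Sketch: hardware triggering from the command line.
vid = videoinput('matrox', 1);   % adaptor/ID are assumptions
vid.FramesPerTrigger = 4;

triggerinfo(vid)   % displays the valid trigger configurations for the device

% GUI: Trigger Type = Hardware, with a device-specific condition and source
triggerconfig(vid, 'hardware', 'risingEdge', 'TTL');

start(vid);   % waits; logging starts when the trigger condition is met
wait(vid);
frames = getdata(vid);
delete(vid);
```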
Setting a Region of Interest
You can set a Region of Interest (ROI) manually by using the Manual Configuration settings on the
Region of Interest tab, or interactively in the Preview Window.
To set up an ROI manually using the Manual Configuration field on the Region of Interest tab:
1 Start your preview by clicking the Start Preview button in the Preview Window.
2 Adjust one or more of the X-Offset, Y-Offset, Width, or Height settings until you reach the
desired region.
Use the arrows in each field to adjust the numbers. The preview resizes as you make changes.
3 When the region is the desired size, start your acquisition by clicking the Start Acquisition
button.
Note: You cannot adjust the ROI after starting the acquisition.
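At the command line, the four Manual Configuration fields correspond to the ROIPosition property of the video input object, a four-element vector [x-offset y-offset width height]:

```matlab
% Sketch: setting the region of interest programmatically.
vid = videoinput('winvideo', 1);     % adaptor/ID are assumptions

% [x-offset y-offset width height], in pixels from the top-left corner
vid.ROIPosition = [100 50 320 240];  % capture a 320-by-240 region

frame = getsnapshot(vid);   % for an RGB device, frame is 240-by-320-by-3
delete(vid);
```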
You can also set a region of interest interactively while previewing your image.
1 Start your preview by clicking the Start Preview button in the Preview Window.
2 Click the Select Region of Interest button in the top-left corner of the Preview Window to
activate the interactive ROI feature.
Note that the Select Region of Interest button is enabled only during preview mode.
3 Position the cursor at one of the edges of the region you want to capture and click the left mouse
button. Hold the button while dragging the selection tool over the image to outline the region you
want to capture.
4 Release the mouse button to freeze the region.
The region is not set until you take action to commit it.
5 If the selected area is the region you want to use, start your acquisition by clicking the Start
Acquisition button.
Before starting the acquisition, if you want to adjust the region further, you can drag the selected
region around while still in selection mode. You can also drag any of the handles on the region
outline to change the dimensions of the region. You can then commit the region by pressing
Enter or using the right-click menu Commit Region of Interest inside the region. You can also
commit a region by pressing the space bar or double-clicking inside the selection, or starting the
acquisition.
You can clear the drawn region before you commit it by single-clicking anywhere in the Preview
Window outside of the selected area. You will still be in ROI selection mode. If you want to clear
the selected region and exit ROI selection mode, press the Delete key, press the Escape key, or
use the right-click menu Exit Region of Interest Mode inside the region.
Note: If you start another acquisition with the same device, the ROI that you set will remain the
default that is used in subsequent acquisitions. To reset to the original image size, click the Reset
Region of Interest to Maximum button in the Preview Window or the Reset button on the
Region of Interest tab.
If you want to save a configuration before clearing it, first select Export Selected Hardware
Configuration from the right-click menu.
Previewing and Acquiring Data in Image Acquisition Tool
• Preview your image. See “Previewing Data” on page 3-23 for more information.
• Acquire data. See “Acquiring Data” on page 3-23 for more information.
• Export data. See “Exporting Data in the Image Acquisition Tool” on page 3-26 for more
information.
• Set Region of Interest. See “Setting a Region of Interest” on page 3-17 for more information.
Below the area that displays the frames, you see text messages with information about the
current state of the window. For example, in the figure above, the text indicates that all the frames
that were acquired are being displayed. After you start and stop a preview, the text will indicate that
the tool is ready to acquire data.
During an acquisition, a running timer appears under the display area that indicates the actual time
of the frame acquisition.
Note The Image Acquisition Toolbox Preview window and the Preview window that is built into the
Image Acquisition Tool now support the display of up to 16-bit image data. The Preview window was
designed to only show 8-bit data, but many cameras return 10-, 12-, 14-, or 16-bit data. The Preview
window display now supports these higher bit-depth cameras.
Previewing Data
To preview data:
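From the MATLAB command line, the equivalent of the tool's preview feature is the preview function, sketched here with an assumed adaptor and device ID:

```matlab
% Sketch: previewing a device from the command line.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions

preview(vid);        % open a live Video Preview window
pause(10);           % watch the live feed for a while
stoppreview(vid);    % freeze the preview video stream
closepreview(vid);   % close the Video Preview window
delete(vid);
```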
Acquiring Data
To acquire data:
1 Select the device and format in the Hardware Browser. The Hardware Browser shows the
image acquisition devices currently connected to your system. If the device you want to use is not
connected to your system, plug it in and then select Tools > Refresh Image Acquisition
Hardware to display the new device in the Hardware Browser.
The nodes listed under the device name are the formats the device supports. They may
correspond to the different resolutions and color spaces that your device supports, or to different
video standards or camera configurations. This information comes from your device adaptor.
Select the format you want to use.
See “Selecting Your Device in Image Acquisition Tool” on page 3-7 for more information about
devices and formats.
2 Use the Preview feature to test and set up your device by clicking the Start Preview button. If
necessary, physically adjust the device to achieve the desired image area, or use the Region of
Interest tab of the Acquisition Parameters pane to constrain the image.
3 Specify the number of frames you want to acquire, using the Frames Per Trigger field on the
General tab and the Number of Triggers field on the Triggering tab. For example, if you set
Frames Per Trigger to 4 and Number of Triggers to 2, the total number of frames acquired
will be 8.
If you just want a snapshot of one frame, leave the default settings of 1 in both fields. If you want
a specific number of frames, use the fields to set it.
Alternatively, you can set the tool to acquire continuously and use the buttons in the Preview
window to manually start and stop the acquisition.
4 Set any necessary acquisition parameters if you did not do so while previewing. See “Setting
Acquisition Parameters in Image Acquisition Tool” on page 3-9 for more information.
5 Choose your log mode, which determines where the acquisition data is stored.
On the Logging tab, use the Log To field to choose to log to memory, disk, or both. Disk logging
results in a saved VideoWriter file. If you choose memory logging, you can export your data after
the acquisition using the Export Data button on the Preview window.
For more information about logging, see “Logging Your Data” on page 3-12.
6 Start the acquisition by clicking the Start Acquisition button.
• If you set Trigger Type (on the Triggering tab) to Immediate, the tool will immediately
start logging data.
• If you set Trigger Type to Manual, click the Trigger button when you want to start logging
data. For more information about manual triggering, see “Setting Up Triggering” on page 3-
14.
7 Stop the acquisition:
• If you set Frames Per Trigger (on the General tab) to 1 or any other number, your
acquisition will stop when that number of frames is reached.
• If you set Frames Per Trigger to infinite, click the Stop Acquisition button to stop the
acquisition.
Note that you can also click Stop Acquisition to abort an acquisition if the number of frames
was specified.
When the acquisition stops, if you logged to memory or disk and memory, the Preview window will
display all or some of the frames of the acquisition. The window can show up to nine frames. If you
acquire more than nine frames, it will display frames at an even interval based on the total number of
frames. The montage of nine frames is indexed linearly from the acquired images. The text under
the images will list which frames are shown. You can also hover your cursor over each frame to see
which frame number it is, as shown in the following figure.
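The walkthrough above can be condensed into a command-line script. The adaptor name, device ID, and format are assumptions for illustration; take the real values from imaqhwinfo.

```matlab
% Sketch: the acquisition steps above as a script.
vid = videoinput('winvideo', 1, 'RGB24_640x480');  % assumed format

vid.FramesPerTrigger = 4;    % General tab
vid.TriggerRepeat = 1;       % Triggering tab: 2 triggers in total
vid.LoggingMode = 'memory';  % Logging tab: Log To = Memory

preview(vid);                % step 2: check and adjust the image
start(vid);                  % step 6: begin the acquisition
wait(vid);                   % stops automatically after 4 x 2 = 8 frames
[frames, timestamps] = getdata(vid, 8);
closepreview(vid);
delete(vid);
```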
If the first one or more frames of your acquisition are blurry, black, or of low quality, your camera may
need to warm up before you capture frames.
You can allow for device warm-up by using manual triggering. This allows you to start the acquisition
after the device has warmed up and is acquiring image data that meets your needs.
To use manual triggering, go to the Triggering tab of the Acquisition Parameters pane and select
Manual in the Trigger Type field.
For more detailed instructions about manual triggering, see “Selecting the Trigger Type” on page 3-
15.
For more information about troubleshooting specific devices, see “Troubleshooting Overview” on
page 16-2 in the Troubleshooting chapter.
Exporting Data in the Image Acquisition Tool
1 Click the Export Data button in the Preview window to export the last acquisition that was
logged to memory.
2 In the Data Exporter dialog box, select MAT-File, MATLAB Workspace, or one of the other
options in the Data Destination field. You can choose Image Tool or Image File for single-
frame acquisitions.
These two options are provided by the Image Processing Toolbox software. The Movie Player tool
is also provided by the Image Processing Toolbox software and is only available for multiple-
frame acquisitions. VideoWriter is part of core MATLAB and is recommended.
3 If you selected MAT-File or MATLAB Workspace, then enter a name for the new variable in the
Variable Name field, and click OK.
4 If you selected VideoWriter, you need to select a profile from the Profile list. For information
on the profiles, see “Disk Logging” on page 3-12.
The VideoWriter Parameters dialog box opens after you select a file name and profile and click
OK. If you selected Motion JPEG 2000 or Archival as your profile, you can set the
Compression Ratio, Frame Rate, Lossless Compression, and MJ2 Bit Depth options. Accept
the default values or change them.
If you selected Motion JPEG AVI as your profile, you can set the Frame Rate and Quality
options. Accept the default values or change them.
If you selected Uncompressed AVI as your profile, you can set the Frame Rate option. Accept
the default value or change it.
VideoWriter Options
• Compression Ratio is a number greater than 1 that specifies the target ratio between the
number of bytes in the input image and the number of bytes in the compressed image. The
data is compressed as much as possible, up to the specified target. This is only available for
objects associated with Motion JPEG 2000 files. The default is 10.
• Frame Rate is the rate of playback for the video in frames per second. The default is 30.
• Lossless Compression is a Boolean value (logical true or false) only available for objects
associated with Motion JPEG 2000 files. If you select true, VideoWriter uses reversible mode
so that the decompressed data is identical to the input data, and ignores any specified value
for CompressionRatio. The default is false for the Motion JPEG 2000 profile, and true
for the Archival profile.
• MJ2 Bit Depth is the number of least significant bits in the input image data, from 1 to 16.
This is only available for objects associated with Motion JPEG 2000 files. If you do not specify
a value, VideoWriter sets the bit depth based on the input data type. For example, if the input
data is an array of uint8 or int8 values, MJ2BitDepth is 8.
• Quality is a number from 0 to 100. Higher quality numbers result in higher video quality and
larger file sizes. Lower quality numbers result in lower video quality and smaller file sizes.
Only available for objects associated with the Motion JPEG AVI profile. The default is 75.
5 If you exported to the MATLAB Workspace, the dialog box closes and the data is saved to the
Workspace.
If you exported to a MAT-File, the dialog box closes and the file is saved to the location you
specified in the Data Exporter dialog box.
If you exported to Image Tool, Image File, or Movie Player, the file immediately opens in
that tool.
If you exported to VideoWriter, the file is created and saved in the location you specified in the
Data Exporter dialog box.
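Exporting memory-logged frames to a VideoWriter file can also be done programmatically, which is roughly what the VideoWriter destination in the Data Exporter does. The file name, adaptor, and device ID below are assumptions.

```matlab
% Sketch: write memory-logged frames to a Motion JPEG AVI file.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
vid.FramesPerTrigger = 10;
start(vid);
wait(vid);
frames = getdata(vid);   % height-by-width-by-bands-by-frames array

vw = VideoWriter('export.avi', 'Motion JPEG AVI');
vw.FrameRate = 30;   % playback rate
vw.Quality = 75;     % 0-100; higher means better quality, larger file
open(vw);
writeVideo(vw, frames);   % writeVideo accepts the 4-D array directly
close(vw);
delete(vid);
```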
Saving Image Acquisition Tool Configurations
To save a configuration:
1 Select File > Save Configuration.
2 In the Save Configuration dialog box, specify a file name and location, and decide which
configurations to save.
The Save Configuration dialog box lists the currently selected device format, as well as any
others you selected in the Hardware Browser during the tool session. All formats are selected
by default, meaning their configurations will be saved. If you do not want to save a configuration,
clear it from the list.
3 Click Save.
The configuration is saved to an Image Acquisition Tool (IAT) file in the location you specified.
You can then open the saved configuration file in a future tool session by selecting File > Open
Configuration. In the Open Configuration dialog box, browse to an IAT file and click Open.
Note You can also export hardware configuration information to other formats such as a MATLAB
code file or a MAT-file that can be accessed from MATLAB. See “Exporting Image Acquisition Tool
Hardware Configurations to MATLAB” on page 3-30.
Exporting Image Acquisition Tool Hardware Configurations to MATLAB
• MATLAB Workspace saves the video input object to the MATLAB Workspace for the duration
of the MATLAB session. (You can then save it before exiting MATLAB if you want to retain it.)
• MATLAB Code File is the same as the File > Generate MATLAB Code File command. It
generates a MATLAB code file containing the video input object and its configured
parameters. You could then incorporate the MATLAB code file into other MATLAB code or
projects.
• MAT-File saves the video input object and its parameters to a MAT-file.
3 Decide what object configuration(s) to export.
The Object Exporter dialog box lists the currently selected device format, as well as any others
you selected in the Hardware Browser during the tool session. All formats are selected by
default, meaning their configurations will be saved. If you do not want to save a configuration,
clear it from the list.
4 Click Save.
If you exported to the MATLAB Workspace, the dialog box closes and the data is saved to the
MATLAB Workspace.
5 If you export to a MAT-file or MATLAB code file, an Export dialog box opens. Select the save
location and type a file name, and then click Save.
Note You can also save configuration information to an Image Acquisition Tool (IAT) file that can
then be loaded in the tool in a future session. See “Saving Image Acquisition Tool Configurations” on
page 3-29.
Saving and Copying Image Acquisition Tool Session Log
You cannot directly edit in the Session Log pane. You can save the contents to a MATLAB code file or
copy it to another destination, and then edit the contents.
Each device format has its own session log, independent of any other formats. If you switch to a
different device or format, the contents of the session log will reflect the currently selected device. If
you switch back to a previous node in the Hardware Browser, the session log for that device will
remain in the same state it was in before you switched nodes.
1 Click the Save the current session log to a file button in the Session Log toolbar. You can also
right-click in the log pane and select Save.
2 In the Save Session Log dialog box, browse to the location where you want to save the file.
3 Use the default name, imaqtoolSessionLog.m, or rename it.
4 When you click the Save button, the file will open in the MATLAB Editor. You can then either edit
it or close it.
Note that the entire contents of the session log are saved. If you want to save only a portion of the
log, use the Copy command instead.
To register an adaptor:
1 Click Tools > Register a Third-Party Adaptor on the Image Acquisition Tool menu.
2 In the Register a 3rd Party Adaptor dialog box, browse to the .dll file that represents your
adaptor.
3 Select the file and click OK to register the adaptor.
4 Image Acquisition Support Packages
Image Acquisition Support Packages for Hardware Adaptors
With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately in
support packages via MATLAB Add-Ons. All of the support packages contain the necessary MATLAB
files to use the toolbox with your adaptor. Some also contain third-party files, such as drivers or
camera set-up utilities. Offering the adaptor files via Add-Ons allows us to provide the most up-to-date
versions of the files.
To install a support package, on the MATLAB Home tab, in the Environment section, click Add-Ons
> Get Hardware Support Packages. In the Add-On Explorer, scroll to the Hardware Support
Packages section, and click show all to find your support package. You can refine the list by
selecting Imaging/Cameras in the Refine by Hardware Type section on the left side of the
Explorer. Select your adaptor, for example Image Acquisition Toolbox Support Package for GigE
Vision Hardware or Image Acquisition Toolbox Support Package for OS Generic Video Interface, from
the list.
Note For any cameras that use the Windows Video (winvideo), Macintosh Video (macvideo), or
Linux Video (linuxvideo) adaptors, use the support package called Image Acquisition Toolbox
Support Package for OS Generic Video Interface. The correct files will be installed, depending on
your operating system.
The following table shows the support package name for each adaptor. In Add-On Explorer, select
your adaptor using the name listed in the table.
To use the cameras or frame grabbers you have been using with the toolbox, you must install the
support package for the adaptor that your camera uses. If you use multiple adaptors, you need to
install the support package for each one you use. For example, if you have a Webcam on a Windows
system and a Matrox camera, you would need to install the Image Acquisition Toolbox Support
Package for OS Generic Video Interface for the winvideo adaptor for the Webcam and the Image
Acquisition Toolbox Support Package for Matrox Hardware for the matrox adaptor.
Go to MATLAB Add-Ons and use the adaptor name in the table to install the correct package(s) that
you need. To install more than one package, select the support packages in the Add-On Explorer one
at a time.
The following topic describes how to install the Image Acquisition Toolbox support packages:
“Installing the Support Packages for Image Acquisition Toolbox Adaptors” on page 4-5
Installing the Support Packages for Image Acquisition Toolbox Adaptors
To use the cameras or frame grabbers you have been using with the toolbox, you must install the
support package for the adaptor that your camera uses. If you use multiple adaptors, you need to
install the support package for each one you use. For example, if you have a Webcam on a Windows
system and a Matrox camera, you would need to install the Image Acquisition Toolbox Support
Package for OS Generic Video Interface for the winvideo adaptor for the Webcam and the Image
Acquisition Toolbox Support Package for Matrox Hardware for the matrox adaptor.
To use the Image Acquisition Toolbox for acquisition from any generic video interface, you need to
install the Image Acquisition Toolbox Support Package for OS Generic Video Interface. This includes
any cameras that use the Windows Video (winvideo), Macintosh Video (macvideo), or Linux Video
(linuxvideo) adaptors. The correct files will be installed, depending on your operating system.
All video interface adaptors are available through the Hardware Support Packages. Using this
installation process, you download and install the necessary files on your host computer. To install the
support package:
1 On the MATLAB Home tab, in the Environment section, click Add-Ons > Get Hardware
Support Packages.
2 In the Add-On Explorer, scroll to the Hardware Support Packages section, and click show all
to find your support package.
3 You can refine the list by selecting Imaging/Cameras in the Refine by Hardware Type section
on the left side of the Explorer.
4 Select the support package for your adaptor. The table in “Image Acquisition Support Packages
for Hardware Adaptors” on page 4-2 shows the names of the support packages for each adaptor
type.
To manage or uninstall support packages, on the MATLAB Home tab, in the Environment section,
click Add-Ons > Manage Add-Ons.
To update installed support packages, on the MATLAB Home tab, in the Environment section, click
Add-Ons > Check for Updates > Hardware Support Packages.
For more information about using Add-On Explorer, see “Get and Manage Add-Ons”.
Install the MATLAB Support Package for USB Webcams
Webcam support is available through MATLAB Add-Ons. Using this installation process, you download
and install the necessary files on your host computer. To install the support package:
1 On the MATLAB Home tab, in the Environment section, click Add-Ons > Get Hardware
Support Packages.
2 In the Add-On Explorer, scroll to the Hardware Support Packages section, and click show all
to find your support package.
3 Refine the list by selecting Imaging/Cameras in the Refine by Hardware Type section on the
left side of the Explorer.
4 In the Imaging/Cameras list, select the MATLAB Support Package for USB Webcams.
5 Connecting to Hardware
To connect to an image acquisition device from within MATLAB, you must create a video input object.
This object represents the connection between MATLAB and the device. You can use object properties
to control various aspects of the acquisition. Before you can create the object, you need several
pieces of information about the device that you want to connect to.
Getting Hardware Information
To access an image acquisition device, the toolbox needs several pieces of information:
• The name of the adaptor the toolbox uses to connect to the image acquisition device
• The device ID of the device you want to access
• The video format of the video stream or, optionally, a device configuration file (camera file)
You use the imaqhwinfo function to retrieve this information, as described in the following
subsections.
Note When using imaqhwinfo to get information about a device, especially devices that use a Video
for Windows (VFW) driver, you might encounter dialog boxes reporting an assertion error. Make sure
that the software drivers are installed correctly and that the acquisition device is connected to the
computer.
To determine which adaptors are available on your system, call the imaqhwinfo function. The
imaqhwinfo function returns information about the toolbox software and lists the adaptors available
on the system in the InstalledAdaptors field. In this example, there are two adaptors available on
the system.
imaqhwinfo
ans =
Note While every adaptor supported by the Image Acquisition Toolbox software is installed with the
toolbox, imaqhwinfo lists only adaptors in the InstalledAdaptors field that are loadable. That is,
the device drivers required by the vendor are installed on the system. Note, however, that inclusion in
the InstalledAdaptors field does not necessarily mean that an adaptor is connected to a device.
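A quick way to check which adaptors are loadable on a given system is to inspect the InstalledAdaptors field directly; the contents vary from system to system.

```matlab
% Sketch: list the loadable adaptors on this system.
info = imaqhwinfo;
info.InstalledAdaptors   % cell array, e.g. {'matrox' 'winvideo'}; varies by system
```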
To find the device ID of a particular image acquisition device, call the imaqhwinfo function,
specifying the name of the adaptor as the only argument. When called with this syntax, imaqhwinfo
returns a structure containing information about all the devices available through the specified
adaptor.
In this example, the imaqhwinfo function returns information about all the devices available through
the Matrox adaptor.
info = imaqhwinfo('matrox');
info =
The fields in the structure returned by imaqhwinfo provide the following information.
Field Description
AdaptorDllName Character vector that identifies the name of the adaptor dynamic link
library (DLL)
AdaptorDllVersion Information about the version of the adaptor DLL
AdaptorName Name of the adaptor
DeviceIDs Cell array containing the device IDs of all the devices accessible
through this adaptor
DeviceInfo Array of device information structures. See “Getting More
Information About a Particular Device” on page 5-3 for more
information.
If an adaptor provides access to multiple devices, you might need to find out more information about
the devices before you can select a device ID. The DeviceInfo field is an array of device information
structures. Each device information structure contains detailed information about a particular device
available through the adaptor.
To view the information for a particular device, you can use the device ID as a reference into the
DeviceInfo structure array. Call imaqhwinfo again, this time specifying a device ID as an
argument.
dev_info = imaqhwinfo('matrox',1)
dev_info =
DefaultFormat: 'M_RS170'
DeviceFileSupported: 1
DeviceName: 'Orion'
DeviceID: 1
VideoInputConstructor: 'videoinput('matrox', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('matrox', 1)'
SupportedFormats: {1x10 cell}
The fields in the device information structure provide the following information about a device.
Field Description
DefaultFormat Character vector that identifies the video format used by the
device if none is specified at object creation time
DeviceFileSupported If set to 1, the device supports device configuration files;
otherwise 0. See “Using Device Configuration Files (Camera
Files)” on page 5-9 for more information.
DeviceName Descriptive character vector, assigned by the adaptor, that
identifies the device
DeviceID ID assigned to the device by the adaptor
VideoInputConstructor Default syntax you can use to create a video input object to
represent this device. See “Creating Image Acquisition Objects”
on page 5-6 for more information.
VideoDeviceConstructor Default syntax you can use to create a VideoDevice System
object to represent this device.
SupportedFormats Cell array of character vectors that identify the video formats
supported by the device. See “Determining Supported Video
Formats” on page 5-4 for more information.
Image acquisition devices typically support multiple video formats. You can specify the video format
when you create the video input object to represent the connection to the device. See “Creating
Image Acquisition Objects” on page 5-6 for more information.
Note Specifying the video format is optional; the toolbox uses one of the supported formats as the
default.
To determine which video formats an image acquisition device supports, look in the
SupportedFormats field of the DeviceInfo structure returned by the imaqhwinfo function. To
view the information for a particular device, call imaqhwinfo, specifying the device ID as an
argument.
dev_info = imaqhwinfo('matrox',1)
dev_info =
DefaultFormat: 'M_RS170'
DeviceFileSupported: 1
DeviceName: 'Orion'
DeviceID: 1
VideoInputConstructor: 'videoinput('matrox', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('matrox', 1)'
SupportedFormats: {1x10 cell}
The DefaultFormat field lists the default format selected by the toolbox. The SupportedFormats
field is a cell array containing character vectors that identify all the supported video formats. The
toolbox assigns names to the formats based on vendor-specific terminology. If you want to specify a
video format when you create an image acquisition object, you must use one of the character vectors
in this cell array. See “Creating Image Acquisition Objects” on page 5-6 for more information.
celldisp(dev_info.SupportedFormats)
ans{1} =
M_RS170
ans{2} =
M_RS170_VIA_RGB
ans{3} =
M_CCIR
ans{4} =
M_CCIR_VIA_RGB
ans{5} =
M_NTSC
ans{6} =
M_NTSC_RGB
ans{7} =
M_NTSC_YC
ans{8} =
M_PAL
ans{9} =
M_PAL_RGB
ans{10} =
M_PAL_YC
Types of Objects
After you get information about your image acquisition hardware, described in “Getting Hardware
Information” on page 5-2, you can establish a connection to the device by creating an image
acquisition object. The toolbox uses two types of image acquisition objects:
• Video input objects, which represent the connection between MATLAB and a video acquisition device
• Video source objects, which represent one or more physical data sources that are treated as a single entity
At any one time, only one of the video source objects, called the selected source, can be active. This is
the source used for acquisition. The toolbox selects one of the video source objects by default, but you
can change this selection. See “Specifying the Selected Video Source Object” on page 5-10 for more
information.
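As a sketch of how the selected source works, you can inspect and change it with the SelectedSourceName property and the getselectedsource function (this assumes a connected device; the adaptor name 'matrox' and source name 'CH1' are illustrative):

```matlab
% Sketch: inspect and change the selected source of a video input object.
% Assumes a device is connected; 'matrox' and 'CH1' are illustrative.
vid = videoinput('matrox', 1);
vid.SelectedSourceName          % display the name of the selected source
src = getselectedsource(vid);   % retrieve the active video source object
vid.SelectedSourceName = 'CH1'; % make a different source the selected one
```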
The following figure illustrates how a video input object acts as a container for one or more video
source objects.
Creating Image Acquisition Objects
For example, a Matrox frame grabber device can support eight physical connections, which Matrox
calls channels. These channels can be configured in various ways, depending upon the video format.
If you specify a monochrome video format, such as RS170, the toolbox creates eight video source
objects, one object for each of the eight channels on the device. If you specify a color video format,
such as NTSC RGB, the Matrox device uses three physical channels to represent one RGB connection,
where each physical connection provides the red data, green data, and blue data separately. With this
format, the toolbox only creates two video source objects for the same device.
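The channel-to-source mapping described above can be observed by comparing the Source arrays for the two formats. This sketch assumes a Matrox frame grabber with eight channels is installed:

```matlab
% Sketch (assumes a Matrox frame grabber with eight channels):
vidMono = videoinput('matrox', 1, 'M_RS170');      % monochrome format
numel(vidMono.Source)        % eight video source objects, one per channel

vidColor = videoinput('matrox', 1, 'M_NTSC_RGB');  % component color format
numel(vidColor.Source)       % two video source objects, three channels each
```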
This example creates a video input object to represent the connection to a Matrox image acquisition
device. The imaqhwinfo function includes the default videoinput syntax in the
VideoInputConstructor field of the device information structure.
vid = videoinput('matrox');
This syntax uses the default video format listed in the DefaultFormat field of the data returned by
imaqhwinfo. You can optionally specify the video format. See “Specifying the Video Format” on page
5-8 for more information.
To view a summary of the characteristics of the video input object you created, enter the variable
name you assigned to the object at the command prompt. For example, this is the summary for the
object vid.
vid
The items in this list correspond to the numbered elements in the object summary:
1 The title of the summary includes the name of the image acquisition device this object
represents. In the example, this is a Matrox Orion frame grabber.
2 The Acquisition Source section lists the name of all the video source objects associated with this
video input object. For many objects, this list might only contain one video source object. In the
example, the Matrox device supports eight physical input channels and, with the default video
format, the toolbox creates a video source object for each connection. For an example showing
the video source objects created with another video format, see “Specifying the Video Format” on
page 5-8.
3 The Acquisition Parameters section lists the values of key video input object properties. These
properties control various aspects of the acquisition, such as the number of frames to acquire
and the location where acquired frames are stored. For information about these properties, see
“Acquiring Image Data” on page 6-2.
4 The Trigger Parameters section lists the trigger type configured for the object and the number of
times the trigger is to be executed. Trigger execution initiates data logging, and the toolbox
supports several types of triggers. The example object is configured by default with an immediate
trigger. For more information about configuring triggers, see “Specifying the Trigger Type” on
page 6-8.
5 The Status section lists the current state of the object. A video input object can be in one of
several states:
In the example, the object describes its state as Waiting for START. This indicates it is not
running. For more information about the running state, see “Starting and Stopping a Video Input
Object” on page 5-25. This section also reports how many frames of data have been acquired
and how many frames are available in the buffer where the toolbox stores acquired frames. For
more information about these parameters, see “Controlling Logging Parameters” on page 6-19.
You can specify the video format either as a character vector or by using a device configuration
file. The following sections describe these options. If you do not specify a video format, the
videoinput function uses one of the video formats supported by the device. For Matrox and Data
Translation® devices, it chooses the RS170 video format. For Windows devices, it uses the first RGB
format in the list of supported formats or, if no RGB formats are supported, the device's default
format.
To specify a video format as a character vector, use the imaqhwinfo function to determine the list of
supported formats. The imaqhwinfo function returns this information in the SupportedFormats
field of the device information structure. See “Determining Supported Video Formats” on page 5-4 for
more information.
In this example, each of the character vectors is a video format supported by a Matrox device.
info = imaqhwinfo('matrox');
info.DeviceInfo.SupportedFormats
ans =
  Columns 1 through 4
    'M_RS170'    'M_RS170_VIA_RGB'    'M_CCIR'    'M_CCIR_VIA_RGB'
  Columns 5 through 8
    'M_NTSC'    'M_NTSC_RGB'    'M_NTSC_YC'    'M_PAL'
  Columns 9 through 10
    'M_PAL_RGB'    'M_PAL_YC'
For Matrox devices, the toolbox uses the RS170 format as the default. (To find out which is the
default video format, look in the DefaultFormat field of the device information structure returned
by the imaqhwinfo function.)
Note For Matrox devices, the M_NTSC_RGB format represents a component video format.
This example creates a video input object, specifying a color video format.
vid = videoinput('matrox', 1, 'M_NTSC_RGB');
For some devices, you can use a device configuration file, also known as a camera file, to specify the
video format as well as other configuration settings. Image acquisition device vendors supply these
device configuration files.
Note The toolbox ignores hardware trigger configurations included in a device configuration file. To
configure a hardware trigger, you must use the toolbox triggerconfig function. See “Using a
Hardware Trigger” on page 6-12 for more information.
For example, with Matrox frame grabbers, you can download digitizer configuration format (DCF)
files, in their terminology. These files configure their devices to support particular cameras.
Some image acquisition device vendors provide utility programs you can use to create a device
configuration file or edit an existing one. See your hardware vendor's documentation for more
information.
To determine if your image acquisition device supports device configuration files, check the value of
the DeviceFileSupported field of the device information structure returned by imaqhwinfo. See
“Getting More Information About a Particular Device” on page 5-3 for more information.
When you use a device configuration file, the value of the VideoFormat property of the video input
object is the name of the file, not a video format character vector.
This example creates a video input object, specifying a Matrox device configuration file as an
argument. (The file name mycam.dcf is a placeholder; substitute the name of your own device
configuration file.)
vid = videoinput('matrox', 1, 'mycam.dcf');
To illustrate, this example lists the video source objects associated with the video input object vid.
vid.Source
Display Summary for Video Source Object Array:
By default, the video input object makes the first video source object in the array the selected source.
To use another video source, change the value of the SelectedSourceName property.
This example changes the currently selected video source object from CH0 to CH1 by setting the value
of the SelectedSourceName property.
vid.SelectedSourceName = 'CH1';
Note The getselectedsource function returns the video source object that is currently selected at
the time the function is called. If you change the value of the SelectedSourceName property, you
must call the getselectedsource function again to retrieve the new selected video source object.
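For example, this sketch (assuming a device with sources named CH0 and CH1) shows why you must call getselectedsource again after changing the selection:

```matlab
src0 = getselectedsource(vid);   % selected source at this moment (CH0)
vid.SelectedSourceName = 'CH1';  % change the selection
src1 = getselectedsource(vid);   % call again to get the CH1 source object
src0.SourceName                  % still 'CH0'; src0 is not updated
```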
You can also call the imaqhwinfo function with a video input object as an argument to view basic
information about the connected hardware, as in this example for a webcam.
out = imaqhwinfo(vid)
out =
AdaptorName: 'winvideo'
DeviceName: 'IBM PC Camera'
MaxHeight: 96
MaxWidth: 128
NativeDataType: 'uint8'
TotalSources: 1
VendorDriverDescription: 'Windows WDM Compatible Driver'
VendorDriverVersion: 'DirectX 9.0'
The video input object properties control aspects of an acquisition that are common to all image
acquisition devices. For example, you can use the FramesPerTrigger property to specify the
amount of data you want to acquire.
The video source object properties control aspects of the acquisition associated with a particular
source. The set of properties supported by a video source object varies with each device. For
example, some image acquisition devices support properties that enable you to control the quality of
the image being produced, such as Brightness, Hue, and Saturation.
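For instance, on a device that exposes such image-quality properties, you might adjust them as in this sketch (the property names and valid ranges shown here vary by device; check propinfo on your hardware):

```matlab
src = getselectedsource(vid);
src.Brightness = 100;   % device-specific; check propinfo(src,'Brightness')
src.Saturation = 64;    % for the valid range on your hardware
```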
With either type of object, you can use the same toolbox functions to
• View a list of all the properties supported by the object, with their current values
• View the value of a particular property
• Get information about a property
• Set the value of a property
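The four tasks in the list above can be sketched as follows, assuming a video input object vid exists:

```matlab
get(vid)                          % list all properties with current values
vid.FramesPerTrigger              % view the value of one property
propinfo(vid, 'LoggingMode')      % get information about a property
vid.FramesPerTrigger = 50;        % set the value of a property
```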
Note Three video input object trigger properties require the use of a special configuration function.
For more information, see “Setting Trigger Properties” on page 5-16.
This example uses the get function to display a list of all the properties of the video input object vid.
“Viewing the Properties of a Video Source Object” on page 5-13 describes how to do this for video
source objects.
If you do not specify a return value, the get function displays the object properties in four categories:
General Settings, Callback Function Settings, Trigger Settings, and Acquisition Sources.
get(vid)
General Settings:
DeviceID = 1
DiskLogger = []
DiskLoggerFrameCount = 0
EventLog = [1x0 struct]
FrameGrabInterval = 1
FramesAcquired = 0
FramesAvailable = 0
FramesPerTrigger = 10
Logging = off
LoggingMode = memory
Name = M_RS170-matrox-1
NumberOfBands = 1
Previewing = off
ReturnedColorSpace = grayscale
ROIPosition = [0 0 640 480]
Running = off
Tag =
Timeout = 10
Type = videoinput
UserData = []
VideoFormat = M_RS170
VideoResolution = [640 480]
Trigger Settings:
InitialTriggerTime = [0 0 0 0 0 0]
TriggerCondition = none
TriggerFrameDelay = 0
TriggerRepeat = 0
TriggersExecuted = 0
TriggerSource = none
TriggerType = immediate
Acquisition Sources:
SelectedSourceName = CH0
Source = [1x8 videosource]
To view the properties supported by the video source object (or objects) associated with a video input
object, use the getselectedsource function to retrieve the currently selected video source object.
This example lists the properties supported by the video source object associated with the video input
object vid. Note the device-specific properties that are included.
Note The video source object for your device might not include device-specific properties. For
example, devices accessed with the 'winvideo' adaptor, such as webcams, that use a Video for
Windows (VFW) driver, may not provide a way for the toolbox to programmatically query for device
properties. Use the configuration tools provided by the manufacturer to configure these devices.
get(getselectedsource(vid))
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = CH0
Tag =
Type = videosource
This example illustrates how to access a property by referencing the object as if it were a MATLAB
structure using dot notation.
vid.Previewing
ans =
off
The propinfo function returns a structure that contains information about the property such as its
data type, default value, and a list of all possible values, if the property supports such a list. This
example uses propinfo to get information about the LoggingMode property.
propinfo(vid,'LoggingMode')
ans =
The imaqhelp function returns reference information about the property with a complete
description. This example uses imaqhelp to get information about the LoggingMode property.
imaqhelp(vid,'LoggingMode')
Note Because some properties are read-only, only a subset of all video input and video source
properties can be set.
This example sets the value of a property by assigning the value to the object as if it were a MATLAB
structure, and then displays the new value.
vid.LoggingMode = 'disk';
vid.LoggingMode
ans =
disk
To view a list of all the properties of a video input object or video source object that can be set, use
the set function.
set(vid)
To ensure that you specify only valid combinations for the values of these properties, you must use
two functions:
• The triggerinfo function returns all the valid combinations of values for the specified video
input object.
• The triggerconfig function sets the values of these properties.
For more information, see “Specifying Trigger Type, Source, and Condition” on page 6-6.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Using Tab Completion for Functions
v = videoinput('winvideo',1,
When you press Tab after the Device ID (1 in this example), the list of formats displays, as shown
here.
If you pressed Tab just after the first parentheses, a list of available adaptors would appear. The
format for the object constructor function is:
v = videoinput('adaptorName',deviceID,'VideoFormat')
When you press Tab where a field should appear, you get the list of options for that field.
You can also get the values for property-value pairs, which can be used on the videoinput function
after the first three required fields. For example,
v = videoinput('winvideo',1,'RGB1024x768','LoggingMode',
Press Tab after typing LoggingMode to get the possible values for that property, which are
'memory', 'disk', and 'disk&memory'.
Many of the other toolbox functions also have tab completion. For example, when using the getdata
function you can specify the data type using tab completion.
data = getdata(obj,10,
Press Tab after typing the number of frames (10 in this example) to get the possible values for the
data types, such as 'uint8' or 'double', since data type is the next input getdata takes after
number of frames.
Use Advanced Property Support in the GigE Vision and GenICam GenTL Interfaces
In this section...
“Advanced Property Support” on page 5-18
“Change Properties While the Acquisition Is Running” on page 5-18
“Dynamic Accessibility and Readability” on page 5-19
“Dynamic Constraints” on page 5-20
“Grouped Selector Properties” on page 5-21
With advanced property support, you can change device-specific properties while the acquisition is
running, and the change takes effect in the remaining frames. Previously, changing the exposure time
after starting the acquisition resulted in an error.
In the Image Acquisition Tool, on the Device Properties tab, you can see that properties are
dynamically accessible and settable. While the acquisition is running, you can change a property on
the tab, and it will take effect in the remaining frames of the acquisition.
The propinfo function includes a field called Accessible, which is a read-only boolean property. A
disp of a property that has Accessible set to 0 results in “Currently not accessible.” For example,
if the ReverseY property is currently accessible, calling
propinfo(src,'ReverseY')
returns a structure whose Accessible field is 1:
Accessible: 1
The same is true for the ReadOnly property. Readability is now dynamic and the propinfo function
shows a ReadOnly property as either 'notCurrently', if it is writable, or 'currently', if it is
read-only. The example in the Dynamic Constraints section demonstrates the dynamic use of this
property.
You can view the source properties to see if any properties are currently not accessible. In this
example, for the part of the disp shown below, AcquisitionFrameCount and BalanceRatioRaw
are currently not accessible.
src =
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = input1
Tag = [0x0 character vector]
Type = videosource
In the Image Acquisition Tool, on the Device Properties tab, you can see that properties are
dynamically accessible. In the previous example, if the properties AcquisitionFrameCount and
BalanceRatioRaw became inaccessible, they would appear as greyed out on the tab.
Dynamic Constraints
If you change a property that results in a change of possible values, or constraint change, for another
property, the other property’s constraint values are updated dynamically. Consider a camera that has
an automatic sharpness setting that you can set to Continuous to automatically adjust the sharpness
or set to Off. The automatic sharpness property then affects the related Sharpness property. In this
example, when SharpnessAuto is set to Continuous, a disp of the Sharpness property shows the
constrained values and that it is not able to be set.
>> propinfo(src, 'Sharpness')
ans =
Type: 'integer'
Constraint: 'bounded'
ConstraintValue: [1532 1532]
DefaultValue: 1532
ReadOnly: 'currently'
DeviceSpecific: 1
Accessible: 1
If you then set the SharpnessAuto property to Off, a second disp of the Sharpness property shows
that the constrained values have dynamically updated, and that it is now able to be set (no longer
read-only).
>> src.SharpnessAuto = 'Off'
>> propinfo(src, 'Sharpness')
ans =
Type: 'integer'
Constraint: 'bounded'
ConstraintValue: [0 4095]
DefaultValue: 1532
ReadOnly: 'notCurrently'
DeviceSpecific: 1
Accessible: 1
In the Image Acquisition Tool, on the Device Properties tab, you can see that constraint properties
are dynamically updated. For example, if you turn off a property that is for an automatic feature on
the camera, such as automatic sharpness or auto-focus, in the Device Properties tab, you can then
immediately change the related primary property, such as sharpness or focus, since it is no longer
constrained. The constrained values are updated dynamically.
For example, in previous versions of the toolbox, for a GainSelector with possible values of Red,
Blue, and Green and a Gain property, the gain properties displayed as follows:
Compatibility Considerations
If you have any MATLAB code written prior to R2016a which references the previous, separate
properties, you need to change the code to reflect them as values on the selector property. Setting
and getting properties that belong to a selector using the previous composite-name style is no longer
supported. For example, RedGain no longer works. Instead use GainSelector set to Red, as shown
in the example.
To set a property value, first set the selector value, then set the property value:
src.GainSelector = 'Green';
src.Gain = 0.1;
Use Advanced Property Support with Point Grey Camera
Create the videoinput object using the Point Grey adaptor and get the video source.
vid = videoinput("pointgrey");
src = vid.Source;
vid.FramesPerTrigger = Inf;
start(vid)
View the Exposure property information to determine whether the property can be changed while
acquisition is running.
propinfo(src,"Exposure")
ans =
Type: 'double'
Constraint: 'bounded'
ConstraintValue: [-7.5850 2.4136]
DefaultValue: -0.0614
ReadOnly: 'never'
DeviceSpecific: 1
Accessible: 1
Since ReadOnly is 'never', you can change this property during acquisition. The current value is
-0.0614 and the maximum and minimum constraints are [-7.5850 2.4136].
src.Exposure = 2;
Previously, changing the exposure after starting the acquisition resulted in an error.
stop(vid)
Note This workflow is not supported in the Image Acquisition Tool. While the acquisition is running,
you cannot change a property on the Device Properties tab.
Create the videoinput object using the Point Grey adaptor and get the video source.
vid = videoinput("pointgrey");
src = vid.Source;
View the current region of interest.
vid.ROIPosition
ans =
0 0 612 512
View the FrameRate property information.
propinfo(src,"FrameRate")
ans =
Type: 'double'
Constraint: 'bounded'
ConstraintValue: [1 29]
DefaultValue: 2.5000
ReadOnly: 'never'
DeviceSpecific: 1
Accessible: 1
The minimum and maximum values for this property are [1 29].
Set the ROIPosition property to [0 0 320 240] and view the FrameRate property again to see
the updated values.
vid.ROIPosition = [0 0 320 240];
propinfo(src,"FrameRate")
ans =
Type: 'double'
Constraint: 'bounded'
ConstraintValue: [1 34]
DefaultValue: 2.5000
ReadOnly: 'never'
DeviceSpecific: 1
Accessible: 1
The minimum and maximum values are now [1 34] because the smaller region of interest allows a
higher maximum frame rate.
Note In the Image Acquisition Tool, on the Device Properties tab, you can see that frame rate
property constraints are dynamically updated when you change the region of interest on the Region
of Interest tab.
See Also
videoinput
More About
• “Use Advanced Property Support in the GigE Vision and GenICam GenTL Interfaces” on page 5-
18
Starting and Stopping a Video Input Object
start(vid);
When you start an object, you reserve the device for your exclusive use and lock the configuration.
Thus, certain properties become read only while running.
An image acquisition object stops running when any of the following conditions is met:
• You call the stop function.
• The object acquires the requested number of frames.
• A run-time error occurs.
When an object is started, the toolbox sets the object's Running property to 'on'. When an object is
not running, the toolbox sets the object's Running property to 'off'; this state is called stopped.
The following figure illustrates how an object moves from a running to a stopped state.
1 Create an image acquisition object — This example creates a video input object for a webcam
image acquisition device. To run this example on your system, use the imaqhwinfo function to
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('winvideo',1);
2 Verify that the image is in a stopped state — Use the isrunning function to determine the
current state of the video input object.
isrunning(vid)
ans =
0
3 Configure properties — To illustrate object states, set the video input object's TriggerType
property to 'Manual'. To set the value of certain trigger properties, including the TriggerType
property, you must use the triggerconfig function. See “Setting the Values of Trigger
Properties” on page 6-6 for more information.
triggerconfig(vid, 'Manual')
Configure an acquisition that takes several seconds so that you can see the video input in logging
state.
vid.FramesPerTrigger = 100;
4 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
5 Verify that the image is running but not logging — Use the isrunning and islogging
functions to determine the current state of the video input object. With manual triggers, the
video input object is in running state after being started but does not start logging data until a
trigger executes.
isrunning(vid)
ans =
1
islogging(vid)
ans =
0
6 Execute the manual trigger — Call the trigger function to execute the manual trigger.
trigger(vid)
While the acquisition is underway, check the logging state of the video input object.
islogging(vid)
ans =
1
After it acquires the specified number of frames, the video input object stops running.
isrunning(vid)
ans =
0
7 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Deleting Image Acquisition Objects
Note When you delete a video input object, all the video source objects associated with the video
input object are also deleted.
To illustrate, this example creates several video input objects and then deletes them.
1 Create several image acquisition objects — This example creates several video input objects
for a single webcam image acquisition device, specifying several different video formats. To run
this example on your system, use the imaqhwinfo function to get the object constructor for your
image acquisition device and substitute that syntax for the following code.
vid = videoinput('winvideo',1);
vid2 = videoinput('winvideo',1,'RGB24_176x144');
vid3 = videoinput('winvideo',1,'YV12_352x288');
2 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
You can delete image acquisition objects one at a time, using the delete function.
delete(vid)
You can also delete all the video input objects that currently exist in memory in one call to
delete by using the imaqfind function. The imaqfind function returns an array of all the
video input objects in memory.
imaqfind
Nest a call to the imaqfind function within the delete function to delete all these objects from
memory.
delete(imaqfind)
Note that the variables associated with the objects remain in the workspace.
whos
Name Size Bytes Class
These variables, however, no longer refer to valid objects, as the isvalid function reports.
isvalid(vid)
ans =
0
To remove these variables from the workspace, use the clear command.
To save an image acquisition object to a MAT-file, use the save command, specifying the variable to
save. For example, this code saves vid to the MAT-file myvid.mat.
save myvid vid
When you save a video input object, all the video source objects associated with the video input
object are also saved.
To load an image acquisition object that was saved to a MAT-file into the MATLAB workspace, use the
load command. For example, to load vid from MAT-file myvid.mat, use
load myvid
Note The values of read-only properties are not saved. When you load an image acquisition object
into the MATLAB workspace, read-only properties revert to their default values. To determine if a
property is read only, use the propinfo function or read the property reference page.
Image Acquisition Toolbox Properties
• BayerSensorAlignment
• DeviceID
• DiskLogger
• DiskLoggerFrameCount
• ErrorFcn
• EventLog
• FrameGrabInterval
• FramesAcquired
• FramesAcquiredFcn
• FramesAcquiredFcnCount
• FramesAvailable
• FramesPerTrigger
• IgnoreDroppedFrames
• InitialTriggerTime
• Logging
• LoggingMode
• Name
• NumberOfBands
• NumDroppedFrames
• Parent
• PreviewFullBitDepth
• Previewing
• ReturnedColorSpace
• ROIPosition
• Running
• Selected
• SelectedSourceName
• Source
• SourceName
• StartFcn
• StopFcn
• Tag
• Timeout
• TimerFcn
• TimerPeriod
• TriggerCondition
• TriggerFcn
• TriggerFrameDelay
• TriggerRepeat
• TriggersExecuted
• TriggerSource
• TriggerType
• Type
• UserData
• VideoFormat
• VideoResolution
Acquiring Image Data
The following topics describe how to configure and use the various triggering options supported by
the Image Acquisition Toolbox software and control other acquisition parameters.
Data Logging
In this section...
“Overview” on page 6-3
“Trigger Properties” on page 6-4
Overview
When a trigger occurs, the toolbox sets the object's Logging property to 'on' and starts storing the
acquired frames in a buffer in memory, a disk file, or both. When the acquisition stops, the toolbox
sets the object's Logging property to 'off'.
The following figure illustrates when an object moves into a logging state and the relation between
running and logging states.
Note After Logging is set to 'off', it is possible that the object might still be logging data to disk.
To determine when disk logging is complete, check the value of the DiskLoggerFrameCount
property. For more information, see “Logging Image Data to Disk” on page 6-32.
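One way to wait until disk logging has finished is to compare the two frame counts, as in this sketch (assuming a video input object vid configured with LoggingMode set to 'disk'):

```matlab
% Sketch: block until every acquired frame has been written to disk.
while (vid.FramesAcquired ~= vid.DiskLoggerFrameCount)
    pause(0.1);   % disk writing can lag behind acquisition
end
```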
The following figure illustrates a group of frames being acquired from the video stream and being
logged to memory and disk.
Trigger Properties
The video input object supports several properties that you can use to configure aspects of trigger
execution. Some of these properties return information about triggers. For example, to find out when
the first trigger occurred, look at the value of the InitialTriggerTime property. Other properties
enable you to control trigger behavior. For example, you use the TriggerRepeat property to specify
how many additional times an object should execute a trigger.
The following table provides a brief description of all the trigger-related properties supported by the
video input object. For information about how to set these properties, see “Setting the Values of
Trigger Properties” on page 6-6.
InitialTriggerTime: Reports the absolute time when the first trigger executed.
TriggerCondition: Specifies the condition that must be met for a trigger to be executed. This property is always set to 'none' for immediate and manual triggers.
TriggerFcn: Specifies the callback function to execute when a trigger occurs. For more information about callbacks, see “Using Events and Callbacks” on page 8-2.
TriggerFrameDelay: Specifies the number of frames to skip before logging data to memory, disk, or both. For more information, see “Delaying Data Logging After a Trigger” on page 6-24.
TriggerRepeat: Specifies the number of additional times to execute a trigger. If the value of TriggerRepeat is 0 (zero), the trigger executes but is not repeated any additional times. For more information, see “Specifying Multiple Triggers” on page 6-25.
TriggersExecuted: Reports the number of triggers that have been executed.
TriggerSource: Specifies the source to monitor for a trigger condition to be met. This property is always set to 'none' for immediate and manual triggers.
TriggerType: Specifies the type of trigger: 'immediate', 'manual', or 'hardware'. Use the triggerinfo function to determine whether your image acquisition device supports hardware triggers.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Setting the Values of Trigger Properties
In this section...
“About Trigger Properties” on page 6-6
“Specifying Trigger Type, Source, and Condition” on page 6-6
You can set the value of most trigger properties in the same way that you set other object
properties, using dot notation. For example, this code sets the TriggerRepeat property.
vid.TriggerRepeat = Inf
For more information, see “Configuring Image Acquisition Object Properties” on page 5-12.
Some trigger properties, however, are interrelated and require the use of the triggerconfig
function to set their values. These properties are the TriggerType, TriggerCondition, and
TriggerSource properties. For example, some TriggerCondition values are only valid when the
value of the TriggerType property is 'hardware'.
1 Determine the valid configurations of these properties by calling the triggerinfo function.
2 Set the values of these properties by calling the triggerconfig function.
For an example of using these functions, see “Using a Hardware Trigger” on page 6-12.
To find all the valid configurations of the TriggerType, TriggerSource, and TriggerCondition
properties, use the triggerinfo function, specifying a video input object as an argument.
config = triggerinfo(vid);
This function returns an array of structures, one structure for each valid combination of property
values. Each structure in the array is made up of three fields that contain the values of each of these
trigger properties. For example, the structure returned for an immediate trigger always has these
values:
TriggerType: 'immediate'
TriggerCondition: 'none'
TriggerSource: 'none'
A device that supports hardware triggers might return the following structure.
TriggerType: 'hardware'
TriggerCondition: 'risingEdge'
TriggerSource: 'TTL'
Note The character vectors used as the values of the TriggerCondition and TriggerSource
properties are device specific. Your device, if it supports hardware triggers, might support different
condition and source values.
To set the values of the TriggerType, TriggerSource, and TriggerCondition properties, you
must use the triggerconfig function. You specify the value of the property as an argument to the
function.
For example, this code sets the values of these properties for a hardware trigger.
triggerconfig(vid,'hardware','risingEdge','TTL')
If you are specifying a manual trigger, you only need to specify the trigger type value as an argument.
triggerconfig(vid,'manual')
You can also pass one of the structures returned by the triggerinfo function to the
triggerconfig function and set all three properties at once.
triggerconfig(vid, config(1))
Specifying the Trigger Type
Immediate Trigger
1 Create an image acquisition object — This example creates a video input object for a Matrox
image acquisition device. To run this example on your system, use the imaqhwinfo function to
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('matrox',1);
At this point, the object has not acquired any frames, as the FramesAcquired property shows.
vid.FramesAcquired
ans =
0
2 Configure properties — To use an immediate trigger, you do not have to configure the
TriggerType property because 'immediate' is the default trigger type. You can verify this by
using the triggerconfig function to view the current trigger configuration or by viewing the
video input object's properties.
triggerconfig(vid)
ans =
TriggerType: 'immediate'
TriggerCondition: 'none'
TriggerSource: 'none'
This example sets the value of the FramesPerTrigger property to 5. (The default is 10 frames
per trigger.)
vid.FramesPerTrigger = 5
3 Start the image acquisition object — Call the start function to start the image acquisition
object. By default, the object executes an immediate trigger and acquires five frames of data,
logging the data to a memory buffer. After logging the specified number of frames, the object
stops running.
start(vid)
To verify that the object acquired data, view the value of the FramesAcquired property. The
object updates the value of this property as it acquires data.
vid.FramesAcquired
ans =
5
To execute another immediate trigger, you must restart the object. Note, however, that this
deletes the data acquired by the first trigger. To execute multiple immediate triggers, specify a
value for the TriggerRepeat property. See “Specifying Multiple Triggers” on page 6-25 for
more information.
4 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Manual Trigger
1 Create an image acquisition object — This example creates a video input object for a webcam
image acquisition device. To run this example on your system, use the imaqhwinfo function to
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('winvideo',1);
Initially, the object has not acquired any frames.
vid.FramesAcquired
ans =
0
2 Configure properties — Set the video input object's TriggerType property to 'manual'. To
set the values of certain trigger properties, including the TriggerType property, you must use
the triggerconfig function. See “Setting the Values of Trigger Properties” on page 6-6 for
more information.
triggerconfig(vid, 'manual')
This example also sets the value of the FramesPerTrigger property to 5. (The default is 10
frames per trigger.)
vid.FramesPerTrigger = 5
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid);
The video object is now running but not logging. With manual triggers, the video stream begins
when the object starts but no frames are acquired until the trigger executes.
isrunning(vid)
ans =
1
islogging(vid)
ans =
0
Verify that the object has still not acquired any frames.
vid.FramesAcquired
ans =
0
4 Execute the manual trigger — Call the trigger function to execute the manual trigger.
trigger(vid)
The object initiates the acquisition of five frames. Check the FramesAcquired property again to
verify that five frames have been acquired.
vid.FramesAcquired
ans =
5
After it acquires the specified number of frames, the video input object stops running.
isrunning(vid)
ans =
0
To execute another manual trigger, you must first restart the video input object. Note that this
deletes the frames acquired by the first trigger. To execute multiple manual triggers, specify a
value for the TriggerRepeat property. See “Specifying Multiple Triggers” on page 6-25 for
more information.
5 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Note Trigger sources and the conditions that control hardware triggers are device specific. Use the
triggerinfo function to determine whether your image acquisition device supports hardware
triggers and, if it does, which conditions you can configure. Refer to the documentation that came
with your device for more detailed information about its hardware triggering capabilities.
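As the note suggests, you can check for hardware trigger support programmatically. A minimal sketch, assuming a device created with the 'winvideo' adaptor at ID 1 (substitute your own constructor):

```matlab
% Create a video input object; substitute your adaptor and device ID.
vid = videoinput('winvideo', 1);

% triggerinfo returns one struct per valid trigger configuration.
config = triggerinfo(vid);

% List the distinct trigger types; hardware triggers are supported
% if 'hardware' appears in this list.
unique({config.TriggerType})

delete(vid)
clear vid
```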
Hardware Trigger
Alternatively, you can set these values by passing one of the structures returned by the
triggerinfo function to the triggerconfig function.
configs = triggerinfo(vid);
triggerconfig(vid,configs(3));
This example also sets the value of the FramesPerTrigger property to 5. (The default is 10
frames per trigger.)
vid.FramesPerTrigger = 5
4 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
isrunning(vid)
ans =
1
islogging(vid)
ans =
0
The hardware begins monitoring the trigger source for the specified condition. When the
condition is met, the hardware executes a trigger and begins providing image frames to the
object. The object acquires the number of frames specified by the FramesPerTrigger property.
View the value of the FramesAcquired property to see how much data was acquired. The object
updates the value of this property as it acquires data.
vid.FramesAcquired
ans =
5
After it executes the trigger and acquires the specified number of frames, the video input object
stops running.
isrunning(vid)
ans =
To execute another hardware trigger, you must first restart the video input object. Note that this
deletes the frames acquired by the first trigger. To execute multiple triggers, specify a value for
the TriggerRepeat property. See “Specifying Multiple Triggers” on page 6-25 for more
information.
5 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Control trigger functionality using the triggerinfo and triggerconfig functions and the
triggersource property. Before R2010a, one triggersource was available, externalTrigger.
Selecting externalTrigger configures the camera to use trigger mode 0 with trigger source 0.
The triggersource property is now composed of the trigger type (internal or external), the trigger
source (0, 1, 2, etc.), and the mode number (0 through 5, 14 and 15). The following table summarizes
the options.
For example, the second triggersource for trigger mode 1 is called externalTrigger1-mode1.
To use mode 3, the triggersource is internalTrigger-mode3.
Note Toolbox versions before R2010a supported DCAM trigger mode 0 with the first available
triggersource as externalTrigger. The existing externalTrigger triggersource is
maintained to prevent backward compatibility issues. In addition, to preserve symmetry with the
new functionality, the synonymous triggersource externalTrigger0-mode0 is also supported.
The new trigger modes do not work in toolbox versions before R2010a.
Usage Notes
If a trigger mode has multiple trigger sources (modes 0, 1, 2, 4, and 5), then the triggersource
name includes a digit indicating the corresponding camera source, even if only one camera source is
available. For example, if the camera has only a single triggersource available, the toolbox reports
the triggersource name as externalTrigger0-modeX. If the trigger mode does not have multiple
sources (mode 3), then no source digit appears in the name (that is, internalTrigger-mode3
rather than internalTrigger0-mode3).
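For example, you select one of these DCAM triggersources by passing it to triggerconfig as the trigger source. The device ID, condition, and source names below are assumptions for illustration; call triggerinfo to see what your camera actually reports:

```matlab
% Create a video input object for a DCAM camera; substitute your device ID.
vid = videoinput('dcam', 1);

% List the valid trigger configurations the camera reports.
triggerinfo(vid)

% Select trigger mode 1 on external trigger source 1, assuming the
% camera reports an 'externalTrigger1-mode1' source with a
% 'risingEdge' condition.
triggerconfig(vid, 'hardware', 'risingEdge', 'externalTrigger1-mode1')
```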
The DCAM adaptor includes a TriggerParameter property that is passed to the camera when you
set trigger configurations. The TriggerParameter property is validated when you call START after
selecting a hardware trigger mode.
If the selected trigger mode prohibits multiple frames per trigger, then an error appears when you
call START without setting FramesPerTrigger to 1.
If the camera supports only trigger mode 0 with source 0, then the original functionality of having
only the externalTrigger triggersource is supported.
Trigger modes 14 and 15 are vendor-specific and are assumed to be external triggers and have no
restrictions on any settings. You must validate any settings you use.
Trigger Mode 0
This is the only trigger mode supported before R2010a. When a trigger is received, a frame is
acquired. You can acquire multiple frames per trigger by switching the camera from hardware
triggered mode to free running mode when a triggered frame is acquired.
No parameter is required.
The camera starts the integration of the incoming light from the external trigger input falling edge.
Trigger Mode 1
In this mode, the duration of the trigger signal is used to control the integration time of the incoming
light. This mode is used to synchronize the exposure time of the camera to an external event.
No parameter is required.
The camera starts the integration of the incoming light from the external trigger input falling edge.
Integration time is equal to the low state time of the external trigger input if the trigger condition is
fallingEdge; otherwise it is equal to the high state time.
Trigger Mode 2
This mode is similar to mode 1, except that the duration of the trigger signal does not govern the
integration time; instead, the number of trigger signals received does. Integration commences upon
the start of the first trigger signal and continues until the start of the Nth trigger signal.
The camera starts the integration of the incoming light from the first external trigger input falling
edge. At the Nth external trigger input falling edge, integration stops. Parameter N is required and
must be 2 or greater (N >= 2).
Trigger Mode 3
Use this internal trigger mode to achieve a lower frame rate. When the trigger generates internally, a
frame is acquired and returned. A new frame is not acquired for N x Tf, where N is the parameter and
Tf is the cycle time of the fastest frame rate supported by the camera.
This is an internal trigger mode. The camera issues the trigger internally, and the cycle time is N
times the cycle time of the fastest frame rate. Integration time of incoming light is described in the
shutter register. Parameter N is required and must be 1 or greater (N >= 1).
Trigger Mode 4
This mode is the “multiple shutter preset mode.” It is similar to mode 1, but the exposure time is
governed by the shutter property. On each trigger, the shutter property defines the exposure
duration. When N triggers are received, a frame is acquired.
The camera starts integration of incoming light from the first external trigger input falling edge and
exposes incoming light for the shutter time. This sequence repeats until the Nth external trigger
input falling edge, when integration finishes. Parameter N is required and must be 1 or greater
(N >= 1).
Trigger Mode 5
This mode is the “multiple shutter pulse width mode.” It is a combination of modes 1 and 2. The
exposure time is governed by the duration of the trigger signal, and a number of trigger signals can
be integrated into a single readout. If the trigger parameter is 1, this mode is degenerate with mode
1.
The camera starts integration of incoming light from the first external trigger input falling edge and
exposes incoming light until the trigger is inactive. This sequence repeats until the Nth external
trigger input falling edge, when integration finishes. Parameter N is required and must be 1 or
greater (N >= 1).
Trigger Mode 14
This is a vendor-specific mode and no information is available. Consult the documentation for your
camera.
Trigger Mode 15
This is a vendor-specific mode and no information is available. Consult the documentation for your
camera.
Controlling Logging Parameters
Data Logging
The following subsections describe how to control various aspects of data logging.
The default value for the LoggingMode property is 'memory'. In this mode, the toolbox logs data to
a buffer in memory. If you want to bring image data into the MATLAB workspace, you must log frames
to memory. The functions provided by the toolbox to move data into the workspace all work with the
memory buffer. For more information, see “Bringing Image Data into the MATLAB Workspace” on
page 7-3.
You can also log data to a disk file by setting the LoggingMode property to 'disk' or to
'disk&memory'. By logging frames to a disk file, you create a permanent record of the frames you
acquire. For example, this code sets the value of the LoggingMode property of the video input object
vid to 'disk&memory'.
vid.LoggingMode = 'disk&memory';
Because the toolbox stores the image frames in a standard container format, such as Audio Video
Interleaved (AVI), you can view the logged frames in a standard media player. For more information,
see “Logging Image Data to Disk” on page 6-32.
You specify the desired size of your acquisition as the value of the video input object
FramesPerTrigger property. By default, the value of this property is 10 frames per trigger, but you
can specify any value. The following figure illustrates an acquisition using the default value for the
FramesPerTrigger property. To see an example of an acquisition, see “Acquiring 100 Frames” on
page 6-21.
Note While you can specify any size acquisition, the number of frames you can acquire is limited by
the amount of memory you have available on your system for image storage. A large acquisition can
potentially fill all available system memory. For large acquisitions, you might want to remove frames
from the buffer as they are logged. For more information, see “Moving Multiple Frames into the
Workspace” on page 7-3. To learn how to empty the memory buffer, see “Freeing Memory” on page
6-30.
Although FramesPerTrigger specifies the number of frames to acquire, these frames do not have to
be captured contiguously from the video stream. You can specify that the toolbox skip a certain
number of frames between frames it acquires. To do this, set the value of the FrameGrabInterval
property.
Note The FrameGrabInterval property controls the interval at which the toolbox acquires frames
from the video stream (measured in frames). This property does not control the rate at which frames
are provided by the device, otherwise known as the frame rate.
The following figure illustrates how the FrameGrabInterval property affects an acquisition.
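The interaction between these two properties can also be sketched in code. This sketch assumes a 'winvideo' device at ID 1, as in the other examples; substitute the constructor for your hardware:

```matlab
% Create a video input object; substitute your device constructor.
vid = videoinput('winvideo', 1);

% Log 10 frames per trigger, keeping only every 3rd frame from the
% video stream, so the acquisition spans roughly 30 stream frames.
vid.FramesPerTrigger = 10;
vid.FrameGrabInterval = 3;

start(vid)
wait(vid)                  % block until logging finishes

frames = getdata(vid, 10); % the 10 logged frames

delete(vid)
clear vid
```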
Acquiring 100 Frames
This example illustrates how you can specify the amount of data to be acquired and determine how
much data has been acquired. (For an example of configuring a time-based acquisition, see “Acquiring
10 Seconds of Image Data” on page 7-4.)
1 Create an image acquisition object — This example creates a video input object for a
Windows image acquisition device. To run this example on your system, use the imaqhwinfo
function to get the object constructor for your image acquisition device and substitute that
syntax for the following code.
vid = videoinput('winvideo',1);
2 Configure properties — Specify the amount of data you want to acquire as the number of
frames per trigger. By default, a video input object acquires 10 frames per trigger. For this
example, set the value of this property to 100.
vid.FramesPerTrigger = 100
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The object executes an immediate trigger and begins acquiring frames of data. To verify that the
video input object is logging data, use the islogging function.
islogging(vid)
ans =
1
The start function returns control to the command line immediately but the object continues
logging the data to the memory buffer. After acquiring the specified number of frames, the object
stops running and logging.
4 Check how many frames have been acquired — To verify that the specified number of frames
has been acquired, check the value of the FramesAcquired property. Note that the object
continuously updates the value of the FramesAcquired property as the acquisition progresses.
If you view the value of this property several times during an acquisition, you can see the number
of frames acquired increase until logging stops.
vid.FramesAcquired
ans =
100
5 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Note The FramesAvailable property indicates the number of frames in the memory buffer, not in
the disk log, if LoggingMode is configured to 'disk' or 'disk&memory'. Because it takes longer to
write frames to a disk file than to memory, the number of frames stored in the disk log might lag
behind those stored in the memory buffer. To see how many frames are available in the disk log, look
at the value of the DiskLoggerFrameCount property. See “Logging Image Data to Disk” on page
6-32 for more information.
This example illustrates the distinction between the FramesAcquired and the FramesAvailable
properties:
1 Create an image acquisition object — This example creates a video input object for a
Windows image acquisition device. To run this example on your system, use the imaqhwinfo
function to get the object constructor for your image acquisition device and substitute that
syntax for the following code.
vid = videoinput('winvideo',1);
2 Configure properties — For this example, configure an acquisition of 15 frames.
vid.FramesPerTrigger = 15
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The object executes an immediate trigger and begins acquiring frames of data. The start
function returns control to the command line immediately but the object continues logging the
data to the memory buffer. After logging the specified number of frames, the object stops
running.
4 Check how many frames have been acquired — To determine how many frames the object
has acquired and how many frames are available in the memory buffer, check the value of the
FramesAcquired and FramesAvailable properties.
vid.FramesAcquired
ans =
15
vid.FramesAvailable
ans =
15
The object updates the value of these properties continuously as it acquires frames of data. The
following figure illustrates how the object puts acquired frames in the memory buffer as the
acquisition progresses.
To remove frames from the memory buffer, call the getdata function, specifying the number of
frames to retrieve. For more information about using getdata, see “Bringing Image Data into
the MATLAB Workspace” on page 7-3.
data = getdata(vid,5);
After you execute the getdata function, check the values of the FramesAcquired and
FramesAvailable properties again. Notice that the FramesAcquired property remains
unchanged but the object has decremented the value of the FramesAvailable property by the
number of frames removed from the memory buffer.
vid.FramesAcquired
ans =
15
vid.FramesAvailable
ans =
10
The following figure illustrates the contents of the memory buffer after frames are removed.
5 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Delaying Data Logging After a Trigger
Some image acquisition devices require a period of time after a trigger executes before the frames
they provide are useful. To account for this characteristic of your setup, you can specify that the
toolbox skip a specified number of frames after a trigger executes. You use the TriggerFrameDelay
property to specify the number of frames you want to skip before logging begins.
For example, to specify a delay of five frames before data logging begins after a trigger executes, you
would set the value of the TriggerFrameDelay property to 5. The number of frames captured is
defined by the FramesPerTrigger property and is unaffected by the delay.
vid.TriggerFrameDelay = 5;
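A short sketch of the delay in context, assuming a 'winvideo' device (substitute your own constructor):

```matlab
% Create a video input object; substitute your device constructor.
vid = videoinput('winvideo', 1);

vid.FramesPerTrigger = 5;
vid.TriggerFrameDelay = 5;  % skip 5 frames after the trigger fires

start(vid)
wait(vid)                   % block until logging finishes

% The acquisition still yields exactly FramesPerTrigger frames;
% the delay only shifts where logging begins in the stream.
frames = getdata(vid, 5);

delete(vid)
clear vid
```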
Specifying Multiple Triggers
When it acquires the specified number of frames, the video input object stops running. To execute
another trigger, you must restart the video input object. Restarting an object causes it to delete all
the data it has stored in the memory buffer from the previous trigger. To execute multiple triggers,
retaining the data from each trigger, you must specify a value for the TriggerRepeat property.
Note that the TriggerRepeat property specifies the number of additional times a trigger executes.
For example, to execute a trigger three times, you would set the value of the TriggerRepeat
property to 2. In the following, vid is a video input object created with the videoinput function.
vid.TriggerRepeat = 2;
This figure illustrates an acquisition with three executions of a manual trigger. In the figure, the
FramesPerTrigger property is set to 3.
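The acquisition the figure describes can be sketched as follows, assuming a 'winvideo' device (substitute your own constructor):

```matlab
% Create a video input object; substitute your device constructor.
vid = videoinput('winvideo', 1);

triggerconfig(vid, 'manual')
vid.FramesPerTrigger = 3;
vid.TriggerRepeat = 2;        % the trigger executes 1 + 2 = 3 times

start(vid)
for k = 1:3
    trigger(vid)              % each call logs 3 more frames
    wait(vid, 10, 'logging')  % block until this trigger's frames are logged
end

vid.FramesAcquired            % 9 frames, retained across all three triggers

delete(vid)
clear vid
```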
Waiting for an Acquisition to Finish
In some scenarios, you might want your application to wait until the acquisition completes before
proceeding with other processing. To do this, call the wait function immediately after the start or
trigger function returns. The wait function blocks the MATLAB command line until an acquisition
completes or a timeout value expires, whichever comes first.
By default, wait blocks the command line until a video input object stops running. You can optionally
specify that wait block the command line until the object stops logging. For acquisitions using an
immediate trigger, video input objects always stop running and stop logging at the same time.
However, with a manual trigger configured for multiple executions (TriggerRepeat > 0), you can
use wait immediately after each call to the trigger function to block the command line while
logging is in progress, even though the object remains in running state throughout the entire
acquisition.
The following figure illustrates the flow of control at the MATLAB command line for a single
execution of an immediate trigger and a manual trigger, with and without the wait function. A
hardware trigger is similar to the manual trigger diagram, except that the acquisition is triggered by
an external signal to the camera or frame grabber board, not by the trigger function. For an
example, see “Blocking the Command Line Until an Acquisition Completes” on page 6-28.
Blocking the Command Line Until an Acquisition Completes
2 Configure properties — Configure the video input object trigger properties to one of the valid
combinations returned by triggerinfo. You can specify each property value as an argument to
the triggerconfig function.
triggerconfig(vid, 'hardware','risingEdge','TTL')
Alternatively, you can set these values by passing one of the structures returned by the
triggerinfo function to the triggerconfig function.
configs = triggerinfo(vid);
triggerconfig(vid,configs(3));
3 Configure other object properties — This example also sets the value of the
FramesPerTrigger property to configure an acquisition large enough to produce a noticeable
duration. (The default is 10 frames per trigger.)
vid.FramesPerTrigger = 100
4 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The start function sets the object running and returns control to the command line.
5 Block the command line until the acquisition finishes — After the start function returns,
call the wait function.
wait(vid,60)
The wait function blocks the command line until the hardware trigger fires and acquisition
completes or until the amount of time specified by the timeout value expires.
6 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Freeing Memory
At times, while acquiring image data, you might want to delete some or all of the frames that are
stored in memory. Using the flushdata function, you can delete all the frames currently stored in
memory or only those frames associated with the execution of a trigger.
The following example illustrates how to use flushdata to delete all the frames in memory or one
trigger's worth of frames.
1 Create an image acquisition object — This example creates a video input object for a
Windows image acquisition device. To run this example on your system, use the imaqhwinfo
function to get the object constructor for your image acquisition device and substitute that
syntax for the following code.
vid = videoinput('winvideo',1);
2 Configure properties — For this example, configure an acquisition of five frames per trigger
and, to show the effect of flushdata, configure multiple triggers using the TriggerRepeat
property.
vid.FramesPerTrigger = 5
vid.TriggerRepeat = 2;
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The object executes an immediate trigger, acquires five frames of data, and repeats this trigger
two more times. After logging the specified number of frames, the object stops running.
To verify that the object acquired data, view the value of the FramesAvailable property. This
property reports how many frames are currently stored in the memory buffer.
vid.FramesAvailable
ans =
15
4 Delete a trigger's worth of image data — Call the flushdata function, specifying the mode
'triggers'. This deletes the frames associated with the oldest trigger.
flushdata(vid,'triggers');
The following figure shows the frames acquired before and after the call to flushdata. Note
how flushdata deletes the frames associated with the oldest trigger.
To verify that the object deleted the frames, view the value of the FramesAvailable property.
vid.FramesAvailable
ans =
10
5 Empty the entire memory buffer — Calling flushdata without specifying the mode deletes
all the frames stored in memory.
flushdata(vid);
To verify that the object deleted the frames, view the value of the FramesAvailable property.
vid.FramesAvailable
ans =
0
6 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Logging Image Data to Disk
For the best performance, logging to disk requires a MATLAB VideoWriter object. VideoWriter is a
MATLAB feature, not an Image Acquisition Toolbox feature. After you create and configure a
VideoWriter object, provide it to the videoinput object's DiskLogger property.
VideoWriter provides a number of different profiles that log the data in different formats. The
following example uses the Motion JPEG 2000 profile, which can log single-banded (grayscale) data
as well as multi-byte data. For the full list of supported profiles, see the VideoWriter documentation.
1 Create a video input object that accesses a GigE Vision image acquisition device and uses
grayscale format at 10 bits per pixel.
vidobj = videoinput('gige', 1, 'Mono10');
2 Configure the object to log data to disk.
vidobj.LoggingMode = 'disk'
3 Create a VideoWriter object with the profile set to Motion JPEG 2000.
logfile = VideoWriter('logfile.mj2', 'Motion JPEG 2000')
4 Configure the video input object to use the VideoWriter object.
vidobj.DiskLogger = logfile;
5 Now that the video input object is configured for logging data to a Motion JPEG 2000 file, initiate
the acquisition.
start(vidobj)
6 Wait for the acquisition to finish.
wait(vidobj, 5)
7 When logging large amounts of data to disk, disk writing occasionally lags behind the acquisition.
To determine whether all frames are written to disk, you can optionally use the
DiskLoggerFrameCount property.
vidobj.FramesAcquired
vidobj.DiskLoggerFrameCount
9 When the video input object is no longer needed, delete it and clear it from the workspace.
delete(vidobj)
clear vidobj
• You should not delete the video input object until logging has been completed as indicated by the
DiskLoggerFrameCount property equaling the FramesAcquired property. Doing so will cause
disk logging to stop without all of the data being logged.
• If START is called multiple times without supplying a new VideoWriter object, the contents of the
previous file will be erased when START is called.
• Once the VideoWriter object has been passed to the DiskLogger property, you should not modify
it.
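Putting the steps and cautions above together, here is a sketch of the complete disk-logging workflow. The adaptor name, device ID, video format, and file name are assumptions for illustration; substitute values appropriate for your hardware:

```matlab
% Create a video input object; substitute your adaptor, ID, and format.
vidobj = videoinput('gige', 1, 'Mono10');

% Log to disk only; use 'disk&memory' to also keep frames in memory.
vidobj.LoggingMode = 'disk';

% Create and attach a VideoWriter object; do not modify it afterward.
logfile = VideoWriter('logfile.mj2', 'Motion JPEG 2000');
vidobj.DiskLogger = logfile;

start(vidobj)
wait(vidobj, 5)

% Disk writing can lag the acquisition, so wait until all acquired
% frames have been written before deleting the object.
while vidobj.DiskLoggerFrameCount < vidobj.FramesAcquired
    pause(0.1)
end

delete(vidobj)
clear vidobj
```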
7 Working with Acquired Image Data
When you trigger an acquisition, the toolbox stores the image data in a memory buffer, a disk file, or
both. To work with this data, you must bring it into the MATLAB workspace.
This chapter describes how you use video input object properties and toolbox functions to bring the
logged data into the MATLAB workspace.
The following figure illustrates a group of frames being acquired from the video stream, logged to
memory and disk, and brought into the MATLAB workspace as a multidimensional numeric array.
Note that when frames are brought into the MATLAB workspace, they are removed from the memory
buffer.
Bringing Image Data into the MATLAB Workspace
Overview
The toolbox provides three ways to move frames from the memory buffer into the MATLAB
workspace:
• Removing multiple frames from the buffer — To move a specified number of frames from the
memory buffer into the workspace, use the getdata function. The getdata function removes the
frames from the memory buffer as it moves them into the workspace. The function blocks the
MATLAB command line until all the requested frames are available, or until a timeout value
expires. For more information, see “Moving Multiple Frames into the Workspace” on page 7-3.
• Viewing the most recently acquired frames in the buffer — To bring the most recently
acquired frames in the memory buffer into the workspace without removing them from the buffer,
use the peekdata function. When returning frames, peekdata starts with the most recently
acquired frame and works backward in the memory buffer. In contrast, getdata starts at the
beginning of the buffer, returning the oldest acquired frame first. peekdata does not block the
command line and is not guaranteed to return all the frames you request. For more information,
see “Viewing Frames in the Memory Buffer” on page 7-5.
• Bringing a single frame of data into the workspace — As a convenience, the toolbox provides
the getsnapshot function, which returns a single frame of data into the MATLAB workspace.
Because the getsnapshot function does not require starting the object or triggering an
acquisition, it is the easiest way to bring image data into the workspace. getsnapshot is
independent of the memory buffer; it can return a frame even if the memory buffer is empty, and
the frame returned does not affect the value of the FramesAvailable property. For more
information, see “Bringing a Single Frame into the Workspace” on page 7-7. For an example of
using getsnapshot, see the Image Acquisition Toolbox example Acquiring a Single Image in a
Loop in the Examples list at the top of the Image Acquisition Toolbox main Documentation
Center page, or open the file demoimaq_GetSnapshot.m in the MATLAB Editor.
Note When the getdata function moves frames from the memory buffer into the workspace, it
removes the frames from the memory buffer.
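The three access functions can be contrasted in a few lines. This is a sketch; the 'winvideo' adaptor and device ID are assumptions to replace with your own device.

```matlab
vid = videoinput('winvideo', 1);   % hypothetical device
vid.FramesPerTrigger = 30;
start(vid)

snap = getsnapshot(vid);    % one frame, independent of the memory buffer
wait(vid)                   % let all 30 frames be logged

peeked = peekdata(vid, 5);  % 5 newest frames; buffer is left untouched
data   = getdata(vid, 30);  % all 30 frames, oldest first; removed from buffer

vid.FramesAvailable         % now 0 -- getdata emptied the buffer
delete(vid)
clear vid
```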
In this figure, getdata is called at T1 with a request for 15 frames but only six frames are available in
the memory buffer. getdata blocks until the specified number of frames becomes available, at T2, at
7-3
7 Working with Acquired Image Data
which point getdata moves the frames into the MATLAB workspace and returns control to the
command prompt.
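You can bound how long getdata blocks with the object's Timeout property; if the requested frames never become available, getdata errors when the timeout expires. A sketch, again assuming a hypothetical 'winvideo' device:

```matlab
vid = videoinput('winvideo', 1);   % hypothetical device
vid.FramesPerTrigger = 15;
vid.Timeout = 30;                  % getdata may block up to 30 seconds
start(vid)
data = getdata(vid, 15);           % returns when 15 frames are logged,
                                   % or errors if Timeout expires first
delete(vid)
clear vid
```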
This example shows how you can configure an approximate time-based acquisition using the
FramesPerTrigger property:
1 Create an image acquisition object — This example creates a video input object for a
Windows image acquisition device. To run this example on your system, use the imaqhwinfo
function to get the object constructor for your image acquisition device and substitute that
syntax for the following code.
vid = videoinput('winvideo',1);
2 Configure properties — To acquire 10 seconds of data, determine the frame rate of your image
acquisition device and then multiply the frame rate by the number of seconds of data you want to
acquire. The product of this multiplication is the value of the FramesPerTrigger property.
For this example, assume a frame rate of 30 frames per second (fps). Multiplying 30 by 10, you
need to set the FramesPerTrigger property to the value 300.
vid.FramesPerTrigger = 300;
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The object executes an immediate trigger and begins acquiring frames of data. The start
function returns control to the command line immediately but the object continues logging the
data to the memory buffer. After logging the specified number of frames, the object stops
running.
4 Bring the acquired data into the workspace — To verify that you acquired the amount of data
you wanted, use the optional getdata syntax that returns the timestamp of every frame
7-4
acquired. The difference between the first timestamp and the last timestamp should approximate
the amount of data you expected.
[data time] = getdata(vid,300);
elapsed_time = time(300) - time(1)
elapsed_time =
10.0467
5 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
The peekdata function always returns the most recently acquired frames in the memory buffer. For
example, if you request three frames, peekdata returns the most recently acquired frame in the
buffer at the time of the request and the two frames that immediately precede it.
The following figure illustrates this process. The command peekdata(vid,3) is called at three
different times (T1, T2, and T3). The shaded frames indicate the frames returned by peekdata at each
call. (peekdata returns frames without removing them from the memory buffer.)
Note in the figure that, at T3, only two frames have become available since the last call to peekdata.
In this case, peekdata returns only the two frames, with a warning that it returned less data than
was requested.
Note The peekdata function does not return any data while the object is running in disk logging mode.
7-5
1 Create an image acquisition object — This example creates a video input object for a Data
Translation image acquisition device. To run this example on your system, use the imaqhwinfo
function to get the object constructor for your image acquisition device and substitute that
syntax for the following code.
vid = videoinput('dt',1);
2 Configure properties — For this example, configure a manual trigger. You must use the
triggerconfig function to specify the trigger type.
triggerconfig(vid,'manual')
In addition, configure a large enough acquisition to allow several calls to peekdata before it
finishes.
vid.FramesPerTrigger = 300;
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
isrunning(vid)
ans =
1
islogging(vid)
ans =
0
4 Use peekdata to view frames before a trigger — If you call peekdata before you trigger the
acquisition, peekdata can only return a single frame of data because data logging has not been
initiated and the memory buffer is empty. If more than one frame is requested, peekdata issues
a warning that it is returning fewer than the requested number of frames.
pdata = peekdata(vid,50);
Warning: PEEKDATA could not return all the frames requested.
Verify that peekdata returned a single frame. A single frame of data should have the same width
and height as specified by the ROIPosition property and the same number of bands, as
specified by the NumberOfBands property. In this example, the video format of the data is RGB
so the value of the NumberOfBands property is 3.
whos
Name Size Bytes Class
7-6
vid.FramesAcquired
ans =
0
5 Trigger the acquisition — Call the trigger function to trigger an acquisition.
trigger(vid)
6 Use peekdata to view frames after the trigger — Call peekdata again now that data is being logged.
pdata = peekdata(vid,50);
To verify that peekdata returned the frames you requested, check the dimensions of pdata. peekdata returns a four-dimensional array of frames, where the last dimension indicates the number of frames returned.
whos
Name Size Bytes Class
size(pdata)
ans =
96 128 3 50
7 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
The getsnapshot function returns an image of the same width and height as specified by the
ROIPosition property and the same number of bands as specified by the NumberOfBands
7-7
property. In this example, the video format of the data is RGB so the value of the
NumberOfBands property is 3.
whos
Name Size Bytes Class
Note that the frame returned by getsnapshot is not removed from the memory buffer, if frames
are stored there, and does not affect the value of the FramesAvailable property.
3 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
For an example of using getsnapshot, see the Image Acquisition Toolbox example Acquiring a
Single Image in a Loop in the Examples list at the top of the Image Acquisition Toolbox main
Documentation Center page, or open the file demoimaq_GetSnapshot.m in the MATLAB Editor.
7-8
Working with Image Data in MATLAB Workspace
This section also describes several ways to view acquired image data on page 7-12.
The video input object stores the video resolution in the VideoResolution property.
7-9
Each image frame is three dimensional; however, the video format determines the number of bands in
the third dimension. For color video formats, such as RGB, each image frame has three bands: one
each for the red, green, and blue data. Other video formats, such as the grayscale RS170 standard,
have only a single band. The video input object stores the size of the third dimension in the
NumberOfBands property.
Note Because devices typically express video resolution as width-by-height, the toolbox uses this
convention for the VideoResolution property. However, when data is brought into the MATLAB
workspace, the image frame dimensions are listed in reverse order, height-by-width, because
MATLAB expresses matrix dimensions as row-by-column.
When you specify a region-of-interest (ROI) in the image being captured, the dimensions of the ROI
determine the dimensions of the image frames returned. The VideoResolution property specifies
the dimensions of the image data being provided by the device; the ROIPosition property specifies
the dimensions of the image frames being logged. See the ROIPosition property reference page for
more information.
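For instance, a small ROI yields correspondingly small frames. The following sketch assumes a hypothetical RGB device with 640-by-480 resolution:

```matlab
vid = videoinput('winvideo', 1);      % hypothetical device
vid.ROIPosition = [100 50 200 150];   % [xoffset yoffset width height]
frame = getsnapshot(vid);
size(frame)                           % 150-by-200-by-3: ROI height,
                                      % ROI width, NumberOfBands
delete(vid)
clear vid
```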
The following example illustrates how video format affects the size of the image frames returned.
1 Select a video format — Use the imaqhwinfo function to view the list of video formats
supported by your image acquisition device. This example shows the video formats supported by
a Matrox Orion frame grabber. The formats are industry standard, such as RS170, NTSC, and
PAL. These standards define the image resolution.
info = imaqhwinfo('matrox');
info.DeviceInfo.SupportedFormats
ans =
Columns 1 through 4
Columns 5 through 8
Columns 9 through 10
'M_PAL_RGB' 'M_PAL_YC'
2 Create an image acquisition object — This example creates a video input object for a Matrox
image acquisition device using the default video format, RS170. To run this example on your
system, use the imaqhwinfo function to get the object constructor for your image acquisition
device and substitute that syntax for the following code.
vid = videoinput('matrox',1);
3 View the video format and video resolution properties — The toolbox creates the object with
the default video format. This format defines the video resolution.
7-10
vid.VideoFormat
ans =
M_RS170
vid.VideoResolution
ans =
[640 480]
4 Bring a single frame into the workspace — Call the getsnapshot function to bring a frame
into the workspace.
frame = getsnapshot(vid);
The dimensions of the returned data reflect the image resolution and the value of the
NumberOfBands property.
vid.NumberOfBands
ans =
1
size(frame)
ans =
480 640
5 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The object executes an immediate trigger and begins acquiring frames of data.
6 Bring multiple frames into the workspace — Call the getdata function to bring multiple
image frames into the MATLAB workspace.
data = getdata(vid,10);
The getdata function brings 10 frames of data into the workspace. Note that the returned data
is a four-dimensional array: each frame is three-dimensional and the nth frame is indicated by the
fourth dimension.
size(data)
ans =
480 640 1 10
7 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
7-11
You can specify the data type you want getdata to use for the returned data. For example, you can
specify that getdata return image frames as an array of class double. To see a list of all the data
types supported, see the getdata reference page.
The following example illustrates the data type of returned image data.
1 Create an image acquisition object — This example creates a video input object for a Matrox
image acquisition device. To run this example on your system, use the imaqhwinfo function to
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('matrox',1);
2 Bring a single frame into the workspace — Call the getsnapshot function to bring a frame
into the workspace.
frame = getsnapshot(vid);
3 View the class of the returned data — Use the class function to determine the data type
used for the returned image data.
class(frame)
ans =
uint8
4 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
The Image Acquisition Toolbox software includes a function, imaqmontage, that you can use to view
all the frames of a multiframe image array in a single MATLAB image object. imaqmontage arranges
the frames so that they roughly form a square. imaqmontage can be useful for visually comparing
multiple frames.
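A sketch, assuming vid is a video input object that has logged at least 12 frames:

```matlab
data = getdata(vid, 12);   % H-by-W-by-B-by-12 numeric array
imaqmontage(data)          % tiles the 12 frames in roughly a square grid
```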
MATLAB includes two functions, image and imagesc, that display images in a figure window. Both
functions create a MATLAB image object to display the frame. You can use image object properties to
control aspects of the display. The imagesc function automatically scales the input data.
7-12
The Image Processing Toolbox software includes an additional display routine called imshow. Like
image and imagesc, this function creates a MATLAB image object. However, imshow also
automatically sets various image object properties to optimize the display.
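The three display routines can be compared directly. This sketch assumes vid is an existing video input object:

```matlab
frame = getsnapshot(vid);
image(frame)     % basic display; data used as direct or indexed colors
imagesc(frame)   % same, but scales the data to span the full colormap
imshow(frame)    % Image Processing Toolbox; also sets image object,
                 % axes, and figure properties to optimize the display
```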
7-13
For example, many devices represent colors as RGB values. In this color space, colors are
represented as a combination of various intensities of red, green, and blue. Another color space,
widely used for digital video, is the YCbCr color space. In this color space, luminance (brightness or
intensity) information is stored as a single component (Y). Chrominance (color) information is stored
as two color-difference components (Cb and Cr). Cb represents the difference between the blue
component and a reference value. Cr represents the difference between the red component and a
reference value.
The toolbox can return image data in grayscale, RGB, and YCbCr. To specify the color representation
of the image data, set the value of the ReturnedColorSpace property. To display image frames
using the image, imagesc, or imshow functions, the data must use the RGB color space. Another
MathWorks® product, the Image Processing Toolbox software, includes functions that convert YCbCr
data to RGB data, and vice versa.
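For example, a YCbCr frame can be converted with the Image Processing Toolbox ycbcr2rgb function before display. The device here is hypothetical:

```matlab
vid = videoinput('winvideo', 1);   % hypothetical device
vid.ReturnedColorSpace = 'YCbCr';
frame = getsnapshot(vid);          % H-by-W-by-3 YCbCr data
imshow(ycbcr2rgb(frame))           % convert to RGB for display
delete(vid)
clear vid
```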
Note Some devices that claim to support the YUV color space actually support the YCbCr color
space. YUV is similar to YCbCr but not identical. The difference between YUV and YCbCr is the
scaling factor applied to the result. YUV refers to a particular scaling factor used in composite NTSC
and PAL formats. In most cases, you can specify the YCbCr color space for devices that support YUV.
You can determine your device’s default color space using this code: vid.ReturnedColorSpace,
where vid is the name of the video object. An example of this is shown in step 2 in the example
below. There may be situations when you wish to change the color space. The example below shows a
case where the default color space is rgb, and you change it to grayscale (step 3).
The following example illustrates how to specify the color space of the returned image data.
1 Create an image acquisition object — This example creates a video input object for a generic
Windows image acquisition device. To run this example on your system, use the imaqhwinfo
function to get the object constructor for your image acquisition device and substitute that
syntax for the following code.
vid = videoinput('winvideo',1);
2 View the default color space used for the data — The value of the ReturnedColorSpace
property indicates the color space of the image data.
vid.ReturnedColorSpace
ans =
rgb
7-14
Specifying the Color Space
3 Modify the color space used for the data — To change the color space of the returned image
data, set the value of the ReturnedColorSpace property.
vid.ReturnedColorSpace = 'grayscale';
vid.ReturnedColorSpace
ans =
grayscale
4 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
If your camera uses Bayer filtering, the toolbox supports the Bayer pattern and can return color if
desired. By setting the ReturnedColorSpace property to 'bayer', the Image Acquisition Toolbox
software will demosaic Bayer patterns returned by the hardware. This color space setting will
interpolate Bayer pattern encoded images into standard RGB images.
In order to perform the demosaicing, the toolbox needs to know the pixel alignment of the sensor.
This is the order of the red, green, and blue sensors and is normally specified by describing the four
pixels in the upper-left corner of the sensor. It is the band sensitivity alignment of the pixels as
interpreted by the camera's internal hardware. You must get this information from the camera's
documentation and then specify the value for the alignment.
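You specify the alignment with the BayerSensorAlignment property. A sketch; the device is hypothetical and the 'gbrg' value is an assumption, so use the value from your camera's documentation:

```matlab
vid = videoinput('winvideo', 1);     % hypothetical Bayer-sensor device
vid.ReturnedColorSpace = 'bayer';    % ask the toolbox to demosaic
vid.BayerSensorAlignment = 'gbrg';   % one of 'grbg', 'gbrg', 'rggb', 'bggr'
img = getsnapshot(vid);              % returned as an RGB image
```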
If your camera can return Bayer data, the toolbox can automatically convert it to RGB data for you, or you can instruct it to do so. The following two examples illustrate both use cases.
Automatic Conversion
The camera in this example has a Bayer sensor. The GigE Vision standard allows cameras to inform
applications that the data is Bayer encoded and provides enough information for the application to
convert the Bayer pattern into a color image. In this case the toolbox automatically converts the
Bayer pattern into an RGB image.
1 Create a video object vid using the GigE Vision adaptor and the designated video format.
2 View the default color space used for the data.
vid.ReturnedColorSpace
ans =
rgb
3 Create a one-frame image img using the getsnapshot function.
img = getsnapshot(vid);
4 View the size of the acquired image.
7-15
size(img)
ans =
480 640 3
5 Sometimes you might not want the toolbox to automatically convert the Bayer pattern into a
color image. For example, there are a number of different algorithms to convert from a Bayer
pattern into an RGB image and you might wish to specify a different one than the toolbox uses or
you might want to further process the raw data before converting it into a color image. To get the
raw data, set the ReturnedColorSpace property to 'grayscale' and acquire another frame.
vid.ReturnedColorSpace = 'grayscale';
img = getsnapshot(vid);
% Now check the size of the new frame acquired using grayscale.
size(img)
ans =
480 640
Notice how the size changed from the rgb image to the grayscale image by comparing the
size output in steps 4 and 5.
6 You can optionally use the demosaic function in the Image Processing Toolbox to convert Bayer
patterns into color images.
ans =
480 640 3
7 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
Manual Conversion
The camera in this example returns data that is a Bayer mosaic, but the toolbox doesn't know it since
the DCAM standard doesn't have any way for the camera to communicate that to software
applications. You need to know that by reading the camera specifications or manual. The toolbox can
automatically convert the Bayer encoded data to RGB data, but it must be programmed to do so.
1 Create a video object vid using the DCAM adaptor and the designated video format for raw data.
7-16
2 View the default color space used for the data.
vid.ReturnedColorSpace
ans =
grayscale
3 Create a one-frame image img using the getsnapshot function.
img = getsnapshot(vid);
4 View the size of the acquired image.
size(img)
ans =
480 640
5 The value of the ReturnedColorSpace property is grayscale because Bayer data is single-
banded and the toolbox doesn't yet know that it needs to decode the data. Setting the
ReturnedColorSpace property to 'bayer' indicates that the toolbox should decode the data.
The getdata and getsnapshot functions will now return color data.
% Now check the size of the new frame acquired returning color data.
size(img)
ans =
480 640 3
6 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
7-17
Introduction
The following sections describe how the toolbox provides acquisition timing information, particularly the absolute time of the first trigger and the time each frame was acquired relative to that trigger.
To see an example of retrieving timing information, see “Determining the Frame Delay Duration” on page 7-19.
As a convenience, the toolbox returns the time of the first trigger execution in the video input object's
InitialTriggerTime property. This figure indicates which trigger is returned in this property
when multiple triggers are configured.
The trigger timing information is stored in MATLAB clock vector format. The following example
displays the time of the first trigger for the video input object vid. The example uses the MATLAB
datestr function to convert the information into a form that is more convenient to view.
datestr(vid.InitialTriggerTime)
ans =
02-Mar-2007 13:00:24
7-18
Retrieving Timing Information
You can use the getdata function to retrieve both types of timing information.
When you use the getdata function, you can optionally specify two return values. One return value
contains the image data; the other return value contains a vector of timestamps that measure, in
seconds, the time when the frame was acquired relative to the first trigger.
To see an example, see “Determining the Frame Delay Duration” on page 7-19.
When you use the getdata function, you can optionally specify three return values. The first contains
the image data, the second contains a vector of relative acquisition times, and the third is an array of
structures where each structure contains metadata associated with a particular frame.
Each structure in the array contains the following four fields: AbsTime, FrameNumber,
RelativeFrame, and TriggerIndex. The AbsTime field contains the absolute time the frame was
acquired. You can also retrieve this metadata by using event callbacks. See “Retrieving Event
Information” on page 8-7 for more information.
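A sketch of retrieving per-frame metadata, assuming vid has acquired at least 10 frames:

```matlab
[data, time, metadata] = getdata(vid, 10);
metadata(1)          % structure with AbsTime, FrameNumber,
                     % RelativeFrame, and TriggerIndex fields
metadata(1).AbsTime  % clock vector: when frame 1 was acquired
```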
1 Create an image acquisition object — This example creates a video input object for a Data
Translation image acquisition device using the default video format. To run this example on your
system, use the imaqhwinfo function to get the object constructor for your image acquisition
device and substitute that syntax for the following code.
7-19
vid = videoinput('dt',1);
2 Configure properties — For this example, configure a trigger frame delay large enough to
produce a noticeable duration.
vid.TriggerFrameDelay = 50
3 Start the image acquisition object — Call the start function to start the image acquisition
object.
start(vid)
The object executes an immediate trigger and begins acquiring frames of data. The start
function returns control to the command line immediately but data logging does not begin until
the trigger frame delay expires. After logging the specified number of frames, the object stops
running.
4 Bring the acquired data into the workspace — Call the getdata function to bring frames
into the workspace. Specify a second return value to accept the timing information returned by
getdata.
[data time] = getdata(vid,10);
The variable time is a vector that contains the time each frame was logged, measured in
seconds, relative to the execution of the first trigger. Check the first value in the time vector. It
should reflect the duration of the delay before data logging started.
time
time =
4.9987
5.1587
5.3188
5.4465
5.6065
5.7665
5.8945
6.0544
6.2143
6.3424
5 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
7-20
8
When a particular event occurs, the toolbox can execute a function that you specify. This is called a
callback. Certain events can result in one or more callbacks. You can use callbacks to perform
processing tasks while your image acquisition object continues running. For example, you can display
a message, analyze data, or perform other tasks. The start and stop callbacks, however, execute
synchronously; the object does not perform any further processing until the callback function
finishes.
Callbacks are controlled through video input object properties. Each event type has an associated
property. You specify the function that you want executed as the value of the property.
8-2
Using the Default Callback Function
The example uses the default callback function provided with the toolbox, imaqcallback. The
default callback function displays the name of the object along with information about the type of
event that occurred and when it occurred. To learn how to create your own callback functions, see
“Creating and Executing Callback Functions” on page 8-10.
1 Create an image acquisition object — This example creates a video input object for a Matrox
image acquisition device. To run this example on your system, use the imaqhwinfo function to
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('matrox',1);
2 Configure properties — Set the values of three callback properties. The example uses the
default callback function imaqcallback.
vid.StartFcn = @imaqcallback
vid.TriggerFcn = @imaqcallback
vid.StopFcn = @imaqcallback
vid.FramesPerTrigger = 100;
3 Start the image acquisition object — Start the image acquisition object. The object executes
an immediate trigger, acquires 100 frames of data, and then stops. With the three callback
functions enabled, the object outputs information about each event as it occurs.
start(vid)
Start event occurred at 14:38:46 for video input object: M_RS170-matrox-1.
Trigger event occurred at 14:38:46 for video input object: M_RS170-matrox-1.
Stop event occurred at 14:38:49 for video input object: M_RS170-matrox-1.
4 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
8-3
8 Using Events and Callbacks
Event Types
The Image Acquisition Toolbox software supports several different types of events. Each event type
has an associated video input object property that you can use to specify the function that executes
when the event occurs.
This table lists the supported event types, the name of the video input object property associated with
the event, and a brief description of the event. For detailed information about these callback
properties, see the property reference list in “Image Acquisition Toolbox Properties” on page 5-31.
The toolbox generates a specific set of information for each event and stores it in an event structure.
To learn more about the contents of these event structures and how to retrieve this information, see
“Retrieving Event Information” on page 8-7.
Note Callbacks, including ErrorFcn, are executed only when the video object is in a running state.
If you need to use the ErrorFcn callback for error handling during previewing, you must start the
video object before previewing. To do that without logging data, use a manual trigger.
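A sketch of that pattern, with the default imaqcallback standing in for your own error handler and a hypothetical device:

```matlab
vid = videoinput('winvideo', 1);   % hypothetical device
triggerconfig(vid, 'manual');      % manual trigger: running, not logging
vid.ErrorFcn = @imaqcallback;
start(vid)                         % object is now running
preview(vid)                       % ErrorFcn can fire during preview
```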
8-4
Retrieving Event Information
Introduction
Each event has associated with it a set of information, generated by the toolbox and stored in an
event structure. This information includes the event type, the time the event occurred, and other
event-specific information. While a video input object is running, the toolbox records event
information in the object's EventLog property. You can also access the event structure associated
with an event in a callback function.
This section
• Defines the information in an event structure on page 8-7 for all event types
• Describes how to retrieve information on page 8-8 from the EventLog property
For information about accessing event information in a callback function, see “Creating and
Executing Callback Functions” on page 8-10.
Event Structures
An event structure contains two fields: Type and Data. For example, this is an event structure for a
trigger event:
Type: 'Trigger'
Data: [1x1 struct]
The Type field is a character vector that specifies the event type. For a trigger event, this field
contains the character vector 'Trigger'.
The Data field is a structure that contains information about the event. The composition of this
structure varies depending on which type of event occurred. For details about the information
associated with specific events, see the following sections:
• “Data Fields for Start, Stop, Frames Acquired, and Trigger Events” on page 8-7
• “Data Fields for Error Events” on page 8-8
• “Data Fields for Timer Events” on page 8-8
Data Fields for Start, Stop, Frames Acquired, and Trigger Events
For start, stop, frames acquired, and trigger events, the Data structure contains these fields.
8-7
The toolbox adds event structures to the EventLog array in the order in which the events occur. The
first event structure reflects the first event recorded, the second event structure reflects the second
event recorded, and so on.
Note Only start, stop, error, and trigger events are recorded in the EventLog property. Frames-
acquired events and timer events are not included in the EventLog. Event structures for these
events (and all the other events) are available to callback functions. For more information, see
“Creating and Executing Callback Functions” on page 8-10.
To illustrate the event log, this example creates a video input object, runs it, and then examines the
object's EventLog property:
1 Create an image acquisition object — This example creates a video input object for a Matrox
image acquisition device. To run this example on your system, use the imaqhwinfo function to
8-8
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('matrox',1);
2 Start the image acquisition object — Start the image acquisition object. By default, the object
executes an immediate trigger, acquires 10 frames of data, and then stops.
start(vid)
3 View the event log — Access the EventLog property of the video input object. The execution of
the video input object generated three events: start, trigger, and stop. Thus the value of the
EventLog property is a 1x3 array of event structures.
events = vid.EventLog
events =
To list the events that are recorded in the EventLog property, examine the contents of the Type
field.
{events.Type}
ans =
'Start' 'Trigger' 'Stop'
To get information about a particular event, access the Data field in that event structure. The
example retrieves information about the trigger event.
trigdata = events(2).Data
trigdata =
delete(vid)
clear vid
8-9
Introduction
The power of using event callbacks is the processing that you can perform in response to events. You
decide which events you want to associate callbacks with and the functions these callbacks execute.
This section
• Shows how to use callbacks to view a sample frame on page 8-13 from the frames being
acquired
Note Callback function execution might be delayed if the callback involves a CPU-intensive task such
as updating a figure.
The function header for this callback function illustrates this basic syntax.
function mycallback(obj,event)
The first argument, obj, is the image acquisition object itself. Because the object is available, you can
use in your callback function any of the toolbox functions, such as getdata, that require the object
as an argument. You can also access all object properties.
The second argument, event, is the event structure associated with the event. This event information
pertains only to the event that caused the callback function to execute. For a complete list of
supported event types and their associated event structures, see “Event Structures” on page 8-7.
In addition to these two required input arguments, you can also specify additional, application-
specific arguments for your callback function.
8-10
Creating and Executing Callback Functions
Note To receive the object and event arguments, and any additional arguments, you must use a cell
array when specifying the name of the function as the value of a callback property. For more
information, see “Specifying Callback Functions” on page 8-11.
To illustrate, this example implements a callback function for a frames-acquired event. This callback
function enables you to monitor the frames being acquired by viewing a sample frame periodically.
To implement this function, the callback function acquires a single frame of data and displays the
acquired frame in a MATLAB figure window. The function also accesses the event structure passed as
an argument to display the timestamp of the frame being displayed. The drawnow command in the
callback function forces MATLAB to update the display.
function display_frame(obj,event)
sample_frame = peekdata(obj,1);
imagesc(sample_frame);
drawnow % force MATLAB to update the display
abstime = event.Data.AbsTime;
t = fix(abstime);
sprintf('%s %d:%d:%d','timestamp',t(4),t(5),t(6))
To see how this function can be used as a callback, see “Viewing a Sample Frame” on page 8-13.
You can specify the callback function as the value of the property in any of three ways:
• As a character vector
• As a character vector inside a cell array
• As a function handle
The following sections provide more information about each of these options.
Note To access the object or event structure passed to the callback function, you must specify the
function as a cell array or as a function handle.
You can specify the callback function as a character vector. For example, this code specifies the
callback function mycallback as the value of the start event callback property StartFcn for the
video input object vid.
vid.StartFcn = 'mycallback';
You can specify the callback function as a character vector inside a cell array.
For example, this code specifies the callback function mycallback as the value of the start event
callback property StartFcn for the video input object vid.
vid.StartFcn = {'mycallback'};
To specify additional parameters, include them as additional elements in the cell array.
time = datestr(now,0);
vid.StartFcn = {'mycallback',time};
The first two arguments passed to the callback function are still the video input object (obj) and the
event structure (event). Additional arguments follow these two arguments.
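A callback written to receive the extra argument from the example above might look like the following sketch. The body is illustrative only; the printed message is an assumption, not part of the original example.

```matlab
% Sketch: callback that accepts one additional argument.
% The first two inputs are always the object and the event structure;
% the third (time) is the extra value supplied in the cell array.
function mycallback(obj, event, time)
fprintf('Acquisition for %s started at %s (configured at %s)\n', ...
    obj.Name, datestr(event.Data.AbsTime), time)
```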
For example, this code specifies the callback function mycallback as the value of the start event
callback property StartFcn for the video input object vid.
vid.StartFcn = @mycallback;
To specify additional parameters, include the function handle and the parameters as elements in the
cell array.
time = datestr(now,0);
vid.StartFcn = {@mycallback,time};
If you are executing a local callback function from within a MATLAB file, you must specify the
callback as a function handle.
In addition to specifying callback functions of your own creation, you can also specify the start,
stop, or trigger toolbox functions as callbacks. For example, this code sets the value of the stop
event callback to the Image Acquisition Toolbox start function.
vid.StopFcn = @start;
Disabling Callbacks
If an error occurs in the execution of the callback function, the toolbox disables the callback and
displays a message similar to the following.
start(vid)
??? Error using ==> frames_cb
Too many input arguments.
To enable a callback that has been disabled, set the value of the property associated with the callback
or restart the object.
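As a sketch, assuming a video input object vid whose FramesAcquiredFcn callback was disabled after an error, either of the following re-enables it:

```matlab
% Re-enable the callback by setting the callback property again...
vid.FramesAcquiredFcn = {'display_frame'};

% ...or by restarting the object.
stop(vid)
start(vid)
```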
The example sets the TriggerRepeat property of the object to 4 so that 150 frames are acquired (30 frames for each of the five trigger executions). When run, the example displays a sample frame from the acquired data every time five frames have been acquired.
1 Create an image acquisition object — This example creates a video input object for a Matrox
image acquisition device. To run this example on your system, use the imaqhwinfo function to
get the object constructor for your image acquisition device and substitute that syntax for the
following code.
vid = videoinput('matrox', 1);
2 Configure properties — Set the number of frames to acquire per trigger and the trigger repeat count, and specify the callback function to execute each time five frames are acquired.
vid.FramesPerTrigger = 30;
vid.TriggerRepeat = 4;
vid.FramesAcquiredFcnCount = 5;
vid.FramesAcquiredFcn = {'display_frame'};
3 Acquire data — Start the video input object. Every time five frames are acquired, the object
executes the display_frame callback function. This callback function displays the most
recently acquired frame logged to the memory buffer.
start(vid)
4 Clean up — Always remove image acquisition objects from memory, and the variables that
reference them, when you no longer need them.
delete(vid)
clear vid
9 Using the From Video Device Block in Simulink
The Image Acquisition Toolbox software includes a block that can be used in Simulink to bring live
video data into models.
In this section...
“From the Command Line” on page 9-2
“From the Simulink Library Browser” on page 9-2
To open the Image Acquisition Toolbox block library from the MATLAB command line, enter:
imaqlib
To open the library from within Simulink, first start Simulink by entering:
simulink
On the Simulink start page, click Blank Model and then Create Model. An empty Editor window opens.
Open Image Acquisition Toolbox Block Library
The Simulink Library Browser opens. Its left pane contains a tree of available block libraries in
alphabetical order. Click the Image Acquisition Toolbox node.
See Also
More About
• “Save Video Data to a File” on page 9-6
1 Develop a model using the From Video Device block and sink blocks from other toolboxes, such
as the Computer Vision Toolbox™.
2 Run the simulation to verify that your device is working.
3 Build the model to generate code and create the executable.
The deployed application can then be used on a machine that does not have MATLAB and Simulink.
Note If you use a GigE Vision camera with the From Video Device block, you must install GenICam to
use the generated application outside of MATLAB. After you install the GenICam driver, load the DLL
files by manually adding the path to the DLL files to the system path (in Control Panel > System >
Advanced system settings > Environment Variables...).
The block supports use of the packNGo function. Source-specific properties for your device are
honored when code is generated. The generated code compiles with both C and C++ compilers.
To set up packNGo, run the following code in the MATLAB Command Window.
set_param(gcs,'PostCodeGenCommand','packNGo(buildInfo)');
In this example, gcs is the current model that you want to build. Building the model creates a zip file
with the same name as the model name. You can move this zip file to another machine and build the
source code in the zip file to create an executable that can run independently of MATLAB and Simulink.
For more information, see packNGo.
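Putting these steps together, a build sequence might look like the following sketch, where 'myCameraModel' is a hypothetical model name:

```matlab
% Open a model that contains the From Video Device block, configure
% packNGo as the post-code-generation command, and build the model.
open_system('myCameraModel')    % hypothetical model name
set_param(gcs, 'PostCodeGenCommand', 'packNGo(buildInfo)')
slbuild(gcs)    % building also produces myCameraModel.zip
```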
Code Generation with From Video Device Block
Note The From Video Device block supports the use of Simulink Rapid Accelerator mode and code
generation on Windows platforms. Code generation is also supported on Linux®, but Rapid
Accelerator mode is not.
Note If you get a “Device in use” error message when using the block with certain hardware, such as
Matrox, close any programs that are using the hardware, and try using the block again.
Note On Linux platforms, you need to add the directory where you unzip the libraries to the
environment variable LD_LIBRARY_PATH.
See Also
From Video Device
This example shows how to build a simple model using the From Video Device block in conjunction
with blocks from other blockset libraries.
Note Block names are not shown by default in the model. To display the hidden block names while
working in the model, select Display and clear the Hide Automatic Names check box.
simulink
On the Simulink start page, click Blank Model and then Create Model. An empty Editor window
opens.
In the Editor, click Save on the Simulation tab to assign a name to your new model.
The Simulink Library Browser opens. Its left pane contains a tree of available block libraries in
alphabetical order. Click the Image Acquisition Toolbox node.
Save Video Data to a File
Step 3: Drag the From Video Device Block into the Model
To use the From Video Device block in a model, drag the block into the Simulink Editor. Note how the
name on the block changes to reflect the device connected to your system that is associated with the
block. If you have multiple devices connected, you can choose the device to use in the parameters
dialog box by double-clicking the block.
Open the Computer Vision Toolbox library. In the library window, open the Sinks subsystem. From
this subsystem, click the To Multimedia File block in the library and drag the block into the Simulink
Editor.
The input ports of the To Multimedia File block are RGB, but you can use a camera that has either
YCbCr or RGB output ports.
For example, using this dialog box, you can specify the device you want to use, select the video
format you want to use with the device, or specify the block sample time. For details, see From Video
Device.
You can set parameters for any of the blocks you include in your model. For example, to specify the
name of the AVI file, double-click the To Multimedia File block. Make sure that you have write
permission to the directory into which the block writes the AVI file.
While the simulation is running, the status bar at the bottom of the Simulink Editor indicates the
progress of the simulation. When the simulation finishes, the software saves an AVI file to the current
working directory.
See Also
From Video Device
10 Configuring GigE Vision Devices
Types of Setups
The Image Acquisition Toolbox software supports GigE Vision devices. The following sections describe how to install and configure the devices to work with the Image Acquisition Toolbox software. Separate troubleshooting information is found in “Troubleshooting GigE Vision Devices on Windows” on page 16-19.
Note Not all cameras that use Ethernet are GigE Vision. A camera must have the GigE Vision logo
appearing on it or its data sheet to be a GigE Vision device.
There are five different setups you can use for GigE Vision cameras.
Network Hardware Configuration Notes
Using the same network as the PC on a shared network connection — Plug the camera into the
network that the PC is plugged into. They must be on the same subnet. A system administrator can
configure a VLAN if necessary.
Using a private network connection — You can connect the camera through the main/only Ethernet
card, or through a second Ethernet card. In either scenario, a switch can be used to connect multiple
cameras.
Ethernet cards — Ethernet cards must be 1000 Mbps. If your direct connection or PC network allows it, use a card that supports jumbo frames for larger packet sizes. Also, on Windows, increase the number of receive buffers if reception is poor.
Switches for connecting multiple cameras — Use a switch that has full duplex 1 Gbps per port capacity. It can be a managed switch, but does not have to be.
Windows Configuration
Important Note: When you install your vendor software that came with your device, do not install
your vendor's filtering or performance networking driver.
Let Windows automatically determine the IP if you are using a single direct connection to the PC,
instead of attempting to use static IP. Otherwise, leave organizational IP configuration settings in
place.
If you have multiple cameras connected to multiple Ethernet cards, you cannot have them all set to
automatic IP configuration. You must specify the IP address for each card and each card must be on a
different subnet.
Enable large frame support if your Ethernet card, and switch if present, supports it and you are using
a direct connection. If you are not using a direct connection, you can enable large frame support if all
switches and routers in your organization's network support it.
1 It is not necessary to install your vendor software that came with your device, but you may want
to in order to verify that the device is running outside of MATLAB.
Important Note: Do not install your vendor's filtering or performance networking driver.
2 In the Windows Network Connections dialog box (part of Control Panel), if using a second
network adaptor, you can optionally rename your second network adaptor to “GigE Vision” to
help distinguish it from your primary adaptor.
If the Status column says “Limited or no connectivity,” it will not impact your camera, as that
status applies to the Internet.
3 Open the Properties dialog box of the Ethernet card by double-clicking it in Network
Connections. If you are using a separate Ethernet card for the GigE camera, make sure that in
the This connection uses the following items section on the General tab you have nothing
selected except for Internet Protocol (TCP/IP). Be sure to use TCP/IP version 4, and not
version 6.
Make sure that any vendor drivers are unchecked and that anti-virus program drivers are
unchecked. If you cannot uncheck the anti-virus software from the adaptor due to organization
restrictions, you may need to purchase a second gigabit Ethernet card. In this case, leave all of
the options as is for the network card for your PC, and configure the second card as described
here, which will only connect to your camera.
4 In Windows Device Manager, make sure your network cards show up as using the correct
network card vendor driver.
For example, in the Device Manager window, under Network adapters, you should see Intel
PRO/1000 PT Desktop Adapter if you use that particular Ethernet card.
Check your adaptor properties. If your situation allows, as described in the next section, make
sure that Jumbo Frames is enabled in the Settings on the Advanced tab. Make sure that
Receive Descriptors is enabled in the Settings > Performance Options on the Advanced tab.
Make sure that the correct adaptor is listed on the Driver tab and that the Ethernet card's driver has not been replaced with a vendor-specific driver.
Note You do not need to install GenICam to use the GigE adaptor, because it is now included in the
installation of the toolbox. However, if you are using the From Video Device block and doing code
generation, you would need to install GenICam to run the generated application outside of MATLAB.
Linux Configuration
You will not need any drivers from your vendor and we recommend that you do not install any that
may have come with your device.
We recommend that you have your system administrator help with the following setup tasks:
For dynamic IP configuration of a camera and Ethernet card not connected to an organizational
network, avahi-autoipd can be used. However, we recommend that each direct connection to a
camera have an interface with a static IP such as 10.10.x.y or 192.168.x.y.
If you want to use jumbo frames and your Ethernet card and switches (if present) allow, configure
the MTU accordingly.
Mac Configuration
You will not need any drivers from your vendor and we recommend that you do not install any that
may have come with your device.
In the configuration shown here, the Mac Pro has two Ethernet connections, one to an internal
network, and one for GigE Vision. The GigE Vision connection is set to use DHCP.
If you are using a MacBook, you may not have the option of Jumbo frames in the MTU.
Software Configuration
You need to have GenICam installed, but the Image Acquisition Toolbox does that for you. The necessary environment variables should be set automatically as part of the installation. You can optionally verify the environment variable settings, as shown in the examples below.
Note If you have a camera that requires a GenICam XML file on a local drive (most cameras do not), set the MWIMAQ_GENICAM_XML_FILES environment variable to the directory of your choice, and then install the camera's XML file in that directory.
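You can check this setting from within MATLAB; the following is a small sketch using standard MATLAB functions:

```matlab
% Report whether a local GenICam XML directory is configured.
xmlDir = getenv('MWIMAQ_GENICAM_XML_FILES');
if isempty(xmlDir)
    disp('MWIMAQ_GENICAM_XML_FILES is not set (most cameras do not need it).')
else
    fprintf('Camera XML files are expected in: %s\n', xmlDir)
end
```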
Windows Example
MWIMAQ_GENICAM_XML_FILES=C:\cameraXML
You can verify the configuration by running
imaqhwinfo('gige')
and by looking at the relevant sections of the output when you run the imaqsupport function.
Linux Example
MWIMAQ_GENICAM_XML_FILES=/local/cameraXML
imaqhwinfo('gige')
Mac Example
MWIMAQ_GENICAM_XML_FILES=/local/cameraXML
You can verify the configuration by running
imaqhwinfo('gige')
and by looking at the relevant sections of the output when you run the imaqsupport function.
Note You do not need to install GenICam to use the GigE adaptor, because it is now included in the
installation of the toolbox. However, if you are using the From Video Device block and doing code
generation, you would need to install GenICam to run the generated application outside of MATLAB.
Setting Preferences
There are three GigE Vision related preferences in the Image Acquisition Preferences. In MATLAB, on
the Home tab, in the Environment section, click Preferences > Image Acquisition.
Timeout for packet acknowledgement – this is a timeout value for the time between the sending
of a command (for camera discovery or control) and the time that the acknowledgement is received
from the camera.
Timeout for heartbeat – the camera requires that the application send a packet every so often (like
a heartbeat) to keep the control connection alive. This is the setting for that packet period. Setting it
too low can add unnecessary load to the computer and to the camera. Setting it too high can cause
the camera to remain in use too long beyond when the toolbox attempts to relinquish control, leading
to a failure to obtain control to start another acquisition.
Retries for commands – this is the number of attempts that the toolbox will make to send a
command to the camera before deciding that the send has failed. The time between retries is set by
the Timeout for packet acknowledgement setting.
Disable camera IP correction – check if you want to disable automatic IP correction for your
camera. Clear the check mark to re-enable IP correction.
Troubleshooting
For troubleshooting information for GigE Vision devices on Windows, see “Troubleshooting GigE
Vision Devices on Windows” on page 16-19.
For troubleshooting information for GigE Vision devices on Linux, see “Troubleshooting GigE Vision
Devices on Linux” on page 16-20.
For troubleshooting information for GigE Vision devices on macOS, see “Troubleshooting GigE Vision
Devices on macOS” on page 16-21.
11 Using the GigE Vision Interface
• “GigE Vision Acquisition: gigecam Object vs. videoinput Object” on page 11-2
• “Connect to GigE Vision Cameras” on page 11-3
• “Set Properties for GigE Acquisition” on page 11-4
• “Acquire Images from GigE Vision Cameras” on page 11-8
You can continue to use the GigE Vision adaptor (gige) with the videoinput object, or you can use
the gigecam object, which takes advantage of GigE properties and features and is more consistent
with GigE Vision conventions for displaying properties and managing selector properties.
Note The GigE Vision support, using either object, requires that you download and install the
necessary files via MATLAB Add-Ons. The GigE Vision Hardware support package installs the files for
both the gige adaptor for the videoinput object and the gigecam object. For more information,
see “Installing the Support Packages for Image Acquisition Toolbox Adaptors” on page 4-5.
If you do not need to use any advanced toolbox features and you do not need to use the Image
Acquisition Tool, the VideoDevice System Object, or Simulink, use the gigecam object to take
advantage of the advanced GigE Vision Standard feature support it offers.
Connect to GigE Vision Cameras
gigecamlist
ans =
Note The GigE Vision support requires that you download and install the necessary files via MATLAB
Add-Ons. The GigE Vision Hardware support package installs the files for both the gige adaptor for
the videoinput object and the gigecam object. For more information, see “Installing the Support
Packages for Image Acquisition Toolbox Adaptors” on page 4-5.
If you have the support package installed and gigecamlist does not recognize your camera, see the
troubleshooting information in “GigE Vision Hardware” on page 16-19.
Property Display
One of the main advantages of using the gigecam object for image acquisition, instead of the gige
adaptor with the videoinput object, is the advanced property features of GigE Vision Compliant
hardware.
When you create the gigecam object, the basic properties are displayed, as shown here.
g = gigecam
g =
DeviceModelName: 'MV1-D1312-80-G2-12'
SerialNumber: '022600017445'
IPAddress: '169.254.192.165'
PixelFormat: 'Mono8'
AvailablePixelFormats: {'Mono8' 'Mono10Packed' 'Mono12Packed' 'Mono10' 'Mono12'}
Height: 1082
Width: 1312
Timeout: 10
When you click Beginner, the Beginner level camera properties are displayed.
The list of available properties is specific to your camera. The display of properties is broken into
categories based on GenICam categories as specified by camera manufacturers. For example, in the
display shown here, you can see a set of device control properties, and a set of acquisition control
properties. There are other categories not shown in this graphic, such as analog control, convolver,
and image format control.
The GigE Vision category standard also provides levels of expertise for the available categories. When
you create the gigecam object, you see a small set of commonly used properties with links to the
expanded property list based on expertise. To see the additional properties, click Beginner, Expert,
or Guru.
When you use the gigecam function with no arguments, it creates the object and connects to the
single GigE Vision Compliant camera on your system, or to the first camera it finds listed in the
output of the gigecamlist function if you have multiple cameras. If you use an argument to create
the object — either an IP address, index number, or serial number — as described in “Create the
gigecam Object” on page 11-8, that argument must be the first argument.
g = gigecam('169.254.242.122')
To set a property when creating the object, specify it as a name-value pair after the IP address, index number, or serial number. The following command creates the object using the camera on the IP address given as the first argument, then sets the PixelFormat property to Mono10.
g = gigecam('169.254.242.122', 'PixelFormat', 'Mono10')
If you are creating the object with just one camera connected, you can use the index number 1 as the first input argument, then a property-value pair.
g = gigecam(1, 'PixelFormat', 'Mono10')
You can set multiple properties in this way, and you can use pairs of either character vectors or
numerics.
You can set or change properties any time after you create the object, using this syntax, where g is
the object name.
g.ExposureTime = 20000
If you want to change the Timeout from its default value of 10 seconds, to increase it to 20 seconds
for example, use this syntax.
g.Timeout = 20
This way of setting properties also supports both character vectors and numerics.
g.LinLog_Mode = 'On';
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
The commands function tells you what commands are available for your camera to use. The output
depends on the commands that are supported by your specific hardware. To get the list, use the
commands function with the object name, which is g in this example.
commands(g)
Then use executeCommand to execute any of the commands found by the commands function. The
command name is passed as a character vector. For example, set a calibration correction.
executeCommand(g, 'Correction_CalibrateGrey');
The camera is set to correct the grey calibration when you acquire images.
Your camera might also have a command to perform auto focus. With a gigecam object named gcam and a GigE Vision command named AutoFocus, you execute it the same way.
executeCommand(gcam, 'AutoFocus');
You can also see the list of commands for your camera by clicking the Show Commands link at the
bottom of the properties list when you create the gigecam object.
You can also optionally set a property when you create the object. For more information, see “Set
Properties for GigE Acquisition” on page 11-4.
Note that you cannot create more than one object connected to the same device, and trying to do that
generates an error.
After you create the object, you can preview and acquire images.
Note The GigE Vision support requires that you download and install the necessary files via MATLAB
Add-Ons. The GigE Vision Hardware support package installs the files for both the gige adaptor for
the videoinput object and the gigecam object. For more information, see “Installing the Support
Packages for Image Acquisition Toolbox Adaptors” on page 4-5.
Use the gigecamlist function to ensure that MATLAB is discovering your camera.
gigecamlist
ans =
Using the gigecam function with no arguments creates the object, and connects to the single GigE
Vision camera on your system. If you have multiple cameras and you use the gigecam function with
no input argument, it creates the object and connects it to the first camera it finds listed in the output
of the gigecamlist function.
Create an object, g.
g = gigecam
g =
DeviceModelName: 'MV1-D1312-80-G2-12'
SerialNumber: '022600017445'
IPAddress: '169.254.192.165'
PixelFormat: 'Mono8'
AvailablePixelFormats: {'Mono8' 'Mono10Packed' 'Mono12Packed' 'Mono10' 'Mono12'}
Height: 1082
Width: 1312
Timeout: 10
Use the gigecam function with the IP address of the camera (specified as a character vector) as the
input argument to create the object and connect it to the camera with that address. You can see the
IP address for your camera in the list returned by the gigecamlist function.
Use the gigecamlist function to ensure that MATLAB is discovering your cameras.
gigecamlist
ans =
g = gigecam('169.254.242.122')
g =
DeviceModelName: 'mvBlueCOUGER-X120aG'
SerialNumber: 'GX000818'
IPAddress: '169.254.242.122'
PixelFormat: 'Mono8'
AvailablePixelFormats: {'Mono8' 'Mono12' 'Mono14' 'Mono16' 'Mono12Packed'
'BayerGR8' 'BayerGR10' 'BayerGR12' 'BayerGR16' 'BayerGR12Packed'
'YUV422Packed' 'YUV422_YUYVPacked' 'YUV444Packed'}
Height: 1082
Width: 1312
Timeout: 10
You can also create the object in this same way using the serial number. You use the same syntax, but
use a serial number instead of the IP address, also as a character vector.
g = gigecam('022600017445')
Use the gigecam function with an index as the input argument to create the object corresponding to
that index and connect it to that camera. The index corresponds to the order of cameras in the table
returned by gigecamlist when you have multiple cameras connected.
Use the gigecamlist function to ensure that MATLAB is discovering your cameras.
gigecamlist
ans =
g = gigecam(2)
g =
DeviceModelName: 'mvBlueCOUGER-X120aG'
SerialNumber: 'GX000818'
IPAddress: '169.254.242.122'
PixelFormat: 'Mono8'
AvailablePixelFormats: {'Mono8' 'Mono12' 'Mono14' 'Mono16' 'Mono12Packed'
'BayerGR8' 'BayerGR10' 'BayerGR12' 'BayerGR16' 'BayerGR12Packed'
'YUV422Packed' 'YUV422_YUYVPacked' 'YUV444Packed'}
Height: 1082
Width: 1312
Timeout: 10
It creates the object and connects it to the Matrix Vision camera with that index number, in this case,
the second one displayed by gigecamlist. If you only have one camera, you do not need to use the
index.
1 Use the gigecamlist function to ensure that MATLAB is discovering your camera.
gigecamlist
ans =
2 Create the gigecam object, g.
g = gigecam
g =
DeviceModelName: 'MV1-D1312-80-G2-12'
SerialNumber: '022600017445'
IPAddress: '169.254.192.165'
PixelFormat: 'Mono8'
AvailablePixelFormats: {'Mono8' 'Mono10Packed' 'Mono12Packed' 'Mono10' 'Mono12'}
Height: 1082
Width: 1312
Timeout: 10
3 Start a preview of the live video from your camera.
preview(g)
The preview window displays the live video stream from your camera. The preview dynamically updates, so if you change a property while previewing, the image changes to reflect the property change.
4 Optionally, set any properties. Properties are displayed when you create the object, as shown in
step 2. For example, you could change the ExposureTime setting.
g.ExposureTime = 20000
For more information, see “Set Properties for GigE Acquisition” on page 11-4.
5 Optionally, use any of the GigE camera commands that your camera supports.
For more information, see “Set Properties for GigE Acquisition” on page 11-4.
6 Close the preview.
closePreview(g)
7 Acquire a single image from the camera using the snapshot function, and assign it to the
variable img.
img = snapshot(g);
8 Display the acquired image.
imshow(img)
9 Clean up by clearing the object.
clear g
12 Using the Kinect for Windows Adaptor
Doing image acquisition with a Kinect for Windows camera is similar to using other cameras and
adaptors, but with several key differences:
• The Kinect for Windows device has two separate physical sensors, and each one uses a different
DeviceID in the videoinput object. The Kinect color sensor returns color image data. The
Kinect depth sensor returns depth and skeletal data. For information about Kinect device
discovery and the use of two device IDs, see “Detecting the Kinect Devices” on page 12-5.
• The Kinect for Windows device returns four data streams. The image stream is returned by the
color sensor and contains color data in various color formats. The depth stream is returned by the
depth sensor and returns depth information in pixels. The skeletal stream is returned by the depth
sensor and returns metadata about the skeletons. There is also an audio stream, but this is unused
by Image Acquisition Toolbox. For details on the streams, see “Data Streams Returned by the
Kinect” on page 12-3.
• The Kinect for Windows can track up to six people. It can provide full tracking on two people, and
position tracking on up to four more.
• In Image Acquisition Toolbox, skeletal metadata is accessed through the depth sensor object. For
an example showing how to access the skeletal metadata, see “Acquiring Image and Skeletal Data
Using Kinect” on page 12-7.
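The two-sensor arrangement described above can be sketched as follows, assuming the Kinect support package is installed and the color and depth sensors have device IDs 1 and 2, respectively:

```matlab
% Create one videoinput object per Kinect sensor.
colorVid = videoinput('kinect', 1);   % color sensor: color image data
depthVid = videoinput('kinect', 2);   % depth sensor: depth and skeletal data
```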
Note The Kinect adaptor is intended for use only with the Kinect for Windows sensor.
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons. See “Installing the
Kinect for Windows Sensor Support Package” on page 12-20 for information specific to installing the
Kinect support package. Also, in order to use the Kinect for Windows support, you must have version
1.6 of the Kinect for Windows Runtime installed on your system. If you do not already have it
installed, it will be installed when you install the Kinect support package.
Data Streams Returned by the Kinect
Image Stream
The image stream returns color image data, and data in other formats, from the Kinect color sensor. It supports the following formats.
RawYUV_640x480 — Raw YUV format. Resolution of 640 x 480, frame rate of 15 frames per second, which is the maximum allowed.
RGB_1280x960 — RGB format. Resolution of 1280 x 960, frame rate of 12 frames per second, which is the maximum allowed.
RGB_640x480 — RGB format. Resolution of 640 x 480, frame rate of 30 frames per second, which is the maximum allowed.
YUV_640x480 — YUV format. Resolution of 640 x 480, frame rate of 15 frames per second, which is the maximum allowed.
Infrared_640x480 — Infrared format. MONO16 frame type with resolution of 640 x 480, frame rate of 30 frames per second, which is the maximum allowed.
RawBayer_640x480 — Raw Bayer format. MONO8 frame type with resolution of 640 x 480, frame rate of 30 frames per second, which is the maximum allowed.
Depth Stream
The depth stream returns person segmentation data using the Kinect depth sensor. The depth map gives distance in millimeters from the camera plane. For skeletal tracking, only two people can be tracked at a given time, although six people can be segmented at a time: the sensor provides full tracking for two skeletons and partial position tracking for up to four more. Two tracking ranges are available: a default range of 50 cm to 400 cm and a near range of 40 cm to 300 cm.
Format: Description
Depth_640x480: Resolution of 640 x 480; frame rate of 30 frames per second.
Depth_320x240: Resolution of 320 x 240; frame rate of 30 frames per second.
Depth_80x60: Resolution of 80 x 60; frame rate of 30 frames per second.
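As a sketch of how a depth format and tracking range are selected, the format name is passed when creating the videoinput object; the device-specific DepthMode property used here to select the near range is an assumption about the adaptor's property set, so check getselectedsource output on your system:

```matlab
% Connect to the Kinect depth sensor (device ID 2) with the 640 x 480 format.
vidDepth = videoinput('kinect', 2, 'Depth_640x480');
srcDepth = getselectedsource(vidDepth);
% Select the near tracking range (40 cm to 300 cm) -- property name assumed.
srcDepth.DepthMode = 'Near';
```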
Skeletal Stream
The skeletal stream returns skeletal data using the Kinect depth device. The skeleton frame returned
contains data on the ground plane position and a time stamp. It contains the overall position of the
skeleton and the 3-D position of all 20 joints (position in meters). Two skeletons are actively tracked,
and another four are tracked passively.
Note To understand the differences in using the Kinect adaptor compared to other toolbox adaptors,
see “Important Information About the Kinect Adaptor” on page 12-2. For information about Kinect
device discovery and the use of two device IDs, see “Detecting the Kinect Devices” on page 12-5.
For an example that shows how to access the skeletal metadata, see “Acquiring Image and Skeletal
Data Using Kinect” on page 12-7.
Detecting the Kinect Devices
info = imaqhwinfo('kinect')
info =
AdaptorDllName: '<matlabroot>\toolbox\imaq\imaqadaptors\win64\mwkinectimaq.dll'
AdaptorDllVersion: '4.6 (R2013b)'
AdaptorName: 'kinect'
DeviceIDs: {[1] [2]}
DeviceInfo: [1x2 struct]
If you look at each device, you can see that they represent the color sensor and the depth sensor. The
following shows the color sensor.
info.DeviceInfo(1)
ans =
DefaultFormat: 'RGB_640x480'
DeviceFileSupported: 0
DeviceName: 'Kinect Color Sensor'
DeviceID: 1
VideoInputConstructor: 'videoinput('kinect', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('kinect', 1)'
SupportedFormats: {'RGB_1280x960' 'RGB_640x480' 'RawYUV_640x480' 'YUV_640x480'
'Infrared_640x480' 'RawBayer_1280x960' 'RawBayer_640x480'}
In the output, you can see that Device 1 is the color sensor. The second device is the depth sensor:
info.DeviceInfo(2)
ans =
DefaultFormat: 'Depth_640x480'
DeviceFileSupported: 0
DeviceName: 'Kinect Depth Sensor'
DeviceID: 2
VideoInputConstructor: 'videoinput('kinect', 2)'
VideoDeviceConstructor: 'imaq.VideoDevice('kinect', 2)'
SupportedFormats: {'Depth_640x480' 'Depth_320x240' 'Depth_80x60'}
You can use multiple Kinect cameras together. Multiple Kinect sensors are enumerated as
DeviceIDs [1] [2] [3] [4] and so on. For example, if you had two Kinect cameras, the first one
would have Kinect Color Sensor with DeviceID 1 and Kinect Depth Sensor with DeviceID
2 and the second Kinect camera would have Kinect Color Sensor with DeviceID 3 and Kinect
Depth Sensor with DeviceID 4.
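For example, connecting to the sensors of two Kinect cameras might look like the following sketch, using the default format names shown earlier:

```matlab
% First Kinect camera: color sensor is device 1, depth sensor is device 2.
vidColor1 = videoinput('kinect', 1, 'RGB_640x480');
vidDepth1 = videoinput('kinect', 2, 'Depth_640x480');
% Second Kinect camera: color sensor is device 3, depth sensor is device 4.
vidColor2 = videoinput('kinect', 3, 'RGB_640x480');
vidDepth2 = videoinput('kinect', 4, 'Depth_640x480');
```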
Note To understand the differences in using the Kinect adaptor compared to other toolbox adaptors,
see “Important Information About the Kinect Adaptor” on page 12-2. For more information on the
Kinect streams, see “Data Streams Returned by the Kinect” on page 12-3. For an example that shows
how to access the skeletal metadata, see “Acquiring Image and Skeletal Data Using Kinect” on page
12-7.
Acquiring Image and Skeletal Data Using Kinect
1 Create the videoinput object for the color sensor. DeviceID 1 is used for the color sensor.
vid = videoinput('kinect',1,'RGB_640x480');
2 Look at the device-specific properties on the source device, which is the color sensor on the
Kinect camera.
src = getselectedsource(vid);
src
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = ColorSource
Tag =
Type = videosource
As you can see in the output, the color sensor has a set of device-specific properties.
src
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = DepthSource
Tag =
Type = videosource
As you can see in the output, the depth sensor has a set of device-specific properties associated
with skeletal tracking. These properties are specific to the depth sensor.
start(vid2);
8 Skeletal data is accessed as metadata on the depth stream. You can use getdata to access it.
[frames, timeStamps, metaData] = getdata(vid2);
metaData =
MetaData: Description
AbsTime: A 1 x 1 double representing the full timestamp, including date and time, in MATLAB clock format.
FrameNumber: A 1 x 1 double representing the frame number.
IsPositionTracked: A 1 x 6 Boolean matrix of true/false values for the tracking of the position of each of the six skeletons. A 1 indicates the position is tracked and a 0 indicates it is not.
IsSkeletonTracked: A 1 x 6 Boolean matrix of true/false values for the tracked state of each of the six skeletons. A 1 indicates the skeleton is tracked and a 0 indicates it is not.
JointDepthIndices: If the BodyPosture property is set to Standing, a 20 x 2 x 6 double matrix of x- and y-coordinates for 20 joints, in pixels relative to the depth image, for the six possible skeletons. If BodyPosture is set to Seated, a 10 x 2 x 6 double for 10 joints.
JointImageIndices: If the BodyPosture property is set to Standing, a 20 x 2 x 6 double matrix of x- and y-coordinates for 20 joints, in pixels relative to the color image, for the six possible skeletons. If BodyPosture is set to Seated, a 10 x 2 x 6 double for 10 joints.
JointTrackingState: A 20 x 6 integer matrix of enumerated values for the tracking accuracy of each joint for all six skeletons. Values are 0 (not tracked), 1 (position inferred), and 2 (position tracked).
JointWorldCoordinates: A 20 x 3 x 6 double matrix of x-, y-, and z-coordinates for 20 joints, in meters from the sensor, for the six possible skeletons, if BodyPosture is set to Standing. If it is set to Seated, a 10 x 3 x 6 double for 10 joints.
TriggerIndex: A 1 x 1 double representing the trigger the event is associated with, if triggering is used.
9 You can look at any individual property by drilling into the metadata. For example, look at the
IsSkeletonTracked property.
metaData.IsSkeletonTracked
ans =
1 0 0 0 0 0
In this case it means that of the six possible skeletons, there is one skeleton being tracked and it
is in the first position. If you have multiple skeletons, this property is useful to confirm which
ones are being tracked.
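When more than one skeleton may be present, a small sketch like the following can pick out which positions to index:

```matlab
% Indices (1-6) of the skeletons that are fully tracked.
trackedIdx = find(metaData.IsSkeletonTracked);
for k = trackedIdx
    % World coordinates for each tracked skeleton: a 20 x 3 matrix.
    joints = metaData.JointWorldCoordinates(:, :, k);
    % Joint index 4 is the head (see the joint index list below).
    fprintf('Skeleton %d head position: %.2f %.2f %.2f m\n', k, joints(4, :));
end
```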
10 Get the joint locations for the first person in world coordinates using the
JointWorldCoordinates property. Since this is the person in position 1, the index uses 1.
metaData.JointWorldCoordinates(:,:,1)
ans =
The columns represent the X, Y, and Z coordinates in meters of the 20 points on skeleton 1.
11 You can optionally view the segmentation data as an image.
The BodyPosture property, described in step 5, indicates whether the tracked skeletons are standing or seated. Values are Standing (returns 20-point skeleton data) and Seated (returns 10-point skeleton data, using joint indices 2 through 11).
Hip_Center = 1;
Spine = 2;
Shoulder_Center = 3;
Head = 4;
Shoulder_Left = 5;
Elbow_Left = 6;
Wrist_Left = 7;
Hand_Left = 8;
Shoulder_Right = 9;
Elbow_Right = 10;
Wrist_Right = 11;
Hand_Right = 12;
Hip_Left = 13;
Knee_Left = 14;
Ankle_Left = 15;
Foot_Left = 16;
Hip_Right = 17;
Knee_Right = 18;
Ankle_Right = 19;
Foot_Right = 20;
When BodyPosture is set to Standing, all 20 indices are returned, as shown above. When
BodyPosture is set to Seated, numbers 2 through 11 are returned, since this represents the upper
body of the skeleton.
Note To understand the differences in using the Kinect adaptor compared to previous toolbox
adaptors, see “Important Information About the Kinect Adaptor” on page 12-2. For information about
Kinect device discovery and the use of two device IDs, see "Detecting the Kinect Devices" on page 12-5. For an example of simultaneous acquisition, see "Acquiring from Color and Depth Devices
Simultaneously” on page 12-17.
Acquiring from Color and Depth Devices Simultaneously
1 Create the objects for the color and depth sensors. Device 1 is the color sensor and Device 2 is
the depth sensor.
vid = videoinput('kinect',1);
vid2 = videoinput('kinect',2);
2 Get the source properties for the depth device.
srcDepth = getselectedsource(vid2);
3 Set the frames per trigger for both devices to 1.
vid.FramesPerTrigger = 1;
vid2.FramesPerTrigger = 1;
4 Set the trigger repeat for both devices to 200, in order to acquire 201 frames from both the color
sensor and the depth sensor.
vid.TriggerRepeat = 200;
vid2.TriggerRepeat = 200;
5 Configure the camera for manual triggering for both sensors.
triggerconfig([vid vid2],'manual');
6 Start both video objects.
start([vid vid2]);
7 Trigger the devices, then get the acquired data.
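A sketch of step 7 using the standard trigger and getdata functions; the exact code from the original step is not preserved here, so treat this as an illustration:

```matlab
% Fire one manual trigger on both objects, then retrieve one frame from each.
trigger([vid vid2]);
imgColor = getdata(vid, 1);
imgDepth = getdata(vid2, 1);
```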
1 Create the Kinect objects and acquire image and skeletal data, as shown in “Acquiring Image and
Skeletal Data Using Kinect” on page 12-7.
2 Use the skeletonViewer function to view the skeletal data.
In this code, skeleton is the joint image locations returned by the Kinect depth sensor, and
image is the RGB image corresponding to the skeleton frame. nSkeleton is the number of
skeletons.
function [] = skeletonViewer(skeleton, image, nSkeleton)
Using Skeleton Viewer for Kinect Skeletal Data
[18 19];
[19 20];
[1 13]; % Left Leg
[13 14];
[14 15];
[15 16]];
5 Draw the skeletons on the RGB image.
for i = 1:19
    if nSkeleton > 0
        X1 = [skeleton(SkeletonConnectionMap(i,1),1,1) skeleton(SkeletonConnectionMap(i,2),1,1)];
        Y1 = [skeleton(SkeletonConnectionMap(i,1),2,1) skeleton(SkeletonConnectionMap(i,2),2,1)];
        line(X1,Y1, 'LineWidth', 1.5, 'LineStyle', '-', 'Marker', '+', 'Color', 'r');
    end
    if nSkeleton > 1
        X2 = [skeleton(SkeletonConnectionMap(i,1),1,2) skeleton(SkeletonConnectionMap(i,2),1,2)];
        Y2 = [skeleton(SkeletonConnectionMap(i,1),2,2) skeleton(SkeletonConnectionMap(i,2),2,2)];
        line(X2,Y2, 'LineWidth', 1.5, 'LineStyle', '-', 'Marker', '+', 'Color', 'g');
    end
    hold on;
end
hold off;
The viewer will show the following for this example, which contains the color image of one
person, with the skeletal data overlaid on the image.
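A hypothetical call to the viewer, assuming image and skeletal data acquired as in the earlier example; the variable names metaData and imageData are illustrative:

```matlab
% Joint locations in color-image pixel coordinates for all six skeletons.
jointIndices = metaData.JointImageIndices;
% Number of skeletons the sensor is actually tracking.
nSkeleton = nnz(metaData.IsSkeletonTracked);
skeletonViewer(jointIndices, imageData, nSkeleton);
```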
Installing the Kinect for Windows Sensor Support Package
In order to use the Kinect for Windows support in the Image Acquisition Toolbox, you must have the
correct version of the Kinect for Windows Runtime installed on your system. If you do not already
have it installed, it will be installed when you install the Kinect support package. After you complete
the support package installation, you can acquire images using the Kinect for Windows V1 or V2 with
the Image Acquisition Toolbox.
Using this installation process, you download and install the following files on your host computer:
• MATLAB files to use Kinect for Windows V1 cameras with the toolbox
• MATLAB files to use Kinect for Windows V2 cameras with the toolbox
• Kinect for Windows Runtime, if you do not already have a current version installed
Note You can use this support package only on a host computer running a version of 64-bit Windows
that Image Acquisition Toolbox supports.
If the installation fails, see the Troubleshooting section at the end of this topic.
1 On the MATLAB Home tab, in the Environment section, click Add-Ons > Get Hardware
Support Packages.
2 In the Add-On Explorer, scroll to the Hardware Support Packages section, and click show all
to find your support package.
3 You can refine the list by selecting Imaging/Cameras in the Refine by Hardware Type section
on the left side of the Explorer.
4 Select Image Acquisition Toolbox Support Package for Kinect For Windows Sensor.
Troubleshooting
If the setup fails, it could be caused by an internet security setting. If you get an error message such
as “KINECT Setup Failed – An error occurred while installing,” try the following and then run the
installer again.
13
You can continue to use the Matrox adaptor (matrox) with the videoinput object, or you can use
the matroxcam object, which takes advantage of Matrox features.
Note The Matrox support, using either object, requires that you download and install the necessary
files via MATLAB Add-Ons. The Matrox Hardware support package installs the files for both the
matrox adaptor for the videoinput object and the matroxcam object. For more information, see
“Installing the Support Packages for Image Acquisition Toolbox Adaptors” on page 4-5.
The matroxcam object offers the following advantages:
• More robust support for a wider range of device configurations. Supports more device configurations than the videoinput matrox adaptor.
• Added support for the MIL 10.
• Supports newer devices.
If you do not need to use any advanced toolbox features and you do not need to use the Image
Acquisition Tool, the VideoDevice System Object, or the Simulink block, use the matroxcam object to
take advantage of the advanced Matrox feature support it offers.
Note that the matroxcam and matroxlist functions that are available with the matroxcam object
starting in release R2015a work with MIL or MIL-Lite 10.x only. Use of the matrox adaptor with the
videoinput object can use either MIL 9 or 10.
Connect to Matrox Frame Grabbers
matroxlist
ans =
Note The Matrox support requires that you download and install the necessary files via MATLAB
Add-Ons. The Matrox Hardware support package installs the files for both the matrox adaptor for the
videoinput object and the matroxcam object. For more information, see “Installing the Support
Packages for Image Acquisition Toolbox Adaptors” on page 4-5.
If you have the support package installed and matroxlist does not recognize your hardware, see
the troubleshooting information in “Matrox Hardware” on page 16-10.
13 Using the Matrox Interface
1 Set any properties you want to configure in your DCF file, using the Matrox Intellicam software.
2 Use the matroxlist function to ensure that MATLAB is discovering your frame grabber.
matroxlist
ans =
m =
The four properties shown when you create the object are read-only properties that identify the
frame grabber.
4 You can then preview and acquire images, as described in “Acquire Images from Matrox Frame
Grabbers” on page 13-6.
Note If you need to change any properties after you preview your image, you must change them in
the DCF file, and then create a new object to pick up the changes.
If your DCF file is configured for hardware triggering, then you must provide the trigger to acquire
images. To do that, call the snapshot function as you normally would, as described in “Acquire One
Image Frame from a Matrox Frame Grabber” on page 13-7, and then perform the hardware trigger
to acquire the frame.
Set Properties for Matrox Acquisition
Note that when you call the snapshot function with hardware triggering set, it does not time out as it normally would. The MATLAB command line therefore remains blocked until you perform the hardware trigger.
Note that you cannot create more than one object connected to the same device; attempting to do so generates an error.
After you create the object, you can preview and acquire images.
Note The Matrox support requires that you download and install the necessary files via MATLAB
Add-Ons. The Matrox Hardware support package installs the files for both the matrox adaptor for the
videoinput object and the matroxcam object. For more information, see “Installing the Support
Packages for Image Acquisition Toolbox Adaptors” on page 4-5.
Use the matroxcam function with an index as the first input argument to create the object
corresponding to that index and connect it to that frame grabber. The index corresponds to the order
of boards in the cell array returned by matroxlist when you have multiple frame grabbers
connected. If you only have one frame grabber, you must use a 1 as the input argument.
Use the matroxlist function to ensure that MATLAB is discovering your frame grabbers.
matroxlist
ans =
Create an object, m, using the index number and DCF file. If you want to use the second frame
grabber in the list, the Solios XCL at digitizer 1, use a 2 as the index number, since it is the second
camera on the list. The second argument must be the name of your DCF file, entered as a character
vector. It must contain the fully qualified path to the file as well. In this example, the DCF file is
named mycam.dcf.
m = matroxcam(2, 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf')
m =
Acquire Images from Matrox Frame Grabbers
DCFName: 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf'
FrameResolution: '1300 x 1080'
Timeout: 10
It creates the object and connects it to the Solios XCL with that index number, in this case, the second
one displayed by matroxlist. The DCF file is specified so that the acquisition can use the properties
you have set in your DCF file.
The four properties shown when you create the object are read-only properties that identify the frame
grabber.
1 Use the matroxlist function to ensure that MATLAB is discovering your frame grabber.
matroxlist
ans =
m =
It creates the object and connects it to the Solios XCL with that index number, in this case, the
second one displayed by matroxlist. The DCF file is specified so that the acquisition can use
the properties you have set in your DCF file.
3 Preview the image from the camera.
preview(m)
4 You can leave the Preview window open, or close it any time. To close the preview:
closePreview(m)
5 Acquire a single image using the snapshot function, and assign it to the variable img.
img = snapshot(m);
6 Display the acquired image.
imshow(img)
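To acquire a short sequence rather than a single frame, snapshot can be called in a loop; a sketch, with the frame count chosen arbitrarily:

```matlab
% Grab ten frames one at a time and store them in a cell array.
frames = cell(1, 10);
for k = 1:10
    frames{k} = snapshot(m);
end
```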
7 Clear the object when you are done accessing the hardware.
clear m
If your DCF file is configured for hardware triggering, then you must provide the trigger to acquire
images. To do that, call the snapshot function as you normally would, as shown in step 5, and then
perform the hardware trigger to acquire the frame.
Note that when you call the snapshot function with hardware triggering set, it does not time out as it normally would. The MATLAB command line therefore remains blocked until you perform the hardware trigger.
14
You use the imaq.VideoDevice function to create the System object. It supports the same adaptors
and hardware that the videoinput object supports; however, it has different functions and
properties associated with it. For example, the System object uses the step function to acquire single
frames.
Creating the VideoDevice System Object
Constructor: Purpose
obj = imaq.VideoDevice: Creates a VideoDevice System object, obj, that acquires images from a specified image acquisition device. When you specify no parameters, by default it selects the first available device for the first adaptor returned by imaqhwinfo.
obj = imaq.VideoDevice(adaptorname): Creates a VideoDevice System object, obj, using the first device of the specified adaptorname. adaptorname is a character vector that specifies the name of the adaptor used to communicate with the device. Use the imaqhwinfo function to determine the adaptors available on your system.
obj = imaq.VideoDevice(adaptorname, deviceid): Creates a VideoDevice System object, obj, with the default format for the specified adaptorname and deviceid. deviceid is a numeric scalar value that identifies a particular device available through the specified adaptorname. Use the imaqhwinfo(adaptorname) syntax to determine the devices available and the corresponding values for deviceid.
obj = imaq.VideoDevice(adaptorname, deviceid, format): Creates a VideoDevice System object, obj, where format is a character vector that specifies a particular video format supported by the device or a device configuration file (also known as a camera file).
obj = imaq.VideoDevice(adaptorname, deviceid, format, P1, V1, ...): Creates a VideoDevice System object, obj, with the specified property values. If an invalid property name or property value is specified, the object is not created.
Specifying properties at the time of object creation is optional. They can also be specified after the
object is created. See “Using Properties on a VideoDevice System Object” on page 14-8 for a list of
applicable properties.
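For example, a constructor call that specifies properties might look like the following sketch; the adaptor name and the format string 'MJPG_800x600' are placeholders for whatever imaqhwinfo reports on your system:

```matlab
% Connect to device 1 of the winvideo adaptor, choosing a format and
% requesting grayscale frames at creation time.
vidobj = imaq.VideoDevice('winvideo', 1, 'MJPG_800x600', ...
    'ReturnedColorSpace', 'grayscale');
```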
14 Using the VideoDevice System Object
Function: Purpose
step: Acquire a single frame from the device.
frame = step(obj);
Note that the first time you call step, it acquires exclusive use of the hardware and will start streaming data.
release: Release VideoDevice resources and allow property value changes.
release(obj)
isLocked: Indicate whether the object has exclusive use of the hardware.
L = isLocked(obj)
preview: Open a live image preview window.
preview(obj)
creates a Video Preview window that displays live video data for the VideoDevice System object, obj. The Video Preview window displays the video data at 100% magnification. The size of the preview image is determined by the value of the VideoDevice System object ROI property. If not specified, it uses the default resolution for the device.
closepreview: Close the live image preview window.
closepreview(obj)
closes the live preview window for VideoDevice System object, obj.
imaqhwinfo: Return information about the object.
imaqhwinfo(obj)
Using VideoDevice System Object to Acquire Frames
The basic workflow for using the VideoDevice System object is to create the object, preview the
image, set any properties, acquire a frame, and clear the object, as shown here.
1 Construct a VideoDevice System object associated with the winvideo adaptor with device ID of 1.
vidobj = imaq.VideoDevice('winvideo', 1);
2 Set an object-level property, such as ReturnedColorSpace.
vidobj.ReturnedColorSpace = 'grayscale';
3 Set a device-specific property, such as Brightness.
vidobj.DeviceProperties.Brightness = 150;
Note that the syntax for setting a device-specific property is to list the object name, the
DeviceProperties object, and the property name using dot notation, and then make it equal to
the property value.
4 Preview the image.
preview(vidobj)
5 Acquire a single frame using the step function.
frame = step(vidobj);
6 Display the acquired frame.
imshow(frame)
7 Release the hardware resource.
release(vidobj);
8 Clear the VideoDevice System object.
clear vidobj;
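Collected into one script, the workflow above looks like the following sketch; the adaptor name and Brightness value follow the steps above, and your device may expose different device-specific properties:

```matlab
% Minimal VideoDevice workflow: create, configure, preview, acquire, clean up.
vidobj = imaq.VideoDevice('winvideo', 1);
vidobj.ReturnedColorSpace = 'grayscale';
vidobj.DeviceProperties.Brightness = 150;  % device-specific; may not exist on all cameras

preview(vidobj);        % open the live preview window
frame = step(vidobj);   % acquire one frame (locks the hardware)
imshow(frame);

release(vidobj);        % free the hardware resource
clear vidobj;
```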
Typically in the Image Acquisition Toolbox, each camera or image device has one device ID. Because
the Kinect for Windows camera has two separate sensors, the Color sensor and the Depth sensor, the
toolbox lists two device IDs. The Kinect Color sensor is device 1 and the Kinect depth sensor is device
2.
This example uses a Kinect V1 device. The toolbox also supports Kinect V2. For information on the
properties and metadata of Kinect V2 devices, install the Image Acquisition Toolbox Support Package
for Kinect For Windows Sensor and see the “Acquire Images with Kinect V2” section in the
documentation.
The Depth sensor returns skeleton metadata. To access this, you need to add a second output
argument for the step function. The Color sensor works the same way as other devices. So acquiring
a frame using the Kinect Color sensor is done as shown here:
imageData = step(vidobjcolor);
where imageData is the frame acquired if vidobjcolor is a System object created with Device 1,
the Kinect Color sensor.
The Kinect Depth sensor requires a second output argument, as shown here:
[depthData, metadata] = step(vidobjdepth);
where depthData is the frame acquired if vidobjdepth is a System object created with Device 2,
the Kinect Depth sensor, and metadata is the skeleton metadata returned with the frame.
These metadata fields are related to tracking the skeletons. The metadata is returned as a structure
that contains these parameters:
IsPositionTracked
IsSkeletonTracked
JointDepthIndices
JointImageIndices
JointTrackingState
JointWorldCoordinates
PositionDepthIndices
PositionImageIndices
PositionWorldCoordinates
SegmentationData
SkeletonTrackingID
You can then look at both outputs. To see the image frame:
imshow(imageData)
To see the skeleton metadata, display the structure:
metadata
Note The Kinect for Windows Depth sensor may take some seconds to be ready to begin acquiring
skeletal metadata. In order to see values in the metadata output, you need to acquire multiple frames
using the step function repeatedly. You can do this by using a for loop.
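A sketch of such a loop, assuming vidobjdepth is the Depth-sensor System object from above; the frame count is arbitrary:

```matlab
% Step repeatedly until the sensor reports a tracked skeleton (or we give up).
for k = 1:200
    [depthData, metadata] = step(vidobjdepth);
    if any(metadata.IsSkeletonTracked)
        break;  % skeletal metadata is now populated
    end
end
```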
Note By default the System object returns data as single precision values with the range 0.0 to 1.0.
The value represents the fraction through the sensor’s dynamic range. The Kinect depth sensor has a
range of 0 to 8192 mm.
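Given the note above, a normalized depth frame can be rescaled to physical units; a sketch, where the 8192 mm span is taken from the note and the frame is assumed to use the default single data type:

```matlab
% Convert normalized [0.0, 1.0] depth values to millimeters.
depthMM = double(depthData) * 8192;
```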
“Acquiring Image and Skeletal Data Using Kinect” on page 12-7 is an example that shows how to
access the skeletal metadata using the videoinput object (not the VideoDevice System object), and
it contains information about the properties you can set on both the Color and Depth sensors, and
descriptions of all the metadata fields. The property names and values are the same as they would be
for the System object, but you would then need to set the properties as shown in step 3 of the above
example (in the current topic) for use with the VideoDevice System object.
Properties that can be used with the VideoDevice System object include:
Property: Description
Device: Device from which to acquire images.
obj.Device
VideoFormat: Specify the video format to use while acquiring the frame. The default value of VideoFormat is the default format returned by imaqhwinfo for the selected device. To specify a video format using a device file, set the VideoFormat property to 'From device file'. This option exists only if your device supports device configuration files.
obj.VideoFormat
DeviceProperties: Object containing the device-specific properties, set using dot notation.
obj.DeviceProperties.<property_name> = <property_value>
ROI: Region of interest for acquisition, specified as [x y width height]. This value is 1-based. Note that this differs from the videoinput object, the Image Acquisition Tool, and the From Video Device block, all of which are 0-based.
Using Properties on a VideoDevice System Object
HardwareTriggering: Turn hardware triggering on or off. Set this property to 'on' to enable hardware triggering to acquire images. The property is visible only when the device supports hardware triggering.
TriggerConfiguration: Specify the trigger source and trigger condition before acquisition. The triggering condition must be met via the trigger source before a frame is acquired. This property is visible only when HardwareTriggering is set to 'on'.
obj.TriggerConfiguration
ReturnedColorSpace: Specify the color space of the returned image, such as 'rgb' or 'grayscale'.
obj.ReturnedColorSpace
BayerSensorAlignment: Specify the Bayer sensor alignment used for demosaicing when a Bayer color space is returned.
obj.BayerSensorAlignment
ReturnedDataType: Specify the data type of the returned image frames. The default is single.
obj.ReturnedDataType
Note The setting of properties for the System object supports tab completion for enumerated properties while coding in MATLAB. Using tab completion is an easy way to see the available property values. After you type the property name, type a comma, a space, and the first quote mark for the value, then press Tab to see the possible values.
Once you have created a VideoDevice System object, you can set either object-level properties or
device-specific properties on it.
vidobj.ReturnedColorSpace = 'grayscale';
You can see that the syntax for setting an object-level property is to use
<object_name>.<property_name> = <property_value>, where the value may be a character
vector or a numeric.
Another example of an object-level property is setting the region-of-interest, or ROI, to change the
dimensions of the acquired image. The ROI format is specified in pixels in a 1-by-4 element vector [x
y width height].
Note This ROI value is 1-based. This differs from the videoinput object, the Image Acquisition
Tool, and the From Video Device block, all of which are 0-based.
vidobj.DeviceProperties.Brightness = 150;
You can see that the syntax for setting a device-specific property is to use dot notation with the object
name, the DeviceProperties object, and the property name and then make it equal to the property
value.
Another example of a device-specific property is setting the frame rate for a device that supports it.
vidobj.DeviceProperties.FrameRate = '30';
Note Once you have called step, in order to change a property or set a new one, you must release the object using the release function before setting the new property.
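A sketch of that pattern:

```matlab
frame = step(vidobj);               % step locks the hardware resource
release(vidobj);                    % unlock before changing properties
vidobj.ReturnedColorSpace = 'rgb';  % now the change is allowed
frame = step(vidobj);               % re-acquires the hardware
```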
Code Generation with VideoDevice System Object
In this section...
“Using the codegen Function” on page 14-11
“Shared Library Dependencies” on page 14-11
“Usage Rules for System Objects in Generated MATLAB Code” on page 14-12
“Limitations on Using System Objects in Generated MATLAB Code” on page 14-12
Note The MATLAB Compiler software supports System objects for use inside MATLAB functions. The
MATLAB Compiler does not support System objects for use in MATLAB scripts.
Note If you use the codegen command to generate a MEX function on a Windows platform, you
need to perform imaqreset before running the generated MEX file. However, MEX does not work
correctly if the Kinect for Windows Sensor support package is installed.
After running the generated MEX file, if you run some MATLAB code that includes a VideoDevice
System object with a camera adaptor that is also used in the generated MEX file, you need to perform
clear mex first.
Note The codegen command can be used to generate executable files on non-Windows platforms.
However, generation of the MEX function is not supported on Linux and macOS platforms.
For more information see the documentation for the MATLAB codegen function.
For example, your path may look like this on a Windows system and using release R2018a:
C:\ProgramData\MATLAB\SupportPackages\R2018a\toolbox\imaq\supportpackages\genericvideo\adaptor\win64
You will need to add the above folders to your system path before running the generated executable
outside of MATLAB.
The following System object functions are supported in generated MATLAB code:
• get
• getNumInputs
• getNumOutputs
• reset
• step
Do not set System objects to become outputs from the MATLAB Function block.
Do not pass a System object as an example input argument to a function being compiled with
codegen.
Do not pass a System object to functions declared as extrinsic (i.e., functions called in interpreted
mode) using the coder.extrinsic function. Do not return System objects from any extrinsic
functions.
15
The Image Acquisition Toolbox Adaptor Kit is a C++ framework that you can use to implement an
adaptor. An adaptor is a dynamic link library (DLL) that implements the connection between the
Image Acquisition Toolbox engine and a device driver via the vendor's SDK API. When you use the
Adaptor Kit framework, you can take advantage of many prepackaged toolbox features such as disk
logging, multiple triggering modes, and a standardized interface to the image acquisition device.
After you create your adaptor DLL and register it with the toolbox using the imaqregister function,
you can create a video input object to connect with a device through your adaptor. In this way,
adaptors enable the dynamic loading of support for hardware without requiring recompilation and
linking of the toolbox.
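After building, registration is a single call to imaqregister; for example, with a hypothetical DLL path:

```matlab
% Register a custom adaptor DLL with the toolbox (path is illustrative).
imaqregister('C:\MyAdaptor\myadaptor.dll');
% The new adaptor then appears in the output of imaqhwinfo.
imaqhwinfo
```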
To build an adaptor requires familiarity with C++, knowledge of the application programming
interface (API) provided by the manufacturer of your hardware, and familiarity with Image
Acquisition Toolbox concepts, functionality, and terminology. To learn more about creating an adaptor,
see “Creating Custom Adaptors”. For detailed information about the adaptor kit framework classes,
see the Image Acquisition Toolbox Adaptor Kit Class Reference, which is available in
matlabroot\toolbox\imaq\imaqadaptors\kit\doc\adaptorkit.chm
16
Troubleshooting
This chapter provides information about solving common problems you might encounter with the
Image Acquisition Toolbox software and the video acquisition hardware it supports.
Troubleshooting Overview
If, after installing the Image Acquisition Toolbox software and using it to establish a connection to
your image acquisition device, you are unable to acquire data or encounter other problems, try these
troubleshooting steps first. They might help fix the problem.
The following sections describe how to perform these steps for the vendors and categories of devices
supported by the Image Acquisition Toolbox software.
If you are encountering problems with the preview window, see “Video Preview Window
Troubleshooting” on page 16-32.
Note To see the full list of hardware that the toolbox supports, visit the Image Acquisition Toolbox
product page at the MathWorks website www.mathworks.com/products/image-acquisition.
Note With previous versions of the Image Acquisition Toolbox, the files for all of the adaptors were
included in your installation. Starting with version R2014a, each adaptor is available separately
through support packages. In order to use the Image Acquisition Toolbox, you must install the
adaptor that your camera uses. See “Image Acquisition Support Packages for Hardware Adaptors” on
page 4-2 for information about installing the adaptors using MATLAB Add-Ons.
DALSA Sapera Hardware
If you are having trouble using the Image Acquisition Toolbox software with a supported DALSA
Sapera frame grabber, try the following:
1 Install the Image Acquisition Toolbox Support Package for DALSA Sapera Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Install the third-party hardware device driver separately and verify that it is compatible with
Image Acquisition Toolbox. For the correct driver information, check the list of supported drivers
for your MATLAB release under Third-Party Requirements on Teledyne DALSA Support from
Image Acquisition Toolbox.
Note Image Acquisition Toolbox is compatible only with specific driver versions provided with
the DALSA Sapera hardware and is not guaranteed to work with any other versions.
Find out the driver version you are using on your system. To learn how to get this information,
see “Determining the Driver Version for DALSA Sapera Devices” on page 16-4.
If you discover that you are using an unsupported driver version, visit the Teledyne DALSA
website to download the correct driver.
3 Use the imaqhwinfo function to verify that 'dalsa' is listed among the installed adaptors.
imaqhwinfo
ans =
struct with fields:
InstalledAdaptors: {'dalsa' 'winvideo'}
MATLABVersion: '9.4 (R2018a)'
ToolboxName: 'Image Acquisition Toolbox'
ToolboxVersion: '5.3 (R2018a)'
In the example shown, the DALSA and WinVideo adaptors are present, meaning the DALSA and
OS Generic Video Interface support packages are installed.
4 Verify that you have a supported frame grabber. See a list of supported frame grabbers on
Teledyne DALSA Support from Image Acquisition Toolbox.
5 Verify that your image acquisition hardware is functioning properly.
For DALSA Sapera devices, run the application that came with your hardware, the Sapera
CamExpert, and verify that you can view a live video stream from your camera.
If you are using a camera file to configure the device, verify that the toolbox can locate your
camera file. Make sure that your camera appears in the Camera list in the Sapera CamExpert.
To test the camera, select the camera in the list, and click the Grab button.
6 Make sure no other application is using the camera.
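The imaqhwinfo check in step 3 can be taken one step further to query the detected devices — a sketch, assuming at least one frame grabber is connected:

```matlab
% Query adaptor-specific details for the DALSA adaptor
info = imaqhwinfo('dalsa');

% Inspect the first detected device: its name and the video formats
% or camera files it supports
dev = info.DeviceInfo(1);
dev.DeviceName
dev.SupportedFormats
```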
DCAM IEEE 1394 (FireWire) Hardware on Windows
1 Install the Image Acquisition Toolbox Support Package for DCAM Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 When installing the CMU 1394 Digital Camera Driver Setup, on the first page of the installation
wizard, under Select components to install, select these three items in the installation list, and
click Next.
Verify that your IEEE 1394 (FireWire) camera can be accessed through the dcam adaptor.
• Make sure the camera is compliant with the IIDC 1394-based Digital Camera (DCAM)
specification. Vendors typically include this information in documentation that comes with the
camera. If your digital camera is not DCAM compliant, you might be able to use the
winvideo adaptor. See “Windows Video Hardware” on page 16-25 for information.
• Make sure the camera outputs data in uncompressed format. Cameras that output data in
Digital Video (DV) format, such as digital camcorders, cannot use the dcam adaptor. To access
these devices, use the winvideo adaptor. See “Windows Video Hardware” on page 16-25 for
information.
• Make sure you specified the dcam adaptor when you created the video input object. Some
IEEE 1394 (FireWire) cameras can be accessed through either the dcam or winvideo
adaptors. If you can connect to your camera from the toolbox but cannot access some camera
features, such as hardware triggering, you might be accessing the camera through a DirectX®
driver. See “Creating a Video Input Object” on page 5-7 for more information about specifying
adaptors.
• If the demo application does not recognize the camera, install the CMU DCAM driver. See
“Manually Installing the CMU DCAM Driver on Windows” on page 16-6 for instructions.
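The adaptor choice discussed above can be checked directly from MATLAB — a sketch, assuming a DCAM-compliant camera at device ID 1:

```matlab
% Confirm the camera is visible through the dcam adaptor (not winvideo)
info = imaqhwinfo('dcam');
info.DeviceInfo

% Create the video input object explicitly through dcam so that
% features such as hardware triggering are available
vid = videoinput('dcam', 1);
triggerinfo(vid)   % list the trigger configurations the device supports
```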
6 Verify that your CMU 1394 Digital Camera driver is version 6.4.6. The "CMU 1394 Digital
Camera Driver" installed version can be found from Windows Control Panel > Programs >
Programs and Features.
7 Confirm that the IEEE 1394 (FireWire) card is detected and drivers are correctly installed in
Windows Device Manager.
Note The CMU DCAM driver is the only DCAM driver supported by the toolbox. You cannot use
vendor-supplied drivers, even if they are compliant with the DCAM specification.
To install the CMU DCAM driver on your system, follow this procedure:
1 Obtain the CMU DCAM driver files. The Image Acquisition Toolbox software includes the CMU
DCAM installation file, 1394camera646.exe, in the directory
matlabroot\toolbox\imaq\imaqextern\drivers\win64\dcam
You can also download the DCAM driver directly from CMU. Go to the website
www.cs.cmu.edu/~iwan/1394 and click the download link.
2 Start the installation by double-clicking the .exe file.
On the first page of the installation wizard under Select components to install, select these
three items in the installation list and click Next.
On the second page of the wizard, accept the default location or browse to a new one, and click
Install.
Note The camera might not be recognized after you install the driver. If this happens, open
Device Manager, right-click the camera, and choose Update Driver Software. Browse to the
vendor driver software, and install it.
You can use this demo application to verify whether your camera is using the CMU DCAM driver. To
access a camera through this demo application:
1 Select Start > Programs > CMU 1394 Camera > 1394 Camera Demo.
2 The application opens its main window.
3 From the Camera Demo application, select Camera > Check Link. This option causes the demo
application to look for DCAM-compatible cameras that are available through the IEEE 1394
(FireWire) connection.
The demo application displays the results of this search in a pop-up message box. If the demo
application found a camera, click OK to continue.
4 Select Camera > Select Camera, and select the camera you want to use. The Select Camera
option is not enabled until after the Check Link option has successfully found cameras.
5 Select Camera > Init Camera. In this step, the demo application checks the values of various
camera properties. The demo application might resize itself to fit the video format of the
specified camera. If the application displays a dialog box message, click Yes.
7 To exit, select Stop Camera from the Camera menu, and click Exit.
Matrox Hardware
In this section...
“Troubleshooting Matrox Devices” on page 16-10
“Determining the Driver Version for Matrox Devices” on page 16-10
If you are having trouble using the Image Acquisition Toolbox software with a supported Matrox
frame grabber, try the following:
1 Install the Image Acquisition Toolbox Support Package for Matrox Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Install the Matrox Imaging Library (MIL) separately and verify that it is the supported version for
your MATLAB release. Check the supported versions in the Third-Party Requirements section
on Matrox Frame Grabber Support from Image Acquisition Toolbox. If you discover that you are
using an unsupported driver version, visit the Matrox website to download the correct drivers.
Note that the matroxcam and matroxlist functions that are available with the matroxcam
object starting in release R2015a work with MIL or MIL-Lite 10.x only. Use of the matrox
adaptor with the videoinput object can use either MIL 9 or 10.
Find out the driver version you are using on your system. To learn how to get this information,
see “Determining the Driver Version for Matrox Devices” on page 16-10.
3 Verify that you are using supported image acquisition hardware and that it is functioning
properly. See a list of supported Matrox frame grabbers on Matrox Frame Grabber Support from
Image Acquisition Toolbox.
For Matrox devices, run the application that came with your hardware, Matrox Intellicam, and
verify that you can receive live video.
4 Make sure no other application is accessing the frame grabber.
Note Within MATLAB, there is no difference between the MIL and MIL-Lite software; both work
with Matrox Imaging devices.
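The two access routes mentioned in step 2 can be sketched as follows (the device index and DCF camera-file path are hypothetical):

```matlab
% Route 1: the matroxcam interface (requires MIL or MIL-Lite 10.x,
% available starting in R2015a)
matroxlist                                    % detected Matrox frame grabbers
m = matroxcam(1, 'C:\Matrox\myCamera.dcf');   % hypothetical DCF file
img = snapshot(m);                            % acquire a single frame

% Route 2: the matrox adaptor with a videoinput object (MIL 9 or 10)
vid = videoinput('matrox', 1);
```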
To determine the driver version on your system, select Start > Programs > Matrox Imaging
Products > MIL Configuration.
Note As of version R2014b, the Image Acquisition Toolbox supports MIL 10, and that is the
recommended version to use.
1 Install the Image Acquisition Toolbox Support Package for National Instruments Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that your image acquisition hardware is functioning properly.
For National Instruments devices, run the application that came with your hardware, NI MAX
(Measurement & Automation Explorer), and verify that you can receive live video.
Launch NI MAX from Start > Programs > National Instruments > NI MAX.
To test that the hardware is working properly, in Measurement & Automation Explorer, expand
Devices and Interfaces, and then expand NI-IMAQ Devices, and expand the node that
represents the board you want to use.
With the board expanded, select the channel or port that you have connected a camera to.
3 Click the Grab button to verify that your camera is working. If it is not, see the National
Instruments device documentation.
4 Confirm that the NI-IMAQ vendor driver version installed is supported by Image Acquisition
Toolbox.
Note The Image Acquisition Toolbox software is compatible only with specific driver versions
provided with the National Instruments software and is not guaranteed to work with any other
versions.
• Find out the driver version you are using on your system. To learn how to get this information,
see “Determining the Driver Version” on page 16-13.
• Verify that the version is compatible with the Image Acquisition Toolbox software. Supported
NI-IMAQ driver versions are listed on the following hardware support web page: (https://
www.mathworks.com/hardware-support/national-instruments.html).
5 If you discover that you are using an unsupported driver version:
• Uninstall any existing NI-IMAQ driver version from Windows Control Panel > Programs
and Features > National Instruments Software.
• Reinstall the Image Acquisition Toolbox Support Package for National Instruments Hardware.
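After installing the support package and driver, you can confirm from MATLAB that the board is visible — a sketch, assuming the National Instruments adaptor loads as 'ni':

```matlab
% Check that the NI adaptor appears among the installed adaptors
info = imaqhwinfo;
disp(info.InstalledAdaptors)

% List the boards and channels detected through the NI adaptor
niInfo = imaqhwinfo('ni');
niInfo.DeviceInfo
```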
National Instruments Hardware
To determine the driver version, in NI MAX select Help > System Information, and then see the
NI-IMAQ Software field for the driver version number.
Device Discovery
1 Install the Image Acquisition Toolbox Support Package for Point Grey Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that you have the correct FlyCapture version for the release.
Check the list of supported drivers on the Point Grey Camera Support from Image Acquisition
Toolbox product page.
Uninstall any unsupported versions of FlyCapture SDK/Viewer using Control Panel >
Add/Remove Programs.
3 Verify that the camera is working with the FlyCapture Viewer.
Select Start > Programs > Point Grey FlyCapture2 > Point Grey FlyCap2.
In FlyCapture, select your device, and click OK to open the dialog box that shows the video feed
to test that the hardware is working properly.
4 If you are using a Point Grey camera that is a GigE Vision device, do not try to use both the Point
Grey adaptor and the GigE Vision adaptor at the same time. You should use the Point Grey
adaptor.
5 Ensure that the FlyCapture driver is on the PATH Environment variable. To view the PATH:
• Click the Start button, right-click the Computer option in the Start menu, and select
Properties.
• Click the Advanced System Settings link in the left column.
• In the System Properties window, click on the Advanced tab, and click the Environment
Variables button near the bottom of that tab.
• In the Environment Variables window, highlight the Path variable in the System variables
section, and click the Edit button. Ensure that '%FC2PATH%' and '%FC2PATH%\vs2013' are
on the path.
• If you are having trouble detecting your device in MATLAB, move both entries to the start of
your PATH.
6 Make sure that the FlyCapture DirectShow drivers are not installed.
7 If you are using a GigE Point Grey camera, you must configure it using FlyCapture Viewer before
using it with MATLAB.
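You can also inspect the relevant environment variables from within MATLAB — a sketch (FC2PATH is the variable set by the FlyCapture installer, as described in step 5):

```matlab
% FlyCapture installation directory, as set by the installer
getenv('FC2PATH')

% Show the PATH one entry per row to confirm the FlyCapture
% directories appear near the start
strsplit(getenv('PATH'), ';')'
```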
Point Grey Hardware
• USB 3
• FireWire
• GigE Vision
• USB 2
• Bumblebee 2
If you are having trouble using the Image Acquisition Toolbox software with a supported Point Grey
camera, try the following:
For Point Grey devices, run the application that came with your hardware, FlyCapture, and verify
that you can receive live video.
2 Select Start > Programs > Point Grey FlyCapture2 > Point Grey FlyCap2.
3 In FlyCapture, select your device, and click OK to open the dialog box that shows the video feed
to test that the hardware is working properly.
4 Install the Image Acquisition Toolbox Support Package for Point Grey Hardware.
5 If your hardware is functioning properly, verify that you are using a hardware device driver that
is compatible with the toolbox.
Note The Image Acquisition Toolbox software is compatible only with specific driver versions
provided with the Point Grey software and is not guaranteed to work with any other versions.
• Find out the driver version you are using on your system. To learn how to get this information,
see “Determining the Driver Version for Point Grey Devices” on page 16-16.
• Verify that the version is compatible with the Image Acquisition Toolbox software. For the
correct driver information, check the list of supported drivers on the Point Grey Camera
Support from Image Acquisition Toolbox
If you discover that you are using an unsupported driver version, visit the FlyCapture SDK
website to download the correct drivers.
Note If you are using a Point Grey camera that is a GigE Vision device, do not try to use both the
Point Grey adaptor and the GigE Vision adaptor at the same time. You should use the Point Grey
adaptor.
Note When using the Bumblebee 2 cameras, certain video formats may be suppressed. To see the
available video formats for your Bumblebee camera, open the Image Acquisition Tool (using the
imaqtool function), and check the format list shown in the Hardware Browser under your device
node.
1 Select Start > Programs > Point Grey FlyCapture2 > Point Grey FlyCap2 to open
FlyCapture.
2 The driver number appears on the banner of the FlyCapture dialog box.
Kinect for Windows Hardware
Device Discovery
If you are having trouble using the Image Acquisition Toolbox software with a supported Kinect for
Windows sensor, try the following:
1 Install the Image Acquisition Toolbox Support Package for Kinect for Windows Sensor.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
If you have problems installing the Kinect support package, see the section below,
“Troubleshooting the Support Package Installation.”
2 Verify that you are using one of the supported Kinect hardware models. See Microsoft Kinect for
Windows Support from Image Acquisition Toolbox for a list of supported hardware. The Kinect
for Xbox 360 is not supported.
3 Verify that you have the supported Kinect driver version. You must have version 1.6 to use V1
devices, and version 2.0 for Kinect V2 and Xbox One devices.
4 Verify that your image acquisition hardware is functioning properly.
For Kinect V1 devices, run the Kinect Explorer application and verify that you can receive live
video. For Kinect V2 or Xbox One (with adapter) devices, run Kinect Evolution.
5 For Kinect V1, make sure no other application is accessing the Kinect v1.
6 For Kinect V2, make sure it is plugged into a USB 3.0 port, and you are using an OS that is
Windows 8 or later.
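Once the sensor and driver are working, a Kinect appears to the toolbox as two devices — a sketch, assuming the kinect adaptor with the color stream as device 1 and the depth stream as device 2:

```matlab
% Kinect sensors expose separate color and depth devices
colorVid = videoinput('kinect', 1);   % color stream
depthVid = videoinput('kinect', 2);   % depth stream

% Preview the color stream to confirm the connection
preview(colorVid)
```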
If the setup fails, it could be caused by an internet security setting. If you get an error message such
as “KINECT Setup Failed – An error occurred while installing,” try the following, and then run the
installer again.
GigE Vision Hardware
1 Install the Image Acquisition Toolbox Support Package for GigE Vision Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Go through the configuration steps of the “GigE Vision Image Acquisition Quick Start Guide”
(Image Acquisition Toolbox Support Package for GigE Vision Hardware). In particular, confirm
that:
• The installed Ethernet network adapter driver is provided by the network adapter
manufacturer (and is not a custom high-performance driver installed for use with a third-party
imaging application).
• Any packet filtering drivers from a third-party imaging application or an antivirus program
are disabled (unchecked) in the camera connection Network Settings.
Reset the toolbox and verify that the camera is detected through the gige adaptor:
imaqreset
imaqhwinfo('gige')
When using the gigecam interface, use the gigecamlist command to show a list of the
detected GigE Vision cameras:
gigecamlist
5 Confirm that the camera is detected with other GigE Vision compliant imaging applications.
6 Confirm that there are no issues with the GenICam runtime libraries installation (such as a
conflict with a third-party imaging application) by executing the imaqsupport command and
checking for any error messages in the GENICAM section.
imaqsupport
7 Certain camera vendor software setup programs also install DirectShow drivers for use with
GigE Vision cameras. Uninstall the DirectShow drivers by using the vendor's software setup
program, as these DirectShow drivers might cause issues with the camera being detected when
using the gige adaptor.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that the adaptor loads. You can use the imaqhwinfo command to list installed adaptors.
The gige adaptor should be included on the list.
If it does not load, make sure that GenICam is configured correctly using the imaqsupport
function.
If your camera requires a GenICam XML file on a local drive (most do not), and the adaptor loads
but no devices are shown, check the MWIMAQ_GENICAM_XML_FILES environment variable, and
make sure it contains the directory where your camera’s XML file is located.
For information on installing GenICam and checking your environment variables, see “Software
Configuration” on page 10-10.
3 Make sure you did not install your camera vendor's filtering or performance networking driver. If
you did, you should uninstall it.
4 Make sure the Ethernet card is configured properly.
For more information on this, see “Network Hardware Configuration Notes” on page 10-3 and
“Network Adaptor Configuration Notes” on page 10-4.
Also, if you have multiple cameras connected to multiple Ethernet cards, you cannot have them
all set to automatic IP configuration. You must specify the IP address for each card and each card
must be on a different subnet.
5 Examine the connectivity of your device separately from using the Image Acquisition Toolbox.
You may find using ping -b, arp, route, and ifconfig helpful with this.
6 If your acquisition stops due to a dropped frame, you can set the IgnoreDroppedFrames
property to 'on' to continue your acquisition with dropped frames. When this property is 'on',
the NumDroppedFrames property keeps track of the number of frames dropped while the
acquisition is running.
7 You might receive an error message such as:
If it does not mention buffer size, it is likely that packets are being dropped due to overload of
the CPU. To lower the CPU load, raise the value of the PacketSize device-specific (source)
property. In order to do this, you must be using hardware that supports jumbo frames.
You might also want to calculate and set the PacketDelay device-specific (source) property.
Also, if you are using a CPU that is older than an Intel® Core™ 2 Quad or equivalent AMD®, you
might experience this type of error.
If you have a slower computer and experience packet loss using the GigE Vision adaptor, you can
set a packet delay to avoid overloading the computer. This action is useful in solving the
performance issue if you cannot achieve your camera’s frame rate. The PacketDelay property is
initially set to use your camera’s default value. You can then adjust the value if needed. The
TimeStampTickFrequency property is read-only, but is available for calculating the actual
packet delay value being used.
For more information on the PacketDelay property and how to calculate packet delay, see
this solution:
https://www.mathworks.com/support/solutions/en/data/1-F36R0R/index.html
8 If you are able to start acquisition without error but do not receive any frames, and if you are
using a larger PacketSize, make sure that your hardware and the network between the
computer and the camera support jumbo frames, and also that your Ethernet interface is set to
allow them at the size that you are attempting to use.
9 If you receive an error that says a block or frame is being dropped because a packet is
unavailable for resend, one likely cause is that the buffer size of the socket could not be set to the
reported value, for example 1000000.
See your system administrator about raising the socket receive buffer limit with sysctl
(net.core.rmem_max on Linux, net.inet.udp.recvspace on macOS). For example, on Linux the
system administrator could try:
sysctl -w net.core.rmem_max=1000000
10 The toolbox attaches the block ID (frame ID) of each frame as metadata. If your camera does
not start a new acquisition at block 1, or if you want to know whether you lost initial frames,
check the metadata: if the first frame's block ID is not 1, you may have missing frames. For
example, use these commands in MATLAB:
[d, t, m] = getdata(vid, 2);
m(1)
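Several of the steps above can be sketched together in MATLAB. The property names come from the steps; the packet-delay arithmetic assumes the camera specifies PacketDelay in timestamp ticks, and PacketSize 9000 assumes jumbo-frame support end to end:

```matlab
% Step 6: tolerate dropped frames instead of stopping the acquisition
vid = videoinput('gige', 1);
vid.IgnoreDroppedFrames = 'on';
src = getselectedsource(vid);

% Step 7: raise PacketSize (jumbo frames) and add a packet delay to
% reduce CPU load on a slower host
src.PacketSize = 9000;                          % if supported end to end
ticksPerSec = src.TimeStampTickFrequency;       % read-only
src.PacketDelay = round(50e-6 * ticksPerSec);   % assumed tick units

% Step 10: check the block ID metadata of the first frame
start(vid)
wait(vid, 10)
[frames, timestamps, metadata] = getdata(vid, vid.FramesAvailable);
metadata(1)              % a block ID other than 1 suggests lost frames
vid.NumDroppedFrames     % frames dropped during this acquisition
```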
1 Install the Image Acquisition Toolbox Support Package for GigE Vision Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that the adaptor loads. You can use the imaqhwinfo command to list installed adaptors.
The gige adaptor should be included on the list.
If it does not load, make sure that GenICam is installed and the environment variables exist. You
can check this using the imaqsupport function.
If your camera requires a GenICam XML file on a local drive (most do not), and the adaptor loads
but no devices are shown, check the MWIMAQ_GENICAM_XML_FILES environment variable, and
make sure it contains the directory where your camera’s XML file is located.
For information on installing GenICam and checking your environment variables, see “Software
Configuration” on page 10-10.
3 Make sure you did not install your camera vendor's filtering or performance networking driver. If
you did, uninstall it.
4 Make sure the Ethernet card is configured properly.
For more information on this, see “Network Hardware Configuration Notes” on page 10-3 and
“Network Adaptor Configuration Notes” on page 10-4.
Also, if you have multiple cameras connected to multiple Ethernet cards, you cannot have them
all set to automatic IP configuration. You must specify the IP address for each card, and each
card must be on a different subnet.
5 Examine the connectivity of your device separately from using the Image Acquisition Toolbox.
You may find using ping -b, arp, route, and ifconfig helpful with this process.
6 If your acquisition stops due to a dropped frame, you can set the IgnoreDroppedFrames
property to 'on' to continue your acquisition with dropped frames. When this property is 'on',
the NumDroppedFrames property keeps track of the number of frames dropped while the
acquisition is running.
7 You might receive an error message such as the following:
If it does not mention buffer size, it is likely that packets are being dropped due to overload of
the CPU. To lower the CPU load, raise the value of the PacketSize device-specific (source)
property. In order to do this, you must be using hardware that supports jumbo frames.
You might also want to calculate and set the PacketDelay device-specific (source) property.
Also, if you are using a CPU that is older than an Intel Core 2 Quad or equivalent AMD, you might
experience this type of error.
If you have a slower computer and experience packet loss using the GigE Vision adaptor, you can
set a packet delay to avoid overloading the computer. This setting is useful in solving the
performance issue if you cannot achieve your camera’s frame rate. The PacketDelay property is
initially set to use your camera’s default value. You can then adjust the value if needed. The
TimeStampTickFrequency property is read-only, but is available for calculating the actual
packet delay value being used.
For more information on the PacketDelay property and how to calculate packet delay, see
this solution:
https://www.mathworks.com/support/solutions/en/data/1-F36R0R/index.html
8 If you are able to start acquisition without error but do not receive any frames, and if you are
using a larger PacketSize, make sure that your hardware and the network between the
computer and the camera support jumbo frames, and also that your Ethernet interface is set to
allow them at the size that you are attempting to use.
9 If you receive an error that says a block or frame is being dropped because a packet is
unavailable for resend, one likely cause is that the buffer size of the socket could not be set to the
reported value, for example 1000000.
See your system administrator about raising the socket receive buffer limit with sysctl
(net.core.rmem_max on Linux, net.inet.udp.recvspace on macOS). For example, on Linux the
system administrator could try:
sysctl -w net.core.rmem_max=1000000
10 The toolbox attaches the block ID (frame ID) of each frame as metadata. If your camera does
not start a new acquisition at block 1, or if you want to know whether you lost initial frames,
check the metadata: if the first frame's block ID is not 1, you may have missing frames. For
example, use these commands in MATLAB:
[d, t, m] = getdata(vid, 2);
m(1)
Device Discovery
If you are having trouble using the Image Acquisition Toolbox with a GenICam GenTL camera driver
using the toolbox's gentl adaptor, try the following:
1 Install the Image Acquisition Toolbox Support Package for GenICam Interface.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that the gentl adaptor loads. You can use the imaqhwinfo command to list installed
adaptors. The gentl adaptor should be included on the list.
If it does not load, make sure that GenICam is configured correctly using the imaqsupport
function.
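A quick check of the gentl adaptor from MATLAB — a sketch, assuming a GenTL producer is installed and at least one camera is connected:

```matlab
% Confirm the gentl adaptor is installed and list detected devices
info = imaqhwinfo('gentl');
info.DeviceInfo

% Connect to the first device through the GenTL producer
vid = videoinput('gentl', 1);
```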
Windows Video Hardware
1 Install the Image Acquisition Toolbox OS Generic Video Interface Support Package. It includes
the necessary files to use the winvideo adaptor.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Confirm that another imaging application is not connected to the camera.
3 Confirm that the camera and its supported video formats are detected by the standalone
hardware detection utility detectDevices.exe (64-bit), which is installed in
MATLABROOT\toolbox\imaq\imaqextern\utilities\detectDevices\win64, for example:
C:\Program Files\MATLAB\R2018a\toolbox\imaq\imaqextern\utilities\detectDevices\win64
The following is sample output when you run detectDevices.exe in this folder:
Hardware detection application for the R2018a version of Image Acquisition Toolbox.
Detecting hardware for the winvideo adaptor.
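The same check can be made from within MATLAB — a sketch, assuming at least one camera is connected:

```matlab
% List cameras detected through the winvideo adaptor and the
% video formats the first one supports
info = imaqhwinfo('winvideo');
info.DeviceInfo(1).DeviceName
info.DeviceInfo(1).SupportedFormats
```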
Linux Video Hardware
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that you have a supported GStreamer version on the Third-Party Requirements section on
the MathWorks website (www.mathworks.com/hardware-support/gstreamer.html).
To find the version of GStreamer drivers installed on your system, run this command:
dpkg -l libgst* | grep ^i
If the GStreamer 1.0 libraries are installed, the command returns a list of the installed
packages.
Note The Linux Video driver is a generic interface, and you should only use it if you do not have a
more specific driver to use with your device. If your device is an IEEE 1394 IIDC compliant device,
use the DCAM (dcam) adaptor. If your device is GigE Vision compliant, use the GigE (gige) adaptor.
If your device manufacturer provides a GenTL producer, use the GenTL (gentl) adaptor. Use the
Linux Video driver only if there is no more specific option for your device.
Linux DCAM IEEE 1394 Hardware
1 Verify that your IEEE 1394 (FireWire) camera can be accessed through the dcam adaptor.
• Make sure the camera is compliant with the IIDC 1394-based Digital Camera (DCAM)
specification. Vendors typically include this information in the documentation or data sheet
that comes with the camera. If your digital camera is not DCAM compliant, you should be able
to use the Linux Video adaptor.
2 Install the Image Acquisition Toolbox Support Package for DCAM Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
3 Verify that your image acquisition hardware is functioning properly and that you have permission
to access it.
Be sure that your system and login have the proper permissions to access the hardware. See your
system administrator if you need help.
You can verify that your hardware is functioning properly by running Coriander. See your system
administrator if you need help installing Coriander.
If you can start the utility, run the utility, and close the utility without encountering any errors,
the toolbox should be able to operate with your image acquisition device. If you encounter errors,
resolve them before attempting to use the toolbox with the device.
4 To use DCAM on Linux, install the libdc1394-22 package as well as the libraw1394-11 package.
Macintosh Video Hardware
If you are having trouble using the Image Acquisition Toolbox with a supported Macintosh video
acquisition device, try the following:
1 Install the Image Acquisition Toolbox OS Generic Video Interface Support Package. It includes
the necessary files to use the macvideo adaptor.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
2 Verify that you have supported libraries that work with the camera. Make sure you have the Apple
AV Foundation libraries installed on your computer. If they are not installed, you can download
them.
Starting with R2016b, the OS Generic Video Adaptor on Mac (macvideo) uses the Apple AV
Foundation libraries. Previously, the Macintosh Video adaptor required QuickTime to be installed
on your computer; QuickTime is no longer required.
3 Verify that your camera can be detected and images can be acquired by running applications that
come with Macintosh OS, such as Photo Booth, iMovie or FaceTime. If you can start the utility,
run the utility, and close the utility without encountering any errors, then the toolbox should be
able to operate with your image acquisition device. If you encounter errors, resolve them before
attempting to use the toolbox with the device.
Note The Macintosh Video Adaptor is a generic interface and should only be used if you do not
have a more specific adaptor to use with your device. If your device is a DCAM or FireWire IIDC
device, use the DCAM (dcam) adaptor. If your device is GigE Vision compliant, use the GigE
(gige) adaptor. Use the Macintosh Video Adaptor only if there is no more specific option for your
device.
Macintosh DCAM IEEE 1394 Hardware
1 Verify that your IEEE 1394 (FireWire) camera can be accessed through the dcam adaptor.
• Make sure the camera complies with the IIDC 1394-based Digital Camera (DCAM)
specification. Vendors typically include this information in documentation that comes with the
camera. If your digital camera is not DCAM compliant, you might be able to use the
Macintosh Video Adaptor.
2 Install the Image Acquisition Toolbox Support Package for DCAM Hardware.
Starting with version R2014a, each adaptor is available separately through MATLAB Add-Ons.
See “Image Acquisition Support Packages for Hardware Adaptors” on page 4-2 for information
about installing the adaptors.
3 Verify that your image acquisition hardware is functioning properly.
You can verify that your hardware is functioning properly by running an external webcam
application, for example, Photo Booth or iMovie.
If you can start the utility, run the utility, and close the utility without encountering any errors,
then the toolbox should be able to operate with your image acquisition device. If you encounter
errors, resolve them before attempting to use the toolbox with the device.
Contacting MathWorks and Using the imaqsupport Function
Before contacting MathWorks, run the imaqsupport function. This function returns diagnostic
information about your MATLAB installation, the toolbox, and your image acquisition hardware.
The output from imaqsupport is automatically saved to a text file, imaqsupport.txt, which you
can use to help troubleshoot your problem.
imaqsupport
Image Acquisition Toolbox Examples
The imaqhwinfo function provides a structure with an InstalledAdaptors field that lists all
adaptors on the current system that the toolbox can access.
imaqInfo = imaqhwinfo
imaqInfo =
imaqInfo.InstalledAdaptors
ans =
'dcam' 'winvideo'
Calling imaqhwinfo with an adaptor name returns a structure that provides information on all
accessible image acquisition devices.
hwInfo = imaqhwinfo('winvideo')
hwInfo =
hwInfo.DeviceInfo
ans =
Identifying Available Devices
SupportedFormats
Information on a specific device can be obtained by simply indexing into the device information
structure array.
device1 = hwInfo.DeviceInfo(1)
device1 =
DefaultFormat: 'RGB555_320x240'
DeviceFileSupported: 0
DeviceName: 'Veo PC Camera'
DeviceID: 1
VideoInputConstructor: 'videoinput('winvideo', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('winvideo', 1)'
SupportedFormats: {1x30 cell}
device1.DeviceName
ans =
Veo PC Camera
device1.DeviceID
ans =

     1
The DefaultFormat field contains the image acquisition device's default video format.
device1.DefaultFormat
ans =
RGB555_320x240
The SupportedFormats field contains a cell array of all valid video formats supported by the image
acquisition device.
device1.SupportedFormats
ans =
Column 30

'YV12_352x288'
Accessing Devices and Video Sources
A video input object represents the connection between MATLAB® and an image acquisition device.
To create a video input object, use the VIDEOINPUT function and indicate what device the object is to
be associated with.
% Access an image acquisition device.
vidobj = videoinput('dt', 1, 'RS170')
A video source object represents a collection of one or more physical data sources that are treated as
a single entity. For example, one video source object could represent the three physical connections
of an RGB source (red, green, and blue).
The Source property of a video input object provides an array of the device's available video source
objects.
% Access the device's video sources that can be used for acquisition.
sources = vidobj.Source
whos sources
A video source object can be selected for acquisition by specifying its name.
vidobj.SelectedSourceName = 'VID2'
To obtain the video source object that is currently selected, use the GETSELECTEDSOURCE function.
selectedsrc = getselectedsource(vidobj)
Each video source object provides a list of general and device specific properties.
% List the video source object's properties and their current values.
get(selectedsrc)
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = VID2
Tag =
Type = videosource
UserData = []
Note: Each video source object maintains its own property configuration. Modifying the selected
video source is equivalent to selecting a new video source configuration.
delete(vidobj)
clear vidobj
Accessing Properties
To access a complete list of an object's properties and their current values, use the get function with
the object.
% List the video input object's properties and their current values.
get(vidobj)
General Settings:
DeviceID = 1
DiskLogger = []
DiskLoggerFrameCount = 0
EventLog = [1x0 struct]
FrameGrabInterval = 1
FramesAcquired = 0
FramesAvailable = 0
FramesPerTrigger = 10
Logging = off
LoggingMode = memory
Name = RGB24_640x480-dcam-1
NumberOfBands = 3
Previewing = off
ReturnedColorSpace = rgb
ROIPosition = [0 0 640 480]
Running = off
Tag =
Timeout = 10
Type = videoinput
UserData = []
VideoFormat = RGB24_640x480
VideoResolution = [640 480]
Trigger Settings:
InitialTriggerTime = []
TriggerCondition = none
TriggerFrameDelay = 0
TriggerRepeat = 0
TriggersExecuted = 0
TriggerSource = none
TriggerType = immediate
Working with Properties
Acquisition Sources:
SelectedSourceName = input1
Source = [1x1 videosource]
% List the video source object's properties and their current values.
get(src)
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = input1
Tag =
Type = videosource
To access a specific property value, use the get function with the object and property name.
framesPerTriggerValue = get(vidobj, 'FramesPerTrigger')
framesPerTriggerValue =
10
brightnessValue = get(src, 'Brightness')
brightnessValue =
304
Alternatively, you can access a specific property value by using dot (.) notation.
framesPerTriggerValue = vidobj.FramesPerTrigger
framesPerTriggerValue =
10
brightnessValue = src.Brightness
brightnessValue =
304
Configuring Properties
Enumerated properties have a defined set of possible values. To list the enumerated values of a
property, use the set function with the object and property name. The property's default value is
listed in braces.
set(vidobj, 'LoggingMode')
To access a complete list of an object's configurable properties, use the set function with the object.
set(vidobj)
General Settings:
DiskLogger
FrameGrabInterval
FramesPerTrigger
LoggingMode: [ {memory} | disk | disk&memory ]
Name
ReturnedColorSpace: [ {rgb} | grayscale | YCbCr ]
ROIPosition
Tag
Timeout
UserData
Trigger Settings:
TriggerFrameDelay
TriggerRepeat
Acquisition Sources:
SelectedSourceName: [ {input1} ]
General Settings:
Tag
To configure an object's property value, use the set function with the object, property name, and
property value.
set(vidobj, 'TriggerRepeat', 2);
set(src, 'Saturation', 100);
Alternatively, you can configure a specific property value by using dot (.) notation.
vidobj.TriggerRepeat = 2;
src.Saturation = 100;
To obtain a property's description, use the imaqhelp function with the object and property name.
imaqhelp can also be used for function help.
imaqhelp(vidobj, 'LoggingMode')
When logging to memory, you must extract the data in a timely manner with the
GETDATA function. If the data is not extracted in a timely manner, memory
resources may be used up.
To obtain information on a property's attributes, use the propinfo function with the object and
property name.
propinfo(vidobj, 'LoggingMode')
ans =
Type: 'string'
Constraint: 'enum'
ConstraintValue: {'memory' 'disk' 'disk&memory'}
DefaultValue: 'memory'
ReadOnly: 'whileRunning'
DeviceSpecific: 0
When an image acquisition object is no longer needed, remove it from memory and clear the
MATLAB® workspace of the associated variable.
delete(vidobj);
clear vidobj
Managing Video Input Objects
To find video input objects in memory, use the IMAQFIND function. IMAQFIND returns an array of
video input objects.
objects = imaqfind
objects =
[]
To delete a video input object from memory, use the DELETE function with the object.
objects = imaqfind
Using the DELETE function with the object will remove the object from memory but not from the
MATLAB® workspace. To remove an object from the MATLAB workspace, use the CLEAR function. To
see what objects are in the MATLAB workspace, use the WHOS function.
vidobj1
clear vidobj1
whos
To remove all video input objects from memory and to reset the toolbox to its initial state, use the
IMAQRESET function.
Note: Using the IMAQRESET function will only delete objects from memory, not clear them from the
MATLAB workspace.
imaqreset
objects = imaqfind
objects =
[]
whos
Previewing Data
Before logging data, images from an image acquisition device can be previewed live using the
PREVIEW function. Calling the PREVIEW function opens a preview window. To close the preview
window, use the CLOSEPREVIEW function.
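Put together, a minimal sketch of this preview workflow (the winvideo adaptor and device ID 1 are assumptions; use the values that imaqhwinfo reports for your hardware):

```matlab
% Connect to a device (adaptor name and device ID are assumptions).
vidobj = videoinput('winvideo', 1);

% Open a live preview window.
preview(vidobj)

% Stop the preview video stream without closing the window.
stoppreview(vidobj)

% Close the preview window.
closepreview(vidobj)
```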
snapshot = getsnapshot(vidobj);
Logging Data To Memory
Multi-Frame Acquisition
To specify the number of frames to log upon triggering, configure the video input object's
FramesPerTrigger property.
An image acquisition object must be running before data can be logged. To initiate an acquisition, use
the START function.
start(vidobj)
numAvail =
14
numAvail =
21
To retrieve logged data from memory, use the GETDATA function with the video input object and the
number of frames to retrieve.
numAvail =
20
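The multi-frame workflow above can be sketched end to end as follows (the adaptor name, device ID, and frame count are assumptions):

```matlab
% Connect to an assumed device.
vidobj = videoinput('winvideo', 1);

% Log 20 frames each time a trigger executes.
vidobj.FramesPerTrigger = 20;

% Start the object; the default immediate trigger begins logging at once.
start(vidobj)

% Wait for logging to finish, then retrieve the frames and timestamps.
wait(vidobj)
[frames, timestamps] = getdata(vidobj, 20);
```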
To acquire data continuously, configure the FramesPerTrigger property to infinity. Upon triggering,
data will be logged until the video input object stops running. To stop an object from running, use the
STOP function.
vidobj.FramesPerTrigger = inf;
numAvail =
% View the total number of frames that were logged before stopping.
numAcquired = vidobj.FramesAcquired;
numAcquired =
10
To view the most recently logged image data without extracting it from memory, use the PEEKDATA
function with the video input object and the number of frames to view. Viewing logged data using
PEEKDATA will not remove any logged data from memory.
% Configure the number of frames to log upon triggering.
vidobj.FramesPerTrigger = 35;
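For example, a hedged sketch of peeking at the latest frame:

```matlab
% View the most recently logged frame without removing it from memory.
% Assumes the object is running and at least one frame has been logged.
latestFrame = peekdata(vidobj, 1);
imagesc(latestFrame)
```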
numAvail =
35
numFramesAvailable =
35
Once the video input object is no longer needed, delete and clear the associated variable.
delete(vidobj)
clear vidobj
Logging Data To Disk
Data acquired from an image acquisition device may be logged to memory, to disk, or both. By
default, data is logged to memory. To change the logging mode, configure the video input object's
LoggingMode property.
% Access an image acquisition device, using a grayscale video format with
% 10 bits per pixel.
vidobj = videoinput('gige', 1, 'Mono10');
currentLoggingMode =
memory
currentLoggingMode =
disk
Logging to disk requires a MATLAB® VideoWriter object. VideoWriter is a MATLAB function, not
a toolbox function. After you create and configure a VideoWriter object, provide it to the video
input object's DiskLogger property.
VideoWriter provides a number of different profiles that log the data in different formats. This
example uses the Motion JPEG 2000 profile which can log single-banded (grayscale) data as well as
multi-byte data. The complete list of profiles provided by VideoWriter can be found in the
documentation.
% Create a |VideoWriter| object.
logfile = VideoWriter('logfile.mj2', 'Motion JPEG 2000')
VideoWriter
General Properties:
Filename: 'logfile.mj2'
Path: 'C:\Temp'
FileFormat: 'mj2'
Duration: 0
Video Properties:
ColorChannels:
Height:
Width:
FrameCount: 0
FrameRate: 30
VideoBitsPerPixel:
VideoFormat:
VideoCompressionMethod: 'Motion JPEG 2000'
CompressionRatio: 10
LosslessCompression: 0
MJ2BitDepth:
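Given the VideoWriter object above, routing logged frames to it might look like this sketch:

```matlab
% Log acquired frames to disk through the VideoWriter object.
vidobj.LoggingMode = 'disk';
vidobj.DiskLogger = logfile;
```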
Now that the video input object is configured for logging data to a Motion JPEG 2000 file, initiate the
acquisition.
When logging large amounts of data to disk, disk writing sometimes lags behind the acquisition. To
determine whether all frames have been written to disk, use the DiskLoggerFrameCount property.
Verify that the FramesAcquired property and the DiskLoggerFrameCount property have the same
value.
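One common way to do this is to poll until the disk logger catches up (a sketch):

```matlab
% Wait until every acquired frame has been written to disk.
while (vidobj.FramesAcquired ~= vidobj.DiskLoggerFrameCount)
    pause(0.1)
end
```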
vidobj.FramesAcquired
ans =
10
vidobj.DiskLoggerFrameCount
ans =
10
To obtain a list of configurable trigger settings, use the TRIGGERINFO function with the video input
object. TRIGGERINFO will return all possible trigger configurations supported by the image
acquisition device associated with the video input object. Each possible configuration consists of a
valid combination of trigger type, trigger condition, and trigger source.
Note: All image acquisition devices support immediate and manual trigger types. A hardware trigger
type is available only if it is supported by the image acquisition device.
To configure the trigger settings for an image acquisition device, use the TRIGGERCONFIG function
with the desired trigger type, trigger condition, and trigger source.
currentConfiguration =
TriggerType: 'hardware'
TriggerCondition: 'fallingEdge'
TriggerSource: 'optoTrigger'
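A configuration like the one shown above could be set with a call of this form (the condition and source names are device specific and come from the triggerinfo output):

```matlab
% Select a hardware trigger configuration reported by TRIGGERINFO.
triggerconfig(vidobj, 'hardware', 'fallingEdge', 'optoTrigger')
```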
Note: Configuring trigger settings requires a unique configuration to be specified. If specifying the
trigger type uniquely identifies a configuration, no further arguments need to be provided to
TRIGGERCONFIG.
Hardware triggers are typically the only trigger type that has multiple valid configurations.
Working With Triggers
Immediate Triggering
By default, a video input object's trigger type is configured for immediate triggering. Immediate
triggering indicates that data logging is to begin as soon as the START function is issued.
frameslogged =
10
Manual Triggering
Manual triggering requires that the TRIGGER function be issued before data logging is to begin.
frameslogged =
frameslogged =
10
Hardware Triggering
Hardware triggering begins logging data as soon as a trigger condition has been met via the trigger
source.
In this example, we have connected an opto-isolated trigger source from a function generator to our
image acquisition device. The image acquisition device will begin logging data upon detecting a
falling edge signal from the source.
Initially, no signal is sent from the source to the image acquisition device.
frameslogged =
A square wave signal will now be sent from the trigger source to the image acquisition device.
frameslogged =
10
Acquiring a Single Image in a Loop
Most interaction with Image Acquisition Toolbox is done through a video input object. These objects
are created with the VIDEOINPUT command. This example uses a webcam that is accessed through
the toolbox's "winvideo" adaptor.
vidobj = videoinput('winvideo');
Acquire a Frame
snapshot = getsnapshot(vidobj);
A common task is to repeatedly acquire a single image, process it, and then store the result. To do
this, GETSNAPSHOT can be called in a loop.
for i = 1:5
snapshot = getsnapshot(vidobj);
imagesc(snapshot);
end
Timing Implications
The GETSNAPSHOT function performs a lot of work when it is called. It must connect to the device,
configure it, start the acquisition, acquire one frame, stop the acquisition, and then close the device.
This means that the acquisition of one frame can take significantly longer than would be expected
based on the frame rate of the camera. To illustrate this, call GETSNAPSHOT in a loop.
% Measure the time to acquire 20 frames.
tic
for i = 1:20
snapshot = getsnapshot(vidobj);
end
elapsedTime = toc
elapsedTime =
21.2434
timePerFrame =
1.0622
effectiveFrameRate =
0.9415
The next example illustrates a more efficient way to perform the loop.
You can avoid the overhead of GETSNAPSHOT described in the previous section by using the manual
triggering mode of the videoinput object. Manual triggering mode allows the toolbox to connect to
and configure the device a single time without logging data to memory. This means that frames can
be returned to MATLAB® with less of a delay.
% Now that the device is configured for manual triggering, call START.
% This will cause the device to send data back to MATLAB, but will not log
% frames to memory at this point.
start(vidobj)
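Including the configuration step and the timing loop, the manual-triggering measurement might be sketched as:

```matlab
% Configure manual triggering before starting the device.
triggerconfig(vidobj, 'manual');
start(vidobj)

% Measure the time to acquire 20 frames; the device is already running,
% so each GETSNAPSHOT call avoids the per-call setup overhead.
tic
for i = 1:20
    snapshot = getsnapshot(vidobj);
end
elapsedTime = toc

% Stop the device when the measurement is complete.
stop(vidobj)
```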
elapsedTime = toc
elapsedTime =
1.4811
timePerFrame =
0.0741
effectiveFrameRate =
13.5031
You can see that the elapsed time using manual triggering is much smaller than the previous
example.
Cleanup
Once the video input object is no longer needed, delete the associated variable.
delete(vidobj)
Configuring Callback Properties
Callback functions are executed when an associated event occurs. To configure a callback to execute
for a particular event, configure one of the video input object's callback properties:
• ErrorFcn
• FramesAcquiredFcn
• StartFcn
• StopFcn
• TimerFcn
• TriggerFcn
The example uses a callback function, found in the
• MATLABROOT\toolbox\imaq\imaqdemos\helper
directory, that displays the Nth frame, where N is provided as an input argument to the callback
function.
Select a device to use for acquisition and configure it to acquire data upon executing a manual
trigger.
Configure the video input object to execute a callback function when the acquisition is stopped.
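Such a configuration might be sketched as follows (display_frame is a hypothetical callback function name, and 5 is an assumed frame index):

```matlab
% Execute a callback when the acquisition stops. The callback name
% (display_frame) and the frame index argument (5) are assumptions.
vidobj.StopFcn = {@display_frame, 5};
```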
Upon triggering the image acquisition device, a tennis ball is dropped within the camera's view.
% Trigger the object for logging and acquire data for a few seconds.
trigger(vidobj)
pause(5);
When the acquisition is stopped, it will cause the callback function to execute and display the N'th
frame.
Once the video input object is no longer needed, delete it and clear it from the workspace.
delete(vidobj)
clear vidobj
Viewing Events
Events occur during an acquisition at a particular time when a condition is met. These events include:
• Error
• FramesAcquired
• Start
• Stop
• Timer
• Trigger
Executing an Acquisition
% Trigger the object to start logging and allow the acquisition to run for
% a couple of seconds.
trigger(vidobj)
pause(2);
To view event information for the acquisition, access the EventLog property of the video input
object. Events are recorded in chronological order.
% View the event log.
events = vidobj.EventLog
events =
Each event provides information on the state of the object at the time the event occurred.
event1 = events(1)

event1 =
Type: 'Start'
Data: [1x1 struct]
data1 = events(1).Data
data1 =
event2 = events(2)

event2 =
Type: 'Trigger'
Data: [1x1 struct]
data2 = events(2).Data
data2 =
event3 = events(3)

event3 =
Type: 'Stop'
Data: [1x1 struct]
data3 = events(3).Data
data3 =
The result is an alpha blend of two images, one a stationary pendulum, the other a pendulum in
motion, making moving features appear transparent.
Alpha Blending Streamed Image Pairs
Using the acquired image data, perform on-line image processing, and display the processed images
in a figure window.
For each streamed image frame, calculate the linear combination between that frame and the
background image. The linear combination effectively alpha blends the two images so any moving
features appear transparent.
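The linear combination described above might be computed with the Image Processing Toolbox imlincomb function (frame and background are hypothetical variables holding the latest streamed frame and the stationary-pendulum image):

```matlab
% Alpha blend the streamed frame with the stored background image.
% 'frame' and 'background' are assumed to be same-sized images.
blended = imlincomb(0.5, frame, 0.5, background);
imagesc(blended)
```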
Averaging Images Over Time
For some advanced applications, the acquisition process may require that images be processed as
they are acquired, while your processing results are recorded to disk.
Using Image Acquisition Toolbox™ callbacks, triggering, and logging features, this example
illustrates how to accomplish the following task:
The experimental setup consists of an hourglass with white sand trickling down over time. The
example uses a callback function, found in the
• MATLABROOT\toolbox\imaq\imaqdemos\html\averaging
directory, that averages acquired image frames using Image Processing Toolbox™ functions.
To control the rate at which frames are logged, two options are available:
First, a solution using the device's frame rate will be shown, followed by an alternative solution using
a timer callback.
Using the frame rate option will provide acquisition results that are most closely aligned with the
device's actual video stream rate, whereas using the timer approach provides acquisition results
independent of the device's streaming rate.
The device frame rate can only be configured if it is supported by the device. As it is a device specific
property, it can be found on the video source object.
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = input1
Tag =
Type = videosource
UserData = []
% Using the FrameRate property, one can configure the acquisition source
% to provide the toolbox 30 frames per second.
fps = 30;
src.FrameRate = num2str(fps);
frameDelay =
300
% If the trigger is delayed by this value, the toolbox will not buffer
% any frames until the 300'th frame is provided by the device.
vid.TriggerFrameDelay = frameDelay;
% To ensure the acquisition does not come close to timing out, configure
% the time out value slightly above the expected acquisition duration.
totalTrigs = nAdditionalTrigs + 1;
acqDuration = (acqPeriod * totalTrigs) + 3
acqDuration =
103
vid.Timeout = acqDuration;
Image Averaging
In order to save processed images to disk, a VIDEOWRITER object is used. Each set of acquired
frames is averaged using Image Processing Toolbox functions, and then written to disk.
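The averaging-and-writing step might be sketched as follows (frames and vwObj are hypothetical variables for the frames returned by GETDATA and an open VIDEOWRITER object):

```matlab
% Average a set of acquired frames along the fourth (frame) dimension
% and append the result to the open VideoWriter object.
avgFrame = uint8(mean(double(frames), 4));
writeVideo(vwObj, avgFrame);
```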
framesWritten1 =
10
An alternative solution for this task is to use a TimerFcn. The TimerFcn can be executed every 10
seconds, at which point 5 frames are acquired and averaged. In order to initiate the acquisition at the
correct moment, manual triggers are used.
framesWritten2 =
10
Calculating the Length of a Pendulum in Motion
This example captures images of a pendulum in motion. The pendulum consists of a blue ball
attached to a nylon string. Image data is captured as the pendulum is swung. Once captured, the
images are processed to determine the length of the pendulum.
Images are acquired using the Image Acquisition Toolbox™ and analyzed with the Image Processing
Toolbox™.
Acquire Images
Since the pendulum's motion is confined to the upper half of the image series, create a new series of
frames that contain the region of interest.
To crop a series of frames using imcrop, first perform imcrop on one frame and store its output.
Then create a series of frames having the size of the previous output.
Since the pendulum is much darker than the background, the pendulum can be segmented in each
frame by converting the frame to grayscale, thresholding it, and removing background structures.
The shape of the segmented pendulum in each frame is not a serious issue because the pendulum's
center is the only characteristic needed to determine the length of the pendulum.
By fitting a circle through the pendulum centers, the pendulum length can be calculated. Rewrite the
basic equation of a circle:

(x - xc)^2 + (y - yc)^2 = radius^2

as

x^2 + y^2 + a*x + b*y + c = 0

where:
• a = -2*xc
• b = -2*yc
• c = xc^2 + yc^2 - radius^2.

Solving for parameters a, b, and c using the least squares method, the above equation can be
rewritten as:

[x y 1] * [a; b; c] = -(x^2 + y^2)
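In MATLAB, the least-squares solve can be sketched with the backslash operator (centers is a hypothetical N-by-2 matrix of pendulum center coordinates collected from the segmented frames):

```matlab
% Hypothetical variable: centers is an N-by-2 matrix of [x y] pendulum
% center positions.
x = centers(:,1);
y = centers(:,2);

% Set up the overdetermined system [x y 1]*[a; b; c] = -(x.^2 + y.^2)
% and solve it in the least-squares sense with the backslash operator.
abc = [x y ones(size(x))] \ -(x.^2 + y.^2);
a = abc(1); b = abc(2); c = abc(3);

% Recover the circle center and radius; the radius is the pendulum length.
xc = -a/2;
yc = -b/2;
pendulumLength = sqrt(xc^2 + yc^2 - c);
```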
pendulumLength =
253
Color-based Segmentation of Fabric Using the L*a*b Color Space
imageWidth = imageRes(1);
imageHeight = imageRes(2);
Step 2: Calculate Sample Colors in L*a*b Color Space for Each Region
Count the number of major colors visible in the image. Notice how easily you can visually distinguish
these colors from one another. The L*a*b color space (also known as CIELAB or CIE L*a*b) enables
you to quantify these visual differences.
The L*a*b color space is derived from the CIE XYZ tristimulus values. The L*a*b space consists of a
luminosity ('L') or brightness layer, chromaticity layer 'a' indicating where color falls along the red-
green axis, and chromaticity layer 'b' indicating where the color falls along the blue-yellow axis.
Your approach is to choose a small sample region for each color and to calculate each sample region's
average color in 'a*b' space. You will use these color markers to classify each pixel.
% Calculate the mean 'a' and 'b' value for each area extracted.
% These values serve as your color markers in 'a*b' space.
a = lab_fabric(:,:,2);
b = lab_fabric(:,:,3);
color_markers = zeros(nColors, 2);
% For example, the average color of the second sample region in 'a*b' space is:
fprintf('[%0.3f,%0.3f]\n', color_markers(2,1), color_markers(2,2));
[105.956,147.867]
Each color marker now has an 'a' and a 'b' value. You can classify each pixel in the image by
calculating the Euclidean distance between that pixel and each color marker. The smallest distance
will tell you that the pixel most closely matches that color marker. For example, if the distance
between a pixel and the second color marker is the smallest, then the pixel would be labeled as that
color.
% Perform classification.
for count = 1:nColors
distance(:,:,count) = ( (a - color_markers(count,1)).^2 + ...
(b - color_markers(count,2)).^2 ).^0.5;
end
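The label matrix used below can be derived from the distance array, for example:

```matlab
% For each pixel, pick the color marker with the smallest distance.
[~, label] = min(distance, [], 3);
label = label - 1;   % labels start at 0 to match the plotting code
label = uint8(label);
```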
The label matrix contains a color label for each pixel in the fabric image. Use the label matrix to
separate objects in the original fabric image by color.
imshow(segmented_images(:,:,:,1));
title([colorNames{1} ' objects'] );
imshow(segmented_images(:,:,:,2));
title([colorNames{2} ' objects'] );
imshow(segmented_images(:,:,:,3));
title([colorNames{3} ' objects'] );
imshow(segmented_images(:,:,:,4));
title([colorNames{4} ' objects'] );
imshow(segmented_images(:,:,:,5));
title([colorNames{5} ' objects'] );
You can see how well the nearest neighbor classification separated the different color populations by
plotting the 'a' and 'b' values of pixels that were classified into separate colors. For display purposes,
label each point with its color label.
figure
for count = 1:nColors
h(count) = plot(a(label==count-1),b(label==count-1),'.','MarkerEdgeColor', ...
plot_labels{count}, 'MarkerFaceColor', plot_labels{count});
hold on;
end
Determining the Rate of Acquisition
Create a video input object and access its video source object to configure the desired acquisition
rate. The acquisition rate is determined by the value of the device specific FrameRate property of the
video source object.
Note that because FrameRate is a device specific property, not all devices support it.
% Access an image acquisition device.
vidobj = videoinput('winvideo', 1);
% Determine the device specific frame rates (frames per second) available.
frameRates = set(src, 'FrameRate')
frameRates =
'30.0000'
'24.0000'
'8.0000'
actualRate =
30
Initiate the acquisition and retrieve the logged frames and timestamps.
% Start the acquisition.
start(vidobj)
% Wait for data logging to end before retrieving data. Set the wait time
% to be equal to the expected time to acquire the number of frames
% specified plus a little buffer time to accommodate overhead.
waittime = (vidobj.FramesPerTrigger + vidobj.TriggerFrameDelay) / actualRate + 5;
wait(vidobj, waittime);
By plotting each frame's timestamp, one can verify that the rate of acquisition is constant.
The average time difference can also be determined to compare to the expected acquisition rate.
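These two checks might be sketched as follows (assuming the frames and timestamps were retrieved with GETDATA):

```matlab
% Retrieve all logged frames and their timestamps.
[frames, timestamps] = getdata(vidobj, vidobj.FramesAvailable);

% Plot each frame's timestamp to verify a constant acquisition rate.
plot(timestamps, '.')
xlabel('Frame index')
ylabel('Timestamp (seconds)')

% Average time between frames and the resulting experimental rate.
avgTime = mean(diff(timestamps))
expRate = 1 / avgTime
```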
avgTime =
0.0333
expRate =
30.0245
Comparing the experimental frame rate with the known device frame rate, the percent error can be
calculated. Because a generic USB web camera is being used as the acquisition device, it is to be
expected that the actual device frame rate will fluctuate.
% Determine the absolute difference between the determined and actual frame rates.
diffRates = abs(actualRate - expRate)
diffRates =
0.0245
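The percent error reported below follows from normalizing this difference by the configured rate; a sketch:

```matlab
% Express the deviation as a percentage of the configured frame rate.
percentError = (diffRates / actualRate) * 100
```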
percentError =
0.0817
Laser Tracking
This example shows how to track a moving laser dot.
Using the Image Acquisition Toolbox™, image data streams from a camera are acquired directly into
MATLAB®. These images are used to track objects in the camera's view. For this example, the object
being tracked is the dot produced by a laser pointer.
The monitor of a computer running MATLAB is placed in the camera's view while a laser pointer
shines a red dot on a MATLAB figure window. The camera is used to acquire images of the MATLAB
figure window while the laser pointer moves around. By tracking the movement of the laser dot, the
laser pointer can be used as a pointer device similar to a mouse.
The first task involves calibrating the data to establish a relationship between the acquired image
pixels and the MATLAB figure axes. Once this relationship is established, the laser dot can be tracked
as it moves around within the MATLAB figure window.
This example uses a set of utility functions that aid in the processing of images. These utility
functions require the Image Processing Toolbox™ and can be found in the following directory:
• MATLABROOT\examples\imaq\main
Physical Setup
Focus a camera onto the screen of a computer that MATLAB is running on.
It is best to have the ambient light in the room minimized. This example has been successfully run in
auditoriums using a projector.
An image acquisition device will be used to acquire image data to perform the calibration and laser
tracking. The device used will be a generic Windows® video WebCam.
The calibration screen created is purposely set to black to get the best contrast for the laser pointer.
Some systems work better when the window's colors are set dark for the title bars.
Since a red laser will be used, the red plane of the image is the only color plane of interest. The
calibration square is made blue in order to make it appear "invisible" in the red plane.
Position the camera such that only the blue square is visible.
% Using the preview window, request that the camera be positioned such
% that the view is of the blue box and little else.
preview(vid)
smallFigPos = laserFig.Position;
laserFig.Position = get(0, 'ScreenSize');
disp('Waiting for camera to be positioned...press any key to continue.')
pause
Now that the camera is focused on the right area, a target is drawn at each of the four corners of the
box. The calibration is performed by aiming the laser on each corner of the blue square, allowing a
relationship to be established between the camera pixel coordinates (the image) and MATLAB axis
coordinates (the square). For each target displayed:
The laser position is determined by thresholding the red plane and looking for high intensity values.
Some additional processing is performed to make sure the laser is not obscured by ghost images
caused by poor optics in some WebCams. It is also verified that a laser dot was actually present on
the screen.
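A minimal sketch of that detection step (the threshold and variable names are illustrative; the shipped example performs this with its utility functions):

```matlab
frame = acqResults{1};      % one RGB frame returned by GETDATA
redPlane = frame(:,:,1);    % a red laser shows up in the red plane
bw = redPlane > 240;        % keep only high-intensity pixels
[r, c] = find(bw);
if isempty(r)
    warning('No laser dot detected on the screen.');
else
    laserPixelPos = [mean(c) mean(r)]   % centroid in pixel coordinates
end
```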
% Target 1...
figure(laserFig);
hTarget.XData = 0;
hTarget.YData = 0;
sound(1), pause(2)
sound(1), trigger(vid);
acqResults{1} = getdata(vid, 1);
% Target 2...
figure(laserFig);
hTarget.XData = 0;
hTarget.YData = 1;
sound(1), pause(2)
sound(1), trigger(vid);
acqResults{2} = getdata(vid, 1);
% Target 3...
figure(laserFig);
hTarget.XData = 1;
hTarget.YData = 1;
sound(1), pause(2)
sound(1), trigger(vid);
acqResults{3} = getdata(vid, 1);
% Target 4...
figure(laserFig);
hTarget.XData = 1;
hTarget.YData = 0;
sound(1), pause(2)
sound(1), trigger(vid);
acqResults{4} = getdata(vid, 1);
Calibration Results
Plot the acquired image and the calculated laser pointer coordinates for each target. Since the yellow
crosshairs are positioned at the proper location in each image, the processing results are validated.
% Target 1 results...
calibFig = figure;
util_plotpos(acqResults{1}, xCalib(1), yCalib(1));
% Target 2 results...
util_plotpos(acqResults{2}, xCalib(2), yCalib(2));
% Target 3 results...
util_plotpos(acqResults{3}, xCalib(3), yCalib(3));
% Target 4 results...
util_plotpos(acqResults{4}, xCalib(4), yCalib(4));
Laser Tracking
Start the acquisition and process the acquired data a set number of times. The processing consists of
locating the laser in the acquired image and determining the laser positions in pixel and MATLAB axis
coordinates.
In order to make things interesting, using the laser pointer, attempt to "draw" the letter 'M' (for
MATLAB) within the blue box.
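The acquisition-and-processing loop can be sketched as follows (the iteration count is illustrative, and the laser-locating step is the one described above):

```matlab
start(vid);
for i = 1:50
    trigger(vid);
    frame = getdata(vid, 1);
    % Locate the laser dot in the red plane of FRAME, then map its pixel
    % coordinates to MATLAB axis coordinates using the calibration data.
end
stop(vid);
```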
In certain applications, it may not be necessary to log every frame provided by an image acquisition
device. In fact, it may be more practical and resource-efficient to log frames at certain intervals.
To log frames at a constant interval, configure the video input object's FrameGrabInterval property.
Configuring the property to an integer value N specifies that every Nth frame should be logged,
starting with the first frame.
Note that specifying a FrameGrabInterval value does not modify the rate at which a device provides
frames (the device frame rate). It only specifies the interval at which frames are logged.
Create a video input object and configure the desired logging interval. The logging interval is
determined by the value of the FrameGrabInterval property.
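A minimal sketch of that configuration (the adaptor name is illustrative; an interval of 10 is consistent with the timing results shown later in this example):

```matlab
% Access an image acquisition device and log every 10th frame.
vidobj = videoinput('winvideo', 1);
vidobj.FrameGrabInterval = 10;
```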
% Access the device's video source and configure the device's frame rate.
% FrameRate is a device specific property, therefore, it may not be supported by
% some devices.
frameRate = 30;
src = getselectedsource(vidobj);
src.FrameRate = num2str(frameRate);
Initiate the acquisition of images and retrieve the logged frames and their timestamps.
Knowing the device's actual frame rate and the grab interval at which frames were logged, the
number of frames logged per second can be calculated.
Logging Data at Constant Intervals
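The calculation can be sketched as follows (frameRate and the grab interval were configured above):

```matlab
interval = vidobj.FrameGrabInterval;
loggedPerSec = frameRate / interval
```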
loggedPerSec =
Knowing the number of frames logged per second, the expected time interval between each logged
frame can be calculated and compared.
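A sketch of that comparison, assuming the logged timestamps were retrieved with GETDATA:

```matlab
% Expected interval between logged frames vs. the measured average, and
% the percent error between them.
loggingRate = 1 / loggedPerSec
avgDiff = mean(diff(timestamps))
percentError = abs(loggingRate - avgDiff) / loggingRate * 100
```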
loggingRate =
0.3333
ans =
avgDiff =
0.3332
percentError =
0.0125
The Image Acquisition Toolbox™ together with the Image Processing Toolbox™ can be used to display
a video feed with a live histogram. This can be useful when calibrating camera settings such as
aperture using manual controls. This example shows how to use the PREVIEW function, its associated
custom update function and the IMHIST function to place a video preview window adjacent to a live
histogram. The techniques here can be used to display other live information too. For example, a live
video feed can be placed next to a filtered version of the video.
The helper function used by this example can be found in the following directory:
• MATLABROOT\toolbox\imaq\imaqdemos\helper
An image object of the same size as the video is used to store and display incoming frames.
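The setup this code assumes can be sketched as follows (the adaptor name is illustrative):

```matlab
% Create the video input object and determine the video resolution.
vidobj = videoinput('winvideo', 1);
vidRes = vidobj.VideoResolution;
% IMSHOW expects height-by-width, so flip the [width height] resolution.
imageRes = fliplr(vidRes);
```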
subplot(1,2,1);
hImage = imshow(zeros(imageRes));
% Set the axis of the displayed image to maintain the aspect ratio of the
% incoming frame.
axis image;
Specify the UpdatePreviewWindowFcn callback function that is called each time a new frame is
available. The callback function is responsible for displaying new frames and updating the histogram.
It can also be used to apply custom processing to the frames. More details on how to use this callback
can be found in the documentation for the PREVIEW function. This callback function itself is defined
in the file update_livehistogram_display.m
setappdata(hImage,'UpdatePreviewWindowFcn',@update_livehistogram_display);
Video Display with Live Histogram
function update_livehistogram_display(obj,event,hImage)
% This callback function updates the displayed frame and the histogram.

% Copyright 2007-2017 The MathWorks, Inc.

% Display the current image frame.
set(hImage, 'CData', event.Data);

% Select the second subplot on the figure for the histogram.
subplot(1,2,2);

% Plot the histogram. Choose 128 bins for faster update of the display.
imhist(event.Data, 128);

% Refresh the display.
drawnow
Start Previewing
% The PREVIEW function starts the camera and display. The image on which to
% display the video feed is also specified.
preview(vidobj, hImage);
Once the video input object is no longer needed, delete and clear the associated variable.
delete(vidobj)
clear vidobj
Live Motion Detection Using Optical Flow
Introduction
This example streams images from an image acquisition device to detect motion in the live video. It
uses the optical flow estimation technique to estimate the motion vectors in each frame of the live
video sequence. Once the motion vectors are determined, we draw them over the moving objects in
the video sequence.
Initialization
Create a System object to estimate direction and speed of object motion from one video frame to
another using optical flow.
opticFlow = opticalFlowHS;
Create a processing loop to perform motion detection in the input video. This loop uses the System
objects you instantiated above.
imshow(frameRGB)
hold on
plot(flow,'DecimationFactor',[5 5],'ScaleFactor',25)
hold off
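Put together, the loop can be sketched as follows (the device setup via imaq.VideoDevice and the iteration count are assumptions):

```matlab
vidDevice = imaq.VideoDevice('winvideo', 1);
for idx = 1:150
    frameRGB = step(vidDevice);               % acquire one frame
    frameGray = rgb2gray(frameRGB);           % optical flow uses intensity
    flow = estimateFlow(opticFlow, frameGray);
    imshow(frameRGB)
    hold on
    plot(flow, 'DecimationFactor', [5 5], 'ScaleFactor', 25)
    hold off
    drawnow
end
```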
Summary
In the figure window, you can see that the example detected the motion of the black file. The moving
objects are represented using the vector field lines as seen in the image.
Release
Here you call the release method on the System objects to close any open files and devices.
release(vidDevice);
Synchronizing Two National Instruments® Frame Grabbers
It is often necessary to synchronize two or more frame grabbers very closely. For example, you could
record synchronized video during an experiment that is costly or impossible to duplicate. Because of
the nature of the experiment, it would be beneficial to use RTSI to ensure the most reliable
connection between your National Instruments PCI-1409 and PCIe-1430 frame grabbers.
Using the Image Acquisition Toolbox, create the video input object to record video and set up the
parameters for acquisition.
You can use either card to trigger the other, but this example uses the PCIe-1430 to trigger the
PCI-1409. See what triggering settings are available for the PCI-1409.
Set the video input object for hardware triggering off of RTSI line 1 upon a rising edge.
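A sketch of that configuration (the exact condition and source names are device-specific; TRIGGERINFO lists the valid combinations):

```matlab
% List the available trigger configurations for the PCI-1409 object.
triggerinfo(vid1409)
% Configure hardware triggering on a rising edge of RTSI line 1.
triggerconfig(vid1409, 'hardware', 'risingEdge', 'rtsi1')
```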
Create the video input object to record video and set up the parameters for acquisition and for
driving RTSI1 high when the acquisition starts.
% Create the object.
vid1430 = videoinput('ni', 2);
% Set to acquire approximately one second of frames per trigger.
vid1430.FramesPerTrigger = 30;
In order to drive the PCI-1409's RTSI line, you need to set the correct line and polarity on the
PCIe-1430. In addition, you need to determine what frame grabber event will drive the RTSI line. You
can see a list of events that are available by looking at the device-specific source properties that end
in “DriveLine” and “DrivePolarity”:
% Get the currently selected source.
src = getselectedsource(vid1430);
% Display the properties and their possible settings.
set(src)
General Settings:
Tag
In this case, you want to drive RTSI line 1 high when the acquisition is in progress. This ensures that
the line is driven high as soon as the acquisition begins. To do this, you need to set the acquisition in
progress drive line to 'rtsi1':
% Set to drive RTSI1 high when the acquisition begins.
src.AcquisitionInProgressDriveLine = 'rtsi1';
Looking at the output above, you can see that the polarity for the acquisition in progress event is
already set to 'activeHigh', so you do not need to set it.
Note that the maximum number of lines that you can drive is hardware dependent and will possibly
vary between devices.
At this point you are set to acquire approximately one second of frames from each device when the
PCIe-1430 is started.
You can now start the PCI-1409 video input object and see that it is waiting for a hardware trigger.
start(vid1409);
vid1409
You can now display a summary of the PCIe-1430 video input object and see that it is set up to trigger
immediately upon start.
vid1430
When you start the PCIe-1430 video input object, it will immediately be triggered and begin
acquiring. At that moment, the frame grabber will send a signal to the other frame grabber across
RTSI line 1, which will cause the PCI-1409 to begin nearly synchronously.
start(vid1430)
% Wait on both objects until you are done acquiring.
wait(vid1430), wait(vid1409)
If you now display a summary you will see that both devices have acquired frames.
vid1409
and:
vid1430
Once the video input objects are no longer needed, delete them and clear them and the reference to
the source from the workspace.
delete(vid1430)
delete(vid1409)
clear vid1430 vid1409 src
Synchronizing an NI® Frame Grabber And Data Acquisition Card
It is often necessary to synchronize two or more acquisition boards very closely. For example, you can
record the voltage from an analog sensor, such as a strain gauge, as well as synchronized video
during an experiment. For the synchronization/triggering signal, you can use an RTSI cable for a
reliable connection between your NI PCI-6229 data acquisition card and PCIe-1430 frame grabber.
Using the Data Acquisition Toolbox, create the analog input object to record voltage from the strain
gauge and set up the parameters for acquisition.
Next, configure the DataAcquisition object for hardware-triggered acquisition using the RTSI1
terminal as the external trigger source.
addtrigger(d,'Digital','StartTrigger','External','Dev1/RTSI1');
d.DigitalTriggers(1).Condition = 'RisingEdge';
Using the Image Acquisition Toolbox, create the video input object to record video and set up the
parameters for acquisition and for driving RTSI1 high when the acquisition starts.
In order to drive the data acquisition card's RTSI line, you need to set the correct line and polarity on
the frame grabber. In addition, you need to determine what frame grabber event will drive the RTSI
line. You can see a list of events that are available by looking at the device-specific source properties
that end in “DriveLine” and “DrivePolarity”:
General Settings:
Tag
In this case, you want to drive RTSI line 1 high when the acquisition is in progress. This ensures that
the line is driven high as soon as the acquisition begins. To do this, you need to set the acquisition in
progress drive line to 'rtsi1':
Looking at the output above, you can see that the polarity for the acquisition in progress event is
already set to 'activeHigh', so you do not need to set it.
Note that the maximum number of lines that you can drive is hardware dependent and will possibly
vary between devices.
At this point you are set to acquire approximately one second of data from each device when the
image acquisition device is started.
You can now start the analog input object, which acquires one second of data by default. See that it is
waiting for a hardware trigger.
start(d)
d.WaitingForDigitalTrigger
ans =
logical
You can now display a summary of the video input object and see that it is set up to trigger
immediately upon start.
vid
When you start the video input object, it will immediately be triggered and begin acquiring. At that
moment, the frame grabber will send a signal to the data acquisition card across RTSI line 1, which
will cause the data acquisition to begin nearly synchronously.
start(vid)
% Wait on both objects until you are done acquiring.
wait(vid), wait(d,2)
If you now display a summary you will see that both devices have acquired data.
d.NumScansAcquired
ans =
10000
and:
vid
Once the video input and analog input objects are no longer needed, delete them and clear them and
the reference to the source from the workspace.
delete(vid)
clear vid d src
Using the Kinect® for Windows® V1 from Image Acquisition Toolbox™
Utility Functions
In order to keep this example as simple as possible, some utility functions for working with the
Kinect for Windows metadata have been created. These utility functions include the skeletalViewer
function, which accepts the skeleton data, color image, and number of skeletons as inputs and
displays the skeleton overlaid on the color image.
See What Kinect for Windows Devices and Formats are Available
The Kinect for Windows has two sensors, a color sensor and a depth sensor. To enable independent
acquisition from each of these devices, they are treated as two independent devices in the Image
Acquisition Toolbox. This means that a separate VIDEOINPUT object needs to be created for each of
the color and depth (IR) devices.
% The Kinect for Windows Sensor shows up as two separate devices in IMAQHWINFO.
hwInfo = imaqhwinfo('kinect')
hwInfo =
hwInfo.DeviceInfo(1)
ans =
DefaultFormat: 'RGB_640x480'
DeviceFileSupported: 0
DeviceName: 'Kinect Color Sensor'
DeviceID: 1
VideoInputConstructor: 'videoinput('kinect', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('kinect', 1)'
SupportedFormats: {1x7 cell}
hwInfo.DeviceInfo(2)
ans =
DefaultFormat: 'Depth_640x480'
DeviceFileSupported: 0
DeviceName: 'Kinect Depth Sensor'
DeviceID: 2
VideoInputConstructor: 'videoinput('kinect', 2)'
In order to acquire synchronized color and depth data, we must use manual triggering instead of
immediate triggering. The default immediate triggering suffers from a lag between streams while
performing synchronized acquisition, due to the overhead of starting the streams sequentially.
depthVid = videoinput('kinect',2)
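The corresponding color object and the manual trigger configuration can be sketched as follows (device IDs as reported by IMAQHWINFO above):

```matlab
colorVid = videoinput('kinect', 1);
% Use manual triggering on both objects for synchronized acquisition.
triggerconfig([colorVid depthVid], 'manual');
```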
Set the FramesPerTrigger property of the VIDEOINPUT objects to 100 to acquire 100 frames per
trigger. In this example, 100 frames are acquired to give the Kinect for Windows sensor sufficient
time to start tracking a skeleton.
colorVid.FramesPerTrigger = 100;
depthVid.FramesPerTrigger = 100;
% Start the color and depth device. This begins acquisition, but does not
% start logging of acquired data.
start([colorVid depthVid]);
The Kinect for Windows sensor provides different modes to track skeletons. These modes can be
accessed and configured from the VIDEOSOURCE object of the depth device. Let's see how to enable
skeleton tracking.
% Get the VIDEOSOURCE object from the depth device's VIDEOINPUT object.
depthSrc = getselectedsource(depthVid)
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = Depth Source
Tag =
Type = videosource
The properties on the depth source object that control the skeletal tracking features are
TrackingMode, SkeletonToTrack and BodyPosture properties on the VIDEOSOURCE.
TrackingMode controls whether or not skeletal tracking is enabled and, if it is enabled, whether all
joints are tracked ('Skeleton') or just the hip position ('Position'). Setting TrackingMode to 'off' (the
default) disables all tracking and reduces the CPU load.
The BodyPosture property determines how many joints are tracked. If BodyPosture is set to
'Standing', twenty joints are tracked. If it is set to 'Seated', then ten joints are tracked.
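For example, full standing-skeleton tracking could be enabled as follows (property values as described above):

```matlab
% Track all joints of a standing skeleton.
depthSrc.TrackingMode = 'Skeleton';
depthSrc.BodyPosture = 'Standing';
```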
The SkeletonToTrack property can be used to selectively track one or two skeletons using the
'SkeletonTrackingID'. The currently valid values for 'SkeletonTrackingID' are returned as a part of
the metadata of the depth device.
The skeleton data that the Kinect for Windows produces is accessible from the depth device as a part
of the metadata returned by GETDATA. The Kinect for Windows can track the position of up to six
people in view and can actively track the joint locations of two of the six skeletons. It also supports
two modes of tracking people based on whether they are standing or seated. In standing mode, the
full 20 joint locations are tracked and returned; in seated mode the 10 upper body joints are
returned. For more details on skeletal data, see the MATLAB documentation on Kinect for Windows
adaptor.
start([colorVid depthVid]);
trigger([colorVid depthVid]);
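The frames and metadata are then retrieved with GETDATA; a sketch (output variable names are illustrative):

```matlab
% The third output of GETDATA carries the per-frame metadata, including
% the skeleton information for the depth stream.
[colorImg, colorTime, metaDataColor] = getdata(colorVid);
[depthMap, depthTime, metaDataDepth] = getdata(depthVid);
```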
metaDataDepth =
AbsTime
FrameNumber
IsPositionTracked
IsSkeletonTracked
JointDepthIndices
JointImageIndices
JointTrackingState
JointWorldCoordinates
PositionDepthIndices
PositionImageIndices
PositionWorldCoordinates
RelativeFrame
SegmentationData
SkeletonTrackingID
TriggerIndex
We randomly choose the 95th frame to visualize the image and skeleton data.
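A sketch of those checks on the metadata of the chosen frame:

```matlab
% Any nonzero entry means at least one person/skeleton was tracked.
anyPositionsTracked = any(metaDataDepth(95).IsPositionTracked ~= 0)
anySkeletonsTracked = any(metaDataDepth(95).IsSkeletonTracked ~= 0)
```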
anyPositionsTracked =
anySkeletonsTracked =
The results above show that at least one skeleton is being tracked. If tracking is enabled but no IDs
are specified with the TrackingID property, the Kinect for Windows software automatically chooses up
to two skeletons to track. Use the IsSkeletonTracked metadata to determine which skeletons are
being tracked.
trackedSkeletons =
Display the skeleton's joint coordinates. Note that if the BodyPosture property is set to 'Seated',
JointCoordinates and JointIndices still have a length of 20, but only indices 2-11 (the upper-body
joints) are populated.
jointCoordinates =
jointIndices =
318 256
317 240
318 143
319 92
271 177
239 243
219 303
216 323
363 177
399 243
421 303
424 322
296 277
286 387
288 492
286 520
340 277
342 384
347 493
350 522
The Image Acquisition Toolbox™ makes it easy to produce time-lapse video. The most efficient way to
do time-lapse acquisition is to use the Image Acquisition Toolbox's built-in ability to log frames
directly to an AVI file, and its ability to perform time decimation by retaining only a fraction of all the
frames acquired by the camera.
Before acquiring images using the Image Acquisition Toolbox, create a video input object.
Most devices do not allow you to control their frame rate. It is best to determine the frame rate
experimentally by acquiring frames and analyzing the time stamps of the frames.
vid.FramesPerTrigger = 100;
start(vid);
wait(vid,Inf);
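The measurement itself can be sketched as:

```matlab
% Retrieve the logged frames and timestamps, then estimate the frame
% rate from the average spacing of the timestamps.
[frames, timeStamp] = getdata(vid);
framerate = 1 / mean(diff(timeStamp))
```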
framerate =
17.5296
The FrameGrabInterval property specifies how often frames are stored from the video stream. For
instance, if we set it to 5, then only 1 in 5 frames is kept -- the other 4 frames will be discarded.
To determine how many frames to acquire in total, calculate the total number of frames that would be
acquired at the device's frame rate, and then divide by the FrameGrabInterval.
Creating Time-Lapse Video Using a Noncontiguous Acquisition
capturetime = 30;
interval = vid.FrameGrabInterval;
numframes = floor(capturetime * framerate / interval)
numframes =
52
vid.FramesPerTrigger = numframes;
Due to the large number of frames that will be acquired, it is more practical to log the images to disk
rather than to memory. Using the Image Acquisition Toolbox, you can log images directly to an AVI
file. We configure this using the LoggingMode property.
vid.LoggingMode = 'disk';
Create a VideoWriter object to log to, using the VideoWriter command. We must specify the
filename to use, and then set the frame rate at which the AVI file should be played back. Then, set the
DiskLogger property of the video input object to the VideoWriter object.
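A minimal sketch of that configuration (the file name and playback rate are assumptions):

```matlab
% Create the AVI file object; the playback rate is set higher than the
% logging rate to produce the time-lapse effect (values illustrative).
vwObj = VideoWriter('timelapse.avi');
vwObj.FrameRate = 30;
% Attach the VideoWriter object as the disk logger.
vid.DiskLogger = vwObj;
```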
Start the time-lapse acquisition, and wait for the acquisition to complete. Note that the Image
Acquisition Toolbox does not tie up MATLAB® while it is acquiring. You can start the acquisition and
keep working in MATLAB.
start(vid);
Once the capture is completed, retrieve the AVI file object, and use the close function to release the
resources associated with it.
vwObj = vid.DiskLogger;
close(vwObj);
To play back the time-lapse AVI sequence, right-click on the filename in the MATLAB Current Folder
browser, and choose Open Outside MATLAB from the context menu.
Clean Up
When you are done with the video input object, you should use the delete function to free the
hardware resources associated with it, and remove it from the workspace using the clear function.
delete(vid);
clear vid;
Creating Time-Lapse Video Using Timer Events
The Image Acquisition Toolbox™ makes it easy to produce time-lapse video. In this example, we will
use timer events to acquire frames to an AVI file. This technique of time decimation has the
advantage that it allows you to make a decision about each frame before storing it. An application of
this would be to only store frames that meet certain illumination levels, have motion relative to the
previous frame, etc.
Before acquiring images using the Image Acquisition Toolbox, create a video input object.
To generate timer events, we specify two things: what happens when a timer event occurs, and how
often it should occur. The TimerFcn property specifies the callback function to execute when a timer
event occurs. A timer event occurs when the time period specified by the TimerPeriod property
expires.
The callback function is responsible for triggering the acquisition and storing the frame into the AVI
file. More details on how to use this callback can be found in the documentation. The callback
function itself is defined in the file timelapse_timer.m.
set(vid,'TimerPeriod',1);
vid.TimerFcn = @timelapse_timer;
Store the VideoWriter object in the UserData property of the video input object so that it will be
available inside the callback.
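A sketch of that step (the file name is an assumption):

```matlab
% Create and open the AVI file object, then make it available to the
% timer callback through the video input object's UserData property.
vwObj = VideoWriter('timelapse_timer.avi');
open(vwObj);
vid.UserData = vwObj;
```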
triggerconfig(vid, 'manual');
vid.FramesPerTrigger = 1;
vid.TriggerRepeat = 9;
vid
Now, start the time lapse acquisition, and wait for up to 20 seconds for the acquisition to complete.
start(vid);
wait(vid,20);
Once the capture is completed, retrieve the VideoWriter object stored in the UserData property, and
use the close function to release the resources associated with it.
avi = vid.UserData;
avi = close(avi);
To play back the time-lapse AVI sequence, right-click on the filename in the MATLAB® Current Folder
browser, and choose Open Outside MATLAB from the context menu.
Clean Up
When you are done with the video input object, you should use the delete function to free the
hardware resources associated with it, and remove it from the workspace using the clear function.
delete(vid);
clear vid;
The following is the callback function executed for each timer event:
type timelapse_timer
function timelapse_timer(vid,~)
% This callback function triggers the acquisition and saves frames to an AVI file.
% (The body below is a reconstructed sketch; here every frame is kept, but a
% real application could test illumination, motion, and so on.)
keepframe = true;
if(~keepframe)
    return;
end
% Trigger the acquisition and write the frame to the AVI file in UserData.
trigger(vid);
writeVideo(vid.UserData, getdata(vid, 1));
end
The Image Acquisition Toolbox™ makes it easy to produce time-lapse photography. In this example,
we will use postprocessing to delete frames from an acquired sequence of images. This is ideal for
situations where you are not sure which frames are relevant during capture, or where your
processing would take too long to occur during the acquisition. A possible application would be to
delete frames that have no motion relative to the previous frame. The primary disadvantage of this
method of time decimation is that it requires large amounts of memory to store all the frames. This
example acquires to memory, but you would likely acquire to an AVI file, and then use the
VideoReader command to postprocess the frames.
Before acquiring images using the Image Acquisition Toolbox, create a video input object.
vid.FramesPerTrigger = 100;
vid
start(vid);
wait(vid);
framesavailable = vid.FramesAvailable;
Creating Time-Lapse Video Using Postprocessed Data
framesavailable =
100
frames = getdata(vid,framesavailable);
Here, we simply remove every other frame; however, the processing you perform could be much
more complex.
toberemoved_index = [2:2:framesavailable];
frames(:,:,:,toberemoved_index) = [];
numframes = size(frames,4)
numframes =
50
Render the frames to an AVI file. To do this, create a VideoWriter object, call the writeVideo
function to add all the frames to the AVI file, and then use the close function to release the
resources associated with the AVI file.
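A sketch of those steps (the file name is an assumption):

```matlab
% Create and open the AVI file, write the retained frames (an
% H-by-W-by-3-by-N array), and release the file resources.
vwObj = VideoWriter('timelapse_post.avi');
open(vwObj);
writeVideo(vwObj, frames);
close(vwObj);
```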
To play back the time-lapse AVI sequence, right-click on the filename in the MATLAB® Current Folder
browser, and choose Open Outside MATLAB from the context menu.
Clean Up
When you are done with the video input object, you should use the delete function to free the
hardware resources associated with it, and remove it from the workspace using the clear function.
Also delete and clear the VideoWriter object.
delete(vid);
delete(vwObj);
clear vid vwObj;
Image Acquisition Toolbox™ provides a Simulink® block to acquire live image data from image
acquisition devices into Simulink models.
This example uses the From Video Device block to acquire live image data from the Point Grey Flea®
2 camera into Simulink. The example uses the Computer Vision Toolbox™ to create an image
processing system which can recognize and interpret a GTIN-13 barcode. The GTIN-13 barcode,
formally known as EAN-13, is an international barcode standard. It is a superset of the widely used
UPC standard.
This example requires Simulink, Computer Vision Toolbox and the Point Grey Flea® 2 camera to open
and run the model.
Example Model
The following figure shows the example model using the From Video Device block.
The input video is acquired live from a DCAM image acquisition device (Point Grey Flea® 2). In this
example, the block acquires RGB frames from the camera and outputs them into the Simulink model
at every simulation time step.
Algorithm
The barcode recognition example performs a search on some selected rows of the input image, called
scan lines. The scan lines are analyzed per pixel and marked by feature. Once all pixels are marked
with a feature value, the sequences of patterns are analyzed. The example identifies the guard
patterns and symbols by sequence and location. The symbols are upsampled and compared with the
codebook to determine a corresponding code.
Barcode Recognition Using Live Video Acquisition
To compensate for various barcode orientations, the example analyzes from left to right and from
right to left and chooses the better match. If the checksum is correct and the matching score against
the codebook is higher than a set threshold, the code is considered valid and is displayed.
You can change the number and location of the scan lines by changing the value of the "Row Positions
Of Scan Lines" parameter.
Results
The scan lines that have been used to detect barcodes are displayed in red. When a GTIN-13 is
correctly recognized and verified, the code is displayed in yellow.
Even though a Flea® 2 camera was used for this example, this model can be easily updated to
connect your models to other supported image acquisition devices. This provides you the flexibility to
use the same Simulink model with different image acquisition hardware.
Image Acquisition Toolbox™ provides a Simulink block to acquire live image data from image
acquisition devices into Simulink models. This example uses the From Video Device block to acquire
live image data from the Logitech® Quickcam® Zoom webcam into Simulink. This example uses the
Histogram block in DSP System Toolbox™ to calculate the histograms of R, G, and B values in each
video frame.
This example requires Simulink, DSP System Toolbox and Computer Vision Toolbox™ to open and run
the model.
Example Model
The following figure shows the example model using the From Video Device block.
open_system('demoimaqsl_rgbhistogram_win');
close_system('demoimaqsl_rgbhistogram_win');
The input video is acquired live from a winvideo image acquisition device (Logitech Quickcam Zoom).
In this example, the block acquires RGB frames from the Logitech camera and outputs them into the
Simulink model at every simulation time step. The data type output from the block is uint8.
The example displays the histograms of R, G, and B values in the RGB Histogram window and
displays the original RGB video in the viewer window.
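Outside Simulink, the same per-frame computation can be sketched with Image Acquisition Toolbox functions (the adaptor name and bin layout are assumptions, not part of the example model):

```matlab
% Acquire one RGB frame and compute per-channel histograms.
vid = videoinput('winvideo');              % adaptor name is an example
vid.ReturnedColorSpace = 'rgb';
frame = getsnapshot(vid);                  % H-by-W-by-3 uint8 frame
edges = 0:256;                             % 256 unit-wide bins for uint8 data
counts = zeros(3, 256);
for c = 1:3
    counts(c,:) = histcounts(frame(:,:,c), edges);
end
delete(vid);
```

Each row of counts then holds the histogram of the R, G, or B channel for that frame.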
Even though a winvideo Logitech webcam was used for this example, this model can be easily
updated to connect your models to other supported image acquisition devices. This provides you the
flexibility to use the same Simulink model with different image acquisition hardware.
The Windows-only example model contains the To Video Display block (supported only on Windows)
from Computer Vision Toolbox and supports code generation. The platform-independent version uses
the Video Viewer block and does not support code generation.
Edge Detection on Live Video Stream
Image Acquisition Toolbox™ provides a Simulink® block to acquire live image data from image
acquisition devices into Simulink models.
This example uses the From Video Device block to acquire live image data from a Hamamatsu C8484
camera into Simulink. The Prewitt method is applied to find the edges of objects in the input video
stream.
This example requires Simulink and Computer Vision Toolbox™ to open and run the model.
Example Model
The following figure shows the example model using the From Video Device block.
open_system('demoimaqsl_edgedetection_win');
close_system('demoimaqsl_edgedetection_win');
The input video is acquired live from a Hamamatsu image acquisition device (C8484). In this
example, the block acquires intensity data from the camera and outputs it into the Simulink model at
every simulation time step. The data type output from the block is single.
This example uses Computer Vision Toolbox to find the edges of objects in the video input. When you
run the model, you can double-click the Edge Detection block and adjust the threshold parameter
while the simulation is running. The higher you make the threshold, the fewer edges the example
finds in the video stream.
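A comparable frame-by-frame computation can be sketched in MATLAB with the Image Processing Toolbox edge function (the adaptor name and threshold value are assumptions):

```matlab
% Acquire an intensity frame and find edges with the Prewitt method.
vid = videoinput('winvideo');                  % adaptor name is an example
vid.ReturnedColorSpace = 'grayscale';
frame = getsnapshot(vid);
bw = edge(im2single(frame), 'prewitt', 0.05);  % higher threshold -> fewer edges
imshow(bw);
delete(vid);
```

As in the Simulink model, raising the threshold argument reduces the number of detected edges.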
Even though a Hamamatsu camera was used for this example, this model can be easily updated to
connect your models to other supported image acquisition devices. This provides you the flexibility to
use the same Simulink model with different image acquisition hardware.
The Windows-only example model contains the To Video Display block (supported only on Windows)
from Computer Vision Toolbox and supports code generation. The platform-independent version uses
the Video Viewer block and does not support code generation.
Acquire Images Using Parallel Workers
This example shows how to acquire images from a camera on a parallel MATLAB worker.
Doing so allows you to perform other operations in the main MATLAB worker with minimal impact on
the image acquisition. As a result, your image acquisition is more consistent and you can run more
CPU-intensive operations in parallel on multicore CPUs.
This example uses the parfeval function from the Parallel Computing Toolbox to asynchronously
execute a specified function. You can run the specified function in the background of your main
MATLAB worker without waiting for it to complete. For more information about this function, see
parfeval (Parallel Computing Toolbox).
Create a function called captureVideo to execute on the parallel MATLAB worker. This function
creates a new videoinput object and sets the FramesAcquiredFcn property. It then configures the
frames to acquire and starts the acquisition. See the captureVideo function at the end of this
example. You can modify captureVideo to fit your image acquisition needs and setup.
The captureVideo function sets the FramesAcquiredFcn property to a handle to the saveImages
callback function. The saveImages function saves acquired images to a folder in your current
working directory. See the saveImages function at the end of this example.
Before you start the image acquisition, create a folder called data, where the parallel worker saves
your acquired images.
mkdir("data\")
Create a parallel pool with one worker on your local machine using parpool.
parpool('local',1);
Run the captureVideo function in your parallel worker using parfeval. Since captureVideo
does not have any output, specify the number of output arguments as 0.
f = parfeval(@captureVideo,0)
f =
FevalFuture with properties:
ID: 4
Function: @captureVideo
CreateDateTime: 13-Jan-2021 11:58:35
StartDateTime: 13-Jan-2021 11:58:35
Running Duration: 0 days 0h 0m 0s
State: running
Error: none
The output shows that the status of the parfeval future is running.
If you want to block MATLAB until parfeval completes, you can use the wait function on the future
f. Using the wait function is useful when subsequent code depends on the completion of parfeval.
wait(f);
ID: 4
Function: @captureVideo
CreateDateTime: 13-Jan-2021 11:58:35
StartDateTime: 13-Jan-2021 11:58:35
Running Duration: 0 days 0h 0m 15s
State: finished (unread)
Error: none
Your images are acquired and saved to the data folder in your current working directory.
When you finish working with the parallel pool, clear the future and shut down the parallel pool.
clear f
delete(gcp("nocreate"))
Helper Functions
The parallel worker executes the captureVideo function. This function creates a videoinput
object and sets the appropriate properties. You can modify it to fit your image acquisition needs.
function captureVideo()
% Create videoinput object (winvideo adaptor from this example).
v = videoinput('winvideo');
% Configure the frames-acquired callback (counts shown are example values).
v.FramesPerTrigger = 150;
v.FramesAcquiredFcnCount = 10;
v.FramesAcquiredFcn = @saveImages;
% Start recording.
start(v);
end
The captureVideo function sets the saveImages callback as the FramesAcquiredFcn for the
videoinput object. This function reads the number of frames specified by the
FramesAcquiredFcnCount from the buffer and saves them to the data folder. You can modify this
callback to fit your needs.
function saveImages(src,obj)
% Calculate the frame-number offset so the files are saved in order.
currframes = src.FramesAcquired - src.FramesAcquiredFcnCount;
frames = getdata(src,src.FramesAcquiredFcnCount); % read buffered frames
for i = 1:size(frames,4) % file name pattern is an example
    imwrite(frames(:,:,:,i),fullfile("data",sprintf("img%05d.tif",currframes + i)));
end
end
18 Functions
Image Acquisition Tool
Description
The Image Acquisition Tool enables you to explore, configure, and acquire data from your installed
and supported image acquisition devices. You connect directly to your hardware and can preview and
acquire image data.
Using this app, you can log the acquired image data to MATLAB in several formats, and also generate
a VideoWriter file. The Image Acquisition Tool provides a desktop environment that integrates a
preview and acquisition area with acquisition parameters so that you can change settings and see the
changes dynamically applied to your image data.
Examples
• “Getting Started with the Image Acquisition Tool” on page 3-5
• “Selecting Your Device in Image Acquisition Tool” on page 3-7
• “Setting Acquisition Parameters in Image Acquisition Tool” on page 3-9
• “Previewing and Acquiring Data in Image Acquisition Tool” on page 3-22
• “Exporting Data in the Image Acquisition Tool” on page 3-26
Programmatic Use
imaqtool opens the Image Acquisition Tool, which enables you to explore, configure, and acquire
data from image acquisition devices.
imaqtool(file) starts the tool and then immediately reads an Image Acquisition Tool configuration
file, where file is the name of an IAT-file that you previously saved.
See Also
Topics
“Getting Started with the Image Acquisition Tool” on page 3-5
“Selecting Your Device in Image Acquisition Tool” on page 3-7
“Setting Acquisition Parameters in Image Acquisition Tool” on page 3-9
“Previewing and Acquiring Data in Image Acquisition Tool” on page 3-22
“Exporting Data in the Image Acquisition Tool” on page 3-26
“The Image Acquisition Tool Desktop” on page 3-2
Introduced in R2007b
clear
Clear image acquisition object from MATLAB workspace
Syntax
clear obj
Description
clear obj removes the image acquisition object obj from the MATLAB workspace. obj can be
either a video input object or a video source object.
Note that if you clear a video input object that is running (its Running property is set to 'on'),
the object continues executing.
You can restore cleared objects to the MATLAB workspace with the imaqfind function.
To remove an image acquisition object from memory, use the delete function.
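The behavior described above can be sketched as follows (the adaptor name is an example):

```matlab
vid = videoinput('winvideo');   % adaptor name is an example
start(vid);
clear vid                       % removes the variable; the object keeps running
found = imaqfind;               % restore a reference to the cleared object
vid = found(1);
stop(vid);
delete(vid);                    % actually remove the object from memory
```

Clearing only removes the workspace variable; delete is what frees the underlying object.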
See Also
delete | imaqfind | isvalid
closepreview
Close Video Preview window
Syntax
closepreview(obj)
closepreview
Description
closepreview(obj) stops the image acquisition object obj from previewing and, if the default
Video Preview window was used, closes the window.
closepreview stops all image acquisition objects from previewing and, for all image acquisition
objects that used the default Video Preview window, closes the windows.
Note that if the preview window was created with a user-specified image object handle as the target,
closepreview does not close the figure window.
See Also
preview | stoppreview
commands
List of commands available for GigE Vision camera
Syntax
commands(g)
Description
commands(g) lists the available commands for the GigE Vision camera g, where g is the object
created using the gigecam function. The output depends on the commands that are supported by
your specific hardware.
Examples
The commands function tells you what commands are available for your camera to use.
Use the gigecamlist function to ensure that MATLAB is discovering your cameras.
gigecamlist
ans =
Use the gigecam function to create the object and connect it to the camera.
g = gigecam
Get the list of supported commands from the camera. You can click Show Commands in the property
list that is displayed when you create the object, or you can use the function:
commands(g)
The list shows the commands that the camera supports. You can then use the executeCommand
function to execute any of these commands.
See Also
executeCommand | gigecam | gigecamlist | snapshot
Introduced in R2014b
delete
Remove image acquisition object from memory
Syntax
delete(obj)
Description
delete(obj) removes obj, an image acquisition object or array of image acquisition objects, from
memory. Use delete to free memory at the end of an image acquisition session.
If obj is an array of image acquisition objects and one of the objects cannot be deleted, the delete
function deletes the objects that can be deleted and returns a warning.
When obj is deleted, it becomes invalid and cannot be reused. Use the clear command to remove
invalid image acquisition objects from the MATLAB workspace.
If multiple references to an image acquisition object exist in the workspace, deleting the image
acquisition object invalidates the remaining references. Use the clear command to delete the
remaining references to the object from the workspace.
If the image acquisition object obj is running or being previewed, the delete function stops the
object and closes the preview window before deleting it.
Examples
Create a video object, preview the object, then delete the object:
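A minimal sketch of that sequence (the adaptor name is an assumption):

```matlab
vid = videoinput('winvideo');   % adaptor name is an example
preview(vid);                   % open the Video Preview window
delete(vid);                    % stops the preview and frees the object
clear vid                       % remove the now-invalid variable
```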
See Also
imaqfind | isvalid | videoinput
disp
Display method for image acquisition objects
Syntax
obj
disp(obj)
Description
obj displays summary information for image acquisition object obj.
If obj is an array of image acquisition objects, disp outputs a table of summary information about
the image acquisition objects in the array.
In addition to the syntaxes shown above, you can display summary information for obj by excluding
the semicolon when you create the object or when you enter the object name at the MATLAB
command line.
Examples
This example illustrates the summary display of a video input object.
vid = videoinput('winvideo')
This example shows the summary information displayed for an array of video input objects.
vid2 = videoinput('winvideo');
[vid vid2]
See Also
videoinput
executeCommand
Execute command on GigE Vision camera
Syntax
executeCommand(g, 'commandname')
Description
executeCommand(g, 'commandname') executes the specified command for the GigE Vision
camera g, where g is the object created using the gigecam function, and 'commandname' is the
name of the command to execute.
Use the commands function to get the list of available commands for your camera.
Examples
Use executeCommand to execute any of the commands found by the commands function, which tells
you what commands are available for your camera to use.
Use the gigecamlist function to ensure that MATLAB is discovering your camera.
gigecamlist
ans =
Use the gigecam function to create the object and connect it to the camera.
g = gigecam
Get the list of supported commands from the camera. You can click Show Commands in the property
list that is displayed when you create the object, or you can use the function:
commands(g)
Execute one of the supported commands. For example:
executeCommand(g, 'Correction_CalibrateGrey');
Input Arguments
commandname — Name of GigE Vision camera command to execute
character vector
Name of command you want to execute on your GigE Vision camera, specified as a character vector.
Use the commands function to get the list of available commands for your camera. Then use
executeCommand to execute any of the available commands.
Example: executeCommand(g, 'AutoFocus')
Data Types: char | string
See Also
commands | gigecam | gigecamlist | snapshot
Introduced in R2014b
flushdata
Remove data from memory buffer used to store acquired image frames
Syntax
flushdata(obj)
flushdata(obj,mode)
Description
flushdata(obj) removes all the data from the memory buffer used to store acquired image frames.
obj can be a single video input object or an array of video input objects.
flushdata(obj,mode) removes all the data from the memory buffer used to store acquired image
frames, where mode can have either of the following values:
Mode values:
'all': Removes all the data from the memory buffer and sets the FramesAvailable property to 0
for the video input object obj. This is the default mode when none is specified, flushdata(obj).
'triggers': Removes data from the memory buffer that was acquired during the oldest trigger
executed. TriggerRepeat must be greater than 0 and FramesPerTrigger must not be set to inf.
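For example, a sketch exercising both modes (the adaptor name and counts are assumptions):

```matlab
vid = videoinput('winvideo');   % adaptor name is an example
vid.FramesPerTrigger = 5;
vid.TriggerRepeat = 2;
start(vid);
wait(vid);                      % wait until the acquisition finishes
flushdata(vid, 'triggers');     % drop frames from the oldest trigger only
flushdata(vid);                 % same as 'all'; FramesAvailable becomes 0
delete(vid);
```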
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
See Also
getdata | imaqhelp | peekdata | propinfo | videoinput
get
Return image acquisition object properties
Syntax
get(obj)
V = get(obj)
V = get(obj,PropertyName)
Description
get(obj) displays all property names and their current values for image acquisition object obj.
V = get(obj) returns a structure, V, in which each field name is the name of a property of obj and
each field contains the value of that property.
V = get(obj,PropertyName) returns the value, V, of the property specified by PropertyName.
If PropertyName is a 1-by-N or N-by-1 cell array of character vectors containing property names, V
is a 1-by-N cell array of values. If obj is a vector of image acquisition objects, V is an M-by-N cell
array of property values where M is equal to the length of obj and N is equal to the number of
properties specified.
Examples
Create video object, then get values of two frame-related properties, then display all properties of the
object:
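A sketch of those three steps (the adaptor name is an assumption):

```matlab
vid = videoinput('winvideo');   % adaptor name is an example
% Get the values of two frame-related properties as a cell array.
v = get(vid, {'FramesPerTrigger', 'FramesAvailable'});
% Display all properties of the object.
get(vid)
delete(vid);
```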
Instead of using get to query individual property values, use dot notation. For example, instead of
this:
get(vid, 'FramesPerTrigger')
use this:
vid.FramesPerTrigger
See Also
set | videoinput
getdata
Acquired image frames to MATLAB workspace
Syntax
data = getdata(obj)
data = getdata(obj,n)
data = getdata(obj,n,type)
data = getdata(obj,n,type,format)
[data,time] = getdata(...)
[data, time, metadata] = getdata(...)
Description
data = getdata(obj) returns data, which contains the number of frames specified in the
FramesPerTrigger property of the video input object obj. obj must be a 1-by-1 video input object.
data is returned to the MATLAB workspace in its native data type using the color space specified by
the ReturnedColorSpace property.
You can use the MATLAB image or imagesc functions to view the returned data. Use imaqmontage
to view multiple frames at once.
data = getdata(obj,n) returns n frames of data associated with the video input object obj.
data = getdata(obj,n,type) returns n frames of data associated with the video input object
obj, where type is one of the character vectors in the following table that specify the data type used
to store the returned data.
If type is not specified, 'native' is used as the default. If there is no MATLAB data type that
matches the object's native data type, getdata chooses a MATLAB data type that preserves
numerical accuracy. For example, the components of 12-bit RGB color data would each be returned as
uint16 data.
data = getdata(obj,n,type,format) returns n frames of data associated with the video input
object obj, where format is one of the character vectors in the following table that specify the
MATLAB format of data.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
[data,time] = getdata(...) returns time, an F-by-1 matrix, where F is the number of frames
returned in data. Each element of time indicates the relative time, in seconds, of the corresponding
frame in data, relative to the first trigger.
time = 0 is defined as the point at which data logging begins. When data logging begins, the
object's Logging property is set to 'On'. time is measured continuously with respect to 0 until the
acquisition stops. When the acquisition stops, the object's Running property is set to 'Off'.
In addition to the fields in the above table, some adaptors may choose to add other adaptor-specific
metadata as well.
getdata is a blocking function that returns execution control to the MATLAB workspace after the
requested number of frames becomes available within the time period specified by the object's
Timeout property. The object's FramesAvailable property is automatically reduced by the number
of frames returned by getdata. If the requested number of frames is greater than the frames to be
acquired, getdata returns an error.
It is possible to issue a Ctrl+C while getdata is blocking. This does not stop the acquisition but does
return control to MATLAB.
Examples
Construct a video input object associated with a Matrox device at ID 1.
obj = videoinput('matrox', 1);
start(obj);
data = getdata(obj);
imaqmontage(data);
delete(obj);
See Also
getsnapshot | imaqhelp | imaqmontage | peekdata | propinfo
getselectedsource
Return currently selected video source object
Syntax
src = getselectedsource(obj)
Description
src = getselectedsource(obj) searches all the video source objects associated with the video
input object obj and returns the video source object, src, that has the Selected property value set
to 'on'.
To select a source for acquisition, use the SelectedSourceName property of the video input object.
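For example (the adaptor and source names are assumptions):

```matlab
vid = videoinput('winvideo', 1);     % adaptor name is an example
vid.SelectedSourceName = 'input1';   % source name is an example
src = getselectedsource(vid);
src.Selected                         % the selected source reports 'on'
delete(vid);
```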
See Also
get | imaqhelp | videoinput
getsnapshot
Immediately return single image frame
Syntax
frame = getsnapshot(obj)
[frame, metadata] = getsnapshot(obj)
Description
frame = getsnapshot(obj) immediately returns one single image frame, frame, from the video
input object obj. The frame of data returned is independent of the video input object
FramesPerTrigger property and has no effect on the value of the FramesAvailable or
FramesAcquired property.
frame is returned to the MATLAB workspace in its native data type using the color space specified by
the ReturnedColorSpace property.
You can use the MATLAB image or imagesc function to view the returned data.
Note If obj is running but not logging, and has been configured with a hardware trigger, a timeout
error will occur.
To interrupt the getsnapshot function and return control to the MATLAB command line, issue the
^C (Ctrl+C) command.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
Create a video input object (the winvideo adaptor is shown as an example).
obj = videoinput('winvideo');
frame = getsnapshot(obj);
image(frame);
delete(obj);
For an example of using getsnapshot, see the Image Acquisition Toolbox example Acquiring a
Single Image in a Loop in the Examples list at the top of the Image Acquisition Toolbox main
Documentation Center page, or open the file demoimaq_GetSnapshot.m in the MATLAB Editor.
See Also
getdata | imaqhelp | peekdata
gigecam
Create gigecam object to acquire images from GigE Vision cameras
Syntax
g = gigecam
g = gigecam('IPAddress')
g = gigecam(devicenumber)
g = gigecam('serialnumber')
Description
g = gigecam creates the gigecam object g and connects to the single GigE Vision camera on your
system. If you have multiple cameras and you use the gigecam function with no input argument, then
it creates the object and connects it to the first camera it finds listed in the output of the
gigecamlist function.
When the gigecam object is created, it connects to the camera and establishes exclusive access. You
can then preview the data and acquire images using the snapshot function.
Examples
Connect to the single GigE Vision camera on your system by calling the gigecam function with no
input arguments. If you have multiple cameras, this creates the object and connects it to the first
camera it finds listed in the output of the gigecamlist function.
Use the gigecamlist function to ensure that MATLAB is discovering your camera.
gigecamlist
ans =
Create an object, g.
g = gigecam
Use the gigecam function with the IP address or serial number of the camera as the input argument
to create the object and connect it to the camera with that address or number.
Use the gigecamlist function to ensure that MATLAB is discovering your cameras.
gigecamlist
ans =
Create an object, g, using the IP address of the camera. You can also create the object in this same
way using the serial number. You use the same syntax, but use a serial number instead of the IP
address, also as a character vector.
g = gigecam('169.254.242.122')
g =
DeviceModelName: 'mvBlueCOUGER-X120aG'
SerialNumber: 'GX000818'
IPAddress: '169.254.242.122'
PixelFormat: 'Mono8'
It creates the object and connects it to the Matrix Vision camera with that IP address.
Use the gigecam function with an index as the input argument to create the object corresponding to
that index and connect it to that camera. The index corresponds to the order of cameras in the table
returned by gigecamlist when you have multiple cameras connected.
Use the gigecamlist function to ensure that MATLAB is discovering your cameras.
gigecamlist
ans =
g = gigecam(2)
g =
DeviceModelName: 'mvBlueCOUGER-X120aG'
SerialNumber: 'GX000818'
IPAddress: '169.254.242.122'
PixelFormat: 'Mono8'
AvailablePixelFormats: {'Mono8' 'Mono12' 'Mono14' 'Mono16' 'Mono12Packed'
'BayerGR8' 'BayerGR10' 'BayerGR12' 'BayerGR16' 'BayerGR12Packed'
'YUV422Packed' 'YUV422_YUYVPacked' 'YUV444Packed'}
Height: 1082
Width: 1312
It creates the object and connects it to the Matrix Vision camera with that index number, in this case,
the second one displayed by gigecamlist. If you only have one camera, you do not need to use the
index.
Input Arguments
IPAddress — IP address of your camera
character vector
IP address of your camera, specified as a character vector. This argument creates a gigecam object g
where IPAddress is a character vector value that identifies a particular camera by its IP address.
When you use the gigecam function with the IP address of the camera as the input argument, it
creates the object and connects it to the camera with that address. You can see the IP address for
your camera in the list returned by the gigecamlist function.
Example: g = gigecam('169.254.192.165')
Data Types: char | string
devicenumber — Device number of your camera
numeric scalar
Device number of your camera, specified as a numeric scalar. This number identifies a particular
camera by its index order. It creates the object corresponding to that index and connects it to that
camera. The index corresponds to the order of cameras in the table returned by gigecamlist when
you have multiple cameras connected.
Example: g = gigecam(2)
Data Types: double
serialnumber — Serial number of your camera
character vector
Serial number of your camera, specified as a character vector. This argument creates a gigecam
object g where serialnumber is a character vector value that identifies a particular camera by its
serial number. When you use the gigecam function with the serial number of the camera as the input
argument, it creates the object and connects it to the camera with that number. You can see the serial
number for your camera in the list returned by the gigecamlist function.
Example: g = gigecam('022600017445')
Data Types: char | string
Tips
• When the gigecam object is created, it connects to the camera and establishes exclusive access.
You can then preview the data and acquire images using the snapshot function.
• You cannot create more than one object connected to the same device, and trying to do that
generates an error.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
See Also
commands | executeCommand | gigecamlist | snapshot
Introduced in R2014b
gigecamlist
List of GigE Vision cameras connected to your system
Syntax
gigecamlist
Description
gigecamlist returns a list of available GigE Vision Compliant cameras connected to your system,
with model, manufacturer, IP address, and serial number. If the camera has a user-defined name, that
name is displayed. If you plug in different cameras during the same MATLAB session, then the
gigecamlist function returns an updated list of cameras.
Examples
The output of gigecamlist shows any GigE Vision cameras connected to your system.
gigecamlist
ans =
See Also
commands | executeCommand | gigecam | snapshot
Introduced in R2014b
imaqfind
Find image acquisition objects
Syntax
imaqfind
out = imaqfind
out = imaqfind(PropertyName, Value, PropertyName2, Value2,...)
out = imaqfind(S)
out = imaqfind(obj, PropertyName, Value, PropertyName2, Value2,...)
Description
imaqfind returns an array containing all the video input objects that exist in memory. If only a single
video input object exists in memory, imaqfind displays a detailed summary of that object.
out = imaqfind returns an array, out, of all the video input objects that exist in memory.
out = imaqfind(PropertyName, Value, PropertyName2, Value2,...) returns a cell array,
out, of image acquisition objects whose property names and property values match those specified.
out = imaqfind(obj, PropertyName, Value, PropertyName2, Value2,...) restricts the
search to the image acquisition objects listed in obj.
out = imaqfind(S) returns a cell array, out, of image acquisition objects whose property values
match those defined in the structure S. The field names of S are image acquisition object property
names and the field values are the requested property values.
Note When searching for properties with specific values, imaqfind performs case-sensitive
searches. For example, if the value of an object's Name property is 'MyObject', imaqfind does not
find a match if you specify 'myobject'. Note, however, that searches for properties that have an
enumerated list of possible values are not case sensitive. For example, imaqfind will find an object
with a Running property value of 'Off' or 'off'. Use the get function to determine the exact
spelling of a property value.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
To illustrate various imaqfind syntaxes, first create two video input objects.
obj1 = videoinput('matrox',1,'M_RS170','Tag','FrameGrabber');
obj2 = videoinput('winvideo',1,'RGB24_320x240','Tag','Webcam');
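With those two objects in memory, the following calls illustrate some of the syntaxes (the property values come from the constructors above):

```matlab
out1 = imaqfind                    % all video input objects in memory
out2 = imaqfind('Tag', 'Webcam');  % match a property name/value pair
s.Type = 'videoinput';             % match the fields of a structure
out3 = imaqfind(s);
```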
See Also
get | videoinput
imaqhelp
Image acquisition object function and property help
Syntax
imaqhelp
imaqhelp(Name)
imaqhelp(obj)
imaqhelp(obj,Name)
out = imaqhelp(...)
Description
imaqhelp provides a complete listing of image acquisition object functions.
imaqhelp(Name) provides online help for the function or property specified by the character vector
Name.
imaqhelp(obj) displays a listing of functions and properties for the image acquisition object obj
along with the online help for the object's constructor. obj must be a 1-by-1 image acquisition object.
imaqhelp(obj,Name) displays the help for the function or property specified by the character
vector Name for the image acquisition object obj.
When property help is displayed, the names in the “See Also” section that contain all uppercase
letters are function names. The names that contain a mixture of upper- and lowercase letters are
property names.
When function help is displayed, the “See Also” section contains only function names.
Examples
Getting general function and property help.
imaqhelp('videoinput')
out = imaqhelp('videoinput');
imaqhelp getsnapshot
imaqhelp LoggingMode
See Also
propinfo
imaqhwinfo
Information about available image acquisition hardware
Syntax
out = imaqhwinfo
out = imaqhwinfo(adaptorname)
out = imaqhwinfo(adaptorname,field)
out = imaqhwinfo(adaptorname, deviceID)
out = imaqhwinfo(obj)
out = imaqhwinfo(obj,field)
Description
out = imaqhwinfo returns out, a structure that contains information about the image acquisition
adaptors available on the system. An adaptor is the interface between MATLAB and the image
acquisition devices connected to the system. The adaptor's main purpose is to pass information
between MATLAB and an image acquisition device via its driver.
out = imaqhwinfo(adaptorname) returns out, a structure that contains information about the
adaptor specified by the character vector adaptorname. The information returned includes adaptor
version and available hardware for the specified adaptor. To get a list of valid adaptor names, use the
imaqhwinfo syntax.
out = imaqhwinfo(adaptorname,field) returns the value of the field specified by the character
vector field for the adaptor specified by the character vector adaptorname. The field argument
can be a single character vector or a cell array of character vectors. If field is a cell array, out is a 1-by-n
cell array where n is the length of field. To get a list of valid field names, use the
imaqhwinfo('adaptorname') syntax.
out = imaqhwinfo(adaptorname, deviceID) returns information about the device specified by
the numeric device ID deviceID for the adaptor specified by the character vector adaptorname.
out = imaqhwinfo(obj) returns out, a structure that contains information about the specified
image acquisition object obj. The information returned includes the adaptor name, device name,
video resolution, native data type, and device driver name and version. If obj is an array of device
objects, then out is a 1-by-n cell array of structures where n is the length of obj.
out = imaqhwinfo(obj,field) returns the information in the field specified by field for the
device object obj. field can be a single field name or a cell array of field names. out is an m-by-n
cell array where m is the length of obj and n is the length of field. You can return a list of valid field
names with the imaqhwinfo(obj) syntax.
Note After you call imaqhwinfo once, hardware information is cached by the toolbox. To force the
toolbox to search for new hardware that might have been installed while MATLAB was running, use
imaqreset.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
This example returns information about all the adaptors available on the system.
imaqhwinfo
ans =
This example returns information about all the devices accessible through a particular adaptor.
info = imaqhwinfo('winvideo')
info =
This example returns information about a specific device accessible through a particular adaptor. You
identify the device by its device ID.
dev_info = imaqhwinfo('winvideo', 1)
dev_info =
DefaultFormat: 'RGB555_128x96'
DeviceFileSupported: 0
DeviceName: 'IBM PC Camera'
DeviceID: 1
VideoInputConstructor: 'videoinput('winvideo', 1)'
VideoDeviceConstructor: 'imaq.VideoDevice('winvideo', 1)'
SupportedFormats: {1x34 cell}
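The field syntax can be sketched as follows (a sketch; it assumes the winvideo adaptor is installed, and the returned values vary by system):

```matlab
% Query one field of the adaptor information.
name = imaqhwinfo('winvideo', 'AdaptorName');

% Query two fields at once; the result is a 1-by-2 cell array.
out = imaqhwinfo('winvideo', {'AdaptorName', 'DeviceIDs'});
```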
This example gets information about the device associated with a particular video input object.
obj = videoinput('winvideo', 1);
obj_info = imaqhwinfo(obj)
obj_info =
AdaptorName: 'winvideo'
DeviceName: 'IBM PC Camera'
MaxHeight: 96
MaxWidth: 128
NativeDataType: 'uint8'
TotalSources: 1
VendorDriverDescription: 'Windows WDM Compatible Driver'
VendorDriverVersion: 'DirectX 9.0'
This example returns the value of a particular field in the device information associated with a
particular video input object.
field_info = imaqhwinfo(vid,'adaptorname')
field_info =
winvideo
See Also
imaqhelp | imaqreset
imaqmontage
Sequence of image frames as montage
Syntax
imaqmontage(frames)
imaqmontage(obj)
imaqmontage(...,CLIM)
imaqmontage(..., 'CLim', CLIM, 'Parent', PARENT)
h = imaqmontage(...)
Description
imaqmontage(frames) displays a montage of image frames in a MATLAB figure window using the
imagesc function.
imaqmontage(obj) calls the getsnapshot function on video input object obj and displays a single
image frame in a MATLAB figure window using the imagesc function. obj must be a 1-by-1 video
input object.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
Construct a video input object associated with a Matrox device at ID 1.
obj = videoinput('matrox', 1);
a = axes;
delete(obj);
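The example above omits the acquisition step itself; a fuller sketch (the adaptor name, device ID, and frame count are assumptions) acquires a set of frames and displays them as a montage:

```matlab
% Acquire 16 frames and view them as a montage (sketch; assumes a
% Matrox device is available at ID 1).
obj = videoinput('matrox', 1);
obj.FramesPerTrigger = 16;
start(obj)
wait(obj, 10);                  % wait up to 10 seconds for acquisition
frames = getdata(obj);          % retrieve the logged frames
a = axes;                       % axes to parent the montage
imaqmontage(frames, 'Parent', a);
delete(obj)
```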
See Also
getdata | getsnapshot | imaqhelp | peekdata
imaqregister
Register third-party custom adaptor
Syntax
adaptors = imaqregister
adaptors = imaqregister(adaptorpath)
adaptors = imaqregister(adaptorpath,action)
Description
adaptors = imaqregister returns a list of registered third-party adaptors with their full paths. If
there are no registered adaptors, imaqregister returns an empty cell array.
Note The imaqhwinfo and videoinput functions use the adaptor base name, not the full path. For
example, if the adaptor full path is "c:\adaptor.dll" (Windows), "/local/adaptors/
adaptor.so" (Linux), or "/local/adaptors/adaptor.dylib" (macOS), the adaptor base name
is "adaptor".
Registering an adaptor informs Image Acquisition Toolbox of the location of a third-party adaptor
library. If you query the system for available image acquisition hardware by using imaqhwinfo, the
toolbox makes available any previously registered adaptor libraries. The imaqregister function
saves the name of the registered adaptor in the MATLAB preferences directory so that the location
persists across MATLAB sessions. Because imaqhwinfo caches the list of available adaptors, you
might need to call imaqreset after calling imaqregister to make the newly registered adaptor
available.
Note The adaptor shared library (a DLL on Windows) is not the same as the driver shared library
supplied by a hardware vendor (also a DLL on Windows). The adaptor shared library is specific to
Image Acquisition Toolbox and is specified as such by the hardware vendor.
Examples
Register a third-party adaptor in Image Acquisition Toolbox on a Windows system and preview its
data. This example uses the demo adaptor included with Image Acquisition Toolbox, but you can
follow these steps to register any custom third-party adaptor. To view the demo adaptor source files,
navigate to the directory on your system. The exact file path might differ on your computer.
cd('C:\Program Files\MATLAB\R2019b\toolbox\imaq\imaqadaptors\kit\demo\')
Use imaqregister to inform Image Acquisition Toolbox of your third-party adaptor. For this
example, consider a prebuilt version of the demo adaptor for Windows in the \toolbox\imaq
\imaqadaptors\kit\demo\win64 folder.
imaqregister('C:\Program Files\MATLAB\R2019b\toolbox\imaq\imaqadaptors\kit\demo\win64\mwdemoimaq.
The toolbox caches adaptor information, so you must reload the adaptor libraries registered with the
toolbox for your third-party adaptor to appear.
imaqreset
View a list of installed adaptors in the InstalledAdaptors field. The newly registered third-party
adaptor appears as mwdemoimaq.
imaqhwinfo
vid = videoinput('mwdemoimaq')
preview(vid)
After you finish working with the preview, close the window.
closepreview(vid)
Input Arguments
adaptorpath — Adaptor path
character vector | string array
Adaptor path, specified as a character vector or string array. You must specify the full absolute path
of the adaptor library file.
Example: imaqregister('c:\temp\thirdpartyadaptor.dll') registers the adaptor
thirdpartyadaptor.
Data Types: char | string
Tips
• Follow these suggestions when you deploy a custom adaptor to a standalone application using
MATLAB Compiler.
• Call imaqregister in the MATLAB code that you are deploying. This ensures that the
deployed application registers the custom adaptor library for the user running the application.
For more information, see “Creating Custom Adaptors”.
• Package the custom adaptor library with the standalone application. To do this, add the
adaptor DLL file in the Files installed for your end user section of the Application
Compiler app. For more information about creating a standalone application, see “Create
Standalone Application from MATLAB” (MATLAB Compiler).
• The first time you run a deployed application that calls imaqregister, you might need to
execute the application in Run as administrator mode.
See Also
imaqhwinfo
Topics
“Registering a Third-Party Adaptor in the Image Acquisition Tool” on page 3-34
“Looking at the Demo Adaptor”
imaqreset
Disconnect and delete all image acquisition objects
Syntax
imaqreset
Description
imaqreset deletes any image acquisition objects that exist in memory and unloads all adaptors
loaded by the toolbox. As a result, the image acquisition hardware is reset.
imaqreset is the image acquisition command that returns MATLAB to the known state of having no
image acquisition objects and no loaded image acquisition adaptors.
You can use imaqreset to force the toolbox to search for new hardware that might have been
installed while MATLAB was running.
Note that imaqreset should not be called from any of the callbacks of a videoinput object, such as
the StartFcn or FramesAcquiredFcn.
See Also
delete | videoinput
imaq.VideoDevice
Acquire one frame at a time from video device
Syntax
obj = imaq.VideoDevice
obj = imaq.VideoDevice(adaptorname)
obj = imaq.VideoDevice(adaptorname, deviceid)
obj = imaq.VideoDevice(adaptorname, deviceid, format)
obj = imaq.VideoDevice(adaptorname, deviceid, format, P1, V1, ...)
frame = step(obj)
[frame metadata] = step(obj)
Description
The VideoDevice System object allows single-frame image acquisition and code generation from
MATLAB. You use the imaq.VideoDevice function to create the System object. It supports the same
adaptors and hardware that the videoinput object supports; however, it has different functions and
properties associated with it. For example, the System object uses the step function to acquire single
frames.
obj = imaq.VideoDevice creates a VideoDevice System object, obj, that acquires images from a
specified image acquisition device. When you specify no parameters, by default, it selects the first
available device for the first adaptor returned by imaqhwinfo.
Specifying properties at the time of object creation is optional. They can also be specified after the
object is created. See the table below for a list of applicable properties.
frame = step(obj) acquires a single frame from the VideoDevice System object, obj.
[frame metadata] = step(obj) acquires a single image frame from the VideoDevice System
object, obj, plus metadata from the Kinect for Windows Depth sensor. You can return Kinect for
Windows skeleton data using the VideoDevice System object on the Kinect Depth sensor. For
information on how to do this, see “Kinect for Windows Metadata” on page 14-5.
Properties
You can specify properties at the time of object creation, or they can be specified and changed after
the object is created. Properties that can be used with the VideoDevice System object include:
Property Description
Device Device from which to acquire images.
VideoFormat Specify the video format to use while acquiring the frame. The
default value of VideoFormat is the default format returned by
imaqhwinfo for the selected device. To specify a video format
using a device file, set the VideoFormat property to 'From
device file'. This option exists only if your device supports
device configuration files.
DeviceFile Name of file specifying video format. This property is only
visible when VideoFormat is set to 'From device file'.
DeviceProperties Object containing properties specific to the image acquisition
device.
ROI Region-of-interest for acquisition. This is set to the default ROI
value for the specified device, which is the maximum resolution
possible for the specified format. You can change the value to
change the size of the captured image. The format is 1-based,
that is, it is specified in pixels in a 1-by-4 element vector [x y
width height].
Note that this differs from the videoinput object, the Image
Acquisition Tool, and the From Video Device block, all of which
are 0-based.
HardwareTriggering Turn hardware triggering on/off. Set this property to 'on' to
enable hardware triggering to acquire images. The property is
visible only when the device supports hardware triggering.
TriggerConfiguration Specifies the trigger source and trigger condition before
acquisition. The triggering condition must be met via the
trigger source before a frame is acquired. This property is
visible only when HardwareTriggering is set to 'on'.
ReturnedColorSpace Specify the color space of the returned image. The default value
of the property depends on the device and the video format
selected. Possible values are {rgb|grayscale|YCbCr} when
the default returned color space for the device is not
grayscale. Possible values are {rgb|grayscale|YCbCr|
bayer} when the default returned color space for the device is
grayscale.
BayerSensorAlignment Character vector specifying the 2-by-2 Bayer sensor alignment
used for demosaicing the pattern returned by the hardware. The default value of this
property is 'grbg'. Possible values are {grbg|gbrg|rggb|
bggr}. Visible only if ReturnedColorSpace is set to 'bayer'.
ReturnedDataType The returned data type of the acquired frame. The default
ReturnedDataType is single.
ReadAllFrames Specify whether to read one image frame or all available
frames. Set to 'on' to capture all available image frames.
When set to the default of 'off', the system object takes a
snapshot of one frame, which is the equivalent of the
getsnapshot function in the toolbox. When the option is on, all
available image frames are captured, which is the equivalent of
the getdata function in the toolbox.
The setting of properties for the System object supports tab completion for enumerated properties
while coding in MATLAB. Using the tab completion is an easy way to see available property values.
After you type the property name, type a comma, then a space, then the first quote mark for the
value, then hit tab to see the possible values.
You can also use the set function with the object name and property name to get a list of available
values for that property. For example:
set(obj, 'ReturnedColorSpace')
gets the list of available color space settings for the VideoDevice System object, obj.
Note that after you call step, you must release the object using the release function before you
can change a property or set a new one.
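The release-before-reconfigure requirement can be sketched as follows (the adaptor name and device ID are assumptions):

```matlab
% Acquire a frame, then release the hardware before changing a property
% (sketch; assumes a 'winvideo' device at ID 1).
vidobj = imaq.VideoDevice('winvideo', 1);
frame = step(vidobj);               % first call locks the hardware
release(vidobj);                    % unlock so properties can change
vidobj.ReturnedColorSpace = 'grayscale';
frame = step(vidobj);
release(vidobj);
```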
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Functions
You can use these functions with the VideoDevice System object.
Function Purpose
step Acquire a single frame from the image acquisition device.
frame = step(obj);
Note that the first time you call step, it acquires exclusive use of the
hardware and will start streaming data.
release Release VideoDevice resources and allow property value changes.
release(obj)
isLocked Indicate whether the object has exclusive use of the hardware.
L = isLocked(obj)
preview Preview live video data from the object.
preview(obj)
creates a Video Preview window that displays live video data for the
VideoDevice System object, obj. The Video Preview window displays the
video data at 100% magnification. The size of the preview image is
determined by the value of the VideoDevice System object ROI property. If
not specified, it uses the default resolution for the device.
closepreview Close live image preview window.
closepreview(obj)
closes the live preview window for VideoDevice System object, obj.
imaqhwinfo Returns information about the object.
imaqhwinfo(obj)
Examples
Construct a VideoDevice System object associated with the winvideo adaptor with device ID of 1.
vidobj = imaq.VideoDevice('winvideo', 1);
Set an object-level property, such as ReturnedColorSpace. The syntax for an object-level property
uses the object name, property name, and property value.
vidobj.ReturnedColorSpace = 'grayscale';
Set a device-specific property, such as Brightness. The syntax for a device-specific property uses
the DeviceProperties object, the property name, and property value.
vidobj.DeviceProperties.Brightness = 150;
preview(vidobj)
frame = step(vidobj);
imshow(frame)
release(vidobj);
clear vidobj;
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Introduced in R2012a
islogging
Determine whether video input object is logging
Syntax
bool = islogging(obj)
Description
bool = islogging(obj) returns true if the video input object obj is logging data, otherwise
false. A video input object is logging if the value of its Logging property is set to 'on'.
If obj is an array of video input objects, bool is a logical array where each element in bool
represents the corresponding element in obj. If an object in obj is logging data, islogging sets the
corresponding element in bool to true, otherwise false. If any of the video input objects in obj is
invalid, islogging returns an error.
Examples
Create a video input object.
vid = videoinput('winvideo');
To put the video input object in a logging state, start acquiring data. The example acquires 50 frames
to increase the amount of time that the object remains in logging state.
vid.FramesPerTrigger = 50
start(vid)
When the call to the start function returns, and the object is still acquiring data, use islogging to
check the state of the object.
bool = islogging(vid)
bool =
1
Create a second video input object.
vid2 = videoinput('winvideo');
Start one of the video input objects again, such as vid, and use islogging to determine which of
the two objects is logging.
start(vid)
bool = islogging([vid vid2])
bool =
1 0
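One common use of islogging is to wait until logging finishes before retrieving data; a sketch, continuing from the object started above:

```matlab
% Poll until the object stops logging, then retrieve all logged frames.
while islogging(vid)
    pause(0.1);
end
data = getdata(vid, vid.FramesAvailable);
```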
See Also
isrunning | isvalid | videoinput
isrunning
Determine whether video input object is running
Syntax
bool = isrunning(obj)
Description
bool = isrunning(obj) returns true if the video input object obj is running, otherwise false. A
video input object is running if the value of its Running property is set to 'on'.
If obj is an array of video input objects, bool is a logical array where each element in bool
represents the corresponding element in obj. If an object in obj is running, the isrunning function
sets the corresponding element in bool to true, otherwise false. If any of the video input objects in
obj are invalid, isrunning returns an error.
Examples
Create a video input object, configure a manual trigger, and then start the object. This puts the object
in running state.
vid = videoinput('winvideo');
triggerconfig(vid,'manual')
start(vid)
bool = isrunning(vid)
bool =
1
Create a second video input object.
vid2 = videoinput('winvideo');
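With the second object created, the array syntax described above can be sketched as:

```matlab
% vid is running (manual trigger), vid2 has not been started.
bool = isrunning([vid vid2])   % expected: logical [1 0] while vid runs
stop(vid)
```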
See Also
Running | islogging | isvalid | start | stop | videoinput
isvalid
Determine whether image acquisition object is associated with image acquisition device
Syntax
bool = isvalid(obj)
Description
bool = isvalid(obj) returns true if the video input object obj is valid, otherwise false. An
object is an invalid image acquisition object if it is no longer associated with any hardware; that is,
the object was deleted using the delete function. If this is the case, obj should be cleared from the
workspace.
If obj is an array of video input objects, bool is a logical array where each element in bool
represents the corresponding element in obj. If an object in obj is valid, the isvalid function sets
the corresponding element in bool to true, otherwise false.
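A minimal sketch of the behavior described above (assumes a winvideo device is available):

```matlab
vid = videoinput('winvideo');
delete(vid)                 % object is no longer associated with hardware
bool = isvalid(vid)         % expected: logical 0 (false)
clear vid                   % remove the invalid object from the workspace
```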
See Also
delete | imaqfind | videoinput
load
Load image acquisition object into MATLAB workspace
Syntax
load filename
load filename obj1 obj2 ...
S = load(filename,obj1,obj2,...)
Description
load filename returns all variables from the MAT-file filename to the MATLAB workspace.
load filename obj1 obj2 ... returns the specified image acquisition objects (obj1, obj2, etc.)
from the MAT-file specified by filename to the MATLAB workspace.
Values for read-only properties are restored to their default values when loaded. For example, the
Running property is restored to 'off'. Use propinfo to determine if a property is read only.
Examples
obj = videoinput('winvideo', 1);
obj.SelectedSourceName = 'input1'
save fname obj
load fname
load('fname', 'obj');
See Also
imaqhelp | propinfo | save
matroxcam
Create matroxcam object to acquire images from Matrox frame grabbers
Syntax
m = matroxcam(devicenumber, 'DCFfilename')
Description
m = matroxcam(devicenumber, 'DCFfilename') creates a matroxcam object m, where
devicenumber is a numeric scalar value that identifies a particular device by its index number and
DCFfilename is the name and fully qualified path of your Matrox DCF file, and connects it to that
frame grabber.
Examples
Use the matroxcam function with an index as the first input argument to create the object
corresponding to that index and connect it to that frame grabber. The index corresponds to the order
of boards in the cell array returned by matroxlist when you have multiple frame grabbers
connected. If you only have one frame grabber, you must use a 1 as the input argument. The second
argument must be the name and path of your DCF file, entered as a character vector.
Use the matroxlist function to ensure that MATLAB is discovering your frame grabbers.
matroxlist
ans =
Create an object, m, using the index number. In this example, for the Solios XCL at digitizer 1, use a 2
as the index number, since it is the second device on the list. The second argument must be the name
of your DCF file, entered as a character vector. It must contain the fully qualified path to the file as
well. In this example, the DCF file is named mycam.dcf.
m = matroxcam(2, 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf')
m =
Input Arguments
devicenumber — Device number of your frame grabber
numeric scalar
Device number of your frame grabber, specified as a numeric scalar. This number identifies a
particular board by its index order. It creates the object corresponding to that index and connects it
to that frame grabber. The index corresponds to the order of frame grabbers in the table returned by
matroxlist when you have multiple boards connected.
If you only have one frame grabber, you must use a 1 as the input argument.
Example: m = matroxcam(2, 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf')
Data Types: double
DCFfilename — Name of your DCF file
character vector | string
Name of your DCF file, specified as a character vector. The Digitizer Configuration File (DCF) is used
to set acquisition properties and is configured using the Matrox Intellicam software. The DCF file
contains properties relating to exposure signal, grab mode, sync signal, camera, video signal, video
timing, and pixel clock. Once you have configured these properties in your DCF file, you create the
matroxcam object using that file as an input argument. It must contain the fully qualified path to the
file as well. In this example, the DCF file is named mycam.dcf.
Example: m = matroxcam(2, 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf')
Data Types: char | string
See Also
closepreview | matroxlist | preview | snapshot
Topics
“Connect to Matrox Frame Grabbers” on page 13-3
“Set Properties for Matrox Acquisition” on page 13-4
“Acquire Images from Matrox Frame Grabbers” on page 13-6
“Matrox Acquisition – matroxcam Object vs videoinput Object” on page 13-2
Introduced in R2014b
matroxlist
List of Matrox frame grabbers connected to your system
Syntax
matroxlist
Description
matroxlist returns a list of available Matrox frame grabbers connected to your system, with model
name and digitizer number.
Examples
The output of matroxlist shows any Matrox frame grabbers connected to your system.
matroxlist
ans =
See Also
closepreview | matroxcam | preview | snapshot
Topics
“Connect to Matrox Frame Grabbers” on page 13-3
“Set Properties for Matrox Acquisition” on page 13-4
“Acquire Images from Matrox Frame Grabbers” on page 13-6
“Matrox Acquisition – matroxcam Object vs videoinput Object” on page 13-2
Introduced in R2014b
obj2mfile
Convert video input objects to MATLAB code
Syntax
obj2mfile(obj,filename)
obj2mfile(obj,filename,syntax)
obj2mfile(obj,filename,syntax,mode)
obj2mfile(obj,filename,syntax,mode,reuse)
Description
obj2mfile(obj,filename) converts the video input object obj into an M-file with the name
specified by filename. The M-file contains the MATLAB code required to create the object and set
its properties. obj can be a single video input object or an array of objects.
The obj2mfile function simplifies the process of restoring an object with specific property settings
and can be used to create video input objects. obj2mfile also creates and configures the video
source object associated with the video input object.
If filename does not specify an extension or if it has an extension other than the MATLAB M-file
extension (.m), obj2mfile appends .m to the end of filename. To recreate obj, execute the M-file
by calling filename.
If the UserData property of the object is set, or if any of the callback properties is set to a cell array
or to a function handle, obj2mfile writes the data stored in those properties to a MAT-file.
obj2mfile gives the MAT-file the same name as the M-file, but uses the .mat filename extension.
obj2mfile creates the MAT-file in the same directory as the M-file.
Note obj2mfile does not restore the values of read-only properties. For example, if an object is
saved with a Logging property set to 'on', the object is recreated with a Logging property set to
'off' (the default value). Use the propinfo function to determine if a property is read only.
Note that obj2mfile(obj,filename,reuse) is a valid syntax. If the syntax and mode arguments
are not specified, obj2mfile uses their default values.
Examples
Create a video input object and configure its acquisition properties.
vidobj = videoinput('winvideo', 1);
vidobj.FramesPerTrigger = 100;
vidobj.FrameGrabInterval = 2;
vidobj.Tag = 'CAM1';
Retrieve the selected video source object associated with the video input object.
src = getselectedsource(vidobj);
src.Contrast = 85;
src.Saturation = 125;
Save the video input object and its configured source object to an M-file named myvidobj.m, then
delete the object and clear it from the workspace.
obj2mfile(vidobj, 'myvidobj.m');
delete(vidobj);
clear vidobj;
Execute the M-file to recreate the object. Note that obj2mfile creates and configures the associated
video source object as well.
vidObj = myvidobj;
See Also
getselectedsource | imaqhelp | propinfo | set | videoinput
peekdata
Most recently acquired image data
Syntax
data = peekdata(obj,frames)
Description
data = peekdata(obj,frames) returns data containing the latest number of frames specified by
frames. If frames is greater than the number of frames currently acquired, all available frames are
returned with a warning message stating that the requested number of frames was not available. obj
must be a 1-by-1 video input object.
data is returned to the MATLAB workspace in its native data type using the color space specified by
the ReturnedColorSpace property.
You can use the MATLAB image or imagesc functions to view the returned data. Use imaqmontage
to view multiple frames at once.
peekdata is a nonblocking function that immediately returns image frames and execution control to
the MATLAB workspace. Not all requested data might be returned.
Note peekdata provides a look at the data; it does not remove data from the memory buffer. The
object's FramesAvailable property value is not affected by the number of frames returned by
peekdata.
The behavior of peekdata depends on the settings of the Running and the Logging properties.
The number of frames available to peekdata is determined by recalling the last frame returned by a
previous peekdata call, and the number of frames that were acquired since then.
peekdata can be used only after the start command is issued and while the object is running.
peekdata can also be called once after obj has stopped running.
Note The peekdata function does not return any data while running if in disk logging mode.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
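A minimal usage sketch (the adaptor name and timings are assumptions; the object must be started before peekdata is called):

```matlab
% Look at the most recent frame without removing it from the buffer.
vid = videoinput('winvideo', 1);
vid.FramesPerTrigger = Inf;    % acquire continuously into memory
start(vid)
pause(2);                      % let some frames accumulate
data = peekdata(vid, 1);       % nonblocking peek at the latest frame
image(data)
stop(vid)
delete(vid)
```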
See Also
getdata | getsnapshot | imaqhelp | imaqmontage | propinfo | start
preview
Preview of live video data
Syntax
preview(obj)
preview(obj,himage)
himage = preview(...)
Description
preview(obj) creates a Video Preview window that displays live video data for video input object
obj. The window also displays the timestamp and video resolution of each frame, the current frame
rate, and the current status of obj. The Video Preview window displays the video data at 100%
magnification. The size of the preview image is determined by the value of the video input object
ROIPosition property.
The Video Preview window remains active until it is either stopped using stoppreview or closed
using closepreview. If you delete the object, by calling delete(obj), the Video Preview window
stops previewing and closes automatically.
preview(obj,himage) displays live video data for video input object obj in the image object
specified by the handle himage. preview scales the image data to fill the entire area of the image
object but does not modify the values of any image object properties. Use this syntax to preview video
data in a custom GUI of your own design (see Examples).
himage = preview(...) returns himage, a handle to the image object containing the previewed
data. To obtain a handle to the figure window containing the image object, use the ancestor
function. For more information about using image objects, see image. See the Custom Update
Function section for more information about the image object returned.
Notes
The behavior of the Video Preview window depends on the video input object's current state and
trigger configuration.
Note The Image Acquisition Toolbox Preview window and the Preview window that is built into the
Image Acquisition Tool support the display of up to 16-bit image data. The Preview window was
designed to only show 8-bit data, but many cameras return 10-, 12-, 14-, or 16-bit data. The Preview
window display supports these higher bit-depth cameras. However, higher bit-depth data is scaled to 8 bits
for the purpose of displaying previewed data. To capture the image data in the Preview window in its
full bit depth for grayscale images, set the PreviewFullBitDepth property to 'on'.
The 'UpdatePreviewWindowFcn' will not necessarily be called for every frame that is acquired. If
a new frame is acquired and the 'UpdatePreviewWindowFcn' for the previous frame has not yet
finished executing, no update will be generated for the new frame. If you need to execute a function
for every acquired frame, use the FramesAcquiredFcn instead.
You can use the UpdatePreviewWindowFcn callback to define custom processing of the previewed image
data. When preview invokes the function handle you specify, it passes three arguments to your
function: the video input object, an event structure containing the fields below, and a handle to the
image object being updated:
Field Description
Data Current image frame specified as an H-by-W-by-B matrix where H and W are
the image height and width, respectively, as specified in the ROIPosition
property, and B is the number of color bands, as specified in the
NumberOfBands property.
Resolution Character vector specifying current image width and height, as defined by the
ROIPosition property.
Status Character vector describing the current acquisition status of the video input
object.
Timestamp Character vector specifying the timestamp associated with the current image
frame.
FrameRate Character vector specifying the current frame rate of the video input object in
frames per second.
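A custom update callback built on these fields might be sketched as follows (the function name and the title annotation are illustrative):

```matlab
function mypreview_fcn(obj, event, himage)
% Update the image data and annotate the axes title with the timestamp.
himage.CData = event.Data;
himage.Parent.Title.String = event.Timestamp;
end

% Attach the callback to the image object before previewing:
%   setappdata(hImage, 'UpdatePreviewWindowFcn', @mypreview_fcn);
%   preview(obj, hImage);
```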
Examples
Create a customized GUI.
vidRes = obj.VideoResolution;
nBands = obj.NumberOfBands;
hImage = image( zeros(vidRes(2), vidRes(1), nBands) );
preview(obj, hImage);
For more information on customized GUIs, see “Previewing Data in Custom GUIs” on page 2-9.
See Also
ancestor | closepreview | image | imaqhelp | stoppreview
propinfo
Property characteristics for image acquisition objects
Syntax
out = propinfo(obj)
out = propinfo(obj,PropertyName)
Description
out = propinfo(obj) returns the structure out whose field names are the names of all the
properties supported by obj. obj must be a 1-by-1 image acquisition object. The value of each field is
a structure describing the characteristics of that property, such as its data type, constraints, default
value, and whether it is read only.
out = propinfo(obj,PropertyName) returns the structure out for the property specified by
PropertyName. If PropertyName is a cell array of character vectors, propinfo returns a structure
for each property, stored in a cell array.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
Create the video input object vid.
vid = videoinput('winvideo',1);
out = propinfo(vid);
out1 = propinfo(vid,'LoggingMode');
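The structure returned for each property records its characteristics; for example (a sketch — the ReadOnly and DefaultValue field names follow the propinfo output structure, and the exact field set can vary with the toolbox version):

out1.ReadOnly        % indicates when the property is read only, for example 'whileRunning'
out1.DefaultValue    % the property's default value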
See Also
imaqhelp
save
Save image acquisition objects to MAT-file
Syntax
save filename
save filename obj1 obj2 ...
save(filename,obj1,obj2,...)
Description
save filename saves all variables in the MATLAB workspace to the MAT-file filename. If filename does not include a file extension, save appends the .mat extension to the file name.
save filename obj1 obj2 ... saves the specified image acquisition objects (obj1, obj2, etc.)
to the MAT-file filename.
save(filename,obj1,obj2,...) is the functional form of the command, where the file name and
image acquisition objects must be specified as character vectors. If no objects are specified, then all
variables existing in the MATLAB workspace are saved.
Note that any data associated with the image acquisition object is not stored in the MAT-file. To save
the data, bring it into the MATLAB workspace (using the getdata function), and then save the
variable to the MAT-file.
To return variables from the MAT-file to the MATLAB workspace, use the load command. Values for
read-only properties are restored to their default values upon loading. For example, the Running
property is restored to 'off'. Use the propinfo function to determine if a property is read only.
Examples
Create a video input object and select a source.
obj = videoinput('winvideo', 1);
obj.SelectedSourceName = 'input1'
Save the object to a MAT-file.
save fname obj
Configure a callback and save the object again, this time using the functional form.
obj.TriggerFcn = {'mycallback', 5};
save('fname1', 'obj')
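Because acquired frames are not stored in the MAT-file, bring the data into the workspace first and save it alongside the object, as described above (a sketch; the file name is illustrative):

data = getdata(obj);
save('fname_with_data', 'obj', 'data')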
See Also
imaqhelp | load | propinfo
set
Configure or display image acquisition object properties
Syntax
set(obj)
prop_struct = set(obj)
set(obj,PropertyName)
prop_cell = set(obj,PropertyName)
set(obj,PropertyName,PropertyValue,...)
set(obj,S)
set(obj,PN,PV)
Description
set(obj) displays property names and any enumerated values for all configurable properties of
image acquisition object obj. obj must be a single image acquisition object.
prop_struct = set(obj) returns the property names and any enumerated values for all
configurable properties of image acquisition object obj. obj must be a single image acquisition
object. The return value prop_struct is a structure whose field names are the property names of
obj, and whose values are cell arrays of possible property values or empty cell arrays if the property
does not have a finite set of possible character vector values.
set(obj,PropertyName) displays the possible values for the specified property, PropertyName, of
image acquisition object obj. obj must be a single image acquisition object. Use the set(obj)
syntax to get a list of all the properties for a particular image acquisition object that can be set.
prop_cell = set(obj,PropertyName) returns the possible values for the specified property,
PropertyName, of image acquisition object obj. obj must be a single image acquisition object. The
returned array prop_cell is a cell array of possible values or an empty cell array if the property
does not have a finite set of possible character vector values.
set(obj,S) configures the properties of obj with the values specified in S, where S is a structure
whose field names are object property names.
set(obj,PN,PV) configures the properties specified in the cell array of character vectors, PN, to the
corresponding values in the cell array PV, for the image acquisition object obj. PN must be a vector.
If obj is an array of image acquisition objects, PV can be an M-by-N cell array, where M is equal to
the length of the image acquisition object array and N is equal to the length of PN. In this case, each
image acquisition object is updated with a different set of values for the list of property names
contained in PN.
Note Parameter/value character vector pairs, structures, and parameter/value cell array pairs can be
used in the same call to set.
Examples
These examples illustrate the various ways to use the set function to set the values of image
acquisition object properties.
Instead of using set to configure individual property values, you should use dot notation. For example, instead of set(vid, 'FramesPerTrigger', 100), use this syntax:
vid.FramesPerTrigger = 100;
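The other calling forms described above can be sketched as follows (the property values are illustrative):

% Parameter/value cell array pair
set(vid, {'FramesPerTrigger', 'LoggingMode'}, {100, 'disk&memory'});
% Structure form
s.TriggerRepeat = 2;
set(vid, s);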
See Also
get | imaqfind | videoinput
snapshot
Acquire single image frame from GigE Vision camera
Syntax
img = snapshot(g);
[img, ts] = snapshot(g);
Description
img = snapshot(g); acquires the current frame as a single image from the GigE Vision camera g
and assigns it to the variable img. If you call snapshot in a loop, then it returns a new frame each
time. The returned image is based on the Pixel Format of your camera. snapshot uses the camera’s
default resolution or another resolution that you specify using the Height and Width properties, if
available.
Note The snapshot function is for use only with the gigecam object. To acquire images using the
videoinput object, use the getsnapshot or getdata functions.
[img, ts] = snapshot(g); acquires the current frame as a single image from the GigE Vision
camera g and assigns it to the variable img, and assigns the timestamp to the variable ts.
Examples
Use the snapshot function to acquire one image frame from a GigE Vision camera. You then show it
using a display function such as imshow or image.
Use the gigecamlist function to ensure that MATLAB is discovering your camera.
gigecamlist
ans =
Use the gigecam function to create the object and connect it to the camera.
g = gigecam
preview(g)
The preview window displays a live video stream from your camera. If you change a property while previewing, the preview updates dynamically, and the image reflects the property change.
closePreview(g)
Acquire a single image from the camera using the snapshot function, and assign it to the variable
img.
img = snapshot(g);
imshow(img)
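Because snapshot returns a new frame each time it is called, a short sequence can be acquired in a loop (a sketch; the frame count is arbitrary):

for k = 1:10
    [img, ts] = snapshot(g);   % ts is the timestamp of each frame
end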
clear g
See Also
commands | executeCommand | gigecam | gigecamlist
Introduced in R2014b
snapshot
Acquire single image frame from Matrox frame grabber
Syntax
img = snapshot(m);
[img, ts] = snapshot(m);
Description
img = snapshot(m); acquires the current frame as a single image from the Matrox frame grabber
m and assigns it to the variable img. If you call snapshot in a loop, then it returns a new frame each
time.
Note The snapshot function is for use only with the matroxcam object. To acquire images using the
videoinput object, use the getsnapshot or getdata functions.
[img, ts] = snapshot(m); acquires the current frame as a single image from the Matrox frame
grabber m and assigns it to the variable img, and assigns the timestamp to the variable ts.
Examples
Use the snapshot function to acquire one image frame from a Matrox frame grabber. You then show
it using a display function such as imshow or image.
Use the matroxlist function to ensure that MATLAB is discovering your frame grabber.
matroxlist
ans =
Use the matroxcam function to create the object and connect it to the frame grabber. If you want to
use the second frame grabber in the list, the Solios XCL at digitizer 1, use a 2 as the index number,
since it is the second board on the list. The second argument must be the name and path of your DCF
file, entered as a character vector.
m = matroxcam(2, 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf');
m =
DCFName: 'C:\Drivers\Solios\dcf\XCL\Basler\A404K\mycam.dcf'
FrameResolution: '1300 x 1080'
Timeout: 10
The DCF file is specified so that the acquisition can use the properties you have set in your DCF file.
preview(m)
You can leave the Preview window open, or close it any time. To close the preview:
closePreview(m)
Acquire a single image from the frame grabber using the snapshot function, and assign it to the
variable img.
img = snapshot(m);
imshow(img)
clear m
Note about Hardware Triggering: If your DCF file is configured for hardware triggering, you must provide the trigger to acquire images. To do that, call the snapshot function as you normally would, as shown in this example, and then perform the hardware trigger to acquire the frame. When you call the snapshot function with hardware triggering set, it does not time out as it normally would. Therefore, the MATLAB command line is blocked until you perform the hardware trigger.
See Also
closepreview | matroxcam | matroxlist | preview
Topics
“Connect to Matrox Frame Grabbers” on page 13-3
“Set Properties for Matrox Acquisition” on page 13-4
“Acquire Images from Matrox Frame Grabbers” on page 13-6
“Matrox Acquisition – matroxcam Object vs videoinput Object” on page 13-2
Introduced in R2014b
start
Obtain exclusive use of image acquisition device
Syntax
start(obj)
Description
start(obj) obtains exclusive use of the image acquisition device associated with the video input
object obj and locks the device's configuration. Starting an object is a necessary first step to acquire
image data, but it does not control when data is logged.
obj can either be a 1-by-1 video input object or an array of video input objects.
If the object's StartFcn callback errors, the hardware is never started and the object's Running property remains 'off'.
An image acquisition object stops running when one of the following conditions is met:
Examples
The start function can be called by a video input object's event callback.
obj.StopFcn = {'start'};
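More typically, start is the first step of an acquisition; a minimal sketch (the adaptor name and device ID are illustrative):

vid = videoinput('winvideo', 1);   % default immediate trigger
start(vid)                         % device starts; logging begins immediately
data = getdata(vid, 10);           % blocks until 10 frames have been logged
delete(vid)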
See Also
imaqfind | imaqhelp | propinfo | stop | trigger | triggerconfig
stop
Stop video input object
Syntax
stop(obj)
Description
stop(obj) halts an acquisition associated with the video input object obj. obj can be either a
single video input object or an array of video input objects.
An image acquisition object can also stop running under one of the following conditions:
Examples
The stop function can be called by a video input object's event callback.
obj.TimerFcn = {'stop'};
See Also
imaqfind | propinfo | start | trigger | videoinput
stoppreview
Stop previewing video data
Syntax
stoppreview(obj)
Description
stoppreview(obj) stops the previewing of video data from image acquisition object obj.
Examples
Create a video input object and open a Video Preview window.
vid = videoinput('winvideo',1);
preview(vid)
Stop the live preview.
stoppreview(vid);
Restart previewing.
preview(vid)
See Also
closepreview | preview
trigger
Initiate data logging
Syntax
trigger(obj)
Description
trigger(obj) initiates data logging for the video input object obj. obj can be either a single video
input object or an array of video input objects.
obj must be running and its TriggerType property must be set to 'manual'. To start an object
running, use the start function.
Examples
The trigger function can be called by a video input object's event callback.
obj.StartFcn = @trigger;
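A complete manual-trigger acquisition can be sketched as follows (the adaptor name and device ID are illustrative):

vid = videoinput('winvideo', 1);
triggerconfig(vid, 'manual');
start(vid)           % the object runs, but no data is logged yet
trigger(vid)         % logging begins now
data = getdata(vid); % retrieve the logged frames
delete(vid)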
See Also
imaqfind | start | stop | videoinput
triggerconfig
Configure video input object trigger properties
Syntax
triggerconfig(obj,type)
triggerconfig(obj,type,condition)
triggerconfig(obj,type,condition,source)
config = triggerconfig(obj)
triggerconfig(obj,config)
Description
triggerconfig(obj,type) configures the value of the TriggerType property of the video input
object obj to the value specified by the character vector type. For a list of valid TriggerType
values, use triggerinfo(obj). type must specify a unique trigger configuration.
obj can be either a single video input object or an array of video input objects. If an error occurs, any
video input objects in the array that have already been configured are returned to their original
configurations.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
Example 1
Construct a video input object and configure it to use a manual trigger.
vid = videoinput('winvideo', 1);
triggerconfig(vid, 'manual')
Start the object and initiate the acquisition with the trigger function.
start(vid)
trigger(vid)
Delete the object when it is no longer needed.
delete(vid);
Example 2
This example uses a structure returned from triggerinfo to configure trigger parameters.
Create a video input object and use triggerinfo to get all valid configurations for the trigger properties of the object.
vid = videoinput('winvideo', 1);
config = triggerinfo(vid);
Configure the object with one of the valid trigger configurations, then delete the object.
triggerconfig(vid,config(2));
delete(vid);
See Also
imaqhelp | trigger | triggerinfo | videoinput
triggerinfo
Provide information about available trigger configurations
Syntax
triggerinfo(obj)
triggerinfo(obj,type)
config = triggerinfo(...)
Description
triggerinfo(obj) displays all available trigger configurations for the video input object obj. obj
can only be a 1-by-1 video input object.
config = triggerinfo(...) returns config, an array of MATLAB structures, containing all the
valid trigger configurations for the video input object obj. Each structure in the array contains these
fields:
Field Description
TriggerType Name of the trigger type
TriggerCondition Condition that must be met before executing a trigger
TriggerSource Hardware source used for triggering
You can pass one of the structures in config to the triggerconfig function to specify the trigger
configuration.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
This example illustrates how to use the triggerinfo function to retrieve valid configurations of the
TriggerType, TriggerSource, and TriggerCondition properties.
Create a video input object.
vid = videoinput('winvideo');
Get information about the available trigger configurations for this object.
config = triggerinfo(vid)
config =
config(1)
ans =
TriggerType: 'immediate'
TriggerCondition: 'none'
TriggerSource: 'none'
See Also
imaqhelp | triggerconfig
videoinput
Create video input object
Syntax
obj = videoinput(adaptorname)
obj = videoinput(adaptorname,deviceID)
obj = videoinput(adaptorname,deviceID,format)
obj = videoinput(adaptorname,deviceID,format,P1,V1,...)
Description
obj = videoinput(adaptorname) constructs the video input object obj. A video input object
represents the connection between MATLAB and a particular image acquisition device.
adaptorname is a character vector that specifies the name of the adaptor used to communicate with
the device. Use the imaqhwinfo function to determine the adaptors available on your system.
To get a list of the formats supported by a particular device, view the DeviceInfo structure for the
device that is returned by the imaqhwinfo function. Each DeviceInfo structure contains a
SupportedFormats field. If format is not specified, the device's default format is used.
When the video input object is created, its VideoFormat field contains the format name or device
configuration file that you specify.
The property name and property value pairs can be in any format supported by the set function, i.e.,
parameter/value character vector pairs, structures, or parameter/value cell array pairs.
To view a complete listing of video input object functions and properties, use the imaqhelp function.
imaqhelp videoinput
In the documentation, see “Image Acquisition Toolbox Properties” on page 5-31 for links to the
property reference pages.
Examples
Construct a video input object and select the source to use for acquisition.
obj = videoinput('winvideo', 1);
obj.SelectedSourceName = 'input1'
src_obj = getselectedsource(obj);
get(src_obj)
preview(obj);
frame = getsnapshot(obj);
image(frame);
delete(obj);
Tips
The toolbox chooses the first available video source object as the selected source and specifies this
video source object's name in the object's SelectedSourceName property. Use
getselectedsource(obj) to access the video source object that is used for acquisition.
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
See Also
delete | imaqfind | isvalid | preview
wait
Wait until image acquisition object stops running or logging
Syntax
wait(obj)
wait(obj,waittime)
wait(obj,waittime,state)
Description
wait(obj) blocks the MATLAB command line until the video input object obj stops running
(Running = 'off'). obj can be either a single video input object or an array of video input objects.
When obj is an array of objects, the wait function waits until all objects in the array stop running. If
obj is not running or is an invalid object, wait returns immediately. The wait function can be useful
when you want to guarantee that data is acquired before another task is performed.
wait(obj,waittime) blocks the MATLAB command line until the video input object or array of
objects obj stops running or until waittime seconds have expired, whichever comes first. By
default, waittime is set to the value of the object's Timeout property.
wait(obj,waittime,state) blocks the MATLAB command line until the video input object or
array of objects obj stops running or logging, or until waittime seconds have expired, whichever
comes first. state can be either of the following character vectors. The default value is enclosed in
braces ({}).
State Description
{'running'} Blocks until the value of the object's Running property is 'off'.
'logging' Blocks until the value of the object's Logging property is 'off'.
Note The video input object's stop event callback function (StopFcn) might not be called before this
function returns.
An image acquisition object stops running or logging when one of the following conditions is met:
Note To get a list of options you can use on a function, press the Tab key after entering a function on
the MATLAB command line. The list expands, and you can scroll to choose a property or value. For
information about using this advanced tab completion feature, see “Using Tab Completion for
Functions” on page 5-17.
Examples
Create a video input object.
vid = videoinput('winvideo');
Specify an acquisition that should take several seconds. The example sets the FramesPerTrigger
property to 300.
vid.FramesPerTrigger = 300;
Start the object. Because it is configured with an immediate trigger (the default), acquisition begins
when the object is started. The example calls the wait function after calling the start function.
Notice how wait blocks the MATLAB command line until the acquisition is complete.
start(vid), wait(vid);
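To bound the wait time, or to wait only until logging stops, supply the optional arguments described above (a sketch):

wait(vid, 60, 'logging')   % return when logging stops, or after 60 seconds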
See Also
imaqhelp | propinfo | start | stop | trigger
19
Properties
19 Properties
BayerSensorAlignment
Specify sensor alignment for Bayer demosaicing
Description
If the ReturnedColorSpace property is set to 'bayer', then the Image Acquisition Toolbox
software will demosaic Bayer patterns returned by the hardware. This color space setting will
interpolate Bayer pattern encoded images into standard RGB images. If your camera uses Bayer
filtering, the toolbox supports the Bayer pattern and can return color if desired.
In order to perform the demosaicing, the toolbox needs to know the pixel alignment of the sensor.
This is the order of the red, green, and blue sensors and is normally specified by describing the four
pixels in the upper-left corner of the sensor. It is the band sensitivity alignment of the pixels as
interpreted by the camera's internal hardware. You must get this information from the camera's
documentation and then specify the value for the alignment, as described in the following table.
Value Description
'gbrg' The 2-by-2 sensor alignment is
green blue
red green
'grbg' The 2-by-2 sensor alignment is
green red
blue green
'bggr' The 2-by-2 sensor alignment is
blue green
green red
'rggb' The 2-by-2 sensor alignment is
red green
green blue
The value of this property is only used if the ReturnedColorSpace property is set to 'bayer'.
For examples showing how to convert Bayer images, see “Converting Bayer Images” on page 7-15.
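Configuring Bayer demosaicing can be sketched as follows (the adaptor name, device ID, and alignment value are illustrative; take the correct alignment from your camera's documentation):

vid = videoinput('winvideo', 1);
vid.ReturnedColorSpace = 'bayer';
vid.BayerSensorAlignment = 'rggb';
img = getsnapshot(vid);   % returns a demosaiced RGB image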
Characteristics
Access Read/write
Data type Character vector
Values [ {'grbg'} | 'gbrg' | 'rggb'| 'bggr']
See Also
Properties
ReturnedColorSpace, VideoFormat
DeviceID
Identify image acquisition device represented by video input object
Description
The DeviceID property identifies the device represented by the video input object.
A device ID is a number, assigned by an adaptor, that uniquely identifies an image acquisition device.
The adaptor assigns the first device it detects the identifier 1, the second device it detects the
identifier 2, and so on.
You must specify the device ID as an argument to the videoinput function when you create a video
input object. The object stores the value in the DeviceID property and also uses the value when
constructing the default value of the Name property.
To get a list of the IDs of the devices connected to your system, use the imaqhwinfo function,
specifying the name of a particular adaptor as an argument.
Characteristics
Access Read only
Data type double
Values Any nonnegative integer
Examples
Use the imaqhwinfo function to determine which adaptors are connected to devices on your system.
imaqhwinfo
ans =
Use the imaqhwinfo function again, specifying the name of the adaptor, to find out how many
devices are available through that adaptor. The imaqhwinfo function returns the device IDs for all
the devices in the DeviceIds field.
info = imaqhwinfo('winvideo')
info =
See Also
Functions
imaqhwinfo, videoinput
Properties
Name
DiskLogger
Specify MATLAB VideoWriter file used to log data
Description
The DiskLogger property specifies the VideoWriter file object used to log data when the
LoggingMode property is set to 'disk' or 'disk&memory'. For the best performance, VideoWriter
is the recommended file type.
VideoWriter File
For the best performance, logging to disk requires a MATLAB VideoWriter object, which is a
MATLAB object, not an Image Acquisition Toolbox object. After you create and configure a
VideoWriter object, you provide it to the DiskLogger property.
A MATLAB VideoWriter object specifies the file name and other characteristics. For example, you can
use VideoWriter properties to specify the profile used for data compression and the desired quality of
the output. For complete information about the VideoWriter object and its properties, see the
VideoWriter documentation.
Note Do not use the variable returned by the VideoWriter function to perform any operation on a
VideoWriter file while it is being used by a video input object for data logging. For example, do not
change any of the VideoWriter file properties, add frames, or close the object. Your changes could
conflict with the video input object.
After Logging and Running are off, it is possible that the DiskLogger might still be writing data to
disk. When the DiskLogger finishes writing data to disk, the value of the DiskLoggerFrameCount
property should equal the value of the FramesAcquired property. Do not close or modify the
DiskLogger until this condition is met.
For more information about logging image data using a VideoWriter file, see “Logging Image Data to
Disk” on page 6-32.
Note The peekdata function does not return any data while running if in disk logging mode.
Characteristics
Access Read only while running
Data type VideoWriter object
Values The default value is [].
Examples
Using VideoWriter
Create a video input object that accesses a GigE Vision image acquisition device and uses grayscale
format at 10 bits per pixel.
You can log acquired data to memory, to disk, or both. By default, data is logged to memory. To
change the logging mode to disk, configure the video input object's LoggingMode property.
vidobj.LoggingMode = 'disk'
Create a VideoWriter object with the profile set to Motion JPEG 2000. Motion JPEG 2000 allows
writing the full 10 bits per pixel data to the file.
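Creating the VideoWriter object described above and assigning it to the DiskLogger property can be sketched as follows (the file name is illustrative; vidobj is the video input object created earlier in this example):

logfile = VideoWriter('logfile.mj2', 'Motion JPEG 2000');
vidobj.DiskLogger = logfile;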
Now that the video input object is configured for logging data to a Motion JPEG 2000 file, initiate the
acquisition.
start(vidobj)
wait(vidobj)
When logging large amounts of data to disk, disk writing occasionally lags behind the acquisition. To
determine whether all frames are written to disk, you can optionally use the
DiskLoggerFrameCount property.
You can verify that the FramesAcquired and DiskLoggerFrameCount properties have identical
values by using these commands and comparing the output.
vidobj.FramesAcquired
vidobj.DiskLoggerFrameCount
When the video input object is no longer needed, delete it and clear it from the workspace.
delete(vidobj)
clear vidobj
See Also
Functions
videoinput
DiskLoggerFrameCount
Specify number of frames written to disk
Description
The DiskLoggerFrameCount property indicates the current number of frames written to disk by the
DiskLogger. This value is only updated when the LoggingMode property is set to 'disk' or
'disk&memory'.
After Logging and Running are off, it is possible that the DiskLogger might still be writing data to
disk. When the DiskLogger finishes writing data to disk, the value of the DiskLoggerFrameCount
property should equal the value of the FramesAcquired property. Do not close or modify the
DiskLogger until this condition is met.
Characteristics
Access Read only
Data type double
Values Any nonnegative integer
See Also
Functions
videoinput
ErrorFcn
Specify callback function to execute when run-time error occurs
Description
The ErrorFcn property specifies the function to execute when an error event occurs. A run-time
error event is generated immediately after a run-time error occurs.
Run-time errors include hardware errors and timeouts. Run-time errors do not include configuration
errors such as setting an invalid property value.
Run-time error event information is stored in the EventLog property. You can retrieve any error
message with the Data.Message field of EventLog.
Note Callbacks, including ErrorFcn, are executed only when the video object is in a running state.
If you need to use the ErrorFcn callback for error handling during previewing, you must start the
video object before previewing. To do that without logging data, use a manual trigger.
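For example, a callback that displays the error message carried in the event data (the Data.Message field described above) can be sketched as:

vid = videoinput('winvideo', 1);
vid.ErrorFcn = @(obj, event) disp(event.Data.Message);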
Characteristics
Access Read only while running
Data type Character vector, function handle, or cell array
Values imaqcallback is the default callback function.
See Also
Properties
EventLog, Timeout
EventLog
Store information about events
Description
The EventLog property is an array of structures that stores information about events. Each structure
in the array represents one event. Events are recorded in the order in which they occur. The first
EventLog structure reflects the first event recorded, the second EventLog structure reflects the
second event recorded, and so on.
Each event log structure contains two fields: Type and Data.
The Type field stores a character array that identifies the event type. The Image Acquisition Toolbox
software defines many different event types, listed in this table. Note that not all event types are
logged.
The Data field stores information associated with the specific event. For example, all events return
the absolute time the event occurred in the AbsTime field. Other event-specific fields are included in
Data. For more information, see “Retrieving Event Information” on page 8-7.
EventLog can store a maximum of 1000 events. If this value is exceeded, then the most recent 1000
events are stored.
Characteristics
Access Read only
Data type Structure array
Values Default is empty structure array.
Examples
Create a video input object.
vid = videoinput('winvideo');
start(vid)
elog = vid.EventLog;
{elog.Type}
ans =
'Start' 'Trigger' 'Stop'
elog(2).Data
ans =
See Also
Properties
Logging
FrameGrabInterval
Specify how often to acquire frame from video stream
Description
The FrameGrabInterval property specifies how often the video input object acquires a frame from
the video stream. By default, objects acquire every frame in the video stream, but you can use this
property to specify other acquisition intervals.
Note Do not confuse the frame grab interval with the frame rate. The frame rate describes the rate at which an image acquisition device provides frames, typically measured in frames per second, such as 30 frames per second. The frame grab interval is measured in frames, not seconds. If a particular device's frame rate is configurable, the video source object might include the frame rate as a device-specific property.
For example, when you specify a FrameGrabInterval value of 3, the object acquires every third
frame from the video stream, as illustrated in this figure. The object acquires the first frame in the
video stream before applying the FrameGrabInterval.
You specify the source of the video stream in the SelectedSourceName property.
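Setting the interval can be sketched as follows (the adaptor name and device ID are illustrative):

vid = videoinput('winvideo', 1);
vid.FrameGrabInterval = 3;   % acquire every third frame from the stream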
Characteristics
Access Read only while running
Data type double
Values Any positive integer. The default value is 1 (acquire every frame).
See Also
Functions
videoinput
Properties
SelectedSourceName
FramesAcquired
Indicate total number of frames acquired
Description
The FramesAcquired property indicates the total number of frames that the object has acquired,
regardless of how many frames have been extracted from the memory buffer. The video input object
continuously updates the value of the FramesAcquired property as it acquires frames.
Note When you issue a start command, the video input object resets the value of the
FramesAcquired property to 0 (zero) and flushes the buffer.
To find out how many frames are available in the memory buffer, use the FramesAvailable
property.
Characteristics
Access Read only
Data type double
Values Any nonnegative integer. The default value is 0 (zero).
See Also
Functions
start
Properties
FramesAcquiredFcn
Specify MATLAB file executed when specified number of frames have been acquired
Description
The FramesAcquiredFcn specifies the MATLAB file function to execute every time a predefined
number of frames have been acquired.
A frames acquired event is generated immediately after the number of frames specified by the
FramesAcquiredFcnCount property is acquired from the selected video source. This event executes
the MATLAB file specified for FramesAcquiredFcn.
Use the FramesAcquiredFcn callback if you must access each frame that is acquired. If you do not
have this requirement, you might want to use the TimerFcn property.
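Configuring the callback can be sketched as follows (the callback body is illustrative):

vid = videoinput('winvideo', 1);
vid.FramesAcquiredFcnCount = 10;
vid.FramesAcquiredFcn = @(obj, event) disp('10 more frames acquired');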
Characteristics
Access Read/write
Data type Character vector, function handle, or cell array
Values The default value is an empty matrix ([]).
See Also
Properties
FramesAcquiredFcnCount, TimerFcn
FramesAcquiredFcnCount
Specify number of frames that must be acquired before frames acquired event is generated
Description
The FramesAcquiredFcnCount property specifies the number of frames to acquire from the
selected video source before a frames acquired event is generated.
The object generates a frames acquired event immediately after the number of frames specified by
FramesAcquiredFcnCount is acquired from the selected video source.
Characteristics
Access Read only while running
Data type double
Values Any nonnegative integer. The default value is 0 (zero).
See Also
Properties
FramesAcquiredFcn
FramesAvailable
Indicate number of frames available in memory buffer
Description
The FramesAvailable property indicates the total number of frames that are available in the
memory buffer. When you extract data, the object reduces the value of the FramesAvailable
property by the appropriate number of frames. You use the getdata function to extract data and
move it into the MATLAB workspace.
Note When you issue a start command, the video input object resets the value of the
FramesAvailable property to 0 (zero) and flushes the buffer.
To view the total number of frames that have been acquired since the last start command, use the
FramesAcquired property.
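The following sketch waits for an acquisition to finish and then extracts every buffered frame. The 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: extract all frames currently available in the memory buffer.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
start(vid)
wait(vid);                         % wait until the acquisition stops
n = vid.FramesAvailable;           % frames currently in the buffer
data = getdata(vid, n);            % extracting reduces FramesAvailable by n
```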
Characteristics
Access Read only
Data type double
Values Any nonnegative integer. The default value is 0 (zero).
See Also
Functions
getdata, start
Properties
FramesAcquired
FramesPerTrigger
Specify number of frames to acquire per trigger using selected video source
Description
The FramesPerTrigger property specifies the number of frames the video input object acquires
each time it executes a trigger using the selected video source.
When the value of the FramesPerTrigger property is set to Inf, the object keeps acquiring frames
until an error occurs or you issue a stop command.
Note When the FramesPerTrigger property is set to Inf, the object ignores the value of the
TriggerRepeat property.
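A continuous acquisition can be sketched as follows; the 'winvideo' adaptor and device ID 1 are assumptions, and the two-second pause is arbitrary.

```matlab
% Sketch: acquire continuously until an explicit stop.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
vid.FramesPerTrigger = Inf;        % TriggerRepeat is ignored in this case
start(vid)
pause(2)                           % acquire for about two seconds
stop(vid)
frames = getdata(vid, vid.FramesAvailable);
```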
Characteristics
Access Read only while running
Data type double
Values Any positive integer. The default value is 10.
See Also
Functions
stop
Properties
TriggerRepeat
IgnoreDroppedFrames
Continue acquiring frames when acquisition drops a frame
Description
Note This property is only supported on videoinput objects using the gige adaptor and is not
supported on gigecam objects.
If this property is set to 'off', the acquisition stops when it encounters a dropped frame. If this
property is set to 'on', the acquisition continues by ignoring the dropped frames.
When this property is 'on', you can track the number of frames dropped while the acquisition is running with the NumDroppedFrames property.
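A minimal sketch for a GigE Vision device follows; the 'gige' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: tolerate dropped frames on a GigE Vision acquisition.
vid = videoinput('gige', 1);       % adaptor/ID are assumptions
vid.IgnoreDroppedFrames = 'on';    % keep acquiring past dropped frames
start(vid)
wait(vid);
fprintf('Dropped %d frame(s)\n', vid.NumDroppedFrames);
```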
Characteristics
Default value is enclosed in braces ({}).
Access Read/write
Data type Character vector
Values [ {'off'} | 'on' ]
See Also
NumDroppedFrames
InitialTriggerTime
Record absolute time of first trigger
Description
The InitialTriggerTime property records the absolute time of the first trigger. The absolute time
is recorded as a MATLAB clock vector.
For all trigger types, InitialTriggerTime records the time when the Logging property is set to
'on'.
To find the time when a subsequent trigger executed, view the Data.AbsTime field of the EventLog
property for the particular trigger.
Characteristics
Access Read only
Data type Six-element vector of doubles (MATLAB clock vector)
Values The default value is [].
Examples
Create an image acquisition object, vid, for a USB-based webcam.
vid = videoinput('winvideo',1);
Start the object. Because the TriggerType property is set to 'immediate' by default, the trigger
executes immediately. The object records the time of the initial trigger.
start(vid)
abstime = vid.InitialTriggerTime
abstime =
1.0e+003 *
t = fix(abstime);
sprintf('%d:%d:%d', t(4),t(5),t(6))
ans =
13:26:20
See Also
Functions
clock, getdata
Properties
EventLog
Logging
Indicate whether object is currently logging data
Description
The Logging property indicates whether the video input object is currently logging data.
When a trigger occurs, the object sets the Logging property to 'on' and logs data to memory, a disk
file, or both, depending on the value of the LoggingMode property.
The object sets the Logging property to 'off' when it acquires the requested number of frames, an
error occurs, or you issue a stop command.
To acquire data when the object is running but not logging, use the peekdata function. Note,
however, that the peekdata function does not guarantee that all the requested image data is
returned. To acquire all the data without gaps, you must have the object log the data to memory or to
a disk file.
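For example, the following sketch peeks at the most recent frame only while the object is logging. The 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: peek at the latest frame while the object is logging.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
start(vid)
if strcmp(vid.Logging, 'on')
    frame = peekdata(vid, 1);      % most recent frame; gaps are possible
end
```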
Characteristics
Default value is enclosed in braces ({}).
See Also
Functions
peekdata
Properties
LoggingMode, Running
LoggingMode
Specify destination for acquired data
Description
The LoggingMode property specifies where you want the video input object to store the acquired
data. You can specify any of the following values:
Value Description
'disk' Log acquired data to a disk file.
'disk&memory' Log acquired data to both a disk file and to a memory buffer.
'memory' Log acquired data to a memory buffer.
If you select 'disk' or 'disk&memory', you must specify the VideoWriter object used to access the disk file as the value of the DiskLogger property.
Note When logging data to memory, you must extract the acquired data in a timely manner with the
getdata function to avoid using up all the memory that is available on your system.
Note In disk logging mode, the peekdata function does not return any data while the object is running.
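Disk logging can be sketched as follows; the file name is arbitrary, and the 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: log acquired frames to an AVI file via a VideoWriter object.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
vid.LoggingMode = 'disk';
vid.DiskLogger = VideoWriter('myvideo.avi', 'Motion JPEG AVI');
start(vid)
wait(vid);                         % let the acquisition finish logging
```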
Characteristics
Access Read only while running
Data type Character vector
Values [ 'disk' | 'disk&memory' | {'memory'} ]
See Also
Functions
getdata
Properties
DiskLogger, Logging
Name
Specify name of image acquisition object
Description
The Name property specifies a descriptive name for the image acquisition object.
Characteristics
Access Read/write
Data type Character vector
Values Any character vector. The toolbox creates the default name by combining the
values of the VideoFormat and DeviceID properties with the adaptor name in
this format:
VideoFormat + '-' + adaptor name + '-' + DeviceID
Examples
Create an image acquisition object.
vid = videoinput('winvideo');
vid.Name
ans =
RGB555_128x96-winvideo-1
See Also
Functions
videoinput
Properties
DeviceID, VideoFormat
NumberOfBands
Indicate number of color bands in data to be acquired
Description
The NumberOfBands property indicates the number of color bands in the data to be acquired. The
toolbox defines a band as the third dimension in a 3-D array. For example, an m-by-n-by-3 RGB frame has three bands.
The value of the NumberOfBands property indicates the number of color bands in the data returned
by getsnapshot, getdata, and peekdata.
Characteristics
Access Read only
Data type double
Values Any positive integer. The default value is defined at object creation time based on
the video format.
Examples
Create an image acquisition object.
vid = videoinput('winvideo');
vid.NumberOfBands
ans =
     3
If you retrieve the value of the VideoFormat property, you can see that the video data is in RGB
format.
vid.VideoFormat
ans =
RGB24_320x240
See Also
Functions
getdata, getsnapshot, peekdata
NumDroppedFrames
Number of frames dropped while acquisition is running
Description
Note This property is only supported on videoinput objects using the gige adaptor and is not
supported on gigecam objects.
The NumDroppedFrames property indicates the number of frames dropped while the acquisition is
running if the IgnoreDroppedFrames property is set to 'on'.
This property does not keep track of the number of frames dropped while previewing.
Characteristics
Access Read only
Data type double
Values Any nonnegative integer. The default value is 0 (zero).
See Also
IgnoreDroppedFrames
Parent
Identify video input object that is parent of video source object
Description
The Parent property identifies the video input object that is the parent of a video source object.
The parent of a video source object is defined as the video input object owning the video source
object.
Characteristics
Access Read only
Data type Video input object
Values Defined at object creation time
See Also
Functions
videoinput
PreviewFullBitDepth
Configure preview data to display in full bit depth
Description
The PreviewFullBitDepth property indicates whether the image data in the Preview window is
being displayed in full bit depth.
Note The Image Acquisition Toolbox Preview window and the Preview window that is built into the
Image Acquisition Tool support the display of up to 16-bit image data. The Preview window was
designed to only show 8-bit data, but many cameras return 10-, 12-, 14-, or 16-bit data. The Preview
window display supports these higher bit-depth cameras. However, larger bit data is scaled to 8-bit
for the purpose of displaying previewed data. To capture the image data in the Preview window in its
full bit depth for grayscale images, set the PreviewFullBitDepth property to 'on'.
If you set this property to 'off', image data in the preview window is scaled down from its native bit depth to 8 bits. If you set this property to 'on', the image data in the preview window is captured at its full bit depth.
This property can be set to 'on' only when the ReturnedColorSpace property is set to 'grayscale' and the video format has a bit depth greater than 8 bits.
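A minimal sketch for a 12-bit monochrome camera follows; the 'gentl' adaptor, device ID 1, and 'Mono12' format name are assumptions.

```matlab
% Sketch: preview 12-bit grayscale data at full bit depth.
vid = videoinput('gentl', 1, 'Mono12');  % adaptor/ID/format are assumptions
vid.ReturnedColorSpace = 'grayscale';    % required for full-bit-depth preview
vid.PreviewFullBitDepth = 'on';          % requires a >8-bit grayscale format
preview(vid)
```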
Characteristics
Default value is enclosed in braces ({}).
Access Read/write
Data type Character vector
Values [ {'off'} | 'on' ]
See Also
ReturnedColorSpace | preview
Previewing
Indicate whether object is currently previewing data in separate window
Description
The Previewing property indicates whether the object is currently previewing data in a separate
window.
The object sets the Previewing property to 'on' when you call the preview function.
The object sets the Previewing property to 'off' when you close the preview window using the
closepreview function or by clicking the Close button in the preview window title bar.
Characteristics
Default value is enclosed in braces ({}).
See Also
Functions
closepreview, preview
ReturnedColorSpace
Specify color space used in MATLAB
Description
The ReturnedColorSpace property specifies the color space you want the toolbox to use when it
returns image data to the MATLAB workspace. This is only relevant when you are accessing acquired
image data with the getsnapshot, getdata, and peekdata functions.
Value Description
'grayscale' MATLAB grayscale color space.
'rgb' MATLAB RGB color space.
'YCbCr' MATLAB YCbCr color space.
Note that YCbCr is often imprecisely referred to as YUV. (YUV is similar, but not
identical. They differ by the scaling factor applied to the result. YUV refers to a
particular scaling factor used in composite NTSC and PAL formats. In most
cases, you can specify the YCbCr color space for devices that support YUV.)
'bayer' Convert grayscale Bayer color patterns to RGB images. The bayer color space
option is only available if your camera's default returned color space is
grayscale.
Note For some adaptors, such as GigE and GenTL, if you use a format that starts with Bayer (for example, BayerGB8_640x480), the toolbox automatically converts the raw Bayer pattern to color, and the ReturnedColorSpace is 'rgb'. If you set the ReturnedColorSpace to 'grayscale', you get the raw pattern.
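Changing the returned color space can be sketched as follows; the 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: check and change the color space of returned image data.
vid = videoinput('winvideo', 1);      % adaptor/ID are assumptions
vid.ReturnedColorSpace                % format-dependent default, e.g. 'rgb'
vid.ReturnedColorSpace = 'grayscale';
img = getsnapshot(vid);               % now a single-band grayscale image
```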
For an example showing how to determine the default color space and change the color space setting,
see “Specifying the Color Space” on page 7-14.
Characteristics
Access Read/write
Data type Character vector
Values Defined at object creation time and depends on the video format selected
See Also
Functions
getdata, getsnapshot, peekdata
Properties
BayerSensorAlignment, VideoFormat
ROIPosition
Specify region-of-interest (ROI) window
Description
The ROIPosition property specifies the region-of-interest acquisition window. The ROI defines the
actual size of the frame logged by the toolbox, measured with respect to the top left corner of an
image frame.
ROIPosition is specified as the four-element vector [XOffset YOffset Width Height], where
YOffset Position of the upper left corner of the ROI, measured in pixels.
XOffset Position of the upper left corner of the ROI, measured in pixels.
Height Height of the ROI, measured in pixels. The sum of YOffset and Height cannot
exceed the height specified in VideoResolution.
Width Width of the ROI, measured in pixels. The sum of XOffset and Width cannot
exceed the width specified in VideoResolution.
Note The Width counts one end point plus the span between pixels, not both end points. For example, to capture an ROI covering pixels 20 through 30, including both end pixels 20 and 30, set an XOffset of 19 and a Width of 11. The same rule applies to height.
For example, with XOffset set to 50 and Width set to 120, the captured ROI contains pixels 51 through 170, including both end points.
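That configuration can be sketched as follows; the 'winvideo' adaptor, device ID 1, and the YOffset/Height values are assumptions.

```matlab
% Sketch: capture pixels 51 through 170 horizontally (XOffset 50,
% Width 120), starting 20 rows down, with a 100-pixel-high window.
vid = videoinput('winvideo', 1);    % adaptor/ID are assumptions
vid.ROIPosition = [50 20 120 100];  % [XOffset YOffset Width Height]
frame = getsnapshot(vid);           % frame is 100-by-120(-by-bands)
```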
Characteristics
Access Read only while running
See Also
Properties
VideoResolution
Running
Indicate whether video input object is ready to acquire data
Description
The Running property indicates if the video input object is ready to acquire data.
Along with the Logging property, Running reflects the state of a video input object. The Running
property indicates that the object is ready to acquire data, while the Logging property indicates that
the object is acquiring data.
The object sets the Running property to 'on' when you issue the start command. When Running
is 'on', you can acquire data from a video source.
The object sets the Running property to 'off' when any of the following conditions is met:
• The object acquires the requested number of frames.
• A run-time error occurs.
• You issue the stop command.
When Running is 'off', you cannot acquire image data. However, you can acquire one image frame
with the getsnapshot function.
Characteristics
Default value is enclosed in braces ({}).
See Also
Properties
Logging
19-35
19 Properties
Selected
Indicate whether video source object will be used for acquisition
Description
The Selected property indicates if the video source object will be used for acquisition. You select a
video source object by specifying its name as the value of the video input object's
SelectedSourceName property. The video input object Source property is an array of all the video
source objects associated with the video input object.
If Selected is 'on', the video source object is selected. If the value is 'off', the video source
object is not selected.
A video source is defined to be a collection of one or more physical data sources that are treated as a
single entity. For example, hardware supporting multiple RGB sources, each of which is made up of
three physical connections (red, green, and blue), is treated as a single video source object.
Characteristics
Default value is enclosed in braces ({}).
Examples
Create an image acquisition object.
vid = videoinput('winvideo');
vid.SelectedSourceName
ans =
input1
src = getselectedsource(vid);
src.SourceName
ans =
input1
src.Selected
ans =
on
See Also
Functions
getselectedsource
Properties
SelectedSourceName
SelectedSourceName
Specify name of currently selected video source
Description
The SelectedSourceName property specifies the name of the video source object from which the
video input object acquires data. The name is specified as a character vector. By default, the video
input object selects the first available video source object stored in the Source property.
The toolbox defines a video source as one or more hardware inputs that are treated as a single entity.
For example, hardware supporting multiple RGB sources, each of which is made up of three physical
connections (red-green-blue), is treated as a single video source object.
Characteristics
Access Read only while running
Data type Character vector
Values The video input object assigns a name to each video source object it creates.
Names are defined at object creation time and are vendor specific.
Examples
To view the name of the currently selected source, create a video input object and access its
SelectedSourceName property.
vid = videoinput('matrox');
src_name = vid.SelectedSourceName;
See Also
Functions
set
Properties
Source
Source
Indicate video source objects associated with video input object
Description
The Source property is a vector of video source objects that represent the physical data sources
connected to a device. When a video input object is created, the toolbox creates a vector of video
source objects associated with the video input object.
Each video source object created is provided a unique source name. You can use the source name to
select the desired acquisition source by configuring the SelectedSourceName property of the video
input object.
A video source object's name is stored in its SourceName property. If a video source object's
SourceName is equivalent to the video input object's SelectedSourceName, the video source
object's Selected property has a value of 'on'.
The video source object supports a set of common properties, such as SourceName. Each video
source object can also support device-specific properties that control characteristics of the physical
device such as brightness, hue, and saturation. Different image acquisition devices expose different
sets of properties.
A video source is defined to be a collection of one or more physical data sources that are treated as a
single entity. For example, hardware supporting multiple RGB sources, each of which is made up of
three physical connections (red-green-blue), is treated as a single video source object.
The Source property encapsulates one or more video sources. To reference a video source, you use a
numerical integer to index into the vector of video source objects.
Characteristics
Access Read only
Data type Vector of video source objects
Values Defined at object creation time
Examples
Create an image acquisition object.
vid = videoinput('matrox');
To access all the video source objects associated with a video input object, use the Source property
of the video input object. (To view only the currently selected video source object, use the
getselectedsource function.)
sources = vid.Source;
src = sources(1);
To view the properties of the video source object src, use the get function.
get(src)
General Settings:
Parent = [1x1 videoinput]
Selected = on
SourceName = CH1
Tag =
Type = videosource
See Also
Functions
videoinput, getselectedsource
Properties
SelectedSourceName
SourceName
Indicate name of video source object
Description
The SourceName property indicates the name of a video source object.
SourceName is one of the values in the video input object's SelectedSourceName property.
Characteristics
Access Read only
Data type Character vector
Values Defined at object creation time
See Also
Functions
getselectedsource
Properties
SelectedSourceName, Source
StartFcn
Specify MATLAB file executed when start event occurs
Description
The StartFcn property specifies the MATLAB file function to execute when a start event occurs. A
start event occurs immediately after you issue the start command.
The StartFcn callback executes synchronously. The toolbox does not set the object's Running
property to 'on' until the callback function finishes executing. If the callback function encounters an
error, the object never starts running.
Characteristics
Access Read/write
Data type Character vector, function handle, or cell array
Values The default value is an empty matrix ([]).
See Also
Properties
EventLog, Running
StopFcn
Specify MATLAB file executed when stop event occurs
Description
The StopFcn property specifies the MATLAB file function to execute when a stop event occurs. A
stop event occurs immediately after you issue the stop command.
The StopFcn callback executes synchronously. Under most circumstances, the image acquisition
object will be stopped and the Running property will be set to 'off' by the time the MATLAB file
completes execution.
Characteristics
Access Read/write
Data type Character vector, function handle, or cell array
Values The default value is an empty matrix ([]).
See Also
Properties
EventLog, Running
Tag
Specify descriptive text to associate with image acquisition object
Description
The Tag property specifies any descriptive text that you want to associate with an image acquisition
object.
The Tag property can be useful when you are constructing programs that would otherwise need to
define the image acquisition object as a global variable, or pass the object as an argument between
callback routines.
You can use the value of the Tag property to search for particular image acquisition objects when
using the imaqfind function.
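For example, the following sketch tags an object and retrieves it later by tag; the 'winvideo' adaptor, device ID 1, and tag text are assumptions.

```matlab
% Sketch: tag an object, then retrieve it by tag instead of passing
% the object variable between callbacks.
vid = videoinput('winvideo', 1);      % adaptor/ID are assumptions
vid.Tag = 'LabCamera1';               % arbitrary descriptive text
obj = imaqfind('Tag', 'LabCamera1');  % cell array of matching objects
```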
Characteristics
Access Read/Write
Data type Character vector
Values Any character vector
See Also
Functions
imaqfind
Timeout
Specify how long to wait for image data
Description
The Timeout property specifies the amount of time (in seconds) that the getdata and getsnapshot
functions wait for data to be returned. The Timeout property is only associated with these blocking
functions. If the specified time period expires, the functions return control to the MATLAB command
line.
A timeout is one of the conditions for stopping an acquisition. When a timeout occurs, and the object
is running, the MATLAB file function specified by ErrorFcn is called.
Note The Timeout property is not associated with hardware timeout conditions.
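A shortened timeout can be sketched as follows; the 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: fail fast if the requested frames do not arrive in 5 seconds.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
vid.Timeout = 5;                   % seconds getdata waits before erroring
start(vid)
try
    data = getdata(vid, vid.FramesPerTrigger);
catch err
    disp(err.message)              % reports the timeout condition
end
```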
Characteristics
Access Read only while running
Data type double
Values Any positive integer. The default value is 10 seconds.
See Also
Functions
getdata, getsnapshot
Properties
EventLog, ErrorFcn
TimerFcn
Specify MATLAB file callback function to execute when timer event occurs
Description
The TimerFcn property specifies the MATLAB file callback function to execute when a timer event
occurs. A timer event occurs when the time period specified by the TimerPeriod property expires.
The toolbox measures time relative to when the object is started with the start function. Timer
events stop being generated when the image acquisition object stops running.
Note Some timer events might not be processed if your system is significantly slowed or if the
TimerPeriod value you specify is too small.
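A simple timer callback can be sketched as follows; the 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: print a status line every 2 seconds while the object runs.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
vid.TimerPeriod = 2;               % seconds between timer events
vid.TimerFcn = @(obj, event) ...
    fprintf('Acquired %d frames\n', obj.FramesAcquired);
start(vid)
```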
Characteristics
Access Read/write
Data type Character vector, function handle, or cell array
Values The default value is an empty matrix ([]).
See Also
Functions
start, stop
Properties
TimerPeriod
TimerPeriod
Specify number of seconds between timer events
Description
The TimerPeriod property specifies the amount of time, in seconds, that must pass before a timer
event is triggered.
The toolbox measures time relative to when the object is started with the start function. Timer
events stop being generated when the image acquisition object stops running.
Note Some timer events might not be processed if your system is significantly slowed or if the
TimerPeriod value you specify is too small.
Characteristics
Access Read only while running
Data type double
Values Any positive value. The minimum value is 0.01 seconds. The default value is 1
(second).
See Also
Functions
start, stop
Properties
EventLog, TimerFcn
TriggerCondition
Indicate required condition before trigger event occurs
Description
The TriggerCondition property indicates the condition that must be met, via the TriggerSource,
before a trigger event occurs. The trigger conditions that you can specify depend on the value of the
TriggerType property.
You must use the triggerconfig function to set the value of this property.
Characteristics
Access Read only. Use the triggerconfig function to set the value of this property.
Data type Character vector
Values Device specific. Use the triggerinfo function to view a list of valid values to
use with your image acquisition hardware.
See Also
Functions
triggerconfig, triggerinfo
Properties
TriggerSource, TriggerType
TriggerFcn
Specify MATLAB file callback function to execute when trigger event occurs
Description
The TriggerFcn property specifies the MATLAB file callback function to execute when a trigger
event occurs. The toolbox generates a trigger event when a trigger is executed based on the
configured TriggerType, and data logging is initiated.
Under most circumstances, the MATLAB file callback function is not guaranteed to complete
execution until sometime after the toolbox sets the Logging property to 'on'.
Characteristics
Access Read/write
Data type Character vector, function handle, or cell array
Values The default value is an empty matrix ([]).
See Also
Functions
trigger
Properties
EventLog, Logging
TriggerFrameDelay
Specify number of frames to skip before acquiring frames after trigger occurs
Description
The TriggerFrameDelay property specifies the number of frames to skip before acquiring frames
after a trigger occurs. The object waits the specified number of frames after the trigger before
starting to log frames.
For example, if TriggerFrameDelay is set to 5, the object lets five frames pass before starting
to acquire frames. The number of frames captured is defined by the FramesPerTrigger property.
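That delay can be sketched as follows; the 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: skip 5 frames after the trigger, then log 10 frames.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
triggerconfig(vid, 'manual');
vid.TriggerFrameDelay = 5;         % frames to skip after each trigger
vid.FramesPerTrigger = 10;         % frames to log after the delay
start(vid)
trigger(vid)
```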
Characteristics
Access Read only while running
Data type double
Values Any integer. The default value is 0 (zero).
See Also
Functions
trigger
Properties
FramesPerTrigger
TriggerRepeat
Specify number of additional times to execute trigger
Description
The TriggerRepeat property specifies the number of additional times you want the object to
execute a trigger. This table describes the behavior for several typical TriggerRepeat values.
Value Behavior
0 (default) Execute the trigger once when the trigger condition is met.
Any positive integer Execute the trigger the specified number of additional times when the
trigger condition is met.
Inf Keep executing the trigger every time the trigger condition is met until the
stop function is called or an error occurs.
To determine how many triggers have executed, check the value of the TriggersExecuted property.
Note If the FramesPerTrigger property is set to Inf, the object ignores the value of the
TriggerRepeat property.
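For example, the following sketch executes a manual trigger three times in total (one initial plus two repeats); the 'winvideo' adaptor and device ID 1 are assumptions.

```matlab
% Sketch: one initial trigger plus two additional triggers.
vid = videoinput('winvideo', 1);   % adaptor/ID are assumptions
triggerconfig(vid, 'manual');
vid.TriggerRepeat = 2;             % two additional triggers
vid.FramesPerTrigger = 5;
start(vid)
for k = 1:3
    trigger(vid)
    wait(vid, 10, 'logging');      % let each trigger finish logging
end
fprintf('%d triggers executed\n', vid.TriggersExecuted);
```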
Characteristics
Access Read only while running
Data type double
Values Any nonnegative integer. The default value is 0 (zero).
See Also
Functions
stop, trigger
Properties
FramesPerTrigger, TriggersExecuted
TriggersExecuted
Indicate total number of executed triggers
Description
The TriggersExecuted property indicates the total number of triggers that the video input object
has executed.
Characteristics
Access Read only
Data type double
Values Any nonnegative integer. The default value is 0 (zero).
See Also
Functions
trigger
Properties
EventLog
TriggerSource
Indicate hardware source to monitor for trigger conditions
Description
The TriggerSource property indicates the hardware source the image acquisition object monitors
for trigger conditions. When the condition specified in the TriggerCondition property is met, the
object executes the trigger and starts acquiring data.
You use the triggerconfig function to specify this value. The value of the TriggerSource
property is device specific. You specify whatever mechanism a particular device uses to generate
triggers.
For example, for Matrox hardware, the TriggerSource property could have values such as 'Port0'
or 'Port1'. Use the triggerinfo function to view a list of values that are valid for your image
acquisition device.
You must use the triggerconfig function to set the value of this property.
Note The TriggerSource property is only used when the TriggerType property is set to
'hardware'.
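A hardware trigger can be sketched as follows; the 'matrox' adaptor, device ID 1, and the 'risingEdge'/'Port0' condition and source are assumptions, so use triggerinfo to list the valid combinations for your own device.

```matlab
% Sketch: configure a hardware trigger on an assumed Matrox board.
vid = videoinput('matrox', 1);     % adaptor/ID are assumptions
triggerinfo(vid)                   % list valid trigger configurations
triggerconfig(vid, 'hardware', 'risingEdge', 'Port0');
start(vid)                         % acquisition waits for the condition
```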
Characteristics
Access Read only. Use the triggerconfig function to set the value of this property.
Data type Character vector
Values Device-specific. Use the triggerinfo function to get a list of valid values.
See Also
Functions
triggerconfig, triggerinfo
Properties
TriggerCondition, TriggerType
TriggerType
Indicate type of trigger used by video input object
Description
The TriggerType property indicates the type of trigger used by the video input object. Triggers
initiate data acquisition.
You use the triggerconfig function to specify one of the following values for this property.
Characteristics
Default value is enclosed in braces ({}).
Access Read only. Use the triggerconfig function to set the value of this
property.
Data type Character vector
Values [ 'hardware' | {'immediate'} | 'manual' ]
See Also
Functions
triggerconfig, triggerinfo
Properties
TriggerCondition, TriggerSource
Type
Identify type of image acquisition object
Description
The Type property identifies the type of image acquisition object. An image acquisition object is
one of two types: a video input object or a video source object.
Characteristics
Access Read only
Data type Character vector
Values [ 'videoinput' | 'videosource' ] Defined at object creation time
Examples
vid = videoinput('winvideo',1)
vid.Type
ans =
videoinput
src = getselectedsource(vid);
src.Type
ans =
videosource
See Also
Functions
getselectedsource, videoinput
UserData
Store data to associate with image acquisition object
Description
The UserData property specifies any data that you want to associate with an image acquisition
object.
Note The object does not use the data in UserData directly. However, you can access the data by
referencing the property as you would a field in a MATLAB structure using dot notation.
Characteristics
Access Read/Write
Data type Any
Values User-defined
See Also
Functions
get
VideoFormat
Specify video format or name of device configuration file
Description
The VideoFormat property specifies the video format used by the image acquisition device or the
name of a device configuration file, depending on which you specified when you created the object
using the videoinput function.
Image acquisition devices typically support multiple video formats. When you create a video input
object, you can specify the video format that you want the device to use. If you do not specify the
video format as an argument, the videoinput function uses the default format. Use the
imaqhwinfo function to determine which video formats a particular device supports and find out
which format is the default.
As an alternative, you can specify the name of a device configuration file, also known as a camera file
or digitizer configuration format (DCF) file. Some image acquisition devices use these files to store
device configuration information. The videoinput function can use this file to determine the video
format and other configuration information.
Use the imaqhwinfo function to determine if your device supports device configuration files.
Characteristics
Access Read only
Data type Character vector
Values Device-specific. The example describes how to get a list of all the formats
supported by a particular image acquisition device.
Examples
To determine the video formats supported by a device, check the SupportedFormats field in the
device information structure returned by imaqhwinfo.
info = imaqhwinfo('winvideo')
info =
info.DeviceInfo
ans =
DefaultFormat: 'RGB555_128x96'
DeviceFileSupported: 0
See Also
Functions
imaqhwinfo, videoinput
VideoResolution
Indicate width and height of incoming video stream
Description
The VideoResolution property is a two-element vector indicating the width and height in pixels of
the frames in the incoming video stream. VideoResolution is specified as
[Width Height]
Note You specify the video resolution when you create the video input object, by passing in the video
format argument to the videoinput function. If you do not specify a video format, the videoinput
function uses the default video format. Use the imaqhwinfo function to determine which video
formats a particular device supports and find out which format is the default.
Characteristics
Access Read only
Data type Vector of doubles
Values Defined by video format
See Also
Functions
imaqhwinfo, videoinput
Properties
ROIPosition, VideoFormat
20 Blocks
From Video Device
Description
The From Video Device block lets you capture image and video data streams from image acquisition
devices, such as cameras and frame grabbers, in order to bring the image data into a Simulink model.
The block also lets you configure and preview the acquisition directly from Simulink.
The From Video Device block opens, initializes, configures, and controls an acquisition device. The
block opens, initializes, and configures only once, at the start of the model execution. While the Read
All Frames option is selected, the block queues incoming image frames in a FIFO (first in, first out)
buffer and delivers one image frame for each simulation time step. If the buffer underflows, the block
waits for up to 10 seconds until a new frame is in the buffer.
The block has no input ports. You can configure the block to have either one output port or three
output ports corresponding to the uncompressed color bands red, green, and blue or Y, Cb, and Cr.
For more information about configuring the output ports, see “Output” on page 20-2.
For an example of how to use this block, see “Save Video Data to a File” on page 9-6.
• The From Video Device block supports the use of Simulink Accelerator mode. This feature speeds
up the execution of Simulink models.
• The From Video Device block supports the use of model referencing. This feature lets your model
include other Simulink models as modular components.
• The From Video Device block supports the use of “Code Generation” on page 20-7 along with
the packNGo function to group required source code and dependent shared libraries.
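The packNGo workflow mentioned above can be sketched as follows. The model name 'vidModel' is a placeholder, and the build folder name depends on the selected code generation target (a GRT target is assumed here).

```matlab
% Generate code for the model with Simulink Coder, then bundle the
% generated sources and dependent shared libraries into a zip file.
slbuild('vidModel');                                  % build the model
load(fullfile('vidModel_grt_rtw', 'buildInfo.mat'));  % loads the buildInfo variable
packNGo(buildInfo, {'packType', 'flat'});             % package code and libraries
```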
Ports
Output
Video output signal, specified as an m-by-n-by-3 matrix, where m represents the height of the video
image and n represents the width of the video image.
Dependencies
• To enable this port, set Ports mode to One multidimensional signal.
• To specify the output video signal data type for this port, set Data type.
Data Types: single | double | int8 | uint8 | int16 | uint16 | int32 | uint32
20-2
From Video Device
R, G, B
RGB video output signal, specified as an m-by-n matrix, where m represents the height of the video
image and n represents the width of the video image. R, G, and B are separate output ports that each
have the same dimensions.
Dependencies
• To enable this port, set Output color space to rgb and Ports mode to Separate color
signals.
• To specify the output video signal data type for this port, set Data type.
Data Types: single | double | int8 | uint8 | int16 | uint16 | int32 | uint32
Y, Cb, Cr
YCbCr video output signal, specified as an m-by-n matrix, where m represents the height of the video
image and n represents the width of the video image. Y, Cb, and Cr are separate output ports that
each have the same dimensions.
Dependencies
• To enable this port, set Output color space to YCbCr and Ports mode to Separate color
signals.
• To specify the output video signal data type for this port, set Data type.
Data Types: single | double | int8 | uint8 | int16 | uint16 | int32 | uint32
Parameters
The following fields appear in the Block Parameters dialog box. If your selected device does not
support a feature, the corresponding field does not appear in the dialog box.
Device
The image acquisition device to which you want to connect. The items in the list vary, depending on
which devices you have connected to your system. All video capture devices supported by Image
Acquisition Toolbox software are supported by the block.
Video format
Shows the video formats supported by the selected device. This list varies with each device. If your
device supports the use of camera files, From camera file is one of the choices in the list.
Dependencies
• To enable the Camera file parameter, set Video format to From camera file. This option only
appears if your selected device supports camera raw image files. Enter the camera file path and
file name, or use the Browse button to locate it.
20-3
Video source
Available input sources for the specified device and format. Click the Edit properties... button to
open the Property Inspector and edit the source properties.
Open the Property Inspector to edit video source device-specific properties, such as brightness and
contrast. The properties that are listed vary by device. Properties that can be edited are indicated by
a pencil icon or a drop-down list in the table. Properties that are grayed out cannot be edited. When
you close the Property Inspector, your edits are saved.
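A command-line analog of the Property Inspector is shown below. The adaptor name and device index are assumptions, and device-specific property names such as Brightness vary by camera.

```matlab
% Inspect and edit device-specific source properties from the command line.
vid = videoinput('winvideo', 1);
src = getselectedsource(vid);
propinfo(src)            % list properties with constraints and default values
src.Brightness = 100;    % set a device-specific property, if the camera supports it
delete(vid)
```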
Enable hardware triggering
This option appears only if the selected device supports hardware triggering. Select the check box to
enable hardware triggering. After you enable triggering, you can select the Trigger configuration.
Trigger configuration
Dependencies
• To enable the Trigger configuration parameter, select the Enable hardware triggering
parameter. This option appears only if the selected device supports hardware triggering. The
configuration choices are listed by trigger source/trigger condition. For example,
TTL/fallingEdge means that TTL is the trigger source and the falling edge of the signal is the
condition that triggers the hardware.
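The trigger source/condition pairs in the list correspond to what the toolbox reports at the command line. The adaptor name below is an assumption; only devices that support hardware triggering return entries.

```matlab
% Query available hardware trigger configurations and apply the first one.
vid = videoinput('gige', 1);
configs = triggerinfo(vid, 'hardware')   % struct array of source/condition pairs
triggerconfig(vid, 'hardware', ...
    configs(1).TriggerCondition, configs(1).TriggerSource);
delete(vid)
```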
ROI position
Use this field to input a row vector that specifies the region of acquisition in the video image. The
format is [row, column, height, width]. The default values for row and column are 0. The default
values for height and width are set to the maximum allowable value, indicated by the resolution of the
video format. Change the values in this field only if you do not want to capture the full image size.
Output color space
Use this field to select the color space for devices that support color. If your device supports Bayer
sensor alignment, bayer is also available.
Bayer sensor alignment
Dependencies
• To enable the Bayer sensor alignment parameter, set Output color space to bayer. This
option is only available if your device supports Bayer sensor alignment. Use this parameter to set
the 2-by-2 pixel alignment of the Bayer sensor. Possible sensor alignment options are grbg
(default), gbrg, rggb, and bggr.
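What the bayer output color space does can be sketched with toolbox functions and the Image Processing Toolbox demosaic function. The adaptor name and format string are assumptions; match the alignment string to your camera's sensor.

```matlab
% Acquire a raw Bayer-pattern frame and interpolate it to RGB.
vid = videoinput('gentl', 1, 'BayerGR8_640x480');
raw = getsnapshot(vid);        % single-band image in the sensor's Bayer pattern
rgb = demosaic(raw, 'grbg');   % convert using the 2-by-2 sensor alignment
delete(vid)
```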
Preview
Preview the video image. Clicking this button opens the Video Preview window. While preview is
running, the image adjusts to changes you make in the parameter dialog box. Use the Video Preview
window to set up your image acquisition in the way you want it to be acquired by the block when you
run the model.
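The Preview button has a direct command-line equivalent in the toolbox. The adaptor name and device index below are assumptions.

```matlab
% Open and close a live preview window from the command line.
vid = videoinput('winvideo', 1);
preview(vid)         % opens the Video Preview window
% ... adjust source properties and watch the live image update ...
closepreview(vid)    % close the preview window
delete(vid)
```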
20-4
Block sample time
Specify the sample time of the block during the simulation. The sample time is the rate at which the
block is executed during simulation.
Note The block sample time does not set the frame rate on the device that is used in simulation. The
frame rate is determined by the video format specified (standard format or from a camera file). Some
devices even list frame rate as a device-specific source property. Frame rate is not related to the
Block sample time option in the dialog. The block sample time defines the rate at which the block
executes during simulation time.
Ports mode
This option appears only if your device supports using either one output port or multiple output ports
for the color bands. Use this option to specify either a single output port for all color spaces, or one
port for each band (for example, R, G, and B). When you select One multidimensional signal,
the output signal is combined into one line consisting of signal information for all color signals. Select
Separate color signals if you want to use three ports corresponding to the uncompressed red,
green, and blue color bands. Note that some devices use Y, Cb, and Cr for the separate color signals.
Note The block acquires data in the default ReturnedColorSpace setting for the specified device
and format.
Data type
The image data type when the block outputs frames. This data type indicates how image frames are
returned from the block to Simulink. This option supports all MATLAB numeric data types.
Read all frames
Select to capture all available image frames. If you do not select this option, the block takes the latest
snapshot of one frame, which is equivalent to using the getsnapshot function in the toolbox. If you
select this option, the block queues incoming image frames in a FIFO (first in, first out) buffer. The
block still gives you one frame, the oldest in the buffer, every time step and ensures that no frames
are lost. This option is equivalent to using the getdata function in the toolbox.
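The two acquisition modes map to toolbox functions as sketched below. The adaptor name and format string are assumptions.

```matlab
% Compare the block's two acquisition modes at the command line.
vid = videoinput('winvideo', 1, 'RGB24_640x480');

% Read all frames cleared: take the latest snapshot of a single frame.
frame = getsnapshot(vid);

% Read all frames selected: buffered FIFO acquisition, oldest frame first.
vid.FramesPerTrigger = Inf;
start(vid);
pause(1);                  % let frames accumulate in the buffer
oldest = getdata(vid, 1);  % remove and return the oldest buffered frame
stop(vid);
delete(vid)
```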
Use this option to return skeleton information in Simulink during simulation and code generation. You
can output metadata information in normal, accelerator, and deployed simulation modes. Each
metadata item in the Selected Metadata list becomes an output port on the block.
20-5
The All Metadata section lists the metadata that is associated with the Kinect depth sensor.
This section is only visible when a Kinect depth sensor is selected. The All Metadata list shows the
available metadata. The Selected Metadata list shows which metadata items are returned to
Simulink. This is empty by default. To use a metadata item, add it from the All Metadata to the
Selected Metadata list by selecting it in the All Metadata list and clicking the Add button (blue
arrow icon). The Remove button (red X icon) removes an item from the Selected Metadata list. You
can also use the Move up and Move down buttons to change the order of items in the Selected
Metadata list. You can select multiple items at once.
20-6
For example, if three metadata items are in the Selected Metadata list, clicking Apply creates an
output port on the block for each of those metadata items. The first port is the depth frame.
For descriptions and information on these metadata fields and using Kinect for Windows with the
Image Acquisition Toolbox, see “Acquiring Image and Skeletal Data Using Kinect” on page 12-7.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
See “Code Generation with From Video Device Block” on page 9-4.
See Also
Topics
“Save Video Data to a File” on page 9-6
“Acquiring Image and Skeletal Data Using Kinect” on page 12-7
Introduced in R2007a
20-7