Desktop Java

OpenCV Object Detection Java Swing Viewer

1. Introduction

This post introduces the OpenCV Object Detection Java Swing Viewer, which builds upon the concepts covered in my previous posts (see References 1 and 2).
In those earlier articles, I discussed how to use OpenCV’s inference models for object detection and how to develop a Java Swing-based media viewer that enables users to select an input source—Image, Video, or WebCam—for both viewing and detection.
In particular, this implementation integrates Java Swing UI components with OpenCV’s inference engine to provide real-time object detection capabilities.

2. Setting Up Development Environment

Before getting started, you’ll need to install the OpenCV SDK and Apache NetBeans, and ensure Java is properly set up on your system.
You can download them from the following links:

3. UI Layout Overview

Figure 1 below illustrates the overall layout of the OpenCV Object Detection Java Swing Viewer.

Figure 1. The four main areas of the OpenCV Object Detection Java Swing Viewer.

The OpenCV Object Detection Java Swing Viewer is composed of four main areas:

  • Screen Area: Displays the selected input source, such as an image, video file, or webcam stream.
  • User Option Area: Allows the user to select an object detection model and configure related parameters.
  • Status Information Area: Shows paths and details of the selected model, configuration, and detection results.
  • Execution Area: Contains control buttons to start, pause, or stop the detection process.

In the following sections, I will explain each of these areas in more detail.

4. User Option Area

This section allows users to select or enter information for using one of the supported object detection models: Mask R-CNN or YOLO (v3/v4), as introduced in my earlier post on OpenCV Java Object Detection (see References 1 and 2).

4.1 Selecting an Image Source

In the User Option Area, users can choose an image source through the following steps:

  1. Select the source type — Image, Video, or WebCam — from the dropdown menu.
  2. Depending on the selected source:
    • If Image or Video is selected, a file dialog will open, allowing you to choose a compatible file (e.g., PNG or JPEG for images, MP4 for videos).
    • If WebCam is selected, no file dialog will appear, and the application will attempt to access the webcam directly (see the sketch after this list).
  3. The selected image source will be immediately displayed in the Screen Area.
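
For the WebCam case, the sketch below illustrates how OpenCV typically opens the default camera. It is illustrative only; the viewer’s actual capture logic lives in CVRenderer and is shown later in this post.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

// Illustrative sketch: grabbing one frame from the default webcam
// (device index 0) with OpenCV's Java bindings.
public class WebCamCheck {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        VideoCapture capture = new VideoCapture(0);
        if (capture.isOpened()) {
            Mat frame = new Mat();
            if (capture.read(frame)) {
                // frame now holds one BGR image captured from the webcam
                System.out.println("Captured frame: " + frame.width() + "x" + frame.height());
            }
        }
        capture.release();
    }
}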

For example, the layout shown below illustrates the scenario where the user selects Video, and a video file is loaded and displayed.

Figure 2. File dialog shown for the type chosen in the select box

4.2 Selecting an Object Detection Model

The viewer supports two inference models: Mask R-CNN and YOLO (v3/v4).
Since both a model file and a configuration file are required for object detection, users must select and configure these files in advance.

4.2.1 Model Selection Dialog

Before starting object detection, you must choose either Mask R-CNN or YOLO as the model type.
Once a model is selected from the dropdown menu, a configuration dialog will appear, allowing the user to set relevant parameters.

Figure 3. Input Dialog Layout for Model

The dialog layout typically includes the following steps:

  1. Select a model type (Mask R-CNN or YOLO) from the dropdown menu in the main window.
  2. A dialog window immediately appears to prompt for further configuration.
  3. Choose the model and configuration files using the file browser. File filters will limit the selection to supported formats.
  4. Select the inference device — either CPU or GPU. The default setting is CPU.

The following table provides an overview of the supported model file formats:

File Format | Description | Supported Model
.pbtxt | Configuration file for Mask R-CNN | Mask R-CNN
.pb | Model file for Mask R-CNN | Mask R-CNN
.cfg | Configuration file for YOLO | YOLO
.weights | Model file for YOLO | YOLO
Table 1. Model Description and Supported File Formats
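
For reference, these file pairs map directly onto OpenCV’s DNN loaders. The sketch below assumes the standard org.opencv.dnn API; the file names are placeholders, not the project’s actual paths.

import org.opencv.core.Core;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

// Illustrative only: loading the model/config pairs from Table 1.
public class ModelLoadSketch {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    public static void main(String[] args) {
        Net maskRcnn = Dnn.readNetFromTensorflow(
                "frozen_inference_graph.pb",           // .pb model file
                "mask_rcnn_inception_v2_coco.pbtxt");  // .pbtxt configuration file

        Net yolo = Dnn.readNetFromDarknet(
                "yolov4.cfg",        // .cfg configuration file
                "yolov4.weights");   // .weights model file
    }
}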

An example of how this dialog appears in the main layout is shown below.

Figure 4. Overview of the model parameter dialog in the main layout

4.2.2 Additional User Check Options

Additional checkboxes and text fields in the User Option Area allow users to customize the behavior of the object detection process using OpenCV.

Figure 5. Additional User Check Options.

The OpenCV Object Detection Java Swing Viewer provides various configurable options, such as adjusting the detection confidence threshold and displaying detailed detection information on the screen.

Table 2 lists the available user options.

Name | Functionality | Component Type
Threshold | Set a custom confidence threshold for detection. | Text field
Recording | Save the output image or video to a file. | Checkbox
Use Mask | Overlay the detection mask on the screen. | Checkbox
Detect Info | Display detection time and related information. | Checkbox
Table 2. Additional User Options

5. Status Information Area

The Status Information Area displays various details such as the file paths for the model, configuration file, and selected media source, as well as real-time detection information.
Therefore, this area is intended for reference only and is not editable by the user.

5.1 Layout of Status Information Area

Figure 6 illustrates the conceptual layout of the Status Information Area.

Figure 6. Layout of the Status Information Area.

The table below describes each field in the Status Information Area; all components are read-only text fields.

Label | Description | Component Type | Editable
Detect Info | Displays information about object detection | Text Field | Not editable
Media | File path of the selected image source | Text Field | Not editable
Config Path | Configuration file path of the model | Text Field | Not editable
Model Path | Model (DNN) file path | Text Field | Not editable
Table 3. Status Information Area Components

6. Execution Area

The Execution Area controls the object detection process using the selected model: users can start, pause, or stop the detection and rendering of image sources.
For example, a user may pause detection to adjust the confidence threshold or to save a specific frame.

6.1 Layout of Execution Area

This area contains three buttons: Detect, Pause, and Stop.

Figure 7. Layout of the Execution Area.

6.1.1 Description

The table below describes the role of each button in controlling the detection process.

Label | Description | Component Type
Detect | Starts the object detection process | Button
Pause | Pauses the detection process | Button
Stop | Stops the detection process | Button
Table 4. Components Description of Execution Area.

7. Screen Area Description

The Screen Area serves as the core display region of the viewer, such as rendering image or video frames selected in the User Option Area or overlaying detected bounding boxes using OpenCV’s DNN module. Moreover, it is tightly integrated with both the Execution Area and the user-configured parameters.

Rendering Steps:

1. Select an input source (Image, Video, or WebCam) to render in the Screen Area.

Figure 8. Rendering an image source.

2. Choose an inference model to detect objects in the displayed frame.

Figure 9. Object detection on the Screen Area.
Action | Screen Area | Note
No User Action | Renders frames from the image source | The image source must be an image file, a video file, or the webcam.
Click Detect button | Renders the image with bounding boxes and confidence values | Mask R-CNN can additionally overlay the detection mask produced by the model.
Click Pause button | Pauses rendering of image frames | 
Click Stop button | Stops rendering of image frames | 
Table 5. Interaction Between Execution and Screen Areas

8. Role of Services

The following use case diagram illustrates the main functional responsibilities of the application. The system is divided into two primary operations:

  • Rendering from image source.
  • Running Object Detection.

Figure 10. Functional Role Diagram.

8.1 Rendering Image Sources

Users select one of three input types—Image, Video, or WebCam—which then prompts the viewer to begin rendering frames and initialize the DNN model with the selected files.

Figure 11. Rendering an image source

8.2 Running Object Detection

After rendering begins, users can initiate detection, and during runtime, they can adjust the detection threshold to fine-tune the results.

Figure 12. Flow of object detection process

9. Association Class Diagram

The viewer is implemented as a JFrame-based application. Furthermore, it incorporates a JDialog for model configuration and a JPanel for image rendering, all using Java Swing.

Figure 13. Class Association Diagram

Table 6 lists the main classes of the OpenCV Object Detection Java Swing Viewer and their roles:

Class Name | Role of the Class
CVSwingObjectDetector | Main entry-point class extending JFrame. Hosts the GUI, including the CVRenderPane.
CVRenderPane | JPanel subclass that displays images. Implements ICvController and registers an IScreenRenderer to handle frame updates.
CVRenderer | Responsible for media decoding and detection logic. Converts between Mat and BufferedImage.
IObjectDetector | Interface for accessing raw and detected image data.
ObjectDetector | Base class for both MaskRCNNDetector and YoloDetector. Manages DNN configuration and loading.
MaskRCNNDetector, YoloDetector | Concrete implementations that perform detection using OpenCV’s DNN module.
Table 6. Main Classes and Their Roles
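
Since CVRenderer converts between Mat and BufferedImage, the sketch below shows one common conversion approach using OpenCV’s Java bindings; the class’s actual implementation may differ.

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import org.opencv.core.Mat;

// A widely used Mat-to-BufferedImage conversion, shown for illustration.
public final class MatConvertSketch {
    public static BufferedImage matToBufferedImage(Mat mat) {
        int type = (mat.channels() == 1)
                ? BufferedImage.TYPE_BYTE_GRAY
                : BufferedImage.TYPE_3BYTE_BGR; // OpenCV stores color frames as BGR
        BufferedImage image = new BufferedImage(mat.cols(), mat.rows(), type);
        byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        mat.get(0, 0, pixels); // copy the Mat's bytes straight into the image buffer
        return image;
    }
}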

10. GUI Design and Implementation

We now use the Apache NetBeans GUI designer to build the main interface of the OpenCV Object Detection Java Swing Viewer, based on the UI layout described above.

Figure 14. Main Layout Design.

Specifically, the Apache NetBeans GUI builder provides a visual interface for designing the base layout. For more guidance on creating GUI applications with Apache NetBeans, you can refer to my earlier post, “OpenCV-Based Media Java Swing Viewer” (see Reference 1).

10.1 User Option Area Design

The User Option Area allows the user to first select the image source—Image, Video, or WebCam—and then choose the object detection model—Mask R-CNN or YOLO. To support this workflow, we designed two JDialog-based dialogs corresponding to these selection boxes; together they form the GUI layout shown below.

Figure 15. User Option Area Layout Design.

You can also refer to the class diagram to understand how the classes interact in this implementation.

Figure 16. Class diagram of User Option Area.

The main class, CVSwingObjectDetector, stores the user selections in a singleton class named DetectorInfoMgr, which uses a thread-safe ConcurrentHashMap to manage the following (a minimal sketch follows the list):

  • Image source path
  • Inference model file
  • Configuration file of the model
  • Current application state
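
A minimal sketch of such a singleton is shown below. The key constants and eager initialization are assumptions for illustration, based only on how DetectorInfoMgr is called in the snippets that follow.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hedged sketch of the DetectorInfoMgr singleton; only the calls used in
// this post (instance(), put(), get()) are modeled.
public final class DetectorInfoMgr {
    public static final String INPUT_TYPE  = "INPUT_TYPE";
    public static final String INPUT_FILE  = "INPUT_FILE";
    public static final String MODEL_PATH  = "MODEL_PATH";
    public static final String CONFIG_PATH = "CONFIG_PATH";
    public static final String RUN_STATUS  = "RUN_STATUS";

    private static final DetectorInfoMgr INSTANCE = new DetectorInfoMgr();
    private final ConcurrentMap<String, Object> store = new ConcurrentHashMap<>();

    private DetectorInfoMgr() { }

    public static DetectorInfoMgr instance() { return INSTANCE; }

    public void put(String key, Object value) { store.put(key, value); }

    public Object get(String key) { return store.get(key); }
}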

10.2 User Option Area Implementation

There are two combo box types used in the User Option Area:

  • File Type Selection Dialog: Image, Video, and WebCam
  • Model Selection Dialog: Mask R-CNN or YOLO

Here are the basic steps for implementing the User Option Area:

  1. Declare and configure a JFileChooser instance.
  2. Apply file type filters based on the selected input type (e.g., JPG/PNG for images, MP4 for videos).
  3. Store the selected file path and update the corresponding text field (e.g., the path of the chosen image or video stream). The DetectorInfoMgr class is then responsible for saving the type and path values selected by the user.

Accordingly, a simplified version of the CVSwingObjectDetector class is shown below:

public class CVSwingObjectDetector extends JFrame  {
    
   ...
	private void initComponents() {

		...

		ComboCVFileType.setModel(
           new DefaultComboBoxModel<>(
             new String[] { "None", "Image", "Movie", "WebCam" }));
		ComboCVFileType.addActionListener(new ActionListener() {
			public void actionPerformed(ActionEvent evt) {
				ComboCVFileTypeActionPerformed(evt);
			}
		});

		...

		ComboDetectModel.setModel(
          new DefaultComboBoxModel<>(new String[] { "None", "MRCNN", "YOLO" }));
		ComboDetectModel.addActionListener(new ActionListener() {
			public void actionPerformed(ActionEvent evt) {
				ComboDetectModelActionPerformed(evt);
			}
		});

		...
	}


    private void ComboCVFileTypeActionPerformed(java.awt.event.ActionEvent evt) {
       ...
       String initDir = DetectorInfoMgr.instance().getSelectedDirectory();
       
       ...
       
       switch(itemName)
       {
           case IDetectInfo.TYPE_IMG:
              ...
            break;
           case IDetectInfo.TYPE_MOV:
              ...
               break;
            case IDetectInfo.TYPE_CAM:
               break;
            default:
               //do nothing
       }
       
        if(itemName.equals(IDetectInfo.TYPE_NONE))
        {
            ...
            DetectorInfoMgr.instance()
             .put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_STOP);
            ...
            return;
        }
        else
        {
            DetectorInfoMgr.instance()
             .put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_INIT);
        }
        
        switch(itemName)
        {
           case IDetectInfo.TYPE_IMG:
               ...
                
                switch(returnType)
                {
                    case JFileChooser.APPROVE_OPTION:
                        chooseFile = jFileChooser.getSelectedFile();
                        txtMediaFile.setText(chooseFile.getAbsolutePath());
                        
                       DetectorInfoMgr.instance()
                      .put(DetectorInfoMgr.INPUT_TYPE, DetectorInfoMgr.TYPE_IMG);
                        DetectorInfoMgr.instance()
                        .put(DetectorInfoMgr.INPUT_FILE, txtMediaFile.getText());
                       ...
                }
               break;
           case IDetectInfo.TYPE_MOV:
               if(jFileChooser == null) return;
               returnType = jFileChooser.showDialog(this, title);
               
                switch(returnType)
                {
                    case JFileChooser.APPROVE_OPTION:
                        chooseFile = jFileChooser.getSelectedFile();
                        txtMediaFile.setText(chooseFile.getAbsolutePath());
                        
                        DetectorInfoMgr.instance()
                     .put(DetectorInfoMgr.INPUT_TYPE, DetectorInfoMgr.TYPE_MOV);
                        DetectorInfoMgr.instance()
                     .put(DetectorInfoMgr.INPUT_FILE, txtMediaFile.getText());
                        mCvController.startRenderFrame();
                        break;
                    ...
                        //do nothing
                }
               break;
           case IDetectInfo.TYPE_CAM:
               DetectorInfoMgr.instance()
               .put(DetectorInfoMgr.INPUT_TYPE, DetectorInfoMgr.TYPE_CAM);
               DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, "");
               ...
       }
    }

   private void ComboDetectModelActionPerformed(java.awt.event.ActionEvent evt) {
       ...
       DetectorInfoMgr.instance()
        .put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_INIT);
       
       switch(itemName)
       {
           case IDetectInfo.TYPE_MRCNN:
               DetectorInfoMgr
               .instance().put(DetectorInfoMgr.MODEL, IDetectInfo.TYPE_MRCNN);
               ...
               break;
           case IDetectInfo.TYPE_YOLO:
              DetectorInfoMgr.instance()
              .put(DetectorInfoMgr.MODEL, IDetectInfo.TYPE_YOLO);
             ...
              break;
           default:
               //do nothing
       }
       
       
    }

   ...
    
    IModelSelect mModelSelected = new IModelSelect()
    {
        @Override
        public void selectModel(String sConfigFile, String sModelFile, String sDeviceType) {
            
            ...
            
            DetectorInfoMgr.instance()
             .put(DetectorInfoMgr.MODEL_PATH, sModelFile);
            DetectorInfoMgr.instance()
             .put(DetectorInfoMgr.CONFIG_PATH, sConfigFile);
            DetectorInfoMgr.instance()
             .put(DetectorInfoMgr.DEVICE, sDeviceType);
            
           ...
        }
    };
    
}

10.3 File Type Selection Dialog

The application supports three image source types: Image, Video, and WebCam. When the user selects Image or Video, a file chooser dialog appears; for example, selecting Video triggers the file selection dialog shown below.

Figure 17. Example of the File dialog for Video.

As a continuation, the following code demonstrates how file type selection is handled and how the DetectorInfoMgr instance is updated accordingly.

	private void initComponents() {

		...

		ComboCVFileType.setModel(
           new DefaultComboBoxModel<>(
             new String[] { "None", "Image", "Movie", "WebCam" }));
		ComboCVFileType.addActionListener(new ActionListener() {
			public void actionPerformed(ActionEvent evt) {
				ComboCVFileTypeActionPerformed(evt);
			}
		});

		...
		...
	} 
   private void ComboCVFileTypeActionPerformed(java.awt.event.ActionEvent evt) {                                                
      ...
   //1.Declare and configure a JFileChooser instance.
   File chooseFile = null;
   JFileChooser jFileChooser = null;
   FileNameExtensionFilter extFilter;
   String title = "choose ";
   
//2.Apply file type filters based on the selected input type (e.g., JPG, PNG for Image; MP4 for Video).
   switch(itemName)
   {
	   case IDetectInfo.TYPE_IMG:
		   jFileChooser = new JFileChooser(initDir);
		   extFilter 
            = new FileNameExtensionFilter("images", "jpg", "jpeg", "png");
		   jFileChooser.setFileFilter(extFilter);
		  
		   title += itemName;

		   break;
	   case IDetectInfo.TYPE_MOV:
		   jFileChooser = new JFileChooser(initDir);
		   extFilter = new FileNameExtensionFilter("movies", "mp4", "avi");
			jFileChooser.setFileFilter(extFilter);
		   title += itemName;
		   
		   break;
		case IDetectInfo.TYPE_CAM:
		   break;
		default:
		   //do nothing
   }
   
	...
	//Store the selected file path and update the relevant text field values selected by the user.
	switch(itemName)
	{
	   case IDetectInfo.TYPE_IMG:
		   if(jFileChooser == null) return;
		   int returnType = jFileChooser.showDialog(this, title);
			
			switch(returnType)
			{
				case JFileChooser.APPROVE_OPTION:
					chooseFile = jFileChooser.getSelectedFile();
					txtMediaFile.setText(chooseFile.getAbsolutePath());
					
					DetectorInfoMgr.instance()
                    .put(DetectorInfoMgr.INPUT_TYPE, DetectorInfoMgr.TYPE_IMG);
					DetectorInfoMgr.instance()
                    .put(DetectorInfoMgr.INPUT_FILE, txtMediaFile.getText());
					mCvController.startRenderFrame();
					break;
				case JFileChooser.CANCEL_OPTION:
					...
					break;
				default:
					//do nothing
			}
		   break;
	   case IDetectInfo.TYPE_MOV:
		   if(jFileChooser == null) return;
		   returnType = jFileChooser.showDialog(this, title);
		   
			switch(returnType)
			{
				case JFileChooser.APPROVE_OPTION:
					chooseFile = jFileChooser.getSelectedFile();
					txtMediaFile.setText(chooseFile.getAbsolutePath());
					
					DetectorInfoMgr.instance()
                    .put(DetectorInfoMgr.INPUT_TYPE, DetectorInfoMgr.TYPE_MOV);
					DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, txtMediaFile.getText());
					mCvController.startRenderFrame();
					break;
				case JFileChooser.CANCEL_OPTION:
					 ...
					break;
				default:
					//do nothing
			}
		   break;
	   case IDetectInfo.TYPE_CAM:
		   DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_TYPE, DetectorInfoMgr.TYPE_CAM);
		   DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, "");
		   mCvController.startRenderFrame();
		   break;
	   default:
		   //do nothing
   }
}  

10.4 Model Selection Dialog

After selecting an image source, the user must choose one of the supported models—Mask R-CNN or YOLO—and provide both the model file and its configuration file. Additionally, the user can choose the inference device: CPU or GPU (both supported by OpenCV).
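
As a hedged sketch of how a CPU/GPU choice typically translates into OpenCV, the DNN module exposes backend and target settings; how the viewer actually wires this option is an assumption here.

import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

// Illustrative mapping from a "CPU"/"GPU" choice onto OpenCV DNN settings.
public final class DeviceUtil {
    private DeviceUtil() { }

    public static void applyDevice(Net net, String deviceType) {
        if ("GPU".equals(deviceType)) {
            net.setPreferableBackend(Dnn.DNN_BACKEND_CUDA); // requires a CUDA-enabled OpenCV build
            net.setPreferableTarget(Dnn.DNN_TARGET_CUDA);
        } else {
            net.setPreferableBackend(Dnn.DNN_BACKEND_OPENCV);
            net.setPreferableTarget(Dnn.DNN_TARGET_CPU);
        }
    }
}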

Figure 18. Select a DNN model – Mask R-CNN

When the user selects a model from the combo box (JComboBox), the ComboDetectModelActionPerformed method is automatically triggered; consequently, it handles the model selection logic as shown below.

public class CVSwingObjectDetector extends JFrame  {
    
   ...
    
  private void ComboDetectModelActionPerformed(java.awt.event.ActionEvent evt) {
       JComboBox comboBox = null;
       
       if( evt.getSource() instanceof JComboBox)
       {
           comboBox = (JComboBox)evt.getSource();
       }
       
       if(comboBox == null) return;
       
       String itemName = (String)comboBox.getSelectedItem();

       ...
       
       switch(itemName)
       {
           case IDetectInfo.TYPE_MRCNN:
               DetectorInfoMgr.instance().put(DetectorInfoMgr.MODEL, IDetectInfo.TYPE_MRCNN);
               MRCnnDialog jdlgMrcnn = new MRCnnDialog(mDetectFrame, true);
               jdlgMrcnn.setIModelSelect(mModelSelected);
               mSelectDialog = jdlgMrcnn;
               mSelectDialog.setVisible(true);
               break;
           case IDetectInfo.TYPE_YOLO:
              DetectorInfoMgr.instance().put(DetectorInfoMgr.MODEL, IDetectInfo.TYPE_YOLO);
              YoloDialog jdlgYolo = new YoloDialog(mDetectFrame, true);
              jdlgYolo.setIModelSelect(mModelSelected);
              mSelectDialog = jdlgYolo;
              mSelectDialog.setVisible(true);
              break;
           default:
               //do nothing
       }
       
    }
   ...  
    private IModelSelect mModelSelection;
    public void setIModelSelect(IModelSelect modelSelection)
    {
        mModelSelection = modelSelection;
    }
}

10.5 Selection Callback

The class association diagram below illustrates how either YoloDialog or MRCnnDialog is selected based on the ComboDetectModelActionPerformed() method, which is triggered by the combo box.

Figure 19. Dialog selection triggered by the combo box

These dialogs provide a setIModelSelect() method to register the IModelSelect interface, allowing the selected model parameters to be passed back to the main class, which then forwards them to the DetectorInfoMgr using the same callback interface.

Specifically, when YOLO is selected, the YoloDialog appears, gathers input through file pickers and combo boxes, and subsequently returns the collected information to the main class via the IModelSelect callback interface.

package com.tobee.opencv.swing.dlg;

...

public class YoloDialog extends javax.swing.JDialog {

    ...
    
    private IModelSelect mModelSelection;
    public void setIModelSelect(IModelSelect modelSelection)
    {
        mModelSelection = modelSelection;
    }
    ...
}
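
The IModelSelect interface itself is not listed in this post; judging from how it is used, a minimal sketch would be:

// Minimal sketch of the IModelSelect callback, inferred from its usage;
// the actual interface may declare additional members.
public interface IModelSelect {
    void selectModel(String sConfigFile, String sModelFile, String sDeviceType);
}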

The process follows these steps:

  1. Determine the selected model (YOLO or Mask R-CNN).
  2. Open the appropriate dialog (YoloDialog or MRCnnDialog).
  3. Let the user choose model/config files and the inference device.
  4. Save the selected settings to DetectorInfoMgr via the IModelSelect callback.

A structure of the YoloDialog class is shown below:

package com.tobee.opencv.swing.dlg;

...

public class YoloDialog extends javax.swing.JDialog {

    /**
     * Creates new form YoloDialog
     */
    public YoloDialog(java.awt.Frame parent, boolean modal) {
        super(parent, modal);
        initComponents();
    }

    /**
     * This method is called from within the constructor to initialize the form.
     * WARNING: Do NOT modify this code. The content of this method is always
     * regenerated by the Form Editor.
     */
    @SuppressWarnings("unchecked")
    // <editor-fold defaultstate="collapsed" desc="Generated Code">//GEN-BEGIN:initComponents
    private void initComponents() {

        ...
        btnConfigYolo.setText("Find");
        btnConfigYolo.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                settingUpYoloModel(evt);
            }
        });

        btnModelYolo.setText("Find");
        btnModelYolo.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                settingUpYoloModel(evt);
            }
        });

        btnConfirmYolo.setText("Confirm");
        btnConfirmYolo.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                settingUpYoloModel(evt);
            }
        });


    }// </editor-fold>//GEN-END:initComponents

    private void settingUpYoloModel(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_settingUpYoloModel
       ...
       
       if(btnFileSelect == btnConfigYolo)
       {
            ...
       }
       else if(btnFileSelect == btnModelYolo)
       {
           ...
       }
       else if(btnFileSelect == btnConfirmYolo)
       {
          mModelSelection.selectModel(txtConfigYolo.getText(), txtModelYolo.getText(), (String)comboDeviceType.getSelectedItem());
       }
    }//GEN-LAST:event_settingUpYoloModel

    private void comboDeviceTypeActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_comboDeviceTypeActionPerformed
       ...
       
       switch(itemName)
       {
           case IDetectInfo.CPU:
               DetectorInfoMgr.instance().put(DetectorInfoMgr.DEVICE, DetectorInfoMgr.CPU);
              
               break;
           case IDetectInfo.GPU:
              DetectorInfoMgr.instance().put(DetectorInfoMgr.DEVICE, DetectorInfoMgr.GPU);
             
              break;
           default:
               //do nothing
       }
    }//GEN-LAST:event_comboDeviceTypeActionPerformed

    
    private IModelSelect mModelSelection;
    public void setIModelSelect(IModelSelect modelSelection)
    {
        mModelSelection = modelSelection;
    }
    ...
}

The following code shows that YoloDialog contains two “Find” buttons and one “Confirm” button, each of which uses an ActionListener to eventually call the settingUpYoloModel() method.

private void initComponents() {

        ...
        btnConfigYolo.setText("Find");
        btnConfigYolo.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                settingUpYoloModel(evt);
            }
        });

        btnModelYolo.setText("Find");
        btnModelYolo.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                settingUpYoloModel(evt);
            }
        });

        btnConfirmYolo.setText("Confirm");
        btnConfirmYolo.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                settingUpYoloModel(evt);
            }
        });

    }// </editor-fold>//GEN-END:initComponents
	
	...
}

In the settingUpYoloModel() method, a JFileChooser dialog is used to select the config and model files, which are then saved into their corresponding text fields; the device type is taken from a combo box.

private void settingUpYoloModel(java.awt.event.ActionEvent evt) {
	JButton btnFileSelect = null;
   
   if( evt.getSource() instanceof JButton)
   {
	   btnFileSelect = (JButton)evt.getSource();
   }
   else return;
   
   String initDir = DetectorInfoMgr.instance().getSelectedDirectory();
	
   if(btnFileSelect == btnConfigYolo)
   {
		JFileChooser fileChooser = new JFileChooser();
		fileChooser.setCurrentDirectory(new File(initDir)); // Default directory
		FileNameExtensionFilter filter = new FileNameExtensionFilter(
								"Cofig File (.cfg)", "cfg");
		fileChooser.setFileFilter(filter);
		
		int returnValue = fileChooser.showOpenDialog(null);
		if (returnValue == JFileChooser.APPROVE_OPTION) {
			File selectedFile = fileChooser.getSelectedFile();
			txtConfigYolo.setText(selectedFile.getAbsolutePath());
		}
   }
   else if(btnFileSelect == btnModelYolo)
   {
	   JFileChooser fileChooser = new JFileChooser();
		fileChooser.setCurrentDirectory(new File(initDir)); // Default directory
		FileNameExtensionFilter filter = new FileNameExtensionFilter(
								"Model File (.weights)", "weights");
		fileChooser.setFileFilter(filter);
		
		int returnValue = fileChooser.showOpenDialog(null);
		if (returnValue == JFileChooser.APPROVE_OPTION) {
			File selectedFile = fileChooser.getSelectedFile();
			txtModelYolo.setText(selectedFile.getAbsolutePath());
		}
   }
   ...
}

When the user clicks the “Confirm” button after providing all required information, the callback is triggered, and each field in the Status Information Area is populated with the entered data.

Figure 20. Confirm button action

The selectModel() method is invoked as a callback when the user makes a selection, as shown below:

private void settingUpYoloModel(java.awt.event.ActionEvent evt) {
        JButton btnFileSelect = null;
       
       if( evt.getSource() instanceof JButton)
       {
           btnFileSelect = (JButton)evt.getSource();
       }
       else return;
       
       String initDir = DetectorInfoMgr.instance().getSelectedDirectory();
        
       if(btnFileSelect == btnConfigYolo)
       {
            ...
       }
       else if(btnFileSelect == btnModelYolo)
       {
           ...
       }
       else if(btnFileSelect == btnConfirmYolo)
       {
          mModelSelection.selectModel(txtConfigYolo.getText(), txtModelYolo.getText(), (String)comboDeviceType.getSelectedItem());
       }
    }
	
    ...
}

A similar process is followed in the MRCnnDialog class. Each dialog includes buttons for selecting files and a Confirm button that triggers the callback.

10.6 Extra User Options

Additional options are provided to customize the viewer’s behavior. Each option is described below:

  • Threshold: Allows users to adjust the confidence threshold for object detection.
    Figure 21. Handling the threshold
  • Recording: Enables saving rendered frames as image files.
  • Detect Info: Displays detection time information for each frame.
    Figure 22. Displaying detection information
  • Use Mask: Enables mask rendering (available only for Mask R-CNN).

Each option is implemented as either a checkbox or an input field in the GUI.
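
As an example of the Threshold option, the text field value could be validated before detection starts. The parsing below is a hedged sketch; the post’s actual code stores txtThreshold.getText() directly (see displayDetectObjectOnScreen() later).

// Hedged sketch: one way to validate the Threshold text field inside the
// main GUI class before detection begins.
private double readThresholdOrDefault() {
    try {
        return Double.parseDouble(txtThreshold.getText().trim());
    } catch (NumberFormatException e) {
        txtThreshold.setText("0.5"); // fall back to the default on bad input
        return 0.5;
    }
}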

10.7 Status Information Area

The Status Information Area of the OpenCV Object Detection Java Swing Viewer displays key information such as the image source path, selected model, model configuration, and detection time.

Figure 23. Status Information Area

The CVSwingObjectDetector class contains an anonymous inner class that implements the IModelSelect interface, which is responsible for saving detection-related information—such as the selected model and configuration file—retrieved from dialogs based on the user’s combo box selection.

IModelSelect mModelSelected = new IModelSelect()
{
	@Override
	public void selectModel(String sConfigFile, String sModelFile, String sDeviceType) {
		
		if(ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_NONE))
		{
			ComboDetectModel.setSelectedItem(IDetectInfo.TYPE_NONE);
			txtThreshold.setText("0.5");
			txtConfig.setText("");
			txtModel.setText("");
			txtDetectionInfo.setText("");
			txtMediaFile.setText("");

			...
			
			btnDetectCVFrame.setEnabled(true);
			txtDetectionInfo.setText("Stop ...");
			return;
		}
		else
		{
			DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_INIT);
		}
		
		txtConfig.setText(sConfigFile);
		txtModel.setText(sModelFile);
		
		...
		txtDeviceType.setText(sDeviceType);
		
		mSelectDialog.setVisible(false);
	   ...
	}
};

10.8 Detection Information Message Bus

While displaying model information is straightforward, displaying detection time is more complex because the detection logic is spread across multiple classes. For this reason, a message bus design pattern is adopted.

Figure 24. Message Bus Class Overview

A new class, InferenceMsgBus, is introduced to send detection time messages from the detection classes to the main GUI class, CVSwingObjectDetector, as demonstrated in the following code.

IModelSelect mModelSelected = new IModelSelect()
{
	@Override
	public void selectModel(String sConfigFile, String sModelFile, String sDeviceType) {
		
		
		...
		
		InferenceMsgBus inferenceMsgBus = (InferenceMsgBus)DetectorInfoMgr.instance().get(DetectorInfoMgr.INFERENCE_MSG_BUS);
		inferenceMsgBus.recvInferenceMessage(InferenceMsgEvent.class, iInferenceMsgRecv);
	}
};

Detection classes such as MaskRCNNDetector and YoloDetector are responsible for calculating and publishing detection time.

package com.tobee.opencv.detector;
...


public class MaskRCNNDetector extends ObjectDetector {
    ...
     
    
    public void detectObject(
        Mat frame, Net dnnNet, Map<String, Mat> outputLayers, 
        String outputFile, double threshold, 
        boolean isSaveOutput, boolean isMaskShading
    ) {
        
        ...
        
        for (int i = 0; i < maxObjCnt; i++) {
            ...
        
        // Put efficiency information. The function getPerfProfile returns the overall time for inference (t)
        // and the timings for each of the layers (in layersTimes).
        MatOfDouble layersTimes = new MatOfDouble();
        double freq = Core.getTickFrequency() / 1000;
        double t = dnnNet.getPerfProfile(layersTimes) / freq;
        String label = String.format("Mask-RCNN, Inference time for a frame : %f ms", t);
        //publish the efficiency information here.
        
        ...
        
    }
    ...
}

The detector publishes this message according to the current run status, as shown below; the CVSwingObjectDetector then subscribes to these messages once the model selection combo box has been processed and the dialogs have returned their data.

package com.tobee.opencv.detector;
...


public class MaskRCNNDetector extends ObjectDetector {
    ...
     
    
    public void detectObject(
        Mat frame, Net dnnNet, Map<String, Mat> outputLayers, 
        String outputFile, double threshold, 
        boolean isSaveOutput, boolean isMaskShading
    ) {
        
        ...
        
        for (int i = 0; i < maxObjCnt; i++) {
            ...
        //publish the efficiency information here.
        if(DetectorInfoMgr.STATUS_STOP
         .equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.RUN_STATUS)) )
        {
            inferenceMsgBus.sendInferenceMessage(new InferenceMsgEvent("stop"));
        }
        else
        {
            inferenceMsgBus.sendInferenceMessage(new InferenceMsgEvent(label));
        }
        
        
        ...
        
    }
    ...
}

Eventually, the CVSwingObjectDetector receives the detection time messages from the message bus.

10.8.1 Create Message Bus

An instance of InferenceMsgBus is created in the constructor of the ObjectDetector class, which serves as the base class for detection models.

public abstract class ObjectDetector implements IObjectDetector {
	...
	protected static InferenceMsgBus inferenceMsgBus;
	...
	protected ObjectDetector(final DNNOption dnnOption)
	{
		this.dnnOption = dnnOption;
		setupOption();
		setupLabelSet(); 
		initDnnNet();
		if(inferenceMsgBus == null)
		{
			inferenceMsgBus = new InferenceMsgBus();
			DetectorInfoMgr.instance().put(DetectorInfoMgr.INFERENCE_MSG_BUS, inferenceMsgBus);
		}
			
	}
	...
}

This instance is stored in the DetectorInfoMgr under the key "INFERENCE_MSG_BUS" for centralized access.

10.8.2 Subscribing to Messages

An anonymous class implementing the IInferenceMsgRecv interface is used to receive detection time messages.

IInferenceMsgRecv iInferenceMsgRecv = new IInferenceMsgRecv()
{
	@Override
	public void onInferenceMsgRecv(IInferenceMsg message)
	{
		// Show the message only when the Detect Info checkbox is selected.
		if (chkInfoOnImage.isSelected() && message instanceof InferenceMsgEvent) {
			InferenceMsgEvent event = (InferenceMsgEvent) message;
			
			txtDetectionInfo.setText(String.format("threshold[%s] : %s", DetectorInfoMgr.instance().get(DetectorInfoMgr.THRESH_HOLD), event.getInferenceMessage()));
		}
	}
};

This interface defines a single method, onInferenceMsgRecv():

public interface IInferenceMsgRecv {
    void onInferenceMsgRecv(IInferenceMsg message);
}

10.8.3 Message Bus Implementation

The InferenceMsgBus class acts as the central message bus for the OpenCV Object Detection Java Swing Viewer. It manages a map of subscriber lists using a Map<Class<?>, List<IInferenceMsgRecv>> structure.

public class InferenceMsgBus {
    private final Map<Class<?>, List<IInferenceMsgRecv>> InferenceMsgMap = new HashMap<>();

    public void recvInferenceMessage(
Class<?> messageType, IInferenceMsgRecv subscriber) 
   {
        List<IInferenceMsgRecv> messageList = InferenceMsgMap.get(messageType);
        if (messageList == null) {
            messageList = new ArrayList<IInferenceMsgRecv>();
            InferenceMsgMap.put(messageType, messageList);
        }
        messageList.add(subscriber);
    }

    public void sendInferenceMessage(IInferenceMsg message) {
        Class<?> messageType = message.getClass();
        List<IInferenceMsgRecv> list = InferenceMsgMap.get(messageType);
        if (list != null) {
            for (IInferenceMsgRecv sub : list) {
                sub.onInferenceMsgRecv(message);
            }
        }
    }
}
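
The message types themselves are simple. Minimal sketches of IInferenceMsg and InferenceMsgEvent, inferred from their usage above, look like this:

// Hedged sketches: a marker interface plus an event carrying a text
// payload, matching new InferenceMsgEvent(label) and getInferenceMessage().
public interface IInferenceMsg { }

public class InferenceMsgEvent implements IInferenceMsg {
    private final String inferenceMessage;

    public InferenceMsgEvent(String inferenceMessage) {
        this.inferenceMessage = inferenceMessage;
    }

    public String getInferenceMessage() {
        return inferenceMessage;
    }
}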

recvInferenceMessage() adds subscribers to the map.

public void recvInferenceMessage(Class<?> messageType, IInferenceMsgRecv subscriber) {
	List<IInferenceMsgRecv> messageList = InferenceMsgMap.get(messageType);
	if (messageList == null) {
		messageList = new ArrayList<IInferenceMsgRecv>();
		InferenceMsgMap.put(messageType, messageList);
	}
	messageList.add(subscriber);
}

sendInferenceMessage() dispatches messages to the appropriate subscribers.

public void sendInferenceMessage(IInferenceMsg message) {
	Class<?> messageType = message.getClass();
	List<IInferenceMsgRecv> list = InferenceMsgMap.get(messageType);
	if (list != null) {
		for (IInferenceMsgRecv sub : list) {
			sub.onInferenceMsgRecv(message);
		}
	}
}

10.8.4 Publishing Messages

Messages are published after object detection is completed:

In YOLO, this occurs immediately after the applyNonMaximumSuppression() call:

MatOfInt indices = applyNonMaximumSuppression(boxes, confidences);
...

MatOfDouble layersTimes = new MatOfDouble();
double freq = Core.getTickFrequency() / 1000;
double t = dnnNet.getPerfProfile(layersTimes) / freq;
String label = String.format("%s, Inference time for a frame : %f ms", modelName, t);

if(DetectorInfoMgr.STATUS_STOP.equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.RUN_STATUS)) )
{
	inferenceMsgBus.sendInferenceMessage(new InferenceMsgEvent("stop"));
}
else
{
	inferenceMsgBus.sendInferenceMessage(new InferenceMsgEvent(label));
}

if((Boolean)DetectorInfoMgr.instance().get(DetectorInfoMgr.INFO_ON_IMAGE))
{
	Imgproc.putText(frame, label, new Point(0, 15), 
		Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 0, 0), 2);
}    

...
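
For context, applyNonMaximumSuppression() presumably wraps OpenCV’s built-in NMS. The following is a hypothetical sketch of such a wrapper, with assumed thresholds, not the project’s actual code:

import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfRect2d;
import org.opencv.dnn.Dnn;

// Hypothetical wrapper around Dnn.NMSBoxes(); thresholds are assumptions.
public final class NmsSketch {
    public static MatOfInt applyNonMaximumSuppression(MatOfRect2d boxes, MatOfFloat confidences) {
        final float scoreThreshold = 0.5f; // minimum confidence to keep a box
        final float nmsThreshold = 0.4f;   // IoU overlap threshold for suppression
        MatOfInt indices = new MatOfInt();
        Dnn.NMSBoxes(boxes, confidences, scoreThreshold, nmsThreshold, indices);
        return indices;
    }
}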

In Mask R-CNN, this occurs in the detectObject() method.

public void detectObject(
	Mat frame, Net dnnNet, Map<String, Mat> outputLayers, 
	String outputFile, double threshold, boolean isSaveOutput, boolean isMaskShading
) {
	...
	// Put efficiency information. The function getPerfProfile returns the overall time for inference (t)
	// and the timings for each of the layers (in layersTimes).
	MatOfDouble layersTimes = new MatOfDouble();
	double freq = Core.getTickFrequency() / 1000;
	double t = dnnNet.getPerfProfile(layersTimes) / freq;
	String label = String.format("Mask-RCNN, Inference time for a frame : %f ms", t);
	
	if(DetectorInfoMgr.STATUS_STOP.equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.RUN_STATUS)) )
	{
		inferenceMsgBus.sendInferenceMessage(new InferenceMsgEvent("stop"));
	}
	else
	{
		inferenceMsgBus.sendInferenceMessage(new InferenceMsgEvent(label));
	}
	
	
	if((Boolean)DetectorInfoMgr.instance().get(DetectorInfoMgr.INFO_ON_IMAGE))
	{
		Imgproc.putText(frame, label, new Point(0, 15), 
			Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 0, 0));
	}
	
	...
	
}

10.9 Execution Area

The Execution Area provides control buttons for object detection: Detect, Pause, and Stop. The GUI layout for this section is shown below.

Figure 25. Execution Area Overview

10.9.1 Execution Area Components

All components in the Execution Area are buttons that control the main operations of the OpenCV Object Detection Java Swing Viewer.

  1. Detect : Starts object detection (assuming the image source is already displayed).
  2. Stop : Stops the detection process and rendering.
  3. Pause : Temporarily halts detection, allowing parameter adjustments.

10.9.2 Execution Area Implementation

Each button is implemented using a JButton, with an ActionListener attached to handle user interactions.

...

public class CVSwingObjectDetector extends JFrame  {
    
   ...
   
    @SuppressWarnings("unchecked")
    // <editor-fold defaultstate="collapsed" desc="Generated Code">                          
    private void initComponents() {

        btnStopCVFrame = new javax.swing.JButton();
        ...
        btnDetectCVFrame = new javax.swing.JButton();
        ...
        btnDetectPause = new javax.swing.JButton();
        
		...
		
        btnStopCVFrame.setText("Stop");
        btnStopCVFrame.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                btnStopCVFrameActionPerformed(evt);
            }
        });

        ...

        btnDetectCVFrame.setText("Detect");
        btnDetectCVFrame.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                displayDetectObjectOnScreen(evt);
            }
        });

        ...
		
        btnDetectPause.setText("Pause");
        btnDetectPause.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                pauseStopActionPerformed(evt);
            }
        });

        ...
    }// </editor-fold>                        
    
    
    private void btnStopCVFrameActionPerformed(java.awt.event.ActionEvent evt) {                                               
      ...
    }                                              
	
	...
	
  private void displayDetectObjectOnScreen(java.awt.event.ActionEvent evt) {                                             
        
        ...
        
    }    
	
	...
	
    private void pauseStopActionPerformed(java.awt.event.ActionEvent evt) {                                          
		...
    }                                         
    
    
    ...                
}

10.9.3 Processing Logic

The Execution Area validates user inputs and sends control commands to openCVScrollPane (a JPanel that implements the ICvController interface) to start, stop, or pause detection and display operations.

Figure 26. Control with ICvController

Each button triggers its own ActionListener when clicked:

Button | Method Name
btnStopCVFrame | btnStopCVFrameActionPerformed()
btnDetectCVFrame | displayDetectObjectOnScreen()
btnDetectPause | pauseStopActionPerformed()
...
public class CVSwingObjectDetector extends JFrame  {
    
    private final JFrame mDetectFrame;
    private final ICvController mCvController;
	
   ...
   
    @SuppressWarnings("unchecked")
    // <editor-fold defaultstate="collapsed" desc="Generated Code">                          
    private void initComponents() {

        btnStopCVFrame = new javax.swing.JButton();
        ...
        btnDetectCVFrame = new javax.swing.JButton();
        ...
        btnDetectPause = new javax.swing.JButton();
        
		...
		
        btnStopCVFrame.setText("Stop");
        btnStopCVFrame.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                btnStopCVFrameActionPerformed(evt);
            }
        });

        ...

        btnDetectCVFrame.setText("Detect");
        btnDetectCVFrame.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                displayDetectObjectOnScreen(evt);
            }
        });

        ...
		
        btnDetectPause.setText("Pause");
        btnDetectPause.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent evt) {
                pauseStopActionPerformed(evt);
            }
        });

        ...
    }// </editor-fold>                        
    
    
    private void btnStopCVFrameActionPerformed(java.awt.event.ActionEvent evt) {                                               
      mCvController.stopRenderFrame();
      
      DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_STOP);
      btnDetectCVFrame.setEnabled(true);
      txtDetectionInfo.setText("Stop ...");
    }                                              
	
	...
	
   private void displayDetectObjectOnScreen(java.awt.event.ActionEvent evt) {                                             
        
        if(
            ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_NONE) ||
            ComboDetectModel.getSelectedItem().equals(IDetectInfo.TYPE_NONE) ||   
            txtConfig.getText().equals("") ||
            txtModel.getText().equals("")  ||
            txtMediaFile.getText().equals("")
           )
            {
                JOptionPane.showMessageDialog(
                    null,                      // Parent component (screen center if null)
                    "Select proper options for the file and model type, \r\nor provide the correct paths of the config and model files.",  // Message 
                    "Information",             // Title of the dialog
                    JOptionPane.INFORMATION_MESSAGE // icon type
                );
               return;
            }
        
        DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_TYPE, ComboCVFileType.getSelectedItem());
        //Extra things to save
        DetectorInfoMgr.instance().put(DetectorInfoMgr.THRESH_HOLD, txtThreshold.getText());
        DetectorInfoMgr.instance().put(DetectorInfoMgr.IS_SAVE_OUTPUT, chkRecording.isSelected());
        DetectorInfoMgr.instance().put(DetectorInfoMgr.IS_USE_MASK, chkMaskUse.isSelected());

        mCvController.stopRenderDetectFrame();

        
        // "Image", "Movie", "WebCam"
        if (ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_IMG)) {
            String imagePath = txtMediaFile.getText().trim();

            if (imagePath == null || imagePath.isEmpty()) {
                JOptionPane.showMessageDialog(null, "No Image file found");
                return;
            }
            if (imagePath.endsWith("jpg") || imagePath.endsWith("png") || imagePath.endsWith("jpeg")) {
                mCvController.startRenderDetectFrame();
                DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
            } else {
                JOptionPane.showMessageDialog(null, "Not a image file!!");
            }
            
            DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, imagePath);
        } else if (ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_MOV)) {
            String moviePath = txtMediaFile.getText().trim();

            // if(moviePath == null || moviePath.isEmpty() || moviePath.isBlank())
            if (moviePath == null || moviePath.isEmpty()) {
                JOptionPane.showMessageDialog(null, "No Movie file found");
                return;
            }

            if (moviePath.endsWith("mp4") || moviePath.endsWith("mpg") || moviePath.endsWith("avi")) {
                mCvController.startRenderDetectFrame();
                DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
            } else {
                JOptionPane.showMessageDialog(null, "Not a movie file!!");
            }
            DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, moviePath);
        } else if (ComboCVFileType.getSelectedItem().equals( IDetectInfo.TYPE_CAM)) {
            mCvController.startRenderDetectFrame();
            DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
            DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, "");
        }
        
        DetectorInfoMgr.instance().put(DetectorInfoMgr.INFO_ON_IMAGE, chkInfoOnImage.isSelected());
        
        btnDetectCVFrame.setEnabled(false);
        
    }    
	
	...
	
    private void pauseStopActionPerformed(java.awt.event.ActionEvent evt) {                                          
		mCvController.stopRenderDetectFrame();
		DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_PAUSE);   
		btnDetectCVFrame.setEnabled(true);
		txtDetectionInfo.setText("Pause ...");
    }                                         
    
    
    ...                
}

1. btnStopCVFrameActionPerformed():
Sets the internal state to STOP, halts rendering, and displays a stop message.

 private void btnStopCVFrameActionPerformed(java.awt.event.ActionEvent evt) { 
  mCvController.stopRenderFrame();
  
  DetectorInfoMgr.instance()
   .put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_STOP);
  btnDetectCVFrame.setEnabled(true);
  txtDetectionInfo.setText("Stop ...");
}                      

2. pauseStopActionPerformed():
Sets the state to PAUSE, stops rendering, and displays a pause message.

  private void pauseStopActionPerformed(java.awt.event.ActionEvent evt) { 
	mCvController.stopRenderDetectFrame();

	DetectorInfoMgr.instance()
    .put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_PAUSE);
	btnDetectCVFrame.setEnabled(true);
	txtDetectionInfo.setText("Pause ...");
}   

3. displayDetectObjectOnScreen():
Validates input parameters and file formats. If everything is valid, saves the detection configuration and starts detection.

  private void displayDetectObjectOnScreen(java.awt.event.ActionEvent evt) 
 {                                             
        
	if(
		ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_NONE) ||
		ComboDetectModel.getSelectedItem().equals(IDetectInfo.TYPE_NONE) ||   
		txtConfig.getText().equals("") ||
		txtModel.getText().equals("")  ||
		txtMediaFile.getText().equals("")
	)
	{
		...
	   return;
	}
	
	DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_TYPE, ComboCVFileType.getSelectedItem());
	//Extra things to save
	DetectorInfoMgr.instance().put(DetectorInfoMgr.THRESH_HOLD, txtThreshold.getText());
	DetectorInfoMgr.instance().put(DetectorInfoMgr.IS_SAVE_OUTPUT, chkRecording.isSelected());
	DetectorInfoMgr.instance().put(DetectorInfoMgr.IS_USE_MASK, chkMaskUse.isSelected());

	mCvController.stopRenderDetectFrame();

	
	// "Image", "Movie", "WebCam"
	if (ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_IMG)) {
		String imagePath = txtMediaFile.getText().trim();

		...
		if (imagePath.endsWith("jpg") || 
			imagePath.endsWith("png") || 
			imagePath.endsWith("jpeg")) {
			mCvController.startRenderDetectFrame();
			DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
		} 
		 ...
		
		DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, imagePath);
	} else if (ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_MOV)) {
		...

		if (moviePath.endsWith("mp4") || 
			moviePath.endsWith("mpg") || 
			moviePath.endsWith("avi")) {
			mCvController.startRenderDetectFrame();
			DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
		} 
		...
		DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, moviePath);
	} else if (ComboCVFileType.getSelectedItem().equals( IDetectInfo.TYPE_CAM)) {
		mCvController.startRenderDetectFrame();
		DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
		DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE, "");
	}
	
	DetectorInfoMgr.instance().put(DetectorInfoMgr.INFO_ON_IMAGE, chkInfoOnImage.isSelected());
	
	btnDetectCVFrame.setEnabled(false);
	
} 

Detection state is maintained in DetectorInfoMgr, allowing the application to track the current status throughout the detection process.

 private void displayDetectObjectOnScreen(java.awt.event.ActionEvent evt) 
{                                             
	
	...

	
	// "Image", "Movie", "WebCam"
	if (ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_IMG)) {
		...
		if (imagePath.endsWith("jpg") || 
			imagePath.endsWith("png") || 
			imagePath.endsWith("jpeg")) {
			...
			DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
		} 
		 ...
		
	   
	} else if (ComboCVFileType.getSelectedItem().equals(IDetectInfo.TYPE_MOV)) {
		...

		if (moviePath.endsWith("mp4") || moviePath.endsWith("mpg") || moviePath.endsWith("avi")) {
			...
			DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
		} 
		...
		
	} else if (ComboCVFileType.getSelectedItem().equals( IDetectInfo.TYPE_CAM)) {
		...
		DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);
	   ...
	}
	
	...
	
}   

11. Screen Area Implementation

The Screen Area is tightly integrated with the Execution Area, as illustrated in the class association diagram.

Figure 27. Class association diagram of the Screen Area

In the OpenCV Object Detection Java Swing Viewer, the CVSwingObjectDetector class instantiates the CVRenderPane class, which is responsible for rendering the Screen Area based on the current configuration stored in DetectorInfoMgr.

CVRenderPane delegates object detection tasks to the abstract ObjectDetector class. The concrete subclasses—MaskRCNNDetector or YoloDetector—are instantiated based on the user’s model selection from the combo box.

In the next sections, we explore the key classes involved in loading and rendering image sources within the application.

11.1 CVRenderPane

CVRenderPane is the main component of the Screen Area. It extends JPanel and implements the ICvController interface. The class performs two main roles, which are described after the following code listing:

public class CVRenderPane extends javax.swing.JPanel implements ICvController {
    private static final long serialVersionUID = 1L;
    
    static{
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }
    
    private BufferedImage cvImageBuffer;
    private static CVRenderer cvRenderer;
    
    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        try {
            if (cvImageBuffer != null) {
                Dimension imageD = getSize();

                int paneWidth = imageD.width;
                int paneHeight = imageD.height;

                cvImageBuffer = ImageUtil.resize(cvImageBuffer, paneWidth, paneHeight);

                g.drawImage(cvImageBuffer, 0, 0, null);

                cvImageBuffer.flush();
                    // bufImagew = null;
            }
            else
                System.out.println("buffer is null!!!");

        } catch (Exception e) {
        }
    }
    
    
    /**
     * Creates new form CVRenderPane
     */
    public CVRenderPane() {
        initComponents();

        if(cvRenderer == null)
            cvRenderer = new CVRenderer();
        
        cvRenderer.addScreenRenderer(
            new IScreenRenderer()
            {
                @Override
                public void renderOnScreen(BufferedImage outputBuffer) {
                    if(outputBuffer == null) return;
                    
                    cvImageBuffer = outputBuffer;
                    
                    repaint();
                }
                    
            });
    }
    
	@Override
    public void initDetector() {
        cvRenderer.setVideoCapture();
        cvRenderer.initDetector();
    }
    
    @Override
    public boolean startRenderFrame() {
        cvRenderer.renderDetectedFrame();
        
        return false;
    }
    @Override
     public boolean startRenderDetectFrame() {
        cvRenderer.startScheduledRendering();
        
        return false;
    }
    @Override
     public boolean stopRenderDetectFrame() {
        cvRenderer.stopScheduledRendering();
        return true;
    }
    
    @Override
    public boolean stopRenderFrame() {
        try
        {
            cvRenderer.stopRenderFrame();
        }
        catch(Exception ex)
        {
            //ex.printStackTrace();
        }
       
        return true;
    }
	
    @Override
    public Dimension getPreferredSize() {
        if (cvImageBuffer == null) {
             System.out.println("[DEBUG] cvImageBuffer is null...");
            return super.getPreferredSize();
        } else {
            int w = cvImageBuffer.getWidth();
            int h = cvImageBuffer.getHeight();
            return new Dimension(w, h);
        }
    }
...             
}

1. Rendering the image
It overrides paintComponent() to draw the image on the panel using a BufferedImage field named cvImageBuffer.

public class CVRenderPane extends javax.swing.JPanel implements ICvController {
    ...
    
    private BufferedImage cvImageBuffer;
    ...
    
    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        try {
            if (cvImageBuffer != null) {
                Dimension imageD = getSize();

                int paneWidth = imageD.width;
                int paneHeight = imageD.height;

                cvImageBuffer = ImageUtil.resize(cvImageBuffer, paneWidth, paneHeight);

                g.drawImage(cvImageBuffer, 0, 0, null);

                cvImageBuffer.flush();
                    // bufImagew = null;
            }
            else
                System.out.println("buffer is null!!!");

        } catch (Exception e) {
        }
    }
    
  @Override
    public Dimension getPreferredSize() {
        if (cvImageBuffer == null) {
            return super.getPreferredSize();
        } else {
            int w = cvImageBuffer.getWidth();
            int h = cvImageBuffer.getHeight();
            return new Dimension(w, h);
        }
    }
    ...          
}
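
Note that the ImageUtil.resize() helper used in paintComponent() is not listed in this article. A minimal sketch of such a helper, assuming a simple Graphics2D-based scaling approach (an assumption, not the exact source), could look like this:

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public final class ImageUtil {
    // Sketch: scale a BufferedImage to the given pane size using bilinear filtering
    public static BufferedImage resize(BufferedImage src, int width, int height) {
        BufferedImage resized = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = resized.createGraphics();
        g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g2.drawImage(src, 0, 0, width, height, null);
        g2.dispose();
        return resized;
    }
}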

2. Handling control commands
As an ICvController, it receives control commands from CVSwingObjectDetector. Furthermore, it registers an IScreenRenderer interface, which allows the CVRenderer to invoke the renderOnScreen() method once object detection is complete.

public class CVRenderPane extends javax.swing.JPanel implements ICvController {
    ...
	private static CVRenderer cvRenderer;
	...
    /**
     * Creates new form CVRenderPane
     */
    public CVRenderPane() {
        initComponents();

        if(cvRenderer == null)
            cvRenderer = new CVRenderer();
        
        cvRenderer.addScreenRenderer(
            new IScreenRenderer()
            {
                @Override
                public void renderOnScreen(BufferedImage outputBuffer) {
                    if(outputBuffer == null) return;
                    
                    cvImageBuffer = outputBuffer;
                    
                    repaint();
                }
                    
            });
    }
    
    @Override
    public void initDetector() {
        cvRenderer.setVideoCapture();
        cvRenderer.initDetector();
    }
    
    @Override
    public boolean startRenderFrame() {
        cvRenderer.renderDetectedFrame();
        
        return false;
    }

    @Override
    public boolean startRenderDetectFrame() {
        cvRenderer.startScheduledRendering();

        return false;
    }

    @Override
    public boolean stopRenderDetectFrame() {
        cvRenderer.stopScheduledRendering();
        return true;
    }
    
    @Override
    public boolean stopRenderFrame() {
        try
        {
            cvRenderer.stopRenderFrame();
        }
        catch(Exception ex)
        {
            //ex.printStackTrace();
        }
       
        return true;
    }

...             
}

3. Controlling rendering behavior
Finally, CVRenderPane controls when rendering starts and stops: the ICvController methods above start or stop CVRenderer's scheduled rendering, and whenever CVRenderer calls the registered renderOnScreen() after completing object detection, the detected image is assigned to cvImageBuffer and repaint() is invoked, which triggers rendering on the screen.

Figure 27. Class association diagram focused on rendering

11.2 CVRenderer

To summarize, the CVRenderer class serves two primary roles: rendering image sources to the CVRenderPane and handling object detection. As shown below, the class diagram details its interactions with related components.

Figure 28. CVRenderer class diagram in detail

11.2.1 Key Methods in CVRenderer

The CVRenderer class reads the user's image source into a Mat object, converts it to a BufferedImage, and finally sends it to the CVRenderPane class for display.

Figure 29. CVRenderer GUI

1. ImageSourceReaderable

An anonymous Runnable that continuously reads the image source (Image, Video, or WebCam) as Mat objects, converts them to BufferedImage objects, and renders them to the screen.

// read a frame every 33 ms (30 frames/sec)
private final Runnable ImageSourceReaderable = new Runnable() {

	@Override
	public void run() {
		String fileType = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_TYPE);
		
		if(DetectorInfoMgr.TYPE_IMG.equals(fileType))
		{
			Mat frame = Imgcodecs.imread((String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_FILE));
			//bufImagew = MatToBufferedImage(frame);
			renderOuputToScreen(frame);          
		} else {
			// effectively grab and process a single frame
			Mat frame = readImageSource();
			// convert and show the frame
			//bufImagew = MatToBufferedImage(frame);
			renderOuputToScreen(frame);
		}

	}
};
  • For static images, it uses Imgcodecs.imread().
  • For video or webcam, it uses VideoCapture.read().
private Mat readImageSource() {
	String fileType = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_TYPE);
	
	if(!DetectorInfoMgr.TYPE_IMG.equals(fileType))
	{
		Mat frame = new Mat();
		// check if the capture is open
		if (capture.isOpened()) {
			try {
				// read the current frame
				capture.read(frame);
			} catch (Exception e) {
				// log the error
//                    System.err.println("Exception during the image elaboration: " + e);
			}
		}
 
		if(frame.empty()) 
		{
			resetVideoCapture();
		}
		
		return frame;
	} 
	
	return null;
}

It then calls renderOuputToScreen(); if the application status is STATUS_RUN, this method invokes detectObject() using the selected model.

private void renderOuputToScreen(Mat cvFrame)
{
	//
	if(cvFrame == null || cvFrame.empty())
	{
		return;
	}
	if (timer == null || timer.isShutdown()) {
		System.out.println(String.format("[DEBUG]renderOuputToScreen Timer~!!!! capture?[%b],objectDetector?[%b] ", 
				capture.isOpened(),objectDetector != null));
	}
	if(screenRenderer != null) 
	{
		String currentStatus = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.RUN_STATUS);
			  
		try {
			 if(objectDetector != null && (currentStatus != null &&  currentStatus.equals(DetectorInfoMgr.STATUS_RUN)))
				objectDetector.detectObject(cvFrame);
			
			BufferedImage imgbuffer = ImageUtil.MatToBufferedImage(cvFrame);
			if(imgbuffer != null) backupBuffer = imgbuffer;
			screenRenderer.renderOnScreen(backupBuffer);
		} catch (Exception ex) {
			//ex.printStackTrace();
			System.err.println(ex.getMessage());
		}
	}
	else
	{
		System.out.println("Screen Renderer is null~!!!!");
	}
}

The code below illustrates how to determine if object detection is in progress:

String currentStatus =   (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.RUN_STATUS);
                  
try {
	 if(objectDetector != null && (currentStatus != null &&  currentStatus.equals(DetectorInfoMgr.STATUS_RUN)))
		objectDetector.detectObject(cvFrame);
	
	BufferedImage imgbuffer = ImageUtil.MatToBufferedImage(cvFrame);
	if(imgbuffer != null) backupBuffer = imgbuffer;
	screenRenderer.renderOnScreen(backupBuffer);
} catch (Exception ex) {
	//ex.printStackTrace();
	System.err.println(ex.getMessage());
}

We can perform object detection using the detectObject method of the ObjectDetector class. To enable this, DetectorInfoMgr.RUN_STATUS must be set to DetectorInfoMgr.STATUS_RUN.
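
Continuing the hypothetical ImageUtil sketch from earlier, the MatToBufferedImage() helper used above is likewise not listed here. Assuming the frame is either grayscale or standard BGR color (an assumption, not the exact source), the conversion might be implemented as:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import org.opencv.core.Mat;

public static BufferedImage MatToBufferedImage(Mat mat) {
    if (mat == null || mat.empty()) return null;

    // OpenCV stores color images as BGR, which matches TYPE_3BYTE_BGR
    int type = (mat.channels() == 1)
            ? BufferedImage.TYPE_BYTE_GRAY
            : BufferedImage.TYPE_3BYTE_BGR;

    BufferedImage image = new BufferedImage(mat.cols(), mat.rows(), type);
    byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    mat.get(0, 0, pixels); // copy the Mat's bytes straight into the image buffer
    return image;
}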

2. initDetector

Initializes the appropriate detector (MaskRCNNDetector or YoloDetector) based on the model type stored in DetectorInfoMgr.MODEL.

public void initDetector() {
	//stopRenderFrame();
	DNNOption dnnOption = new DNNOption();
	
	dnnOption.mediaType = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_TYPE);
	dnnOption.modelName = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.MODEL);
	
	if(dnnOption.mediaType.equals("None")) return;
	if(dnnOption.modelName.equals("None")) return;
	
	dnnOption.deviceType = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.DEVICE);
	
	dnnOption.mediaFile = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_FILE);
	dnnOption.modelPath = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.MODEL_PATH);
	dnnOption.modelConfiguration = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.CONFIG_PATH);
	
	if(DetectorInfoMgr.TYPE_MRCNN.equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.MODEL)))
	{
		objectDetector = new MaskRCNNDetector(dnnOption);
	}
	else if(DetectorInfoMgr.TYPE_YOLO.equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.MODEL)))
	{
		objectDetector = new YoloDetector(dnnOption);
	}
}

If the model type is TYPE_MRCNN, instantiate MaskRCNNDetector; if it is TYPE_YOLO, instantiate YoloDetector:

if(DetectorInfoMgr.TYPE_MRCNN.equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.MODEL)))
{
	objectDetector = new MaskRCNNDetector(dnnOption);
}
else if(DetectorInfoMgr.TYPE_YOLO.equals(DetectorInfoMgr.instance().get(DetectorInfoMgr.MODEL)))
{
	objectDetector = new YoloDetector(dnnOption);
}
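
The DNNOption class itself is not shown in this article. Judging from the fields assigned in initDetector(), it can be read as a plain value holder along these lines (a sketch, not the exact source):

public class DNNOption {
    public String mediaType;          // Image, Video, or WebCam
    public String mediaFile;          // path of the selected media file
    public String modelName;          // Mask R-CNN or YOLO
    public String modelPath;          // path of the model (weights) file
    public String modelConfiguration; // path of the configuration file
    public String deviceType;         // target device, e.g. CPU or GPU
}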

3. resetVideoCapture() / setVideoCapture()

Prepares VideoCapture based on the selected source and starts reading frames if needed.

public void resetVideoCapture()
{
	if (capture.isOpened()) capture.release();
	
	String fileType = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_TYPE);
	
	if(DetectorInfoMgr.TYPE_MOV.equals(fileType))
	{ 
		capture.set(Videoio.CAP_PROP_POS_FRAMES, 0);
		capture.open((String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_FILE));
	} else if(DetectorInfoMgr.TYPE_CAM.equals(fileType)) 
	{
		capture.open(cameraId);
	}
	else if(DetectorInfoMgr.TYPE_IMG.equals(fileType)) 
	{
		//stopRenderFrame();
	}
}

public void setVideoCapture()
{
	if (capture.isOpened()) return;
	
	String fileType = (String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_TYPE);
	
	if(DetectorInfoMgr.TYPE_MOV.equals(fileType))
	{ 
		capture.open((String)DetectorInfoMgr.instance().get(DetectorInfoMgr.INPUT_FILE));
	} else if(DetectorInfoMgr.TYPE_CAM.equals(fileType)) 
	{
		capture.open(cameraId);
	}
	else if(DetectorInfoMgr.TYPE_IMG.equals(fileType)) 
	{
	}
}

The key difference is that resetVideoCapture() releases the current capture and, for video files, rewinds playback to the first frame before reopening:

capture.set(Videoio.CAP_PROP_POS_FRAMES, 0);

4. startScheduledRendering()

The newSingleThreadScheduledExecutor method from the Executors class schedules the ImageSourceReaderable to run every 33 milliseconds (approximately 30 frames per second).

// read a frame every 33 ms (30 frames/sec)
public void startScheduledRendering() {
	// shut down any existing scheduler before starting a new one
	if (timer != null) {
		stopScheduledRendering();
	}

	timer = Executors.newSingleThreadScheduledExecutor();
	timer.scheduleAtFixedRate(ImageSourceReaderable, 0, 33, TimeUnit.MILLISECONDS);
}

5. stopScheduledRendering()

Terminates scheduled rendering tasks.

public void stopScheduledRendering() {
       
	if (timer != null && !timer.isShutdown()) {
		try {
			// stop the timer
			timer.shutdown();
			timer.awaitTermination(33, TimeUnit.MILLISECONDS);
		} catch (InterruptedException e) {
			// log any exception
			System.err.println(
				"Exception in stopping the frame capture, trying to release the camera now... " + e);
		}
	}
}

6. renderDetectedFrame()

When the user clicks the Pause button, this method is used to render the current frame along with the detection results.

Figure 30. Pause button action

The code of the renderDetectedFrame() method is as follows:

public void renderDetectedFrame() {
	setVideoCapture();
	startScheduledRendering();
}

7. stopRenderFrame()

Stops rendering and halts all detection processes.

Figure 31. Stop button action

The code of the stopRenderFrame() method is as follows:

public void stopRenderFrame() {
	stopScheduledRendering();

	if (capture.isOpened()) {
		// release the camera
		capture.release();
	}
	
	...
}

12. Object Detection

The application supports object detection using YOLO v3, YOLO v4, and Mask R-CNN.

Figure 32. Object Detection related classes

Detection Workflow:

  • User selects an image source and detection model.
  • User clicks the Detect button.
  • detectObject() from ObjectDetector (which implements IObjectDetector) is triggered.
  • The appropriate model class (MaskRCNNDetector or YoloDetector) is instantiated.
  • Detection results are rendered on the screen.

Figure 33. Object Detection on the viewer
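
To illustrate how these workflow steps connect, the following hypothetical Detect button handler (the button and controller variable names are assumptions, not the exact source) ties the status flag and the ICvController methods together:

detectButton.addActionListener(e -> {
    // Mark detection as running so renderOuputToScreen() calls detectObject()
    DetectorInfoMgr.instance().put(DetectorInfoMgr.RUN_STATUS, DetectorInfoMgr.STATUS_RUN);

    cvController.initDetector();           // creates MaskRCNNDetector or YoloDetector
    cvController.startRenderDetectFrame(); // starts the scheduled rendering loop
});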

The ObjectDetector class is designed as an abstract base class that enforces implementation of detectObject() in all subclasses.

12.1 ObjectDetector class

ObjectDetector is an abstract base class defining a unified interface for detection classes like MaskRCNNDetector and YoloDetector.

Figure 34. Object Detector class diagram

Its main responsibilities, sketched in code below, are:

  • Initialize the OpenCV Net object for inference.
  • Load class labels.
  • Define the abstract method detectObject().
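
A minimal sketch of such a base class, assuming the model is loaded with OpenCV's generic Dnn.readNet() and the labels are read from a plain text file (both assumptions rather than the exact source), might look like this:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

public abstract class ObjectDetector {
    protected Net net;                  // OpenCV DNN network used for inference
    protected List<String> classLabels; // human-readable class names

    protected ObjectDetector(String modelPath, String configPath, String labelsPath) {
        // Initialize the OpenCV Net object for inference
        net = Dnn.readNet(modelPath, configPath);
        try {
            // Load class labels, one label per line
            classLabels = Files.readAllLines(Paths.get(labelsPath));
        } catch (Exception e) {
            System.err.println("Failed to load class labels: " + e.getMessage());
        }
    }

    // Each concrete detector implements its model-specific detection logic
    public abstract void detectObject(Mat frame);
}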

12.2 MaskRCNNDetector class

This class is instantiated when the user selects Mask R-CNN. It implements the detectObject() method.

Figure 35. MaskRCNNDetector class diagram

The following flow describes how objects are detected from an image source represented as a Mat object.

Figure 36. Changing status on the MaskRCNNDetector class

Detection Flow of Mask R-CNN (a code sketch follows the list):

  1. Resize the Mat input to fit Mask R-CNN requirements.
  2. Create a blob using blobFromImage().
  3. Run inference using Net.forward().
  4. Extract detection_out_final and detection_masks.
  5. Render detection results to the screen.
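
A hedged sketch of this flow inside a detector subclass, using the output layer names of the standard TensorFlow Mask R-CNN graph supported by OpenCV DNN (the surrounding structure and parameters are illustrative, not the exact source):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;

public void detectObject(Mat frame) {
    // 1-2. Create a blob from the (optionally resized) input frame
    Mat blob = Dnn.blobFromImage(frame, 1.0,
            new Size(frame.cols(), frame.rows()), new Scalar(0), true, false);
    net.setInput(blob);

    // 3-4. Run inference and extract both output layers in one call
    List<Mat> outputs = new ArrayList<>();
    net.forward(outputs, Arrays.asList("detection_out_final", "detection_masks"));

    Mat boxes = outputs.get(0); // per-detection class id, confidence, box coords
    Mat masks = outputs.get(1); // per-detection mask logits

    // 5. Post-processing (confidence thresholding, scaling, drawing) omitted
}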

12.3 YoloDetector Class

Similar to its Mask R-CNN counterpart, it overrides the detectObject() method with YOLO-specific logic.

Figure 37. YoloDetector class diagram

The next figure shows the object detection process that takes place in the YoloDetector class.

Figure 38. Changing status on the YoloDetector class

Detection Flow of YOLO (a code sketch follows the list):

  1. Resize the input frame.
  2. Create a blob using blobFromImage().
  3. Run Net.forward() to perform detection.
  4. Extract bounding boxes, class IDs, and confidence scores.
  5. Send results to the renderer.
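
A comparable hedged sketch of the YOLO flow, assuming the common 416x416 input size and using Net.getUnconnectedOutLayersNames() to find the output layers (again illustrative, not the exact source):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;

public void detectObject(Mat frame) {
    // 1-2. Resize to 416x416 inside blobFromImage, scale to [0,1], swap BGR->RGB
    Mat blob = Dnn.blobFromImage(frame, 1 / 255.0, new Size(416, 416),
            new Scalar(0), true, false);
    net.setInput(blob);

    // 3. Forward pass through every YOLO output layer
    List<Mat> outputs = new ArrayList<>();
    net.forward(outputs, net.getUnconnectedOutLayersNames());

    // 4. Each output row holds [cx, cy, w, h, objectness, class scores...]
    for (Mat output : outputs) {
        for (int row = 0; row < output.rows(); row++) {
            Mat scores = output.row(row).colRange(5, output.cols());
            Core.MinMaxLocResult mm = Core.minMaxLoc(scores);
            if (mm.maxVal > 0.5) {
                int classId = (int) mm.maxLoc.x; // index of the best-scoring class
                // 5. Decode the box, collect for NMS, and send to the renderer (omitted)
            }
        }
    }
}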

13. Using OpenCV Object Detection Java Swing Viewer

This section outlines how users interact with the application.

13.1 Image Source Selection

Use the JComboBox to choose the image source (a code sketch follows the list):

  • Image
  • Video
  • WebCam
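
As an illustrative sketch (the variable names and listener wiring are assumptions, and the mapping from combo labels to the DetectorInfoMgr type constants is omitted), the selection logic might be wired like this:

import javax.swing.JComboBox;
import javax.swing.JFileChooser;

JComboBox<String> sourceCombo =
        new JComboBox<>(new String[]{"Image", "Video", "WebCam"});

sourceCombo.addActionListener(e -> {
    String type = (String) sourceCombo.getSelectedItem();

    // Image and Video require a file; WebCam opens the device directly
    if ("Image".equals(type) || "Video".equals(type)) {
        JFileChooser chooser = new JFileChooser();
        if (chooser.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
            DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_FILE,
                    chooser.getSelectedFile().getAbsolutePath());
        }
    }
    DetectorInfoMgr.instance().put(DetectorInfoMgr.INPUT_TYPE, type);
});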

13.2 Image Mode

Detect objects in static image files.

To use Image Mode, follow these steps:

  1. Select Image from the first dropdown list.
  2. Choose an image file using the file dialog.
  3. Select a model from the second dropdown list.
  4. Specify the model and configuration file for object detection.
  5. Click the Detect button to start detection.

Figure 39. Object Detection in Image Mode

13.3 Video Mode

Detect objects in video frames.

To use Video Mode, follow these steps:

  1. Select Video from the first dropdown list.
  2. Choose a video file using the file dialog.
  3. Select a model from the second dropdown list.
  4. Specify the model and configuration file for object detection.
  5. Click the Detect button to start detection.

Figure 40. Object Detection in Video Mode

13.4 WebCam Mode

Perform real-time detection using a webcam.

To use WebCam Mode, follow these steps:

  1. Select WebCam from the first dropdown list.
  2. Choose a model from the second dropdown list.
  3. Specify the model and configuration file for object detection.
  4. Click the Detect button to start detection.

Figure 41. Object Detection in WebCam Mode

14. Conclusion

Throughout this three-part series, we have developed a complete OpenCV-based Object Detection Viewer using Java Swing.

Specifically, we demonstrated how to seamlessly integrate object detection models such as YOLO and Mask R-CNN with various image sources, including static images, videos, and webcam streams. Moreover, the application’s GUI was implemented using Apache NetBeans, while the detection functionality was constructed with a modular Java architecture. Ultimately, this final project brings together all the components described throughout the series.

The full source code matches the detailed implementation described throughout this article.

15. Download the Source Code

You can download the full source code of this example here:
OpenCV Object Detection Java Swing Viewer

16. References

1. OpenCV-Based Media Java Swing Viewer

2. OpenCV Java Object Detection

3. OpenCV Library Documentation
OpenCV.org. Open Source Computer Vision Library.
https://docs.opencv.org

4. YOLO: Real-Time Object Detection
Redmon, J., & Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767.
https://arxiv.org/abs/1804.02767

5. Mask R-CNN for Object Detection and Segmentation
He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2961–2969.
https://arxiv.org/abs/1703.06870

6. Java Swing Documentation
Oracle Corporation. Java Platform, Standard Edition – Java Swing.
https://docs.oracle.com/javase/8/docs/technotes/guides/swing/

17. Revision History

1. v1.0.0: Initial release

  • Basic UI layout implemented (MainWindow, YoloDlg, MRCNNDlg)
  • Dialogs for saving model parameters added
  • Initial configuration of the object detector classes supported

2. v1.1.0: Added advanced button actions in the Execution Area

  • Detect, Pause, and Stop button actions added
  • Added object detection status reporting

3. v1.1.1: Fixed rendering screen crash

  • Fixed an error that occurred when the Detect button was clicked
  • Fixed the image type of the object detection not being displayed
  • Added the MessageBus pattern to display the detection information

Young Baek

I am a GIS developer and architect living in Korea, with an interest in open-source technologies. I am passionate about GIS technology, mobile technology, networking, and vision AI.