Human Computer Interaction in Computer Vision
https://doi.org/10.1007/s44196-024-00427-6
RESEARCH ARTICLE
Abstract
In view of the current problems of low detection accuracy, poor stability, and slow detection speed in intelligent vehicle violation detection systems, this article uses human–computer interaction and computer vision technology to address them. First, the picture data required for the experiment are collected through the BIT-Vehicle model dataset and preprocessed with computer vision technology. Then, Kalman filtering is used to track the vehicle, which helps to better predict the vehicle's trajectory in the area to be monitored. Finally, human–computer interaction technology is used to build the interactive interface of the system and improve its operability. The violation detection system based on computer vision technology achieves an accuracy of more than 96.86% for each of the eight types of violations extracted, and the average detection accuracy is 98%. Through computer vision technology, the system can accurately detect and identify vehicle violations in real time, effectively improving the efficiency and safety of traffic management. In addition, the system pays special attention to the design of human–computer interaction and provides an intuitive, easy-to-use user interface that enables traffic managers to easily monitor and manage traffic conditions. This intelligent vehicle violation detection system is expected to contribute to the development of traffic management technology in the future.

Keywords Vehicle violation detection system · Computer vision · Human–computer interaction · Kalman filtering · Mean filtering
improve the smoothness of traffic flow, and alleviate the problem of urban traffic congestion.

Nowadays, the main means of transportation for people is the car. Cars not only bring many conveniences to people's lives, but also increase the probability of traffic accidents and violations. Vehicle detection systems can effectively address this problem [2, 3], and many researchers have studied it. Maha Vishnu, V. C. proposed a mechanism for detecting violations using traffic videos, detecting accidents through dynamic traffic signal control, and classifying vehicles using flow gradient feature histograms [4]. Zhang, Rusheng proposed a new reinforcement learning algorithm for intelligent traffic signal control systems and studied the performance of the system under different traffic flows and road network types; although the system can effectively reduce the average waiting time of vehicles at intersections, the accuracy of violation checks needs to be improved [5]. Liu Shuo used the YOLOv3 algorithm to detect vehicles in traffic intersection images, improving the model's detection ability for small target objects such as license plates and faces [6]. Alagarsamy, Saravanan designed a system to detect and analyze driver violations of traffic rules; the proposed system tracks driver activities and stores violations in a database [7]. Bhat, Akhilalakshmi T. believed that machines could complete all tasks, including automatic vehicle detection and violation detection; traffic records would be collected through closed-circuit television recordings and then checked by the system for violations [8]. Charran, R. Shree proposed a system to automatically detect car violations and ultimately process tickets by capturing violations and the corresponding vehicle numbers in a database [9]. Ozkul, Mukremin proposed a traffic violation detection and reporting system that does not rely on expensive infrastructure or the presence of law enforcement personnel; it determines whether the observed changes in nearby vehicles comply with the traffic rules encoded in the system [10]. In summary, research on vehicle violation detection has achieved some results, but there are still shortcomings in terms of detection accuracy. Therefore, this article uses human–computer interaction and computer vision technology to study the problem, in order to improve the accuracy of detection.

In the literature review, a comparison between this article and other literature is shown in Table 1.

Computer vision typically requires extracting and reconstructing three-dimensional information from two-dimensional data, so that computers can understand the environment and respond or react. It uses computers to simulate biological vision, preprocess images, and view and analyze the characteristics of the target to determine its situation, thereby achieving target detection and planning functions [11, 12]. Yang Zi believed that computer vision technology is already able to detect vehicles on the road; for this purpose, research was conducted on vehicle detection in different environments, and it was found that, despite the variability of the road driving environment, computer vision technology has effectively improved the accuracy of monitoring illegal automobile behavior [13]. Kumar, Aman proposed an intelligent traffic violation detection system that helps detect violations in different scenarios and provides corresponding alerts based on the type of violation [14]. Tanrikulu, Mahmud Yusuf emphasized ensuring sound arrangements when organizing and promoting vaccines for people with dementia and establishing support mechanisms, since public health measures designed to control the spread of the virus have profound and often tragic consequences for people with dementia, their families, and caregivers [15]. Tutsoy, Onder established constrained multidimensional mathematics and meta-heuristic algorithms based on graph theory to learn the unknown parameters of a large-scale epidemiological model, together with the specified parameters and the coupling parameters of the optimization problem; the results obtained under equal conditions show that the mathematical optimization algorithm CM-RLS is better than the MA algorithm [16]. Arabi, Saeed provided a practical and comprehensive engineering vehicle detection solution based on deep learning and computer vision [17]. Overall, using computer vision technology can effectively improve the accuracy of detection.

Among the accidents that occur every year, the proportion caused by violations is very high, and such accidents may directly cause financial losses of billions of yuan.
Table 1 Comparison between this article and related literature

Literature [3]: Mainly uses video to detect violations; compared with the research in this article, it lacks interactivity and has low detection accuracy.
Literature [4]: Mainly a reinforcement learning algorithm applied to traffic signal control; although it can perform fault detection, it still has low accuracy and poor interactivity.
Literature [5]: Uses the YOLOv3 algorithm for vehicle target detection; although the detection accuracy is improved, the overall interactivity is not as good as this article.
Literature [6]: Detects driver violations and stores the data, but its cost is relatively high compared with this article.
Literature [8]: Mainly a traffic violation detection and reporting system; although the cost is reduced, the overall detection accuracy is not high.
In order to strengthen the detection of cars and avoid traffic accidents, in addition to strengthening legal education for drivers, a system for detecting illegal vehicles is also needed to help better correct driver violations. This article uses human–computer interaction and computer vision technology to construct an intelligent vehicle violation detection system. It first utilizes self-photography and public databases to collect the image dataset required for the experiment. Then, computer vision technology is used to preprocess a portion of the data images, which mainly involves noise filtering and image enhancement, followed by the use of Kalman filtering for vehicle tracking research. Finally, the system interface is designed based on human–computer interaction technology. Through the above steps, this article effectively improves the performance of the violation detection system, as well as the accuracy and speed of its detection.

The novelty of the intelligent vehicle violation detection system based on human–computer interaction and computer vision is mainly reflected in the following aspects:
1. Comprehensive use of human–computer interaction and computer vision technology: the system combines the two technologies, so that it can not only automatically detect and identify illegal behaviors but also provide an intuitive monitoring interface for traffic managers through human–computer interaction, thereby realizing efficient human–machine cooperation.
2. Focus on user-friendly design: compared with traditional monitoring systems, this system pays more attention to user-friendly design. Through human–computer interaction technology, users can intuitively view the information of illegal vehicles and respond quickly, which greatly improves the efficiency and convenience of traffic management.
3. Dynamic adjustment and adaptive learning ability: the system can dynamically adjust and adaptively learn, automatically adjusting its working mode according to different environments and conditions to ensure detection accuracy. In addition, the system can also improve its ability to detect violations through continuous learning and training.

2 Vehicle Violation Image Data Based on Computer Vision-Related Technology

Computer vision is a general term that involves any visual content computation [18, 19]. It concerns how to help computers gain a higher level of understanding from digital images or videos and extract valuable information from the real world for decision-making purposes. In order to better detect violations of intelligent vehicles and improve the accuracy and real-time performance of violation detection, it is necessary to preprocess the images before detection.

2.1 Data Collection

Data reliability mainly refers to the accuracy and completeness of the data. For the vehicle violation detection system, the accuracy of the data is very important, because wrong detection results may lead to misjudgment or missed judgment. During data acquisition, there may be outliers, duplicate data, or incomplete data, requiring data cleaning to remove these invalid or low-quality data. For the data in the training set, annotation is required, that is, each sample is classified or marked by professionals. The annotation process needs to follow uniform rules and standards to ensure the accuracy and consistency of the data.

There are two sources of data for this article. One is online data, and the other is photos taken with a mobile phone, which are divided into four parts by size. The datasets used are the BIT-Vehicle dataset from Beijing University of Technology [20] and the ApolloScape dataset [21]. The paper extracted 49,223 images of size 1920 × 1080 or 1600 × 1200 from them, which were taken by cameras at intersections. Part of the image data used in this article is shown in Fig. 1.

As shown in Fig. 1, this article randomly selected a portion of data from the datasets studied, each with different characteristics. Some of the vehicles in the pictures are illegal, while others are not. In total, the photos taken by mobile phone together with those from public databases amount to 49,521 images with 56,372 annotation boxes.
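As an illustrative sketch of such a data-cleaning pass (the folder layout and the two rules below are assumptions for illustration; the paper does not describe its exact cleaning procedure), unreadable files and exact duplicates could be removed before annotation as follows:

```python
import hashlib
from pathlib import Path

import cv2

def clean_image_folder(folder):
    """Drop unreadable files and exact duplicates before annotation (illustrative rules)."""
    seen_hashes = set()
    kept = []
    for path in sorted(Path(folder).glob("*.jpg")):
        image = cv2.imread(str(path))
        if image is None:                      # incomplete or corrupted file
            continue
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen_hashes:              # exact duplicate of an earlier file
            continue
        seen_hashes.add(digest)
        kept.append(path)
    return kept
```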
2.2 Image Preprocessing

2.2.1 Image Denoising

During the recording and transmission of automotive digital images, they are affected by factors such as the equipment itself and external noise, resulting in images contaminated by noise. Pixel blocks or elements that appear abruptly in an image can be understood as being affected by noise. This seriously affects the quality of car images and makes them unclear, so it is necessary to filter the images. Because the grayscale values of noise differ from those of nearby normal pixels, the mean filtering method exploits this feature and replaces the grayscale value of the center pixel with the average of all pixel grayscale values in its neighborhood. This reduces the impact of image noise and achieves the filtering effect [22, 23]. The mathematical expression is

$g(x, y) = \frac{1}{M} \sum_{(i, j) \in S} f(i, j)$  (1)

where $f$ is the noisy image, $g$ is the filtered image, $S$ is the neighborhood centered on pixel $(x, y)$, and $M$ is the number of pixels in $S$.
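To make the neighborhood-averaging step concrete, the following is a minimal denoising sketch using OpenCV, which the paper lists in its software environment (Table 3); the 3 × 3 kernel size and the file names are illustrative assumptions rather than values taken from the paper.

```python
import cv2

# Load a traffic-camera frame in grayscale (path is a placeholder).
frame = cv2.imread("intersection_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Mean (box) filtering: every pixel is replaced by the average of the
# grayscale values in its 3 x 3 neighborhood, which suppresses isolated
# noisy pixels at the cost of slight blurring.
denoised = cv2.blur(frame, (3, 3))

cv2.imwrite("intersection_frame_denoised.jpg", denoised)
```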
2.2.2 Image Enhancement

Histogram equalization stretches the grayscale levels that occupy more pixels in image processing and merges the grayscale values with fewer pixels, thereby improving the contrast of the image and making it clearer [26]. The comparison before and after histogram equalization and the corresponding image histograms are shown in Fig. 3.
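As a hedged illustration of this contrast-enhancement step (a minimal sketch; it is not stated whether the paper uses global or adaptive equalization), histogram equalization and the before/after histograms of Fig. 3 could be computed with OpenCV as follows:

```python
import cv2

# Grayscale input frame (placeholder path).
gray = cv2.imread("intersection_frame_denoised.jpg", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization: spreads the dominant grayscale levels
# over the full 0-255 range, increasing contrast.
equalized = cv2.equalizeHist(gray)

# Grayscale histograms before and after equalization, for comparison.
hist_before = cv2.calcHist([gray], [0], None, [256], [0, 256])
hist_after = cv2.calcHist([equalized], [0], None, [256], [0, 256])
```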
Parameter updates or learning rules are a key part of machine learning that describes how a model adjusts its internal parameters to improve performance on new data. In supervised learning, this often involves minimizing a loss function that measures the difference between the model predictions and the actual labels. For the vehicle violation detection system, these labels indicate whether there is any violation in the image and what type of violation it is. Professional annotators or teams are invited to mark the collected vehicle images one by one. The labeling process needs to follow unified labeling criteria to ensure the accuracy of each sample label. To ensure the accuracy of annotation, annotated data can be spot-checked and verified to detect and correct possible errors.
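As a small, hypothetical illustration of the loss-minimization idea described above (the paper does not state which loss function or framework it uses), a cross-entropy loss over annotated violation classes could be computed like this:

```python
import numpy as np

# Hypothetical label set; the eight violation classes are not named in this excerpt.
NUM_CLASSES = 8

def cross_entropy(predicted_probs: np.ndarray, true_class: int) -> float:
    """Cross-entropy between a predicted class distribution and the annotated label."""
    return float(-np.log(predicted_probs[true_class] + 1e-12))

# Example: the model assigns 70% probability to the annotated class (index 2).
probs = np.full(NUM_CLASSES, 0.3 / (NUM_CLASSES - 1))
probs[2] = 0.7
print(cross_entropy(probs, true_class=2))  # ~0.357
```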
3 Vehicle Tracking Based on the Kalman Filter

The detection and tracking of moving objects is one of the research hotspots in the fields of digital image processing and recognition, as well as computer vision, and it has many applications in various aspects of human life. Computer vision technology has also made great progress in vehicle tracking and has brought many benefits. The first thing that a violation detection system based on computer vision needs to do is to detect and recognize each moving object in the image, and then analyze the specific situation of the moving object according to the necessary standards to determine whether further work is needed. Therefore, this article uses Kalman filtering to study vehicle tracking. Based on estimation theory, Kalman filtering introduces the concept of state equations and establishes a system state model [27, 28].

The Kalman filter is described by a state equation and an observation equation, so a discrete dynamic system can be considered to consist of two parts. The state equation is

$Y_h = XY_{h-1} + CF_h + M_{h-1}$  (2)

where $X$ is the transition matrix that describes the transition from the system state at time $h-1$ to the state at time $h$, and $M_{h-1}$ is the zero-mean noise term representing the state model error. The observation system can be expressed by the following observation equation:

$V_h = F_h Y_h + X_h$  (3)

where $F_h$ is the observation matrix and $V_h$ is the observation value at time $h$. The principle of Kalman filtering is linear least-squares error estimation. Because the noise is assumed to be Gaussian white noise, the constraints imposed on target motion are relatively strict [29, 30]. The simulation results of the Kalman filtering trajectory for uniform linear motion are shown in Fig. 4.
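A minimal sketch of such a tracker, in the spirit of Eqs. (2) and (3), is shown below using OpenCV's KalmanFilter; the constant-velocity state layout (x, y, vx, vy) and the noise settings are illustrative assumptions rather than the parameters actually used in the paper.

```python
import numpy as np
import cv2

# State: [x, y, vx, vy]; measurement: [x, y] (the detected centroid).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)  # plays the role of X in Eq. (2)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)  # observation matrix in Eq. (3)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2            # state model error (M term)
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1        # observation noise

def track_step(measured_centroid):
    """Predict the next centroid, then correct it with the measured one."""
    prediction = kf.predict()                       # a priori estimate of the state
    measurement = np.array([[measured_centroid[0]],
                            [measured_centroid[1]]], dtype=np.float32)
    corrected = kf.correct(measurement)             # a posteriori estimate
    return prediction[:2].ravel(), corrected[:2].ravel()
```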
In order to narrow the search range of the target vehicle in the new image, it is necessary to first use a certain algorithm to estimate the approximate position of the target vehicle in the new image frame and determine its matching range. The Kalman filter tracking model continuously predicts, corrects, and adjusts the target trajectory through calculation [31]. Because the time delay between adjacent frames is small, the speed and direction of target motion do not change immediately. Based on this characteristic, the matching of the target vehicle is determined as follows:

$a_i = a_3 + (a_3 - a_2) + ((a_3 - a_2) - (a_2 - a_1))$  (4)

$b_i = b_3 + (b_3 - b_2) + ((b_3 - b_2) - (b_2 - b_1))$  (5)

$Q = \sqrt{(a_i - a_3)^2 + (b_i - b_3)^2}$  (6)

where $(a_1, b_1)$, $(a_2, b_2)$, and $(a_3, b_3)$ are the centroid coordinates of the target vehicle in the three most recent frames, $(a_i, b_i)$ is the predicted centroid in the new frame, and $Q$ serves as the matching threshold. That is to say, when completing target matching, the vehicle to be matched must meet the following condition:

$d = \sqrt{(a_i - a_j)^2 + (b_i - b_j)^2} < Q$  (7)

where $(a_j, b_j)$ is the centroid coordinate of the vehicle to be matched and $d$ is the distance between the centroid of the vehicle to be matched and the predicted centroid of the target vehicle. For target vehicles that have just entered the image frame and have not yet created a tracking sequence, this method cannot narrow the search range due to the lack of prior information. Instead, feature sets can be directly used for matching.
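Eqs. (4)–(7) can be read as a second-order extrapolation of the last three centroids followed by a distance gate. A direct transcription in Python (a sketch, assuming centroids are given as pixel coordinates) is:

```python
import math

def predict_centroid(c1, c2, c3):
    """Eqs. (4)-(5): extrapolate the next centroid (ai, bi) from the last
    three observed centroids (a1, b1), (a2, b2), (a3, b3)."""
    a1, b1 = c1
    a2, b2 = c2
    a3, b3 = c3
    ai = a3 + (a3 - a2) + ((a3 - a2) - (a2 - a1))
    bi = b3 + (b3 - b2) + ((b3 - b2) - (b2 - b1))
    return ai, bi

def matches(candidate, predicted, last):
    """Eqs. (6)-(7): accept a candidate centroid (aj, bj) only if it lies
    closer to the predicted centroid than the gate radius Q."""
    ai, bi = predicted
    a3, b3 = last
    aj, bj = candidate
    Q = math.hypot(ai - a3, bi - b3)          # Eq. (6)
    d = math.hypot(ai - aj, bi - bj)          # Eq. (7), left-hand side
    return d < Q

# Example: a vehicle moving roughly 10 px per frame to the right.
pred = predict_centroid((100, 50), (110, 52), (120, 53))   # -> (130, 53)
print(pred, matches((131, 54), pred, (120, 53)))           # True: within the gate
```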
The system utilizes communication networks to collect images from the automotive data storage system for transmission and storage. The violation detection module detects the vehicle's speed and violation behavior based on the tracking results [36, 37]. It can use computer vision technology to locate and recognize vehicle information and then calculate the vehicle's speed based on driver's license information. If the speed of a car exceeds the threshold or differs markedly from the speed of the surrounding cars, it is determined that the car has violated regulations. The main characteristic parameter settings of the corresponding controls are shown in Table 2.
Table 2 Control property parameter table

Serial number  Control name    Caption                  Identification (ID)
1              Group Box 1     Location video playback  ID_SteVideo
2              Group Box 2     Location information     ID_INFORMATION
3              Group Box 3     Location photo           ID_Pie
4              Group Box 4     Location range           ID_Scope
5              Edit Control 1  /                        ID_MainVideo
6              Edit Control 2  /                        ID_STATE
7              Edit Control 3  /                        ID_StcPic1
8              Edit Control 4  /                        ID_StcPic2
9              Edit Control 5  /                        ID_StcPic3
10             Edit Control 6  /                        ID_StcPic4
11             Radio Button 1  Red light                ID_RED
12             Radio Button 2  Green light              ID_GREEN
4.2 Illegal Parking of Vehicles

At present, there is no unified and effective detection system for illegal parking, and research on illegal parking is still ongoing. This article compares the changes in background pixel values before and after a vehicle enters the area of interest, and uses computer vision technology to construct an intelligent vehicle violation detection system. When a car enters the area of interest and is illegally parked, the target car stays there for a period of time, so the problem is transformed into detecting a stationary target. Usually, the pixel values of each point in the background image of the region of interest remain unchanged for a long time, but the background itself can change over short periods due to environmental influences. When an object moves, the pixel values undergo a significant change, and this change is greater than the amplitude of the background pixels' own variation [38, 39]. Therefore, when significant changes are detected in pixels within the region of interest, it can be determined that a target has entered the region of interest. When the background pixel values change rapidly and significantly and return to the initial background values within a short period of time, it indicates that a moving target has passed through the region of interest but has not stopped. When the background values change rapidly and the new pixel values then remain unchanged for a period of time, it indicates that a moving object has entered and stopped, and there may be a violation of parking regulations.
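The stationary-target logic described above can be sketched as a simple rule on the mean absolute difference between the current region of interest and a stored background image; the threshold and the number of "stationary" frames below are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

DIFF_THRESHOLD = 25.0    # assumed grey-level change that counts as "significant"
STATIONARY_FRAMES = 125  # assumed run length (~5 s at 25 fps) before flagging illegal parking

def roi_changed(background_roi, current_roi, threshold=DIFF_THRESHOLD):
    """True if the region of interest differs significantly from the stored background."""
    diff = cv2.absdiff(background_roi, current_roi)
    return float(np.mean(diff)) > threshold

def update_parking_state(changed, frames_changed):
    """Count consecutive 'changed' frames; a long run suggests a vehicle has stopped."""
    frames_changed = frames_changed + 1 if changed else 0
    possibly_parked = frames_changed >= STATIONARY_FRAMES
    return frames_changed, possibly_parked
```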
4.3 System Interface Under Human–Computer Interaction

Each system is an implementation of human–computer interaction. The user interface is a platform for sending and exchanging information between users and computers. This platform should not only meet the needs of work-related information interaction but also support interaction without imposing unnecessary information on the user, and its information processing speed should also meet the requirements. The intelligent vehicle violation detection system based on human–computer interaction and computer vision technology recognizes speeding in a way similar to traffic police speed measurement. The intersections equipped with detection equipment are divided into groups of two. Based on the driving distance between the two intersections and the required speed, the starting time for the same vehicle to pass from one intersection to the other can be set. The system compares the measured time difference for the vehicle with the starting time. If the measured value is greater than the starting time, the vehicle is considered to be driving normally. If the measured value is less than the starting time, the vehicle is considered to be speeding, and an alarm is sent over the network to the traffic police team closest to the intersection where the vehicle passes through. At the same time, photos of the vehicle passing through the two intersections are stored as law enforcement evidence. The software system processing interface for speeding vehicles is shown in Fig. 8.
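A minimal sketch of this travel-time rule follows (the distance and speed limit are made-up example values; the paper does not give its actual thresholds):

```python
def minimum_travel_time_s(distance_m, speed_limit_kmh):
    """The shortest legal time needed to cover the distance between the two intersections."""
    return distance_m / (speed_limit_kmh / 3.6)

def is_speeding(passed_first_s, passed_second_s, distance_m=500.0, speed_limit_kmh=60.0):
    """Compare the measured travel time between two intersections with the legal minimum."""
    measured = passed_second_s - passed_first_s
    return measured < minimum_travel_time_s(distance_m, speed_limit_kmh)

# Example: 500 m at a 60 km/h limit requires at least 30 s;
# a vehicle that needs only 24 s is flagged as speeding.
print(is_speeding(passed_first_s=0.0, passed_second_s=24.0))  # True
```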
5 Performance of Intelligent Vehicle Violation Detection System

For intelligent vehicle violation detection systems, configuring advanced computers can ensure the success of training and the accuracy of the results, and training places demands on both the memory and the computing power of the graphics card. Therefore, the training in this article is run on a laboratory server. The hardware configuration, operating system, and related software environment of the laboratory server are shown in Table 3.

Table 3 Software and hardware configuration of laboratory computers

Serial number  Hardware and software environment  Classification                               Specific introduction
1              Software environment               Operating system                             Windows 10
                                                  Programming language                         Python 3.6
                                                  Internet connection                          Bluetooth
                                                  Compute Unified Device Architecture (CUDA)   CUDA 10.1
                                                  Image processing library                     OpenCV 4.2.0
                                                  Video                                        4K × 2K 60 Hz encoding
2              Hardware environment               Central processing unit                      Intel(R) Core(TM) i7-8700K CPU at 3.7 GHz
                                                  Graphics processor                           NVIDIA GeForce GTX 1080
                                                  Monitor                                      2 sensor camera interfaces

Parameter updates or learning rules are a key part of machine learning that describes how a model adjusts its internal parameters to improve performance on new data. In supervised learning, this often involves minimizing a loss function that measures the difference between the model predictions and the actual labels. For the vehicle violation detection system, these labels indicate whether there is any violation in the image and what type of violation it is. Professional annotators or teams are invited to mark the collected vehicle images one by one. The labeling process needs to follow unified labeling criteria to ensure the accuracy of each sample label. To ensure the accuracy of annotation, annotated data can be spot-checked and verified to detect and correct possible errors.
This article uses computer vision technology to process vehicle violation images and then combines human–computer interaction to construct the system. The constructed system outperforms other traditional detection technologies. In order to capture illegal vehicles, the system can install cameras on the side of or above the road for image collection, and then use a converter in the computer platform to convert the signal into images, which are stored in computer memory. Then, computer vision technology is used to process the collected video images to see whether a vehicle has engaged in illegal behavior, and any violation must be handled promptly once it is discovered.

In order to further demonstrate the performance superiority of the system studied in this article, it is compared with intelligent vehicle violation detection systems based on Video Image Processing (VIP) technology, Magnetic Induction Coil (MIC) technology, and Digital Image Processing (DIP) technology. The specific performance comparison is shown in Table 4.

As shown in Table 4, compared with other vehicle violation detection systems, the overall performance of the vehicle violation detection system constructed in this article using human–computer interaction and computer vision technology is superior, with significant advantages. The system studied in this article has a simple operating interface and complete functions, such as recognition and tracking for various vehicle violation detection tasks. It does not require users to spend a lot of time familiarizing themselves with it and is easy to get started with. At the same time, its detection accuracy is higher than that of other systems, its stability is better, and the accuracy of its warnings for illegal vehicles is also higher. It can operate in all weather conditions and has excellent scalability.

The training set is a dataset used to train and optimize the vehicle violation detection system. It should contain various types of violations to ensure that the system can identify and classify various situations. The data in the training set should be annotated, that is, each sample should be classified or marked by professionals.
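As an illustration of what such an annotated training sample and a train/test split might look like (the field names and the class label are hypothetical; the paper does not publish its annotation schema):

```python
import random

# Hypothetical annotation record: image path, bounding box, and violation class.
sample = {
    "image": "frames/000123.jpg",
    "bbox": [412, 230, 640, 388],      # x1, y1, x2, y2 in pixels
    "label": "running_red_light",      # one of the eight violation classes
}

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle and split annotated samples into training and test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```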
Table 4 Comparison of performance between video detection technology and traditional detection technology

Serial number  Inspection item              Research system of this article   VIP                             MIC                            DIP
1              Coverage                     Large detection range, 150–200 m  Small detection range, 50–60 m  Small detection range, 5–10 m  Small detection range, 10–20 m
2              Accuracy                     High                              General                         Low                            General
3              Interface video display      Display                           General                         Show carton                    General
4              System operation complexity  Very simple                       Simple                          More complicated               Simple
5              Operating environment        All-weather                       All-weather                     Affected by weather            All-weather
6              Extensibility                Very good                         General                         Relatively poor                General
7              Construction cycle           Short                             Long                            Very long                      Long
8              Maintainability              Good                              General                         Poor                           General
9              Stability                    Good                              General                         Poor                           General
10             Can the scene be reproduced  Yes                               No                              No                             No
11             Illegal vehicle alarm        Very accurate                     Accurate                        Accurate                       Accurate
In Fig. 9, the x-axis represents the number of violations, while the y-axis represents the accuracy of the detection. The vehicle violation detection system in this study detects different types of violations with much higher accuracy than other violation detection systems. Among them, the CV-based violation detection system has an accuracy of over 96.86% for detecting the 8 types of violations extracted. The accuracy of the violation detection systems based on VIP, MIC, and DIP for violation behavior detection is below 91.92%, 87.27%, and 92.35%, respectively. The accuracy of the system studied in this article for detecting the illegal behavior of running red lights is 99.69%, approaching 100%. It is 8.81%, 14.51%, and 10.4% higher than the violation detection systems based on VIP, MIC, and DIP, respectively. The average accuracy of the violation detection system based on CV for the 8 types of violations is 98%. It is 7.55%, 12.49%, and 7.87% higher than the violation detection systems based on VIP, MIC, and DIP, respectively.
With the continuous increase in the number of cars, more and more image data are generated. It is crucial to quickly find evidence of vehicle violations from massive image data, in order to search for and punish driver violations and to reduce accident rates. The vehicle violation detection system studied in this article has a faster speed and shorter detection time for image detection. In order to further highlight the superiority of the system studied in this article in terms of image detection speed, this article compares it with the violation detection systems based on VIP, MIC, and DIP. The specific comparison results are shown in Fig. 10.

Fig. 10 Comparison of image detection time for different vehicle violation detection systems

In Fig. 10, the x-axis represents the number of images to be detected, while the y-axis represents the time required for detection. As shown in Fig. 10, the system studied in this article takes much less time to detect the extracted images than other violation detection systems, and its detection speed is faster. Among them, the detection system built on MIC takes much longer than the other three detection systems. When the number of detected images is 150 or less, the time required by a VIP-based detection system is higher than that of a CV-based system but lower than that of systems built on MIC and DIP. When the number of detected images is 160 or more, the time required by a VIP-based detection system is higher than that of the CV- and DIP-based systems but lower than that of a MIC-based system. Meanwhile, when the number of detected images is 10, the time required by the system studied in this article is 5.54 ms, which is 1.58 ms, 4.24 ms, and 3.05 ms lower than the violation detection systems based on VIP, MIC, and DIP, respectively. When the number of detected images is 200, the time required by the system studied in this article is 68.39 ms, which is 48.03 ms, 55.17 ms, and 28.73 ms lower than the violation detection systems based on VIP, MIC, and DIP, respectively.
6 Conclusions

The number of deaths caused by traffic accidents in China each year reaches hundreds of thousands, ranking among the highest in the world, and most of these accidents are caused by driver violations. Therefore, if computer vision technology can be used to detect vehicle violations and to record and punish them, it is of great significance for reducing the burden of law enforcement, managing the traffic environment, preventing accidents, and ensuring pedestrian safety. The intelligent vehicle violation detection system based on human–computer interaction and computer vision is a system that detects and records vehicle violations through computer vision and human–computer interaction technology. The system studied in this article utilized computer vision technology to preprocess the extracted images and then used Kalman filtering to track vehicles. It then utilized the intelligent interaction interface built with human–computer interaction technology to detect vehicle violations, record and count them, and present the final recorded results to the administrator. Therefore, the research system in this article can recognize traffic signs, detect vehicle violations, and then provide useful information and warnings to drivers through human–computer interaction. This can effectively improve traffic safety, reduce manual patrol and monitoring costs, and improve the accuracy and efficiency of violation detection. Intelligent vehicle violation detection systems based on human–computer interaction and computer vision may encounter many difficulties in research. The possible challenges and corresponding solutions are as follows: for computer vision systems, a large amount of annotated data is needed for model training. If the annotation data actually available is limited, the dataset can be expanded by using techniques such as data augmentation.
Also, semi-supervised or unsupervised learning can be considered to effectively utilize unannotated data. To make the system easy to use and understand, an efficient and intuitive human–machine interface needs to be designed; the optimal design scheme can be found through user research and design thinking.
Funding This work was supported by City University of Seattle.

Data Availability Data are available upon reasonable request.

Declarations

Conflict of Interest There are no potential competing interests in my paper.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References

1. Lv, Z., Qiao, L., You, I.: 6G-enabled network in box for internet of connected vehicles. IEEE Trans. Intellig. Transport. Syst. (2020). https://doi.org/10.1109/TITS.2020.3034817
2. Asadianfam, S., Shamsi, M., Rasouli Kenari, A.: Big data platform of traffic violation detection system: identifying the risky behaviors of vehicle drivers. Multimedia Tools Appl. 79(33–34), 24645–24684 (2020). https://doi.org/10.1007/s11042-020-09099-8
3. Sahraoui, Y., Kerrache, C.A., Korichi, A., Nour, B., Adnane, A., Hussain, R.: DeepDist: a deep-learning-based IoV framework for real-time objects and distance violation detection. IEEE Internet Things Magaz. 3(3), 30–34 (2020). https://doi.org/10.1109/IOTM.0001.2000116
4. Maha Vishnu, V.C., Rajalakshmi, M., Nedunchezhian, R.: Intelligent traffic video surveillance and accident detection system with dynamic traffic signal control. Cluster Comput. 21(1), 135–147 (2018). https://doi.org/10.1007/s10586-017-0974-5
5. Zhang, R., Ishikawa, A., Wang, W., Striner, B., Tonguz, O.K.: Using reinforcement learning with partial vehicle detection for intelligent traffic signal control. IEEE Trans. Intellig. Transport. Syst. 22(1), 404–415 (2020). https://doi.org/10.1109/TITS.2019.2958859
6. Liu, S., Gu, Y., Rao, W., Wang, J.: A method for detecting illegal vehicles based on the optimized YOLOv3 algorithm. J. Chongqing Univer. Technol. (Nat. Sci.) 35(4), 135–141 (2021). https://doi.org/10.3969/j.issn.1674-8425(z).2021.04.018
7. Alagarsamy, S., Ramkumar, S., Kamatchi, K., Shankar, H., Kumar, A., Karthick, S., Kumar, P.: Designing an advanced technique for detection and violation of traffic control system. J. Crit. Rev. 7(8), 2874–2879 (2020). https://doi.org/10.31838/jcr.07.08.473
8. Bhat, A.T., Rao, M.S., Pai, D.G.: Traffic violation detection in India using genetic algorithm. Glob. Trans. Proc. 2(2), 309–314 (2021). https://doi.org/10.1016/j.gltp.2021.08.056
9. Charran, R.S., Dubey, R.K.: Two-wheeler vehicle traffic violations detection and automated ticketing for Indian road scenario. IEEE Trans. Intellig. Transport. Syst. 23(11), 22002–22007 (2022). https://doi.org/10.1109/TITS.2022.3186679
10. Ozkul, M., Çapuni, I.: Police-less multi-party traffic violation detection and reporting system with privacy preservation. IET Intellig. Trans. Syst. 12(5), 351–358 (2018). https://doi.org/10.1049/iet-its.2017.0122
11. Abbas, A.F., Sheikh, U.U., Al-Dhief, F.T., Mohd, M.N.H.: A comprehensive review of vehicle detection using computer vision. TELKOMNIKA (Telecommunication Computing Electronics and Control) 19(3), 838–850 (2021). https://doi.org/10.12928/telkomnika.v19i3.12880
12. Guo, M.-H., Xu, T.X., Liu, J.J., Liu, Z.N., Jiang, P.T., Mu, T.J., et al.: Attention mechanisms in computer vision: a survey. Comput. Visual Media 8(3), 331–368 (2022). https://doi.org/10.1007/s41095-022-0271-y
13. Yang, Z., Pun-Cheng, L.S.C.: Vehicle detection in intelligent transportation systems and its applications under varying environments: a review. Image Vis. Comput. 69, 143–154 (2018). https://doi.org/10.1016/j.imavis.2017.09.008
14. Kumar, A., Kundu, S., Kumar, S., Tiwari, U.K., Kalra, J.: S-TVDS: smart traffic violation detection system for Indian traffic scenario. Int. J. Innovat. Technol. Explor. Eng. (IJITEE) 8(4S3), 6–10 (2019). https://doi.org/10.35940/ijitee.D1002.0384S319
15. Tutsoy, O., Tanrikulu, M.Y.: Priority and age specific vaccination algorithm for the pandemic diseases: a comprehensive parametric prediction model. BMC Med. Inform. Decis. Mak. 22(1), 4 (2022). https://doi.org/10.13140/RG.2.2.25044.32646
16. Tutsoy, O.: Graph theory based large-scale machine learning with multi-dimensional constrained optimization approaches for exact epidemiological modelling of pandemic diseases. IEEE Trans. Pattern Anal. Mach. Intell. (2023). https://doi.org/10.1109/TPAMI.2023.3256421
17. Arabi, S., Haghighat, A., Sharma, A.: A deep-learning-based computer vision solution for construction vehicle detection. Comput. Aided Civil Infrastr. Eng. 35(7), 753–767 (2020). https://doi.org/10.1111/mice.12530
18. Wiley, V., Lucas, T.: Computer vision and image processing: a paper review. Int. J. Artif. Intellig. Res. 2(1), 29–36 (2018). https://doi.org/10.29099/ijair.v2i1.42
19. Tian, H., Wang, T., Liu, Y., Qiao, X., Li, Y.: Computer vision technology in agricultural automation—a review. Inform. Process. Agric. 7(1), 1–19 (2020). https://doi.org/10.1016/j.inpa.2019.09.006
20. Qianlong, D., Wei, S.: Model recognition based on improved sparse stack coding. Comput. Eng. Appl. 56(1), 136–141 (2020). https://doi.org/10.1109/ACCESS.2020.2997286
21. Huang, X., Wang, P., Cheng, X., Zhou, D., Geng, Q., Yang, R.: The ApolloScape open dataset for autonomous driving and its application. IEEE Trans. Pattern Anal. Mach. Intell. 42(10), 2702–2719 (2019). https://doi.org/10.1109/TPAMI.2019.2926463
22. Rakshit, M., Das, S.: An efficient ECG denoising methodology using empirical mode decomposition and adaptive switching mean filter. Biomed. Signal Process. Control 40(3), 140–148 (2018). https://doi.org/10.1016/j.bspc.2017.09.020
23. Shengchun, W., Jin, L., Shanshan, H.: A calculation method for floating datum of complex surface area based on non-local mean filtering. Prog. Geophys. 33(5), 1985–1988 (2018). https://doi.org/10.6038/pg2018CC0132
24. Dyke, R.M., Hormann, K.: Histogram equalization using a selective filter. Vis. Comput. 39(12), 6221–6235 (2023). https://doi.org/10.1007/s00371-022-02723-8
25. Agrawal, S., Panda, R., Mishro, P.K., Abraham, A.: A novel joint histogram equalization based image contrast enhancement. J. King Saud Univ. Comput. Inform. Sci. 34(4), 1172–1182 (2022). https://doi.org/10.1016/j.jksuci.2019.05.010
26. Vijayalakshmi, D., Nath, M.K.: A novel contrast enhancement technique using gradient-based joint histogram equalization. Circuits Syst. Signal Process. 40(8), 3929–3967 (2021). https://doi.org/10.1007/s00034-021-01655-3
27. Pei, Y., Biswas, S., Fussell, D.S., Pingali, K.: An elementary introduction to Kalman filtering. Commun. ACM 62(11), 122–133 (2019). https://doi.org/10.1145/3363294
28. Fang, H., Tian, N., Wang, Y., Zhou, M., Haile, M.A.: Nonlinear Bayesian estimation: from Kalman filtering to a broader horizon. IEEE/CAA J. Automat. Sinica 5(2), 401–417 (2018). https://doi.org/10.1109/JAS.2017.7510808
29. Liu, S., Wang, Z., Chen, Y., Wei, G.: Protocol-based unscented Kalman filtering in the presence of stochastic uncertainties. IEEE Trans. Autom. Control 65(3), 1303–1309 (2019). https://doi.org/10.1109/TAC.2019.2929817
30. Huang, Y., Zhang, Y., Zhao, Y., Shi, P., Chambers, J.A.: A novel outlier-robust Kalman filtering framework based on statistical similarity measure. IEEE Trans. Autom. Control 66(6), 2677–2692 (2020). https://doi.org/10.1109/TAC.2020.3011443
31. Xiangyu, K., Xiaopeng, Z., Xuanyong, Z., et al.: Adaptive dynamic state estimation of distribution network based on interacting multiple model. IEEE Trans. Sustain. Energy 13(2), 643–652 (2022)
32. Go, M.J., Park, M., Yeo, J.: Detecting vehicles that are illegally driving on road shoulders using faster R-CNN. J. Korea Instit. Intellig. Trans. Syst. 21(1), 105–122 (2022). https://doi.org/10.12815/kits.2022.21.1.105
33. Saritha, M., Rajalakshmi, S., Angel Deborah, S., Milton, R.S., Thirumla Devi, S., Vrithika, M., et al.: RFID-based traffic violation detection and traffic flow analysis system. Int. J. Pure Appl. Math. 118(20), 319–328 (2018). https://doi.org/10.1007/s11042-020-09714-8
34. Agarwal, P., Chopra, K., Kashif, M., Kumari, V.: Implementing ALPR for detection of traffic violations: a step towards sustainability. Procedia Comput. Sci. 132, 738–743 (2018). https://doi.org/10.1016/j.procs.2018.05.085
35. Santhosh, K.K., Dogra, D.P., Roy, P.P.: Anomaly detection in road traffic using visual surveillance: a survey. ACM Comput. Surveys (CSUR) 53(6), 1–26 (2020). https://doi.org/10.1145/3417989
36. Kousar, S., Aslam, F., Kausar, N., Pamucar, D., Addis, G.M.: Fault diagnosis in regenerative braking system of hybrid electric vehicles by using semigroup of finite-state deterministic fully intuitionistic fuzzy automata. Comput. Intellig. Neurosci. (2022). https://doi.org/10.1155/2022/3684727
37. Liu, Y., Zhong, S., Kausar, N., Zhang, C., Mohammadzadeh, A., Pamucar, D.: A stable fuzzy-based computational model and control for inductions motors. CMES-Comput. Model. Eng. Sci. 138, 793–812 (2024). https://doi.org/10.32604/cmes.2023.028175
38. Rafiq, N., Yaqoob, N., Kausar, N., Shams, M., Mir, N.A., Gaba, Y.U., Khan, N.: Computer-based fuzzy numerical method for solving engineering and real-world applications. Math. Prob. Eng. (2021). https://doi.org/10.1155/2021/6916282
39. Shams, M., Rafiq, N., Kausar, N., Mir, N.A., Alalyani, A.: Computer oriented numerical scheme for solving engineering problems. Comput. Syst. Sci. Eng. 42(2), 689–701 (2022). https://doi.org/10.32604/csse.2022.022269

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.