AI Module 5 Notes
Jayaram M.A
RASTA - Center for Road Technology VOLVO Construction Equipment Campus
All content following this page was uploaded by Jayaram M.A on 28 September 2024.
COMPUTER VISION
RASTA
Center for Road Technology
VOLVO-CE CAMPUS
Bengaluru
Dr. M.A. Jayaram
Professor
CONTENTS
Introduction to Computer Vision
Phases of CV
Image Acquisition Methods
Tools for Image Acquisition
CV methods
Intuitionistic Understanding of Related algorithms
Applications of CV in Civil Engineering
Module End Questions
References
AI and Applications in Civil Engineering
Module 5
Computer Vision and Applications
1. Introduction
Computer vision (CV) is a branch of AI and an interdisciplinary field founded on science. It is
supported by several other technologies, such as digital signal processing, artificial
intelligence, machine learning, and higher-order mathematics. It is all about making computers
grasp or understand visual data supplied as images or videos. The level of understanding
vaguely mimics that of the human visual system. As of today, the range of applications of
computer vision could be far beyond anybody's imagination. In a nutshell, the predominant
applications include traffic monitoring and management, surgical automation, industrial
automation, automated driving, eye and head tracking in e-commerce, inspection and
monitoring during manufacturing, sports analysis, gesture recognition, general scientific
vision systems, object recognition, people tracking, safety monitoring, security and
biometrics, three-dimensional modelling, web and cloud applications, and many more.
In recent years, computer vision has become closely associated with architectural and civil
engineering, and the trend is growing fast in civil engineering activities in particular. A
plethora of applications relate to the construction and management of infrastructure facilities.
It is well known that the construction industry is marred by poor construction quality,
mismanagement at construction sites, exorbitant overhead costs, injuries and health issues
among construction workers, improper measurement of structural distress, schedule
mismatches, and other disorders that are intrinsic to it. Justifiably, to address this, researchers
have explored, and are still exploring, the potential of computer vision to deal effectively with
performance monitoring, resource management, and safety assurance.
Over the last decade and a half, copious literature has accrued, a sizable volume of which
hinges on automated health monitoring of structures. The heart of the matter is that CV is
applied to tasks directed towards construction, structural health monitoring, movement of
materials, anomaly detection, and many more. A graphical abstract of CV and its various
applications is provided in Figure 5.1.
2. Steps of CV
Simply put, CV is a seven-step procedure segregated into three phases. A schematic
diagram is depicted in Fig. 5.2. The first phase involves preliminary processing of the image
or video. Feature extraction focuses on reducing the vast amount of digital information from
the image to relevant features. This is accomplished through dimensionality reduction
techniques such as principal component analysis (PCA), independent component analysis
(ICA), linear discriminant analysis (LDA), locally linear embedding (LLE), and t-distributed
stochastic neighbor embedding (t-SNE). Depth perception, or stereopsis, involves using
multiple cameras to capture the depth of objects within the scene. By matching objects in two
images and analyzing the differences in their relative positions, the depth of the objects can be
estimated. When more than two cameras are involved, the process is referred to as multi-view
stereopsis.
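The depth-from-disparity relation at the heart of stereopsis, Z = f·B/d, can be sketched in a few lines; the focal length, baseline, and disparity values below are illustrative assumptions, not measurements from any particular stereo rig.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic two-camera stereopsis: Z = f * B / d.

    disparity_px : horizontal pixel offset of a matched point
                   between the left and right images
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centres (metres)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 0.12 m baseline.
depths = depth_from_disparity([70.0, 35.0, 14.0], focal_px=700.0, baseline_m=0.12)
print(depths)  # nearer objects (larger disparity) come out shallower
```

A nearer object yields a larger disparity and hence a smaller estimated depth, which is the intuition used when matching objects between the two views.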
In the next phase, advanced image processing techniques are applied. During segmentation, the
image is divided into distinct regions based on pixel intensity, allowing for the identification
of damaged structural components, the measurement of cracks, and the detection of harmful
materials. Image tracking involves detecting specific regions of interest and monitoring these
regions over time for potential changes. This technique can identify symmetry, hidden patterns,
and various differences between the region of interest and other parts of the image. Visual
tracking further enables the monitoring of object movement (such as people, materials, and
construction equipment) and provides estimates of future positions of the tracked targets.
The third and final phase involves higher-level processing of images or videos. Image
registration identifies spatial transformations occurring within a set of images, ensuring they
align within a common reference frame. This step is crucial when combining data from multiple
images and is particularly time-intensive when working with 3D images. The final step is object
recognition, where the system identifies objects, people, or entities and analyzes intricate
details. After recognition, the objects are classified or labeled based on pixel intensities and
their distribution patterns.
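As an illustration of intensity-based segmentation, the sketch below implements Otsu's criterion (one common way of choosing an intensity threshold) in plain NumPy; the tiny synthetic "crack" image is an assumption for demonstration, not part of any cited application.

```python
import numpy as np

def otsu_threshold(img):
    """Pick the intensity threshold that maximises the between-class
    variance (Otsu's criterion) -- one common way of splitting an
    image into 'damage' and 'background' regions by pixel intensity."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic image: dark crack pixels (~30) on lighter concrete (~200).
img = np.full((8, 8), 200, dtype=np.uint8)
img[3, :] = 30                  # a dark horizontal 'crack'
t = otsu_threshold(img)
mask = img < t                  # True exactly where the crack is
print(t, mask.sum())
```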
Fig. 5.1: Graphical abstract of CV and its applications in civil engineering [7]
3. Acquisition of Image
Visual data may be acquired in three forms: 2D images (which include video), 3D images, and
point clouds that represent objects or space. In the fields of
image processing and computer vision, image acquisition refers to the process of obtaining an
image from a source, typically hardware devices such as cameras or sensors. This step is
crucial, as it initiates the entire workflow; without capturing an image, the system cannot
perform any further processing. The image obtained at this stage is typically raw and has not
undergone any processing. As the quality of the data is crucial to object registration and
recognition, the parameters and their thresholds are clearly defined in most CV tasks,
particularly with reference to point clouds. The specifications for the capturing devices are
readily available in references describing related real-time applications. Given present-day
sophistication, a high-resolution smartphone would suffice for preliminary data collection. A
broad overview of the data acquisition methods used by researchers, with their benefits and
lacunae, is presented in Table 5.1; the details of the processes involved are elaborated in the
references mentioned.
Table 5.1. Visual data formats and related capturing equipment.
Visual Data Format | Capturing Devices
2D image / video | High-resolution digital camera, camcorder, digital video camera, smartphone, surveillance camera (e.g. Apple iPod, iPhone, LG Nexus 5)
Point clouds | Laser scanner (e.g. Trimble GX 3D spatial camera, Leica Geosystems 3D laser)
environmental conditions. Some have recommended augmenting intelligent algorithms to deal
with unforeseen or uncontrollable parameters. The sensors are shown in Figure 5.3 and
Figure 5.4.
Fig. 5.3 Mobile optic sensor Fig. 5.4 Static optic sensor
of visual information. The images obtained through this are of moderate quality owing to
possible vibrations and dislocations; thus, they demand substantial preprocessing. The setup
is shown in Figure 5.7.
infrared cameras, LiDAR, or other spectral sensors, to produce a more detailed and accurate
representation of the subject. A typical block diagram of such a system is shown in figure 5.10.
Hybrid image acquisition systems are widely used in fields such as remote sensing, medical
imaging, autonomous driving, and construction monitoring, where integrating data from
multiple sensors provides more comprehensive and actionable information.
track the extent of construction zones. Figure 5.15 shows the segmentation of various objects
in an image of a construction scene.
• The algorithm then examines the neighbouring pixels of the seed point. If these
neighbouring pixels have similar characteristics (such as similar intensity or
colour), they are added to the region.
• This process continues iteratively, checking the neighbours of each newly
added pixel, until no more pixels can be added (i.e., the region stops growing).
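The bulleted steps above can be condensed into a minimal region-growing sketch; the 4-connected neighbourhood, the intensity tolerance, and the toy image are illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours
    whose intensity is within `tol` of the seed pixel -- the
    iterative procedure described above, stopping when no more
    pixels can be added."""
    h, w = img.shape
    seed_val = int(img[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

# Toy image: a bright 3x3 patch on a dark background.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 1:4] = 180
patch = region_grow(img, seed=(2, 2), tol=10)
print(patch.sum())  # 9 -- growth stopped at the patch boundary
```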
Harris Corner Detection: The Harris corner detector is used to find corners (or interest points)
in an image. Corners are points where the intensity changes significantly in multiple
directions, which makes them ideal features for tracking, recognition, or matching tasks.
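A bare-bones version of the Harris response can be written in NumPy from its definition, R = det(M) − k·tr(M)², where M is the structure tensor of the image gradients; the 3x3 window, the constant k, and the toy square image are assumptions here (production code would normally use a library routine with Gaussian weighting).

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris response R = det(M) - k * trace(M)^2 at every pixel,
    where M is the structure tensor of the image gradients summed
    over a 3x3 window.  High R marks corners."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)             # row and column derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def wsum(a):                          # 3x3 window sum (borders left 0)
        out = np.zeros_like(a)
        for r in range(1, a.shape[0] - 1):
            for c in range(1, a.shape[1] - 1):
                out[r, c] = a[r - 1:r + 2, c - 1:c + 2].sum()
        return out

    Sxx, Syy, Sxy = wsum(Ixx), wsum(Iyy), wsum(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Toy image: a bright square whose four corners should score highest.
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0
R = harris_response(img)
print(R[3, 3] > R[3, 6])  # a corner outscores an edge midpoint -> True
```

At an edge midpoint the gradient varies in only one direction, so det(M) vanishes and R goes negative; at a corner both gradient directions are present and R is large and positive.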
Convolutional Layer: The convolutional layer is tasked with extracting features from the input
image. It applies a filter or kernel to the image, performing a convolution operation that
identifies and extracts specific features.
Pooling Layer: The pooling layer reduces the spatial dimensions of the feature maps generated
by the convolutional layer. Through downsampling, it minimizes the size of the feature maps,
lowering computational demands.
Activation Layer: The activation layer applies a non-linear activation function, like ReLU, to
the output of the pooling layer. This introduces non-linearity into the model, enabling it to learn
more complex patterns from the input data.
Fully Connected Layer: This layer connects all the neurons from the previous layer to all
neurons in the next, forming a traditional neural network. It combines the features learned in
earlier layers to make a prediction.
Normalization Layer: The normalization layer performs operations, such as batch or layer
normalization, to stabilize the activations within each layer, ensuring better conditioning and
reducing the risk of overfitting.
Dropout Layer: To prevent overfitting, the dropout layer randomly deactivates neurons during
training. This encourages the model to generalize well to new, unseen data, rather than simply
memorizing the training set.
Dense Layer: After features have been extracted by the convolutional and pooling layers, the
dense layer combines them to make a final prediction. Typically, the last layer in a CNN, it
receives flattened activations from previous layers, performs a weighted sum, and applies an
activation function to generate the final output. Figure 5.17 provides a schematic view of the
CNN.
For example, CNNs could be used for classifying types of construction materials (such as
identifying different types of concrete or bricks from images) or classifying land-use types
from aerial images for environmental impact assessments.
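The layer roles described above can be illustrated with a toy forward pass in NumPy; the kernel values, input size, and random dense-layer weights are illustrative only, and no training is involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Convolutional layer: slide a kernel over the image ('valid')."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (img[r:r + kh, c:c + kw] * kernel).sum()
    return out

def max_pool(x, size=2):
    """Pooling layer: downsample by taking the max of each 2x2 block."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def relu(x):
    """Activation layer: introduce non-linearity."""
    return np.maximum(x, 0.0)

# Toy 8x8 input and a 3x3 vertical-edge kernel (illustrative values).
img = rng.random((8, 8))
kernel = np.array([[1., 0., -1.]] * 3)
feat = relu(max_pool(conv2d(img, kernel)))   # feature-extraction stages
flat = feat.ravel()                          # flatten before the dense layer
W = rng.random((flat.size, 2))               # fully connected (dense) weights
logits = flat @ W                            # final prediction scores
print(feat.shape, logits.shape)  # (3, 3) (2,)
```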
Image registration refers to the process of aligning two or more images of the same scene,
captured at different times, from varying angles, or using different sensors, so that they
overlap precisely. This is crucial in areas such as medical imaging, satellite image analysis,
and computer vision, where comparing or merging images from multiple sources is
necessary.
When images are taken under different conditions—such as at different times, angles, or with
different sensors—they may not be perfectly aligned. Image registration adjusts them to a
common coordinate system, ensuring accurate comparison and integration of data across
multiple images.
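One classical way to recover such a spatial transformation, when it is a pure translation, is phase correlation: the normalised cross-power spectrum of the two images inverse-transforms to a sharp peak at the offset. The sketch below is illustrative (integer shifts, synthetic image), not a full registration pipeline.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Recover the integer (row, col) translation applied to `ref` to
    produce `moved`, via the Fourier shift theorem: the normalised
    cross-power spectrum inverse-transforms to a delta peak at the
    translation offset."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around and mean negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, ref.shape))

# Synthetic scene and a copy circularly shifted by (3, -5) pixels.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
moved = np.roll(scene, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(scene, moved))  # (3, -5)
```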
On the other hand, image recognition is the task of identifying objects, people, scenes, or
features within an image. It classifies images into pre-defined categories based on the patterns
and features present.
With a broad range of applications, image recognition is used in facial recognition, object
detection, autonomous vehicles, and medical imaging. It essentially allows machines to "see"
and interpret images similarly to how humans do.
infrastructure using UAVs has been developed and reported. The work elaborates the
modalities of using UAV systems for data collection and evaluates their superiority over
traditional surveying methods.
The system has been evaluated for its performance at sites where excavation and earthmoving
are the predominant activities. The photogrammetric technique is widely implemented for
surveying and is popular too. A framework that assesses the structural condition of bridges
and supports their maintenance according to a prior plan has been built on UAV data.
Surveying methods have also been developed to identify construction materials and to
monitor the progress of construction work.
6.5 Deformation and Vibration Monitoring
CV techniques have found application in monitoring deformations in structures. In one such application, the
deformation of the soil field surrounding the pile foundation has been measured. The concept
of optical extensometer has been utilized to measure the average strain of a deformed pile
foundation under horizontal load. An improved feature point tracking algorithm has been
implemented for the purpose. The measurements were found to be precise when compared with
conventional measurements. A CV based system for displacement measurement has been
developed and reported. This system is capable of dealing with illumination and fog related
issues. The proposed system leverages high-resolution imaging, incorporating contextual,
spatial, and temporal aspects. A series of measurement experiments was carried out on two-span
and three-lane bridges. The system has proved efficient in overcoming the fog and
illumination related issues. In a realistic implementation, a high-speed digital camera is used
to capture high amplitude vibrations of buildings in terms of inter-story drifts and the connected
personal computer captured the concurrent signals.
The presence of faults was detected by analyzing the signals. A roving camera was used to capture
the response of a model bridge subjected to live loads. Indicators were developed to assess the
global displacement magnitude. The results are said to be encouraging. The significant take
away from this method is avoidance of cumbersome accelerometer cluster setup with
synchronized cameras.
6.6. Infrastructure inspection and monitoring
Infrastructure inspection involves two phases: deploying unmanned aerial vehicles (UAVs)
for image data acquisition, and processing the data to enable inspection. With the rapid
growth of the drone industry, UAVs are becoming a viable option for remote data collection
and are no longer a thing of the future. There are three primary approaches to damage detection from visual data:
heuristic-based methods, deep learning techniques, and change detection.
Traditionally, infrastructure monitoring involved measuring physical parameters such as strain,
displacement, acceleration, and natural frequency, achieved by placing wireless sensors at key
locations. However, sensor installation and maintenance can be costly. With the advancement
of computer vision (CV), non-contact methods for capturing data have become increasingly
popular. Algorithms can now establish optical flow, which identifies pixel displacement
between two images. Using this method, displacements and resulting strains can be determined
by comparing images of a critical location taken at different times. This approach, known as
Digital Image Correlation (DIC), has been applied to measure the deflection of a highway
bridge deck under static loading from 32-ton trucks. Numerous applications, particularly in
employed to assess the extent of corrosion. A widely used method combines wavelets with
principal component analysis (PCA). Additionally, robotics and smartphone-based
maintenance systems have been developed to detect corrosion through image analysis.
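The DIC idea described above (take a pixel subset from a reference image and find where it best matches in a later image, the displacement being the deformation) can be sketched with normalised cross-correlation; the synthetic speckle pattern, subset size, and search radius are illustrative assumptions.

```python
import numpy as np

def track_subset(img0, img1, top, left, size, search=6):
    """DIC-style displacement: take a square pixel subset from the
    reference image and find where it best matches in the deformed
    image, scoring candidates by normalised cross-correlation."""
    tmpl = img0[top:top + size, left:left + size]
    tn = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = top + dr, left + dc
            if r < 0 or c < 0 or r + size > img1.shape[0] or c + size > img1.shape[1]:
                continue
            win = img1[r:r + size, c:c + size]
            wn = (win - win.mean()) / (win.std() + 1e-12)
            score = (tn * wn).mean()
            if score > best_score:
                best, best_score = (dr, dc), score
    return best

# Synthetic speckle pattern; the 'deformed' image is shifted 2 px down, 1 px right.
rng = np.random.default_rng(2)
img0 = rng.random((40, 40))
img1 = np.roll(img0, shift=(2, 1), axis=(0, 1))
print(track_subset(img0, img1, top=15, left=15, size=9))  # (2, 1)
```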
D. Detection of Cracks in Pavements
A sizable number of CV-based applications pertaining to the detection of defects in pavements have
been reported. A comprehensive review on defect detection in asphaltic pavements is found in
reference [44]. In most of the applications, heuristic feature extraction, binary pattern-based
algorithm, and Gabor filtering methods have been used to detect the defects [45-48].
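As a sketch of the Gabor-filtering approach mentioned above, the code below builds a real Gabor kernel from its standard definition and applies it to a synthetic vertical "crack"; the wavelength, orientation, and image are illustrative choices (libraries such as OpenCV provide a ready-made getGaborKernel).

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5, psi=0.0):
    """Real Gabor kernel: a cosine carrier of wavelength `lam`,
    oriented at angle `theta`, inside a Gaussian envelope of width
    `sigma` -- a standard oriented-texture detector for crack-like
    features."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

def filter_image(img, kernel):
    """'Same'-size correlation of the image with the kernel."""
    kh, kw = kernel.shape
    p = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = (p[r:r + kh, c:c + kw] * kernel).sum()
    return out

# A vertical dark 'crack' on bright pavement; a theta=0 kernel varies
# along x and therefore responds to vertical features.
img = np.full((21, 21), 1.0)
img[:, 10] = 0.0
k = gabor_kernel(size=9, theta=0.0, lam=6.0, sigma=3.0)
resp = np.abs(filter_image(img, k))
print(resp[:, 10].mean() > resp[:, 3].mean())  # crack column responds more -> True
```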
6.7 Applications in other Allied Areas
There is a wide array of allied applications of computer vision (CV), though such applications
are rarely discussed in the literature. These applications are often unique and discrete in nature.
For example, to determine the load rating of bridges, a cost-effective, practical, and novel field-
testing system utilizing portable cameras has been used to assess lateral live load distribution
factors in highway bridges. This approach has proven advantageous for two main reasons:
unlike conventional methods, traffic does not need to be halted, and it provides accurate values
for load distribution factors.
Additionally, a video image feature processing system has been proposed for measuring the
dynamic response of cables and determining cable tension. The algorithm developed accounts
for all types of loading conditions, and the feature-based image processing is integrated for
cable tension monitoring and dynamic analysis. In another related application, a non-contact,
vision-based system has been explored for similar purposes.
A non-contact, video-based sensor has been developed to measure cable tension forces,
eliminating the need for traditional sensors and the associated risks and challenges of
installation. This system was applied to measure tensile forces in the cables supporting a roof
structure, and the results showed good agreement between the forces measured by the vision-
based sensors and those recorded by load cells. This innovation offers a low-cost and
convenient solution for both long- and short-term monitoring of cable-supported structures.
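The conversion from a camera-measured vibration frequency to cable tension is commonly based on taut-string theory, f1 = (1/2L)·sqrt(T/m), which rearranges to T = 4·m·L²·f1²; the cable properties below are illustrative numbers, not data from the cited study.

```python
def cable_tension(mass_per_m, length_m, f1_hz):
    """Taut-string relation: f1 = (1 / 2L) * sqrt(T / m)
    =>  T = 4 * m * L^2 * f1^2.

    mass_per_m : cable mass per unit length (kg/m)
    length_m   : free cable length (m)
    f1_hz      : fundamental vibration frequency (Hz), e.g. read off
                 a camera-tracked displacement signal
    """
    return 4.0 * mass_per_m * length_m ** 2 * f1_hz ** 2

# Illustrative cable: 8 kg/m, 30 m long, fundamental frequency 2.5 Hz.
print(cable_tension(8.0, 30.0, 2.5))  # 180000.0 N, i.e. 180 kN
```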
In another interesting application of computer vision, a system was developed to determine the
proximity between workers and construction equipment. Built on Convolutional Neural
Network (CNN)-based image analysis, the system processes closed-circuit video footage to
ensure safety on construction sites.
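Downstream of the CNN detector, the proximity check itself is simple geometry; the sketch below assumes bounding boxes in pixel coordinates coming from some detector, and the box values and alert threshold are hypothetical.

```python
def box_center(box):
    """Centre of an (x1, y1, x2, y2) pixel bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def proximity_alert(worker_box, equipment_box, threshold_px):
    """Flag when a worker's box centre comes within `threshold_px`
    of an equipment box centre -- the geometric core of a
    vision-based proximity warning."""
    (wx, wy), (ex, ey) = box_center(worker_box), box_center(equipment_box)
    dist = ((wx - ex) ** 2 + (wy - ey) ** 2) ** 0.5
    return dist < threshold_px, dist

# Hypothetical detections (pixels): a worker near an excavator.
alert, d = proximity_alert((100, 200, 140, 300), (180, 180, 380, 340), 200)
print(alert, round(d, 1))  # True 160.3
```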
References
1) D.A. Forsyth, J. Ponce, Computer Vision: A Modern Approach, Prentice Hall Professional Technical Reference,
Upper Saddle River, 2002.
2) David Lowe, The Computer Vision Industry, available: https://www.cs.ubc.ca/~lowe/vision.html,
accessed 11 September 2022.
3) J. Yang, Construction performance monitoring via still images, time-lapse photos, and video
streams: now, tomorrow, and the future, Adv Eng Inform 29 (2) (2015) 211-224,
https://doi.org/10.1016/j.aei.2015.01.011.
4) K.K. Han, M. Golparvar-Fard, Potential of big visual data and building information modelling for
construction performance analytics: an exploratory study, Autom Constr 73 (2017) 184-198,
https://doi.org/10.1016/j.autcon.2016.11.004.
5) D. Wang, F. Dai, X. Ning, Risk assessment of work-related musculoskeletal disorders in construction:
state-of-the-art review, J Constr Eng Manag 141 (6) (2015) 040150086,
https://doi.org/10.1061/(ASCE)co.1943-7862.0000979.
6) J. Seo, S. Han, S. Lee, H. Kim, Computer vision techniques for construction safety and health monitoring,
Adv Eng Inform 29 (2) (2015) 239–251, https://doi.org/10.1016/j.aei.2015.02.001.
7) J. Teizer, Status quo and open challenges in vision-based sensing and tracking of temporary resources on
infrastructure construction sites, Adv Eng Inform 29 (2) (2015) 225–238,
https://doi.org/10.1016/j.aei.2015.03.006.
8) M.A. Jayaram. (2023) Computer vision applications in construction material and structural health
monitoring: A scoping Review, Materials Today: Proceedings,2023,ISSN 2214-
7853,https://doi.org/10.1016/j.matpr.2023.06.031.
9) Szeliski, R. (2010). Computer Vision: Algorithms and Applications. Springer.
10) Hartley, R., & Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University
Press.
11) Ren, S., He, K., Girshick, R., & Sun, J. (2015). "Faster R-CNN: Towards Real-Time Object Detection
with Region Proposal Networks." IEEE Transactions on Pattern Analysis and Machine Intelligence,
39(6), 1137-1149.
12) S. Han, S. Lee, A vision-based motion capture and recognition framework for behavior-based safety
management, Autom. Constr. 35 (2013) 131–141,https://doi.org/10.1016/j.autcon.2013.05.001.
13) N. Silberman, D. Hoiem, P. Kohli, R. Fergus, Indoor segmentation and support inference from RGBD
images, in: A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato,C. Schmid (Eds.), Computer Vision – ECCV
2012. DOI:10.1007/978-3-642-33715-4_54.
14) W.R. Abdulmajeed, R.Z. Mansoor, Implementing kinect sensor for building 3D maps of indoor
environments, International Journal of Computer Applications , 2014;86(8):1-8.Doi:10-5120/15005-
3182.
15) S. McMahon, N. Sünderhauf, B. Upcroft, M. Milford, Multimodal trip hazard affordance detection on
construction sites, IEEE Robotics and Automation Letters 3 (1) (2017) 1–7. 10.48550/arXiv.1706.06718.
16) H. Son, H. Seong, H. Choi, C. Kim, Real-time vision-based warning system for prevention of collisions
between workers and heavy equipment, J. Comput.Civ. Eng 33 (5) (2019) 1–10,
https://doi.org/10.1061/(asce)cp.1943-5487.0000845.
17) Q. Wang, Automatic checks from 3D point cloud data for safety regulation compliance for scaffold work
platforms, Autom. Constr 104 (2019) 38–51,https://doi.org/10.1016/j.autcon.2019.04.008.
18) K.K. Han, M. Golparvar-Fard, Appearance-based material classification for monitoring of operation-
level construction progress using 4D BIM and site photologs, Automation in Construction 53 (2015) 44–
57, https://doi.org/10.1016/j.autcon.2015.02.007.
19) X. Luo, H. Li, H. Wang, Z. Wu, F. Dai, D. Cao, Vision-based detection and visualization of dynamic
workspaces, Autom. Constr 104 (2019) 1–13,https://doi.org/10.1016/j.autcon.2019.04.001.
20) L. Ding, W. Fang, H. Luo, P.E. Love, B. Zhong, X. Ouyang, A deep hybrid learning model to detect
unsafe behavior: integrating convolution neural networks and long short-term memory, Autom. Constr
86 (2018) 118–124, https://doi.org/10.1016/J.AUTCON.2017.11.002.
21) W. Fang, L. Ding, H. Luo, P.E. Love, Falls from heights: a computer vision based approach for safety
harness detection, Autom. Constr. 91 (2018) 53–61,https://doi.org/10.1016/J.AUTCON.2018.02.018.
22) X. Luo, H. Li, D. Cao, Y. Yu, X. Yang, T. Huang, Towards efficient and objective work sampling:
recognizing workers’ activities in site surveillance videos with two stream convolutional networks,
Autom. Constr. 94 (2018) 360–370,https://doi.org/10.1016/J.AUTCON.2018.07.011.
23) J. Chen, Y. Fang, Y.K. Cho, Performance evaluation of 3D descriptors for object recognition in
construction applications, Autom. Constr. 86 (2018) 44–
52,https://doi.org/10.1016/j.autcon.2017.10.033.
24) H. Bae, M. Golparvar-Fard, J. White, High-precision vision-based mobile augmented reality system for
context-aware architectural, engineering, construction and facility management (AEC/FM) applications,
Visualization in Engineering 1 (1) (2013) 3–10, https://doi.org/10.1186/2213-7459-1-3.
25) V. Pătrăucean, I. Armeni, M. Nahangi, J. Yeung, I. Brilakis, C. Haas, State of research in automatic as-
built modelling, Adv. Eng. Inform. 29 (2) (2015) 162-171, https://doi.org/10.1016/j.aei.2015.01.001.
26) Y. Ham, K.K. Han, J.J. Lin, M. Golparvar-Fard, Visual monitoring of civil infrastructure systems via
camera-equipped unmanned aerial vehicles (UAVs): A review of related works, Visualization in
Engineering. 4 (1) (2016)1–12.
27) S. Siebert, J. Teizer, Mobile 3D mapping for surveying earthwork projects using an Unmanned
AerialVehicle (UAV) system, Autom. Constr. 41 (2014) 1–14,
https://doi.org/10.22260/ISARC2013/0154.
28) S. Kang, M.W. Park, W. Suh, Feasibility study of the unmanned-aerial-vehicle radiofrequency
identification system for localizing construction materials on large-scale open sites, Sensors and
Materials 31 (5) (2019) 1449–1465, https://doi.org/10.18494/SAM.2019.2266.
29) Kun Zhou, Linhua Chen, Shanshan Yu, Vision-based deformation measurement in pile-soil testing, in:
Proceedings of MATEC Web of Conferences, 2019;275:1-9. DOI:10.1051/matecconf/201927503009.
30) Chuan-Zhi Dong , Ozan Celik, F. Necati Catbas , Eugene OBrien , Su Taylor, A Robust Vision-Based
Method for Displacement Measurement under Adverse Environmental Factors Using Spatio-Temporal
Context Learning and Taylor Approximation, Sensors, 2019;19:1-22. doi: 10.3390/s19143197.
31) Wei Wang, Yu Shao, Building Vibration Monitoring Based on Digital Optical Cameras, Journal of Vibro
Engineering, 23(6) (2021) 1383-1394. https://doi.org/10.21595/jve.2021.21999.
32) D. Lydon, M. Lydon, R. Kromanis, C.-Z. Dong, F.N. Catbas, S. Taylor, Bridge Damage
Detection Approach Using a Roving Camera Technique, Sensors 21 (2021) 1-22,
https://doi.org/10.3390/s21041246.
33) B.F. Spencer Jr, Vedhus Hoskere, Yasutaka Narazak, Advances in Computer Vision-Based Civil
Infrastructure Inspection and Monitoring, Engineering 5(2019) 199–222,
https://doi.org/10.1016/j.eng.2018.11.030.
34) M.R. Jahanshahi, J.S. Kelly, S.F. Masri, G.S. Sukhatme, A survey and evaluation of promising
approaches for automatic image-based defect detection of bridge structures, Struct Infrastruct Eng 5 (6)
(2009) 455–486, https://doi.org/10.1080/15732470801945930.
35) T. Dawood, Z. Zhu, T. Zayed, Machine vision-based model for spalling detection and quantification in
subway networks, Automation in Construction 81 (2017) 149–160,
https://doi.org/10.1016/j.autcon.2017.06.008.
36) C.M. Yeum, S.J. Dyke, Vision-based automated crack detection for bridge inspection, Comput Civ
Infrastruct Eng. 30 (10) (2015) 759–770, https://doi.org/10.1111/mice.12141.
37) M.R. Jahanshahi, F.C. Chen, C. Joffe, S.F. Masri, Vision-based quantitative assessment of microcracks
on reactor internal components of nuclear power plants, Struct Infrastruct Eng. 13 (8) (2017) 1–8.
38) S.K. Ahuja, M.K. Shukla, A survey of computer vision-based corrosion detection approaches. In:
Satapathy S, Joshi A, editors. Information and communication technology for intelligent systems (ICTIS
2017), 2018; 2:Springer.55-63.
39) S. Ghanta, T. Karp, S. Lee, Wavelet domain detection of rust in steel bridge images. In: Proceedings of
2011 IEEE International Conference on Acoustics, Speech and Signal Processing; 2011 May 22–27;
Prague, Czech Republic.Piscataway: IEEE.1033-36.
40) M.R. Jahanshahi, S.F. Masri, Parametric performance evaluation of wavelet based corrosion detection
algorithms for condition assessment of civil infrastructure systems, J Comput Civ Eng. 27 (3) (2013) 45-
57, https://doi.org/10.1061/(ASCE)CP.1943-5487.0000225.
41) S. Jang, H. Jo, S. Cho, K. Mechitov, J.A. Rice, S.H. Sim, et al., Structural health monitoring of a cable-
stayed bridge using smart sensor technology: deployment and evaluation, Smart Struct Syst. 6 (5) (2010)
439–459,https://doi.org/10.1117/12.852272.
42) J.A. Rice, K. Mechitov, S.H. Sim, T. Nagayama, S. Jang, R. Kim, Flexible smart sensor framework for
autonomous structural health monitoring, Smart Struct, Syst 6 (6) (2010) 423–438.
10.12989/sss.2010.6.5_6.423.
43) Y.F. Liu, S.J. Cho, B.F. Spencer, Jr., J.S. Fan, Concrete crack assessment using digital image processing
and 3D scene reconstruction, J Comput Civ Eng, 2016;30(1):DOI:10.1061/(ASCE)CP.1943-
5487.0000446.
44) C. Koch, I. Brilakis, Pothole detection in asphalt pavement images, Adv Eng Inform. 25 (3) (2011) 507–
515, https://doi.org/10.1016/j.aei.2011.01.002.
45) M. Salman, S. Mathavan, K. Kamal, M. Rahman, Pavement crack detection using the Gabor filter. In:
Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems, Oct 6–9,
2013, Hague, Netherlands.2039–44.
46) Y. Hu, C. Zhao, A novel LBP based methods for pavement crack detection, J Pattern Recognition Res. 5
(1) (2010) 140–147.
47) Q. Zou, Y. Cao, Q. Li, Q. Mao, S. Wang, Crack Tree: automatic crack detection from pavement images,
Pattern Recognit Lett 33 (3) (2012) 227–238, https://doi.org/10.1016/j.patrec.2011.11.004.
48) L. Li, L. Sun, S. Tan, G. Ning, An efficient way in image preprocessing for pavement crack images. In:
Proceedings of the 12th COTA International Conference of Transportation Professionals, Aug 3–6;
Beijing, China. 2012,3095-3103.
49) S. Yoneyama, A. Kitagawa, S. Iwata, K. Tani, H. Kikuta, Bridge deflection measurement using digital
image correlation, Exp Tech l31(1) (2007) 34-40.
50) S. Yoneyama, H. Ueda, Bridge deflection measurement using digital image correlation with camera
movement correction, Mater Trans 53 (2) (2012),285–290, https://doi.org/10.2320/MATERTRANS.I-
M2011843.
51) M.N. Helfrick, C. Niezrecki, P. Avitabile, T. Schmidt, 3D digital image correlation methods for full-field
vibration measurement, Mech Syst Signal Process 25 (3) (2011) 917–927,
https://doi.org/10.1016/j.ymssp.2010.08.013.
52) R. Ghorbani, F. Matta, M.A. Sutton, Full-field deformation measurement and crack mapping on confined
masonry walls using digital image correlation,Exp Mech. 55 (1) (2015) 227–243,
https://doi.org/10.1007/s11340-014-9906-y.
53) Chuan Zhi Dong, Selcuk Bas, F. Necati Catbas, Portable monitoring approach using cameras and
computer vision for bridge load rating in smart cities,Journal of Civil Structural Health Monitoring,10
(2020) 1001-1021.https://doi.org/10.1007/s13349-020-00431-2.
54) C. Chu, F. Ghrib, S. Cheng, Cable tension monitoring through feature-based video image Processing,
Journal of Civil Structural Health Monitoring 11 (2021) 69–84, https://doi.org/10.1007/s13349-020-
00438-9.
55) D. Feng, T. Scarangello, M.Q. Feng, Q. Ye, Cable tension force estimate using novel noncontact
vision-based sensor, Measurement 99 (2017) 44-52, https://doi.org/10.1016/j.measurement.2016.12.020.
56) Y.-S. Shin, J. Kim, A Vision-Based Collision Monitoring System for Proximity of Construction Workers
to Trucks Enhanced by Posture-Dependent Perception and Truck Bodies' Occupied Space, Sustainability
14 (2022) 1-13, https://doi.org/10.3390/su14137934.